Article

Identification of Nonlinear Systems Using the Infinitesimal Generator of the Koopman Semigroup—A Numerical Implementation of the Mauroy–Goncalves Method

1 Department of Mathematics, Faculty of Science, University of Zagreb, 10000 Zagreb, Croatia
2 Department of Mechanical Engineering and Mathematics, University of California, Santa Barbara, CA 93106, USA
3 AIMdyn, Inc., Santa Barbara, CA 93101, USA
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(17), 2075; https://doi.org/10.3390/math9172075
Submission received: 30 June 2021 / Revised: 10 August 2021 / Accepted: 17 August 2021 / Published: 27 August 2021
(This article belongs to the Special Issue Dynamical Systems and Operator Theory)

Abstract: Inferring the latent structure of complex nonlinear dynamical systems in a data-driven setting is a challenging mathematical problem with an ever-increasing spectrum of applications in science and engineering. Koopman operator-based linearization provides a powerful framework that is suitable for the identification of nonlinear systems in various scenarios. A recently proposed method by Mauroy and Goncalves is based on lifting the data snapshots into a suitable finite-dimensional function space and identifying the infinitesimal generator of the Koopman semigroup. This elegant and mathematically appealing approach has good analytical (convergence) properties, but numerical experiments show that a software implementation of the method has certain limitations. More precisely, with the increased dimension that theoretically guarantees better approximation and ultimate convergence, the numerical implementation may become unstable and may even break down. The main sources of numerical difficulties are the computations of the matrix representation of the compressed Koopman operator and of its logarithm. This paper addresses these subtle numerical details and proposes a new implementation algorithm that alleviates the problems.

1. Introduction

Suppose that we have an autonomous dynamical system
$$\dot{x}(t)=F(x(t))\equiv\begin{pmatrix}F_1(x(t))\\ \vdots\\ F_n(x(t))\end{pmatrix},\qquad x(t)\in\mathbb{R}^n, \tag{1}$$
that is accessible only through snapshots from a sequence of trajectories with different (possibly unknown) initial conditions. More precisely,
$$(x_k,y_k)\in\mathbb{R}^n\times\mathbb{R}^n,\quad k=1,\dots,K,\quad\text{where } y_k=\varphi^t(x_k) \tag{2}$$
and $\varphi^t$ is the flow map associated with (1). In a real application, $t$ is a fixed time step, and it is possible that the time resolution precludes any approach based on estimating the derivatives; the dataset could also be scarce, sparsely collected from several trajectories/short bursts of the dynamics under study. The task is to identify $F$ and express it analytically, using a suitably chosen class of functions. This is the essence of data-driven system identification, which is a powerful modeling technique in applied sciences and engineering; for a review see e.g., [1]. Approaches like that of SINDy [2] rely on numerical differentiation, which is a formidable task in the case of scarce and/or noisy data and requires special techniques such as total-variation regularization [3] or a weak formulation [4]. With an appropriate (e.g., physics-informed) ansatz on the structure of the right-hand side of (1), the identification process is computationally executed as sparse regression; see e.g., [2,5,6]. An alternative approach is machine learning techniques such as physics-informed neural networks, which have proved to be a powerful tool for learning nonlinear partial differential equations [7].
Recently, Mauroy and Goncalves [8] proposed an elegant method for learning $F$ from the data, based on the semigroup $U^t f=f\circ\varphi^t$ of Koopman operators acting on a space of suitably chosen scalar observables $f\in\mathcal{F}$. In the case of the main method proposed in [8], $\mathcal{F}$ is the space $L^2(X)$, where $X\subset\mathbb{R}^n$ is compact, forward invariant, and big enough to contain all data snapshots. The method has two main steps. First, a compression of $U^t$ onto a suitable finite-dimensional but rich enough subspace $\mathcal{F}_N$ of $\mathcal{F}$ is computed. In a convenient basis $\mathcal{B}=\{b_1,\dots,b_N\}$ of $\mathcal{F}_N$, and having only a limited number of snapshots (2), this compression is executed in the algebraic (discrete) least squares framework, yielding the matrix representation $U_N\in\mathbb{R}^{N\times N}$ of $U^t$.
It can be shown that $U_N$ is an approximation of the matrix exponential, $U_N\approx e^{L_N t}$, or equivalently $L_N\approx(1/t)\log U_N$, where $L_N$ is a finite-dimensional compression of the infinitesimal generator $L$ defined by
$$L f=\lim_{t\to 0^+}\frac{U^t f-f}{t},\qquad f\in\mathcal{D}(L). \tag{3}$$
Note that the infinitesimal generator is well defined (on its domain $\mathcal{D}(L)\subseteq L^2(X)$) since the Koopman semigroup of operators is strongly continuous in $L^2(X)$ (i.e., $\lim_{t\to 0^+}\|U^t f-f\|=0$, where $\|\cdot\|$ denotes the $L^2$ norm on $X$). We refer to [9] for more details on semigroups of operators and their properties, and to [10] for the theory and applications of the Koopman operator.
In the second step, the vector field is recovered by using the fact that L can also be expressed as (see e.g., [11])
$$L f=F\cdot\nabla f=\sum_{i=1}^n F_i\,\frac{\partial f}{\partial x_i},\qquad f\in\mathcal{D}(L). \tag{4}$$
If $F_i=\sum_k\phi_k^i b_k$, then the action of $L$ on the basis vectors $b_k$ can be computed, using (4), by straightforward calculus, and its matrix representation will, by comparison with $(1/t)\log U_N$, reveal the coefficients $\phi_k^i$. Of course, in real applications, the quality of the computed approximations will heavily depend on the information content of the supplied data snapshots. Finer sampling (smaller time resolution) and more trajectories with different initial data are certainly desirable. If $N>K$, then a modification, called the dual method, is based on the logarithm of a $K\times K$ matrix $U_K$. Mauroy and Goncalves [8] proved convergence (with probability one as $t\to 0$, $N\to\infty$, $K\to\infty$) and illustrated the performance of the method (including the dual formulation) on a series of numerical examples.
In this paper, we consider numerical aspects of the method and study its potential as a robust software tool. Our main thesis is that a seemingly simple implementation based on the off-the-shelf software components has certain limitations. For instance, with the increased dimension that guarantees theoretically better approximation and ultimate convergence, the numerical implementation, due to severe ill-conditioning, may become unstable and eventually it may break down. In other words, it can happen that with more data the potentially better approximation does not materialize in the computation. This is an undesirable chasm between analytical properties and numerical finite precision realization of the method. Hence, more numerical analysis work is needed before the method becomes mature enough to be implemented as a robust and reliable software tool.

Contributions and Organisation of the Paper

We identify, analyze and resolve the two main sources of numerical instability of the original method [8]. First, for a given time lag $t$, the matrix representation $U_N$ of a compression of the Koopman operator $U^t$ onto a (finite) $N$-dimensional subspace may not be computed accurately enough. Secondly, even if computed accurately, the matrix $U_N$ may be so ill-conditioned as to preclude stable computation of the logarithm. Both issues are analyzed in detail, and we propose a new numerical algorithm that implements the method.
This material can be considered a numerical supplement to [8], and as an instructive case study for numerical software development. Moreover, the techniques developed here are used in the computational part of the recently developed framework [12]. The infinitesimal generator approach has also been successfully used for learning stochastic models from aggregated trajectory data [13,14], as well as for inverse modelling of Markov jump processes [15]. The stochastic framework is not considered in this paper, but its results apply to systems defined by stochastic differential equations as described in [8]. The stochastic setting contains many challenging problems and requires sophisticated tools, based e.g., on the Kolmogorov backward equation [8], the forward and adjoint Fokker–Planck equations [16,17,18], or the Koopman operator fitting [19].
The rest of the paper is organized as follows. In Section 2 we set the stage and set up a numerical linear algebra framework for the analysis of the subtle details related to the numerical implementation of the method. This is standard, well-known material, included for the reader's convenience and to introduce the necessary notation. In particular, we give a detailed description of a finite-dimensional compression (in a discrete least squares sense) of the Koopman operator (Section 2.1), and we review the basic properties of the matrix logarithm (Section 2.3). Then, in Section 3, we review the details of the Mauroy–Goncalves method, including a particular choice of the monomial basis $\mathcal{B}$ and, in Section 3.2, the details of the corresponding matrix representation of the generator (4). A case study example that illustrates the problem of numerical ill-conditioning is provided in Section 4. In Section 5, we propose a preconditioning step that allows for a more accurate computation of the logarithm $\log U_N$ in the case $K\geq N$. In Section 6 we consider the dual method for the case $N>K$, and formulate it as a compression of $U^t$ onto a particular $K$-dimensional subspace of $\mathcal{F}_N$. This formulation is then generalized in Section 7, where we introduce a new algorithm that (out of a given $N$) selects a prescribed number of (at most $K$) basis functions that are most linearly independent, as seen on the discrete set of data snapshots. The proposed algorithm, designated as basis pruning, can be used in both the dual and the main method, and it can be combined with the preconditioning introduced in Section 5.

2. Preliminaries: Finite Dimensional Compression of U and Its Logarithm

To set up the stage, in Section 2.1 we first describe matrix representation U N of the Koopman operator compressed to a finite dimensional subspace F N F . Some details of the numerical computation of U N are discussed in Section 2.2. In Section 2.3, we briefly review the matrix logarithm from the numerical linear algebra perspective.

2.1. Compression of U t and the Anatomy of Its Matrix Representation

Given $U^t:\mathcal{F}\to\mathcal{F}$ and an $N$-dimensional subspace $\mathcal{F}_N\subset\mathcal{F}$, we want to compress $U^t$ to $\mathcal{F}_N$ and to work with a finite-dimensional approximation $\Phi_N U^t|_{\mathcal{F}_N}:\mathcal{F}_N\to\mathcal{F}_N$, where $\Phi_N:\mathcal{F}\to\mathcal{F}_N$ is an appropriate projection. The subspace $\mathcal{F}_N$ contains functions that are simple to compute with, yet it is rich enough to provide good approximations of the functions in $\mathcal{F}$; it will be materialized through an ordered basis $\mathcal{B}=\{b_1,\dots,b_N\}$. If an $f\in\mathcal{F}_N$ is expressed as $f=\sum_{i=1}^N f_i b_i$, then the coordinates of $f$ in the basis $\mathcal{B}$ are written as $[f]_{\mathcal{B}}=(f_1,\dots,f_N)^T$. The ambient space $\mathcal{F}$ is equipped with a Hilbert space structure.

2.1.1. Discrete Least Squares Projection Φ N : F F N

Since in a data-driven setting the functions are known only at the points $x_k$, the operator compression will be defined using a discrete least squares projection. For $g\in\mathcal{F}$, the projection $\Phi_N g=\sum_{i=1}^N\hat{g}_i b_i\in\mathcal{F}_N$ of $g$ is defined so that the $\hat{g}_i$'s solve the problem
$$\frac{1}{K}\sum_{k=1}^K\omega_k^2\,\big\|(\Phi_N g)(x_k)-g(x_k)\big\|_2^2=\frac{1}{K}\sum_{k=1}^K\omega_k^2\,\Big\|\sum_{i=1}^N\hat{g}_i b_i(x_k)-g(x_k)\Big\|_2^2\longrightarrow\min_{\hat{g}_i}, \tag{5}$$
where $\omega_k>0$ is a weight attached to each sample $x_k$, and $\|\cdot\|_2$ is the Euclidean norm. This is an $L^2$ residual with respect to the empirical measure $\delta_K$ defined as the sum of the Dirac measures concentrated at the $x_k$'s, $\delta_K=(1/K)\sum_{k=1}^K\delta_{x_k}$. The weighting will be important in the case of noisy data; it can also be used in connection with a quadrature formula so that (5) mimics a continuous norm (defined by an integral) approximation in $\mathcal{F}$. In the unweighted case $\omega_k=1$ for all $k$. The objective function (5) can be written as
$$\left\|W^{1/2}\left[\begin{pmatrix}b_1(x_1)&\cdots&b_N(x_1)\\ \vdots& &\vdots\\ b_1(x_K)&\cdots&b_N(x_K)\end{pmatrix}\begin{pmatrix}\hat{g}_1\\ \vdots\\ \hat{g}_N\end{pmatrix}-\begin{pmatrix}g(x_1)\\ \vdots\\ g(x_K)\end{pmatrix}\right]\right\|_2^2\equiv\left\|W^{1/2}\left[O_X\,(\hat{g}_i)_{i=1}^N-(g(x_k))_{k=1}^K\right]\right\|_2^2, \tag{6}$$
where $W=\mathrm{diag}(\omega_k^2)_{k=1}^K$ and $(O_X)_{ij}=b_j(x_i)$. More generally, $W$ can be a suitable positive definite matrix (e.g., the inverse of the noise covariance). In a numerical computation the positive definite square root $W^{1/2}$ is replaced, equivalently, with the Cholesky factor: if $W=LL^T$ is the Cholesky factorization with the unique lower triangular factor $L$, then $\Phi=W^{1/2}L^{-T}$ is orthogonal, and when we replace $W^{1/2}$ with $\Phi L^T$ we can omit $\Phi$, because the norm in the above objective function is invariant under orthogonal transformations.
At this point we assume that the $K\times N$ matrix $O_X$ has full column rank $N$; this requires $K\geq N$. (The rank deficient case will be discussed later.) This full column rank assumption yields the unique least squares solution
$$\begin{pmatrix}\hat{g}_1\\ \vdots\\ \hat{g}_N\end{pmatrix}=(O_X^T W O_X)^{-1}O_X^T W\begin{pmatrix}g(x_1)\\ \vdots\\ g(x_K)\end{pmatrix}=[\Phi_N g]_{\mathcal{B}}, \tag{7}$$
which defines the projection $\Phi_N$. If $g\in\mathcal{F}_N$, then $[\Phi_N g]_{\mathcal{B}}=[g]_{\mathcal{B}}$. In the unweighted case, $W=I_K$, we have $[\Phi_N g]_{\mathcal{B}}=O_X^{\dagger}\,(g(x_i))_{i=1}^K$.
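As a small illustration of (7), here is a minimal Matlab sketch (ours, not from [8]) of the weighted projection, computed via the Cholesky factor of $W$ as described above; OX (the K-by-N matrix of basis values), gX (K-by-1 values of $g$ at the $x_k$'s) and the weight vector w are assumed given.

K    = numel(w);
W    = spdiags(w.^2, 0, K, K);     % W = diag(omega_k^2)
Lw   = chol(W, 'lower');           % W = Lw*Lw'
ghat = (Lw' * OX) \ (Lw' * gX);    % coordinates [Phi_N g]_B of the projection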

2.1.2. Matrix Representation of $\Phi_N U^t|_{\mathcal{F}_N}:\mathcal{F}_N\to\mathcal{F}_N$

To describe the action of $U^t$ in $\mathcal{F}_N$, we first consider how it changes the basis functions: $U^t b_i$ can be split into a sum of a component belonging to $\mathcal{F}_N$ and a residual,
$$(U^t b_i)(x)=b_i(\varphi^t(x))=\sum_{j=1}^N u_{ji}\,b_j(x)+\rho_i(x).$$
Given the limited information (only the snapshot pairs ( x k , y k ) ), the coefficients u j i are determined so that the residual ρ i is small over the data x k . Since φ t ( x k ) = y k , we have
$$\rho_i(x_k)=b_i(y_k)-\sum_{j=1}^N u_{ji}\,b_j(x_k),\qquad k=1,\dots,K,$$
and we can select the u j i ’s to minimize
$$\sum_{k=1}^K\omega_k^2\,\Big\|\sum_{j=1}^N u_{ji}\,b_j(x_k)-b_i(y_k)\Big\|_2^2=\left\|W^{1/2}\left[\begin{pmatrix}b_1(x_1)&\cdots&b_N(x_1)\\ \vdots& &\vdots\\ b_1(x_K)&\cdots&b_N(x_K)\end{pmatrix}\begin{pmatrix}u_{1i}\\ \vdots\\ u_{Ni}\end{pmatrix}-\begin{pmatrix}b_i(y_1)\\ \vdots\\ b_i(y_K)\end{pmatrix}\right]\right\|_2^2.$$
The solution is the projection (see e.g., ([20], §2.1.3, §2.7.4))
$$\begin{pmatrix}u_{1i}\\ \vdots\\ u_{Ni}\end{pmatrix}=(O_X^T W O_X)^{-1}O_X^T W\begin{pmatrix}b_i(y_1)\\ \vdots\\ b_i(y_K)\end{pmatrix}\equiv(O_X^T W O_X)^{-1}O_X^T W\begin{pmatrix}(U^t b_i)(x_1)\\ \vdots\\ (U^t b_i)(x_K)\end{pmatrix}=[\Phi_N U^t b_i]_{\mathcal{B}}.$$
Then, for any $f=\sum_{i=1}^N f_i b_i\in\mathcal{F}_N$, we have
$$g(x)=U^t f(x)=\sum_{i=1}^N f_i\Big[\sum_{j=1}^N u_{ji}\,b_j(x)+\rho_i(x)\Big]=\sum_{j=1}^N b_j(x)\sum_{i=1}^N u_{ji}f_i+\sum_{i=1}^N f_i\rho_i(x)=\sum_{j=1}^N b_j(x)\,g_j+\sum_{i=1}^N f_i\rho_i(x),$$
where $g_j=\sum_{i=1}^N u_{ji}f_i$, i.e., $(g_1,\dots,g_N)^T=U_N(f_1,\dots,f_N)^T$.
Hence, $\Phi_N U^t|_{\mathcal{F}_N}:\mathcal{F}_N\to\mathcal{F}_N$ is represented in the basis $\mathcal{B}$ by the matrix
$$[\Phi_N U^t|_{\mathcal{F}_N}]_{\mathcal{B}}=O_X^{\dagger}O_Y\equiv U_N, \tag{8}$$
$$\Phi_N U^t|_{\mathcal{F}_N}\left((b_1(x)\ \cdots\ b_N(x))\begin{pmatrix}f_1\\ \vdots\\ f_N\end{pmatrix}\right)=(b_1(x)\ \cdots\ b_N(x))\left(U_N\begin{pmatrix}f_1\\ \vdots\\ f_N\end{pmatrix}\right), \tag{9}$$
where we use $W=I_K$ for the sake of technical simplicity, and $O_X^{\dagger}$ is the Moore–Penrose generalized inverse. If $W\neq I_K$, then $[\Phi_N U^t|_{\mathcal{F}_N}]_{\mathcal{B}}=(O_X^T W O_X)^{-1}O_X^T W O_Y\equiv U_N$.

2.1.3. When Is U N Nonsingular?

If both $O_X$ and $O_Y$ have full column rank $N$, then the rank of $U_N$ depends on the canonical angles between the ranges of $O_X$ and $O_Y$. Indeed, if $O_X=Q_XR_X$, $O_Y=Q_YR_Y$ are the thin QR factorizations, then $U_N=R_X^{-1}Q_X^TQ_YR_Y$, where the singular values of $Q_X^TQ_Y$ can be written in terms of the canonical angles $0\leq\theta_1\leq\dots\leq\theta_N\leq\pi/2$ as $\cos\theta_1\geq\dots\geq\cos\theta_N$. Hence, in the case of full column rank of $O_X$ and $O_Y$, nonsingularity of $U_N$ is equivalent to $\cos\theta_N>0$. To visualize this condition: $U_N$ will be nonsingular if neither of the ranges of $O_X$ and $O_Y$ contains a direction orthogonal to the other one. If the basis functions and the flow map are well behaved and the sampling time is moderate, it is reasonable to expect $\theta_N<\pi/2$.
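This condition is easy to check numerically. A minimal Matlab sketch (our own illustration) computes the canonical angles from the singular values of $Q_X^TQ_Y$:

[QX, ~] = qr(OX, 0);               % thin QR factorizations
[QY, ~] = qr(OY, 0);
s = svd(QX' * QY);                 % cosines of the canonical angles
theta = acos(min(max(s, 0), 1));   % clamp to [0,1] to guard against roundoff
% In exact arithmetic, U_N is nonsingular iff max(theta) < pi/2.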

2.1.4. Relations with the DMD

In the DMD framework, $b_1,\dots,b_N$ are the scalar components of an $N\times 1$ vector-valued observable evaluated at the sequence of snapshots $x_1,\dots,x_K$, and it is customary to arrange them in an $N\times K$ matrix, i.e., $O_X^T$. Similarly, the values of the observables at the $y_k$'s are collected in the matrix $O_Y^T$. The Exact DMD matrix is then $A=O_Y^T(O_X^T)^{\dagger}=U_N^T$. For more details on this connection we refer to [21,22].

2.2. On the Numerical Solution of $\|O_X U_N-O_Y\|_F\to\min$

In general, the least squares projection $O_XU_N=O_XO_X^{\dagger}O_Y$ is uniquely determined, but, unless $O_X$ has full column rank $N$, the solution $U_N$ of the problem $\|O_XU_N-O_Y\|_F\to\min$ is not unique: each of its columns belongs to a linear manifold determined by the null space of $O_X$, and we can vary the columns independently by adding to them arbitrary vectors from that null space. Furthermore, even when $O_X$ is of rank $N$ but ill-conditioned, a typical numerical least squares solver will detect the ill-conditioning, conclude that the matrix is close to singular matrices, and treat it as numerically rank deficient. Then, the computed solution $U_N$ becomes non-unique, the concrete result depends on the particular least squares algorithm, and it may be rank deficient. This calls for caution when computing $U_N$ and $\log U_N$ numerically.
We discuss this in Section 2.2.1, and illustrate the problems in practice using a case study example in Section 4.

2.2.1. Least Squares Solution in Case of Numerical Rank Deficiency

If $O_X$ does not have full column rank $N$, then $O_X^{\dagger}O_Y$ is one of infinitely many solutions of the least squares problem for the matrix $U_N$ that is used to represent the operator compression. Furthermore, since it is necessarily singular, its logarithm does not exist and identifying a matrix approximation of the infinitesimal generator is not feasible. This is certainly the case when $K<N$ (recall that in this case a dual form of the method is used; see Section 6), and in the case $K\geq N$, considered here, the matrix $O_X$ can be numerically rank deficient, so that the software returns a solution that depends on the particular algorithm for solving the least squares problem.
Let $O_X=\Phi\Sigma\Psi^T$ be the SVD of $O_X$ and let $r<\min(K,N)$ be the rank of $O_X$. Let $\Sigma_r=\mathrm{diag}(\sigma_i)_{i=1}^r$, where $\sigma_1\geq\dots\geq\sigma_r>0$ are the nonzero singular values. Partition the singular vector matrices as $\Phi=(\Phi_r,\Phi_0)$, $\Psi=(\Psi_r,\Psi_0)$, where $\Phi_r$ and $\Psi_r$ have $r$ columns, so that $O_X=\Phi_r\Sigma_r\Psi_r^T$ and $O_X^{\dagger}=\Psi_r\Sigma_r^{-1}\Phi_r^T$. Recall that the columns of the $N\times(N-r)$ matrix $\Psi_0$ form an orthonormal basis of the null space of $O_X$.
In the rank deficient case, the solution set of the least squares problem $\|O_XU_N-O_Y\|_F\to\min_{U_N}$ is a linear manifold, in this case of the form
$$\mathcal{N}=\left\{O_X^{\dagger}O_Y+\Psi_0\Xi,\ \Xi\in\mathbb{R}^{(N-r)\times N}\right\}. \tag{10}$$
Clearly $O_X(O_X^{\dagger}O_Y+\Psi_0\Xi)=O_XO_X^{\dagger}O_Y=\Phi_r\Phi_r^TO_Y$.
The particular choice $U_N=O_X^{\dagger}O_Y=\Psi_r\Sigma_r^{-1}\Phi_r^TO_Y$, as in (8), is distinguished by having minimal Frobenius norm, because $\|O_X^{\dagger}O_Y+\Psi_0\Xi\|_F^2=\|O_X^{\dagger}O_Y\|_F^2+\|\Xi\|_F^2$. Minimality of the Euclidean norm is one criterion to pinpoint a unique solution and thereby uniquely define the pseudo-inverse. Such a minimal norm solution, which is of rank at most $r$, may indeed be desirable in some applications, but here it is useless if $r<N$, because we cannot proceed with computing $\log U_N$.
It remains an interesting question whether in the rank deficient case we can explore the solution set (10) and, with some additional constraints, define a meaningful matrix representation. In general, a matrix representation should reproduce, as much as possible, the behaviour of the operator on that subspace. For example, if $O_X$ has full column rank $N$ (i.e., we have a sufficiently large $K$ and well selected observables) and the first basis function is constant, $b_1(x)\equiv 1$, then the first columns of $O_X$ and $O_Y$ both equal $e=(1,\dots,1)^T$, and a simple argument shows that in (8) the first column of $U_N$ is the first canonical vector $e_1=(1,0,\dots,0)^T$ and
$$[\Phi_N U^t|_{\mathcal{F}_N}]_{\mathcal{B}}\,[b_1]_{\mathcal{B}}\equiv U_Ne_1=e_1=1\cdot[b_1]_{\mathcal{B}}, \tag{11}$$
which corresponds to $U^tb_1=1\cdot b_1$. If $r<N$ (so that $O_X$ has a nontrivial null space), then we do not expect $U_Ne_1=e_1$, but we can show that $\mathcal{N}$ contains a matrix $\hat{U}_N$ such that $\hat{U}_Ne_1=e_1$.
Proposition 1.
If $b_1(x)\equiv 1$, then we can choose a $\hat{U}_N\in\mathcal{N}$ such that $\hat{U}_Ne_1=e_1$.
Proof. 
To satisfy $(O_X^{\dagger}O_Y+\Psi_0\Xi)e_1=e_1$, $\Xi$ must be such that $\Psi_0\Xi e_1=e_1-U_Ne_1$. Note that $e_1-U_Ne_1$ is in the null space of $O_X$:
$$O_X(e_1-O_X^{\dagger}O_Ye_1)=O_Xe_1-O_XO_X^{\dagger}O_Ye_1=e-O_XO_X^{\dagger}e=e-e=0.$$
Hence, $\Xi(:,1)=\Psi_0^T(e_1-U_Ne_1)=\Psi_0^Te_1$, and $\Xi(:,2{:}N)$ can be set, e.g., to zero to obtain $\Xi$ of minimal Frobenius norm. Here $\Xi=\Psi_0^T$ is also an interesting choice, but we omit the details because, in this paper, following [8], we treat the rank deficiency by a form of the dual method, as described in Section 6 and Section 7. □
Remark 1.
The global non-uniqueness, in the form of the additive term $\Psi_0\Xi$, is non-essential when instead of $U_N$ we use its compression onto a certain subspace. For instance, if $r=K<N$, the Rayleigh quotient of $U_N$ with respect to the range of $O_X^T$ remains unchanged; see Section 6.
In practice, the least squares solution based on the SVD and the formula for the pseudo-inverse is often replaced by a more efficient method based on the column pivoted (rank revealing) QR factorization. For instance, the factorization [23] uses a greedy matrix volume maximizing scheme to determine a permutation $\Pi$ such that in the QR factorization
$$O_X\Pi=QR,\qquad Q\in\mathbb{R}^{K\times N},\ Q^TQ=I_N,\ R\in\mathbb{R}^{N\times N}\ \text{upper triangular}, \tag{12}$$
the triangular factor $R$ has a strong diagonal dominance of the form
$$|R_{ii}|\geq\sqrt{\sum_{k=i}^{j}|R_{kj}|^2},\qquad 1\leq i\leq j\leq N. \tag{13}$$
If $O_X$ is of rank $r<N$, then $R(1{:}r,1{:}r)$ is nonsingular and $R(r{+}1{:}N,r{+}1{:}N)=0$, and the least squares solution of $\|O_Xz-b\|_2\to\min_z$ is computed as
$$z=\Pi\begin{pmatrix}R(1{:}r,1{:}r)^{-1}Q(1{:}K,1{:}r)^Tb\\ 0_{N-r,1}\end{pmatrix}.$$
In a nearly rank deficient (i.e., ill-conditioned) case, the index $r$ is determined so that setting $R(r{+}1{:}N,r{+}1{:}N)$ to zero introduces an error below a tolerance of the order of machine precision. This is the solution returned by the backslash operator in Matlab. Hence, in the numerically rank deficient case, the solution will have at least $N-r$ zero components, and it will in general differ from the solution obtained using the pseudo-inverse. Here too, some additional constraints can be satisfied under certain conditions, as shown in the following:
Proposition 2.
Let $O_X\Pi=QR$ be the column pivoted QR factorization in the first step of the backslash operator when solving the least squares problem $\|O_XU_N-O_Y\|_F\to\min$, where $r=\mathrm{rank}(O_X)<N$ and $O_X(:,1)=O_Y(:,1)$. If the first column of $O_X$ is selected among the leading $r$ pivotal columns in the permutation matrix $\Pi$, then $U_N(:,1)=e_1$.

2.3. Computing the Logarithm $\log O_X^{\dagger}O_Y$

The key element of the Mauroy–Goncalves method is that $U_N\approx e^{L_Nt}$, where $L_N$ is a matrix representation of a compression of the infinitesimal generator (3), computed as $L_N=(1/t)\log U_N$. Hence, to use the matrix $L_N$ as an ingredient of the identification method, it is necessary that $U_N=O_X^{\dagger}O_Y$ is nonsingular; otherwise $\log U_N$ does not exist. Furthermore, to obtain the primary value of the logarithm (as a primary matrix function, i.e., with the same branch of the logarithm used in all Jordan blocks), the matrix must not have any real negative eigenvalues. Only under those conditions can we obtain a real (assuming real $U_N$) logarithm as a primary function.
For the reader’s convenience, we summarize the key properties of the matrix logarithm and refer to [24] for proofs and more details.
Theorem 1.
Let $A$ be a real nonsingular matrix. Then $A$ has a real logarithm if and only if $A$ has an even number of Jordan blocks of each size for every negative eigenvalue.
Theorem 2.
Let $A$ be a real nonsingular matrix. Then $A$ has a unique real logarithm if and only if all eigenvalues of $A$ are positive real and no eigenvalue has more than one Jordan block in the Jordan normal form of $A$.
Theorem 3.
Suppose that the $n\times n$ complex matrix $A$ has no eigenvalue on $(-\infty,0]$. Then a unique logarithm of $A$ can be defined with eigenvalues in the strip $\{z\in\mathbb{C}:-\pi<\mathrm{Im}(z)<\pi\}$. It is called the principal logarithm and is denoted by $\log A$. If $A$ is real, then its principal logarithm is real as well.
In an application, the matrix U N = O X O Y may be difficult to compute accurately and it may be so severely ill-conditioned that in the finite precision computations it could appear numerically rank deficient (recall the discussion in Section 2.2). Thus, from the numerical point of view, the most critical part of the method is computing U N and its logarithm. For a detailed analysis of numerical methods for computing the matrix logarithm we refer the reader to ([25], Chapter 11).

3. Identification Method

To introduce the new numerical implementation, we need more detailed description of the method [8] and its concrete realization. In Section 3.1, we select the subspace F N as the span of the monomials in n variables. The key idea of the method, to explore the connection U t = e L t in the framework of finite dimensional compressions of U t and L , is reviewed in detail in Section 3.2. This is a rather challenging step, both in terms of theoretical justification of the approximation (including convergence when N , K , t 0 ) and the numerical realization. As an interesting detail, we point out in Section 3.3.1 that a structure aware reconstruction/identification can be naturally formulated for e.g., the important class of quadratic systems. This generates an interesting structured least squares problem.

3.1. The Choice of the Basis B —Monomials

For a concrete application, the choice of a suitable basis depends on the assumptions on the structure of $F$. The monomial basis is convenient if $F$ is a polynomial vector field, or if it can be well approximated by polynomials. We assume that
$$F(x)=\begin{pmatrix}\sum_{k=1}^{N_F}\phi_k^1\,x_1^{s_1(k)}x_2^{s_2(k)}\cdots x_n^{s_n(k)}\\ \vdots\\ \sum_{k=1}^{N_F}\phi_k^n\,x_1^{s_1(k)}x_2^{s_2(k)}\cdots x_n^{s_n(k)}\end{pmatrix}=\begin{pmatrix}F_1(x)\\ \vdots\\ F_n(x)\end{pmatrix},\qquad F_j(x)=\sum_{k=1}^{N_F}\phi_k^j\,x^{s(k)}, \tag{14}$$
where $x^{s(k)}=x_1^{s_1(k)}x_2^{s_2(k)}\cdots x_n^{s_n(k)}$ are monomials, written in multi-index notation, of total degree at most $m_F$. In that case, $\mathcal{F}_N$ is chosen as the space of polynomials up to some total degree $m$, $m\geq m_F$, i.e., $\mathcal{F}_N=\mathrm{span}(\mathcal{B})$, where
$$\mathcal{B}=\left\{x_1^{s_1}x_2^{s_2}\cdots x_n^{s_n}:\ s_i\in\mathbb{N}_0,\ s_1+s_2+\dots+s_n\leq m\right\},\qquad N=\binom{n+m}{n}\geq N_F. \tag{15}$$
To facilitate an automatic and relatively simple matrix representation of linear operators acting on $\mathcal{F}_N$, we choose the graded lexicographic ordering (grlex) of $\mathcal{B}$, which is one of the standard orderings in the multivariate polynomial framework. Grlex first divides the monomials into groups with the same total degree; the groups are listed with increasing total degree, and inside each group the monomials are ordered so that their exponents $s=(s_1,\dots,s_n)\in\mathbb{N}_0^n$ are ordered lexicographically. For example, if $n=3$, $m=2$, we have the following order (read the tables in (16) column-wise; each column corresponds to the monomials of the same total degree, ordered lexicographically):
$$\begin{bmatrix}1\ (000) & x_3\ (001) & x_3^2\ (002)\\ & x_2\ (010) & x_2x_3\ (011)\\ & x_1\ (100) & x_2^2\ (020)\\ & & x_1x_3\ (101)\\ & & x_1x_2\ (110)\\ & & x_1^2\ (200)\end{bmatrix}\quad\xrightarrow{\ (s_1,\dots,s_n)\,\mapsto\,k\ }\quad\begin{bmatrix}1 & 2 & 5\\ & 3 & 6\\ & 4 & 7\\ & & 8\\ & & 9\\ & & 10\end{bmatrix} \tag{16}$$
If we want to emphasize that $s=(s_1,\dots,s_n)$ is at the $k$th position in this ordering, we write $s(k)=(s_1(k),\dots,s_n(k))$, and the corresponding monomial is written as $x^{s(k)}\equiv x_1^{s_1(k)}x_2^{s_2(k)}\cdots x_n^{s_n(k)}$. An advantage of grlex in our setting is that it allows simple extraction of the operator compression to a subspace spanned by monomials of lower total degree.
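To make the ordering concrete, the following Matlab sketch (our own illustration; the helper name grlex_exponents is ours) generates the grlex-ordered exponents and evaluates the corresponding monomial basis at snapshots stored row-wise in a K-by-n matrix X, i.e., it builds $O_X$.

function S = grlex_exponents(n, m)
% All exponent vectors s with s_1+...+s_n <= m, in graded lexicographic order.
S = zeros(0, n);
for d = 0:m                                    % groups of equal total degree
    S = [S; sortrows(all_with_degree(n, d))];  % lexicographic order inside a group
end
end
function E = all_with_degree(n, d)
% All nonnegative integer n-tuples summing to d.
if n == 1, E = d; return; end
E = zeros(0, n);
for first = 0:d
    T = all_with_degree(n - 1, d - first);
    E = [E; [repmat(first, size(T, 1), 1), T]];
end
end
% Usage:
% S  = grlex_exponents(3, 2);                % N-by-n exponent matrix, cf. (16)
% OX = zeros(size(X, 1), size(S, 1));
% for j = 1:size(S, 1)
%     OX(:, j) = prod(X .^ S(j, :), 2);      % monomial x^{s(j)} at each snapshot x_k
% end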
It should be noted that the dimension $N$ grows extremely fast with increasing $n$ and $m$, which is the source of many computational difficulties, in particular when combined with the requirement $K\geq N$, which is a necessary condition for the nonsingularity of $U_N$. (This difficulty is alleviated by the dual method.)
Even though the polynomial basis is not always the best choice, it serves the purposes of this paper well because, with increased total degree, it generates highly ill-conditioned numerical examples that are good stress test cases for the development, testing and analysis of a numerical implementation.

3.2. Compression of L in the Monomial Basis

Consider now the action of $L$ on the vectors of the monomial basis $\mathcal{B}$. It is a straightforward but technically tedious task to identify the columns of the corresponding matrix $[L_N]_{\mathcal{B}}$, whose approximation can also be computed as $\frac{1}{t}\log U_N$. Since we are interested only in the coefficients $\phi_k^j$ in (14), it is enough to compute only some selected columns of $[L_N]_{\mathcal{B}}$.
Let $\ell$ be the index of $x_j$ in the grlex ordering, i.e., $b_\ell(x)=x_j$; $\ell=n+2-j$ (see the scheme (16)). Then the application of (4) to $b_\ell$ reads
$$(Lb_\ell)(x)=(F\cdot\nabla b_\ell)(x)=\sum_{i=1}^n F_i(x)\,\frac{\partial b_\ell}{\partial x_i}(x)=F_j(x)\equiv F_{n+2-\ell}(x),\quad\text{i.e.,}\quad Lb_\ell=F_{n+2-\ell}.$$
If $L_N=\Phi_NL|_{\mathcal{F}_N}$, then also $L_Nb_\ell=F_j$ (because, by the assumption (14), $F_j\in\mathcal{F}_N$). Hence, in the basis $\mathcal{B}$ we have
$$[L_N]_{\mathcal{B}}(:,\ell)=[\Phi_NLb_\ell]_{\mathcal{B}}=[F_j]_{\mathcal{B}}=\begin{pmatrix}\phi_1^j\\ \phi_2^j\\ \vdots\\ \phi_{N_F}^j\\ 0_{N-N_F}\end{pmatrix},\qquad\text{where } j=n+2-\ell,\ \ \ell=2,\dots,n+1. \tag{17}$$
In other words, the coordinates of $F_j$ are encoded in $[L_N]_{\mathcal{B}}(:,n+2-j)$. Finally, the entries of $[L_N]_{\mathcal{B}}$ can be obtained from the compression $U_N$ computed from the data. Indeed, it is shown in [8] that
$$\lim_{t\to 0^+}\left\|\Phi_NLf-\frac{1}{t}\log\!\big(\Phi_NU^t|_{\mathcal{F}_N}\big)f\right\|=0,\qquad f\in\mathcal{F}_N.$$
Hence, provided that $\mathcal{B}$ contains linearly independent basis functions, we have
$$[L_N]_{\mathcal{B}}=\lim_{t\to 0^+}\frac{1}{t}\left[\log\big(\Phi_NU^t|_{\mathcal{F}_N}\big)\right]_{\mathcal{B}}=\lim_{t\to 0^+}\frac{1}{t}\log\left[\Phi_NU^t|_{\mathcal{F}_N}\right]_{\mathcal{B}},$$
and it follows that, for $t$ small enough,
$$[L_N]_{\mathcal{B}}\approx\frac{1}{t}\log U_N\equiv\frac{1}{t}\log O_X^{\dagger}O_Y. \tag{18}$$
For each F j , its coefficients in the expansion (14) are simply read off from the corresponding column of [ L N ] B . Alternatively, we can identify additional columns and determine the coefficients by solving a least squares problem. For more details we refer to [8].
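As a compact illustration of this step, a naive Matlab realization (precisely the straightforward implementation whose numerical pitfalls are analyzed in Section 4) could read:

UN = OX \ OY;                  % compressed Koopman operator, cf. (8)
LN = logm(UN) / t;             % approximate generator, Formula (18)
coeffs = LN(:, n + 2 - (1:n)); % column n+2-j holds the coefficients of F_j, cf. (17)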

3.3. Imposing the Structure in the Reconstruction of F

Using (17), the values of $F_j$ at the $x_k$'s can be approximated by
$$\begin{pmatrix}\widetilde{F_j(x_1)}\\ \vdots\\ \widetilde{F_j(x_K)}\end{pmatrix}=\begin{pmatrix}b_1(x_1)&\cdots&b_N(x_1)\\ \vdots& &\vdots\\ b_1(x_K)&\cdots&b_N(x_K)\end{pmatrix}\begin{pmatrix}\phi_1^j\\ \phi_2^j\\ \vdots\\ \phi_{N_F}^j\\ 0_{N-N_F}\end{pmatrix}=O_X\cdot[L_N]_{\mathcal{B}}(:,n+2-j),\qquad j=1,\dots,n. \tag{19}$$
These can be used for expressing $F_j$ in another suitable dictionary of functions $q_i^j$ (e.g., rational) by decoupled least squares fitting of the functions $F_j$: with an ansatz $F_j=\sum_{i=1}^{N_{F_j}}\varphi_i^jq_i^j$, the coefficients $\varphi_i^j$ are determined to minimize
$$\left\|\begin{pmatrix}\widetilde{F_j(x_1)}\\ \vdots\\ \widetilde{F_j(x_K)}\end{pmatrix}-\begin{pmatrix}q_1^j(x_1)&q_2^j(x_1)&\cdots&q_{N_{F_j}}^j(x_1)\\ \vdots&\vdots& &\vdots\\ q_1^j(x_K)&q_2^j(x_K)&\cdots&q_{N_{F_j}}^j(x_K)\end{pmatrix}\begin{pmatrix}\varphi_1^j\\ \vdots\\ \varphi_{N_{F_j}}^j\end{pmatrix}\right\|, \tag{20}$$
where $\|\cdot\|$ is an appropriate (possibly weighted) norm.
Remark 2.
Note that (20) is slightly more general than in [8]—we can use separate dictionaries for each coordinate function F j , which allows fitting variables of different (physical, thus mathematical) nature separately, with most appropriate classes of basis functions.
In [8], it is recommended to solve the regression problem (20) with a sparsity promoting method, thus revealing the underlying structure. In many cases, the sparsity is known to be specially structured, and we can exploit that information. We illustrate our proposed approach using the quadratic systems.

3.3.1. Quadratic Systems

Suppose we know that the system under study is quadratic, i.e., $F(x)=Ax+G(x\otimes x)$. Quadratic systems are an important class of dynamical systems, with many applications and interesting theoretical properties; see e.g., [26].
With the approximate field values $\widetilde{F(x_1)},\dots,\widetilde{F(x_K)}$, we can seek $A\in\mathbb{R}^{n\times n}$ and $G\in\mathbb{R}^{n\times n^2}$ such that
$$\left(\widetilde{F(x_1)}\ \cdots\ \widetilde{F(x_K)}\right)\approx A\left(x_1\ \cdots\ x_K\right)+G\left(x_1\otimes x_1\ \cdots\ x_K\otimes x_K\right). \tag{21}$$
If we set $\widetilde{F}=\left(\widetilde{F(x_1)}\ \cdots\ \widetilde{F(x_K)}\right)$, $X=\left(x_1\ \cdots\ x_K\right)$, then the identification of the coefficient matrices $A$ and $G$ reduces to solving the matrix least squares problem
$$\left\|\begin{pmatrix}X^T & (X\odot X)^T\end{pmatrix}\begin{pmatrix}A^T\\ G^T\end{pmatrix}-\widetilde{F}^T\right\|_F\to\min_{A,G}, \tag{22}$$
where $X\odot X=\left(x_1\otimes x_1\ \cdots\ x_K\otimes x_K\right)\in\mathbb{R}^{n^2\times K}$ is the Khatri–Rao product. Here too, one can add a sparsity promoting regularization, with the implicitly defined underlying quadratic structure.
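A minimal Matlab sketch of this structured fit (our own illustration; Ft is the n-by-K matrix of approximate field values and X is n-by-K with the snapshots as columns):

[n, K] = size(X);
XX = zeros(n^2, K);
for k = 1:K
    XX(:, k) = kron(X(:, k), X(:, k));   % Khatri-Rao product, built column by column
end
M  = [X', XX'];                          % K-by-(n + n^2) coefficient matrix in (22)
AG = M \ Ft';                            % least squares solution of (22)
A  = AG(1:n, :)';                        % n-by-n linear part
G  = AG(n+1:end, :)';                    % n-by-n^2 quadratic part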

4. Numerical Implementation—A Case Study Analysis

When it comes to turning a numerical algorithm into software, it is often straightforward to write a few lines in Matlab, Python, Octave or some other software package and have a running implementation of a sophisticated procedure obtained by composition of building blocks (subroutines). However, one should keep in mind that the final numerical computation is in finite precision (machine) arithmetic, and that in some cases development of robust numerical software requires a more careful approach. In this section, we use a case study example that reveals difficulties from the numerical software point of view, and that motivates modifications to alleviate them.

4.1. An Example: Lorenz System

A good way to test robustness of a numerical algorithm is to push it to its limits. In this case, we choose a difficult test case and let the dimensions of the data matrices grow by increasing the total degree m of the polynomial basis (and thus the dimension N) and matching that with increased K so that K > N . The main goal is to provide a case study example.
Example 1.
Consider the Lorenz system
$$\begin{pmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{pmatrix}=\begin{pmatrix}-10 & 10 & 0\\ 28 & -1 & 0\\ 0 & 0 & -8/3\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix}+\begin{pmatrix}0\\ -x_1x_3\\ x_1x_2\end{pmatrix}. \tag{23}$$
The exact coefficients, ordered to match the grlex ordering of the monomial basis are
$$\begin{array}{l|cccccccccc}
 & 1 & x_3 & x_2 & x_1 & x_3^2 & x_2x_3 & x_2^2 & x_1x_3 & x_1x_2 & x_1^2\\ \hline
F_1: & 0 & 0 & 1.0000\times10^{1} & -1.0000\times10^{1} & 0 & 0 & 0 & 0 & 0 & 0\\
F_2: & 0 & 0 & -1.0000\times10^{0} & 2.8000\times10^{1} & 0 & 0 & 0 & -1.0000\times10^{0} & 0 & 0\\
F_3: & 0 & -2.6667\times10^{0} & 0 & 0 & 0 & 0 & 0 & 0 & 1.0000\times10^{0} & 0
\end{array}$$
To collect data, we ran simulations with 55 random initial conditions, and from each trajectory we randomly (independently) selected 55 points, giving a total of $K=3025$ pairs $(x_k,y_k)$. The simulations were performed in Matlab, using the ode45() solver on the time interval $[0,0.2]$ with the time step $\delta t=10^{-3}$. In the key Formula (18), we computed the logarithm in Matlab in two ways, as logm(pinv($O_X$)$\,O_Y$) and as logm($O_X\backslash O_Y$), and obtained nearly the same matrix. Of course, using the pseudoinverse explicitly is not recommended; we use it in this section for illustrative purposes only (recall the discussion in Section 2.2). The computed approximations of the coefficients of (23), with $m=3$, $N=20$ and $m_F=2$, are
$$\begin{array}{l|cccccccccc}
 & 1 & x_3 & x_2 & x_1 & x_3^2 & x_2x_3 & x_2^2 & x_1x_3 & x_1x_2 & x_1^2\\ \hline
F_1: & 3.2\times10^{-5} & 2.5\times10^{-6} & 1.0000\times10^{1} & -1.0000\times10^{1} & 7.5\times10^{-7} & 5.2\times10^{-6} & 1.5\times10^{-7} & 7.4\times10^{-6} & 1.6\times10^{-6} & 8.0\times10^{-7}\\
F_2: & 3.4\times10^{-5} & 8.7\times10^{-6} & -1.0000\times10^{0} & 2.8000\times10^{1} & 4.7\times10^{-6} & 1.3\times10^{-5} & 8.7\times10^{-9} & -1.0001\times10^{0} & 6.1\times10^{-6} & 6.1\times10^{-6}\\
F_3: & 2.8\times10^{-4} & -2.6667\times10^{0} & 5.5\times10^{-6} & 6.9\times10^{-6} & 8.3\times10^{-6} & 2.9\times10^{-6} & 1.6\times10^{-8} & 1.0\times10^{-5} & 1.0000\times10^{0} & 5.7\times10^{-6}
\end{array}$$
The nonzero coefficients are matched to five digits of accuracy, and the remaining coefficients are below $O(10^{-5})$. Nearly the same result is obtained if the samples from all trajectories have the same time stamps. Given the difficulty of the Lorenz example and the fact that the results are obtained by a low-dimensional approximation of a nontrivial (numerically simulated) dynamics, the results are good. The analytical properties of the method indicate that increasing the dimensions provides increased accuracy, ultimately yielding convergence.
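For orientation, here is a condensed Matlab sketch of the data collection and identification pipeline used in this example (our own reconstruction of the described setup, with arbitrarily chosen random initial conditions, not the authors' code):

lorenz = @(t, x) [10*(x(2)-x(1)); 28*x(1)-x(2)-x(1)*x(3); x(1)*x(2)-(8/3)*x(3)];
dt = 1e-3; tspan = 0:dt:0.2; rng(0);
X = []; Y = [];
for traj = 1:55
    [~, Z] = ode45(lorenz, tspan, 20*rand(3, 1) - 10);  % one simulated trajectory
    idx = randperm(numel(tspan) - 1, 55);               % 55 random snapshot positions
    X = [X; Z(idx, :)];  Y = [Y; Z(idx + 1, :)];        % pairs (x_k, y_k) with lag dt
end
% After lifting X and Y to OX and OY (monomial basis of Section 3.1):
% LN = logm(OX \ OY) / dt;   % Formula (18); read the coefficients off per (17)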
Example 2.
Next, we illustrate the reconstruction of the function $F$. We simulated the system on the interval $[0,4]$ with $\delta t=0.01$, and used monomials of total degree up to $m=3$. In the discrete time grid, we randomly selected 9 positions at which we took three consecutive values from each trajectory; in the second experiment, 27 randomly selected values of $x_k$ were taken from each trajectory. The total number of trajectories (with random initial conditions) was set to 150.
F is approximated using (19). For each x k , we compute the approximation error as
$$\epsilon_k=\frac{\max_{i=1,2,3}\big|\widetilde{F_i(x_k)}-F_i(x_k)\big|}{\|F(x_k)\|}.$$
The sampling points and the values of ϵ k are shown in Figure 1.
Example 3.
Now we use the data snapshots from Example 1 and increase the total degree to $m=9$, thus increasing $N$ from $N=20$ to $N=220$. Recall that $K=3025$. Surprisingly, the computed coefficients are all complex and completely off; their absolute values are listed below (the Euclidean norms of the real and the imaginary parts of the vector of the computed coefficients are $O(10^6)$):
$$\begin{array}{l|cccccccccc}
 & 1 & x_3 & x_2 & x_1 & x_3^2 & x_2x_3 & x_2^2 & x_1x_3 & x_1x_2 & x_1^2\\ \hline
F_1: & 4.6\times10^{3} & 1.6\times10^{6} & 3.6\times10^{5} & 3.0\times10^{4} & 1.5\times10^{4} & 1.2\times10^{4} & 4.3\times10^{3} & 1.0\times10^{4} & 9.1\times10^{3} & 1.0\times10^{4}\\
F_2: & 3.0\times10^{4} & 1.2\times10^{6} & 2.8\times10^{5} & 1.0\times10^{4} & 9.6\times10^{3} & 1.2\times10^{4} & 7.6\times10^{3} & 2.1\times10^{4} & 9.5\times10^{3} & 5.5\times10^{3}\\
F_3: & 6.8\times10^{3} & 2.4\times10^{5} & 5.5\times10^{4} & 2.2\times10^{3} & 1.9\times10^{3} & 2.6\times10^{3} & 1.6\times10^{3} & 4.5\times10^{3} & 2.0\times10^{3} & 1.1\times10^{3}
\end{array}$$
The approximation of F is also bad, see Figure 2. Increasing the number of trajectories and samples per trajectory to obtain K = 24,025 did not bring any improvement.
Although increasing N, with correspondingly large K, should be the step toward better approximation, the numerical implementation breaks down. In situation like this one, it is important to find out and understand the sources of the problem.

4.2. What Went Wrong

It is clear that in this computation the most sensitive part is computation of the matrix logarithm, and that this is the most likely source of problems. Indeed, in the last run in Example 3 a warning was issued by the logm() function:
  • Warning: the principal matrix logarithm is not defined for A with nonpositive real eigenvalues. A non-principal matrix logarithm is returned.
A closer inspection of the eigenvalues of U N , shown in Figure 3 confirms that U N indeed has problematic (real negative) eigenvalues.
Although Figure 3 and Figure 4 show numerically computed eigenvalues, a backward stability argument shows that $U_N$ is certainly close to matrices with eigenvalues that preclude computing the logarithm in finite precision arithmetic. Hence, computation of the logarithm must be ill-conditioned, as the ill-conditioning is essentially measured by the inverse distance to singularity [27].
Another look at $U_N$ reveals that its first column is not what it should be. In this example we use monomials, and the first basis function is the constant. Since $U^tb_1=1\cdot b_1$, we expect $[\Phi_NU^t|_{\mathcal{F}_N}]_{\mathcal{B}}[b_1]_{\mathcal{B}}\equiv U_Ne_1=e_1=1\cdot[b_1]_{\mathcal{B}}$, which clearly follows from the solution of the least squares problem (7) if the coefficient matrix has full column rank. On the other hand, the first column of $U_N$ is computed as shown in Figure 5.
Two conclusions are immediate. First, $O_X$ is considered (by the software) numerically rank deficient. Secondly, two different software solutions of the same problem return two different results. For the data shown in Figure 3 and Figure 4, we computed $U_N$ as pinv($O_X$)$\,O_Y$. This is in general not a good idea for solving least squares problems; nevertheless, it can often be seen in the literature and in software implementations as a textbook-style numerical method for least squares problems, albeit a wrong one. We included it here for instructive purposes. If we repeat the experiment with the backslash operator, $O_X\backslash O_Y$, the results are, unfortunately, not better. One conspicuous difference is that, instead of the cluster of absolutely small eigenvalues of pinv($O_X$)$\,O_Y$ (see Figure 4), $O_X\backslash O_Y$ has a zero eigenvalue of multiplicity 45. This multiple zero eigenvalue is a consequence of the sparsity structure of $O_X\backslash O_Y$; see Figure 6.
In the course of solving the least squares problem, in both methods (explicit use of the pseudoinverse or the backslash operator), the coefficient matrix $O_X$ is truncated (small singular values are set to zero if pinv($O_X$) is used; the upper triangular factor is truncated in the case of backslash, which uses a rank revealing QR factorization), and the computed $U_N$, due to rounding errors, is a small perturbation of a rank deficient matrix with a number of eigenvalues in the vicinity of zero. As a result, the matrix logarithm function fails.
The data shown in Figure 7 are instructive. The condition number of O X and the distribution of its singular values are nicely revealed by the column norms of O X . The LS solver automatically truncates small singular values that originate from small columns (small basis function values over the snapshots x k ) and not necessarily from collinearity in the sense of small angles between basis functions.
Look at the singular values $\sigma_1(U_N)\geq\dots\geq\sigma_N(U_N)$ in the right panel, in particular the gap $\sigma_{176}(U_N)\approx 1.4058\times10^{-6}\gg\sigma_{177}(U_N)\approx 3.9261\times10^{-10}$. The numerical rank of $U_N$ is determined as 176 because $\sigma_{176}(U_N)$ is the smallest singular value above the threshold $N\cdot\texttt{eps}\cdot\sigma_1(U_N)\approx 4.6543\times10^{-8}$. The index 176 originates in the numerical rank of $O_X$, which is determined in Matlab (rank($O_X$) returns 176) by the default tolerance. Note that there is no such visible gap (cliff) in the ordered singular values of $O_X$.
Remark 3.
It should also be mentioned here that the cosines of the canonical angles between the ranges of $O_X$ and $O_Y$ are between 0.988 and one (cf. Section 2.1.3), and that we expect the spectrum of $U_N$ not to be too far away from the unit circle. The intuition is that (in the process of computing $O_X^{\dagger}O_Y$) the truncated part of $O_X$ did not (could not) cancel the corresponding part of $O_Y$, which resulted in the problematic cluster of eigenvalues near zero. (In Figure 3, the number of eigenvalues in the cluster around zero is 44, which is the numerical rank deficiency, because the numerical rank is determined (from the singular values) as 176.)
Remark 4.
In the computations pinv($O_X$)$\,O_Y$ and $O_X\backslash O_Y$, the truncation is based on the SVD and on the rank revealing QR factorization, respectively, of $O_X$, independent of $O_Y$. It is more appropriate to solve each LS problem (7) separately (for the corresponding column of $U_N$) and to use the truncation strategy following Rust [28]. Note that this is a more general issue, because the matrix $O_X^{\dagger}O_Y$ is often used in the Koopman/DMD setting. (See [29] for a detailed numerical analysis of the DMD and its variations.) Furthermore, the problem can become even more difficult if we have a weighting matrix $W$ with scaling factors that spread over several orders of magnitude. (In this paper, we work with $W=I_K$ for the sake of brevity.)
Remark 5.
In the case of noisy data, it might be advantageous to replace the least squares fit with the total least squares ([20], §2.7.6). This is an important issue and we leave it for our future work.

5. Computing [ L N ] B with Preconditioning

The numerical examples in Section 4 show that the implementation of the method in state of the art packages such as Matlab is straightforward and effortless: the key computation is coded as logm($O_X\backslash O_Y$) or as logm(pinv($O_X$)$\,O_Y$). However, the results are not always satisfactory, and the problems lie both in the computation of $U_N$ (solving the least squares problem) and in computing the matrix logarithm. In this section, we use the structure of the least squares solver (reviewed in Section 2.2.1) to introduce a simple modification, and then to construct preconditioned computations of $\log U_N$. These techniques can be combined with the dual method that is analyzed in detail in Section 6 and Section 7.

5.1. A Simple Modification

The first attempt to avoid some of the problems illustrated in Section 4 is to prevent truncation when computing $O_X^{\dagger}O_Y$. We simply run the procedure used in the backslash solver (see Section 2.2.1), but without truncation. More precisely, using the column pivoted QR factorization $O_X\Pi=QR$, $U_N$ is computed as $U_N=\Pi R^{-1}Q^TO_Y$.
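In Matlab, this modification can be sketched as follows (our own illustration; it assumes that $R$ is nonsingular, i.e., that $O_X$ has full column rank):

[Q, R, p] = qr(OX, 0);        % economy size, column pivoted QR: OX(:,p) = Q*R
N = size(OX, 2);
UN = zeros(N);
UN(p, :) = R \ (Q' * OY);     % U_N = Pi*R^{-1}*Q'*OY, without truncating R
LN = logm(UN) / t;            % Formula (18)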
With $m=9$ and the same setting as in Example 1, the computation of the logarithm of $U_N$ was successful (the Matlab function logm() computed a real valued logarithm, without an error/warning message), and the coefficients of (23) are recovered to four digits of accuracy, which is satisfactory. The remaining coefficients are below $O(10^{-4})$ and are discarded in Formula (19) for reconstructing $F$. The first column of $U_N$ was $e_1$ up to an error of the order of the roundoff.
The results shown in Figure 8 are encouraging, because they show that the data contain information that can be turned into more accurate output, provided that the algorithm successfully curbs the ill-conditioning. Furthermore, a test with $m=10$ ($\kappa_2(U_N)\approx 2.9\times10^{20}$) showed nearly the same four digit accuracy; with $m=11$ ($\kappa_2(U_N)\approx 2.2\times10^{24}$) the accuracy dropped to two digits (but the logarithm was successfully computed); with $m=12$ ($\kappa_2(U_N)\approx 7.7\times10^{26}$) the logarithm is still computed as a real matrix, but the accuracy of the computed coefficients drops to $O(10^{-1})$ and the reconstruction of $F$ fails. However, if we increase $K$ to 24,025 (by sampling more points from more trajectories), the accuracy rebounds to at least four digits, both in the coefficients and in the reconstruction of $F$. At $m=16$ ($N=969$ and $K=24{,}025$, or $K=3025$) the accuracy is lost. The condition number of $U_N$ is greater than $10^{37}$.
Our adversarial testing is used to expose potential weaknesses of the computational steps in a software implementation of the algorithm. Of course, all these outcomes may vary, depending on the number of trajectories, the initial conditions and the sampling resolution; in some examples it is possible that the computation of the matrix logarithm breaks down even for smaller values of $m$. (Moreover, the numerical accuracy, i.e., the conditioning of the problem, heavily depends on the underlying dynamics.) In any case, at this point we have to conclude that using $\log O_X^{\dagger}O_Y$ is a source of nontrivial numerical difficulties that preclude an efficient and robust deployment of the Mauroy–Goncalves method. The theoretical convergence (as $N\to\infty$, $K\to\infty$, $t\to 0$) is not matched by the numerical convergence of a straightforward implementation of the method in finite precision computation.

5.2. Preconditioning the Logarithm of $O_X^{\dagger}O_Y$

The discussion in Section 4.2 and Section 5.1 is only one step toward a more robust computation: it merely identifies the problem and shows how small changes in an implementation substantially change the final result. Clearly, sufficiently accurate computation of $U_N$ is among the necessary conditions for successful computation of the matrix logarithm. However, it is not sufficient; even if we compute $U_N$ accurately, the computation of the logarithm may fail if the matrix is ill-conditioned.
We now introduce a new scheme that uses functional calculus-based preconditioning. If S is any nonsingular matrix, then
$$\log(O_X^{\dagger}O_Y)=S\log\big(S^{-1}(O_X^{\dagger}O_Y)S\big)S^{-1}\equiv S\log\big((O_XS)^{\dagger}O_YS\big)S^{-1}. \tag{25}$$
Note that replacing $O_X^{\dagger}O_Y$ with the similar matrix $S^{-1}(O_X^{\dagger}O_Y)S$ corresponds to changing the basis for the matrix representation of the compressed Koopman operator. With the experience from Section 4.2, it is clear that the key is to compute the preconditioned matrix $S^{-1}(O_X^{\dagger}O_Y)S$ without first computing $O_X^{\dagger}O_Y$. (Once we compute $O_X^{\dagger}O_Y$ explicitly in floating point arithmetic and store it in the machine memory, it may be too late even for exact computation.)
The conditions on S are:
(i)
It should facilitate more accurate computation of the argument $S^{-1}(O_X^{\dagger}O_Y)S=(O_XS)^{\dagger}O_YS$ of the matrix logarithm;
(ii)
It should have preconditioning effect for computing the logarithm of S 1 ( O X O Y ) S ;
(iii)
The application of S and S 1 should be efficient and numerically stable.
Example 4.
To test the concept, we use the same data as in Example 1 with $m=9$, and for the matrix $S$ we take $S=\mathrm{diag}(1/\|O_X(:,i)\|_2)_{i=1}^N$ and compute
$$L_N=\frac{1}{t}\,S\log\big((O_XS)^{\dagger}(O_YS)\big)S^{-1}. \tag{26}$$
Contrary to the failure of the formula $L_N=(1/t)\log(O_X^{\dagger}O_Y)$, (26) computes the real logarithm of the explicitly computed $(O_XS)^{\dagger}(O_YS)$ and recovers the coefficients with an $O(10^{-5})$ relative error. That is, we scale $O_X$ and $O_Y$ by $\mathrm{diag}(1/\|O_X(:,i)\|_2)_{i=1}^N$, and then proceed by solving the least squares problem with the thus scaled matrices. To understand the positive result, we first note that the condition number of $O_XS$ is $\kappa_2(O_XS)\approx 1.3\times10^{9}$, so the least squares solution is computed without truncation. The condition number of $(O_XS)^{\dagger}(O_YS)$ was $\kappa_2((O_XS)^{\dagger}(O_YS))\approx 6.4\times10^{5}$. (The condition number of $O_X^{\dagger}O_Y$ is $O(10^{20})$, so that matrix is considered numerically rank deficient.) With $m=12$ we had $\kappa_2(U_N)\approx 6.9\times10^{61}$ and $\kappa_2((O_XS)^{\dagger}(O_YS))\approx 1.4\times10^{13}$; the coefficients are recovered to three accurate digits, and the approximation of $F$ has an error slightly larger than the one in the right panel of Figure 8.
However, as $m$ increases, the diagonal scaling cannot cope with the increased condition number; already at $m=15$, a complex non-principal value of the logarithm is computed, with some eigenvalues whose imaginary parts are equal to $\pi/\delta t$. The computed $U_N$ has a cluster of eigenvalues around zero. For the record, $\kappa_2(U_N)\approx 2.2\times10^{220}$ and $\kappa_2((O_XS)^{\dagger}(O_YS))\approx 6.7\times10^{78}$. Surprisingly, the approximate coefficients, although complex, have small imaginary parts (of the order of the roundoff), and their real parts still provide reasonably good approximations of the true coefficients.
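A direct Matlab transcription of the scaled computation (26) (our own sketch) reads:

s   = 1 ./ vecnorm(OX);                  % 1/||OX(:,i)||_2, i = 1, ..., N
OXs = OX .* s;  OYs = OY .* s;           % OX*S and OY*S for S = diag(s)
LN  = (diag(s) * logm(OXs \ OYs) * diag(1 ./ s)) / t;  % (1/t) S log((OX S)^+ (OY S)) S^{-1}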

5.2.1. Scaled QR Factorization Based Preconditioner

To develop a stronger preconditioner, we start with the following observation: No matter how ill-conditioned O X and O Y may be (in the sense of badly scaled columns), the distance between the ranges of O X and O Y , as measured by the canonical angles between the subspaces, should not be too big. (Intuitively, O Y contains the observables evaluated at the states y k downstream in time δ t from the x k ’s used in O X . Recall Remark 3.)
Hence, if we compute the QR factorization of O X , the inverse of its triangular factor will have a preconditioning effect on O Y by postmultiplication. This leads to Algorithm 1.
Algorithm 1 $[L_N]=\mathrm{Inf\_Generator\_QRSC}(O_X,O_Y,T)$
Input:
$O_X\in\mathbb{C}^{K\times N}$, $O_Y\in\mathbb{C}^{K\times N}$, $T>0$
  1:
$S=\mathrm{diag}(1/\|O_X(:,i)\|_2)_{i=1}^N$
  2:
$[Q_X,\hat{R}_X]=\mathrm{qr}(O_XS)$ {QR factorization}
  3:
$\hat{U}_N=Q_X^T(O_YS)\hat{R}_X^{-1}$ {$\hat{U}_N$ is similar to $O_X^{\dagger}O_Y$.}
  4:
$\hat{L}_N=\log(\hat{U}_N)$
  5:
$L_N=(1/T)\,S(\hat{R}_X^{-1}\hat{L}_N\hat{R}_X)S^{-1}$
Output:
$L_N$. {$L_N=(1/T)\log(O_X^{\dagger}O_Y)$}
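A compact Matlab realization of Algorithm 1 could look as follows (a sketch under the assumptions of the algorithm, in particular that $O_XS$ has full column rank):

function LN = inf_generator_qrsc(OX, OY, T)
% Sketch of Algorithm 1: scaled, QR-preconditioned computation of (1/T)*log(pinv(OX)*OY).
s        = 1 ./ vecnorm(OX);          % step 1: S = diag(1/||OX(:,i)||_2)
OXs      = OX .* s;
OYs      = OY .* s;
[QX, RX] = qr(OXs, 0);                % step 2: thin QR factorization
UNh      = (QX' * OYs) / RX;          % step 3: U_N hat, similar to pinv(OX)*OY
LNh      = logm(UNh);                 % step 4
LN       = (diag(s) * (RX \ LNh * RX) * diag(1 ./ s)) / T;   % step 5
end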
Example 5.
To test this algorithm, we use the data from Example 1, take $m=15$ and increase the number of snapshots to $K=24{,}025$. The matrix $\hat{U}_N$ is well conditioned, $\kappa_2(\hat{U}_N)\approx 1.2\times10^{2}$, and computing the matrix logarithm is successful. The coefficients are recovered to four digits of accuracy, and the reconstruction of $F$ is slightly better than the one shown in the right panel of Figure 8.
This example, as a test of the proposed approach, is encouraging. Our next task is to further develop the method along the lines of Algorithm 1, and to provide a robust method with accompanying numerical analysis, and finally to implement it as a reliable software toolbox.

5.2.2. Pivoted QR Factorization Based Preconditioner

Since in this approach the matrix logarithm is the most critical and numerically most difficult computational task, the preprocessing/preconditioning aims to ensure successful completion of that particular step in the method. The back application of the similarity is also an important step. In Algorithm 1, the main preconditioning is performed by an upper triangular factor from the QR factorization, and by its inverse. For a numerically robust computation, it is important that the QR factorization is computed accurately even in the case of wildly scaled data, and, moreover, that the resulting triangular factor is rank revealing and well structured. These goals can be accomplished by pivoting. In this subsection we outline the main principles along which the idea of QR factorization-based preconditioned computation of the matrix L N (matrix representation of a compression of the infinitesimal generator) can be further pursued.
The column pivoting has the rank revealing property and the triangular factor is diagonally dominant in a very strong sense, see e.g., [23,30] and (13).
In the case of the Businger–Golub pivoting, we know that $R_X=\Delta_XT_X$, where $\Delta_X=\mathrm{diag}(|(R_X)_{ii}|)_{i=1}^N$ and $T_X$ is well conditioned. Hence, $R_X^{-1}=T_X^{-1}\Delta_X^{-1}$, and after Line 3 in Algorithm 2 one can insert another preconditioning with $\Delta_X$. We omit the details for the sake of brevity. Instead, we conclude this theme with a few remarks that should be useful for further study and implementation.
Algorithm 2 $[L_N]=\mathrm{Inf\_Generator\_QRCP}(O_X,O_Y,T)$
Input:
$O_X\in\mathbb{C}^{K\times N}$, $O_Y\in\mathbb{C}^{K\times N}$, $T>0$
  1:
Reorder the snapshots by a simultaneous row permutation of $O_X$ and $O_Y$; see Remark 6.
  2:
$[Q_X,R_X,\Pi_X]=\mathrm{qr}(O_X)$ {Rank revealing QR factorization with column pivoting}
  3:
$\hat{U}_N=Q_X^T(O_Y\Pi_X)R_X^{-1}$ {$\hat{U}_N$ is similar to $O_X^{\dagger}O_Y$.}
  4:
$\hat{L}_N=\log(\hat{U}_N)$
  5:
$L_N=(1/T)\,\Pi_X(R_X^{-1}\hat{L}_NR_X)\Pi_X^T$
Output:
$L_N$. {$L_N=(1/T)\log(O_X^{\dagger}O_Y)$}
Remark 6.
For the numerical accuracy of the QR factorization, an additional row pivoting may be needed so that the rows are ordered with decreasing norms; see [31,32]. If $\Psi$ is a permutation matrix that encodes the row pivoting, then $(\Psi O_X)^{\dagger}=O_X^{\dagger}\Psi^T$, so that $(\Psi O_X)^{\dagger}(\Psi O_Y)=O_X^{\dagger}O_Y$. This means that using the additional row pivoting in the QR factorization in Algorithm 2 is equivalent to a particular ordering of the data snapshots. The column pivoting corresponds to reordering the basis functions. Both reorderings of the data are allowed operations and can thus be used to enhance the numerical robustness of the computation.
Remark 7.
If $K\gg 2N$, then it pays off to change the coordinates by computing the QR factorization
$$\begin{pmatrix}O_X & O_Y\end{pmatrix}=Q\begin{pmatrix}R\\ 0\end{pmatrix},$$
and use the corresponding columns of R instead of O X and O Y . This follows the idea of the QR-compressed DMD [29].
Remark 8.
The key assumption in the method described above is that $K\geq N$, i.e., that both $O_X$ and $O_Y$ are tall matrices; their columns are in a high dimensional space, and with a suitable transformation $S$ in the column spaces ($O_X\mapsto O_XS$, $O_Y\mapsto O_YS$) we can improve the condition numbers. By the (variational) monotonicity principle, supplying more snapshots (increasing $K$) moves the singular values of $O_X$ and $O_Y$ to the right, thus improving the condition numbers of both matrices. Since, by the underlying continuity, the canonical angles between the ranges of $O_X$ and $O_Y$ are expected to stay away from $\pi/2$, $U_N=O_X^{\dagger}O_Y$ is going to be nonsingular. Moreover, the overall condition number of the computation can be controlled using our proposed modifications, which are designed to ensure stable computation of the matrix logarithm.

6. Dual Method

It is clear that the values of $N$ linearly independent functions $b_1,\dots,b_N$ over the discrete set $x_1,\dots,x_K$ of $K<N$ snapshots (that is, with only the tabulated values in the $K\times N$ matrices $O_X$, $O_Y$) contain redundancy. On the other hand, increasing the space dimension $N$ is a way to lift the data to a higher dimensional space; more observables improve both the DMD and KMD analyses. Note that, in the case $N>K$, both the compression $U_N=[\Phi_NU^t|_{\mathcal{F}_N}]_{\mathcal{B}}=O_X^{\dagger}O_Y$ and the EDMD matrix $U_N^T$ are rank deficient; unfortunately, the infinitesimal generator identification framework cannot work in that setting because the matrix logarithm is not defined. It should be stressed here that, e.g., in the DMD setting, the action of the operator given by the data is restricted to an at most $K$-dimensional subspace of the $N$-dimensional space, and the approximations of the Koopman modes are obtained by a Rayleigh–Ritz extraction. Hence, any operator (matrix) function of $U_N$ only makes sense and has practical usability in the context of an approximation from the subspace defined by the data.
In [8], a dual method is proposed which, instead of $U_N=O_X^{\dagger}O_Y$, works with the logarithm of the $K\times K$ matrix $U_K=O_YO_X^{\dagger}$. In this section we first provide, in Section 6.1, a detailed linear algebra description of the dual method that will facilitate a more general formulation, allowing for modifications that may lead to better numerical algorithms. In fact, we show in Section 7 that the dual method of [8] is but a special case of subspace projection methods, and we show how to exploit this for the design of numerically better schemes.

6.1. A Rayleigh Quotient Formulation

In the dual formulation, the transition from U N to U K can be formulated as another compression of U t onto a particular K-dimensional subspace of F N .
Proposition 3.
If we define the row vector $(\psi_1(x)\ \cdots\ \psi_K(x))$ as the row vector of the basis functions of $\mathcal{F}_N$, evaluated at $x$ and multiplied from the right by $O_X^{\dagger}$, then (assuming $O_X$ is of full row rank $K$) $\mathcal{B}_K = \{\psi_1, \dots, \psi_K\}$ is a basis of the $K$ dimensional subspace $\mathcal{F}_K \subset \mathcal{F}_N$, and $U_K = O_Y O_X^{\dagger}$ is the matrix representation of the Rayleigh quotient $\Phi_K U^t|_{\mathcal{F}_K}$, in which $\Phi_K : \mathcal{F} \to \mathcal{F}_K$ is the least squares projection as in Section 2.1.1. The matrix $U_K$ is the matrix Rayleigh quotient of $U_N$ with respect to the range of $O_X^{\dagger}$, i.e., $U_K = O_X U_N O_X^{\dagger}$ and $U_N O_X^{\dagger} = O_X^{\dagger} U_K$. Furthermore, the Rayleigh quotient with respect to $O_X^{\dagger}$ is the same for all matrices from the set defined in (10):
$$O_X \big( O_X^{\dagger} O_Y + \Psi_0 \Xi \big) O_X^{\dagger} = O_Y O_X^{\dagger}. \qquad (27)$$ (Note here that $(O_X^{\dagger})^{\dagger} = O_X$.)
Proof. 
First, note that the basis functions ψ 1 , , ψ K evaluated at x 1 , , x K can be tabulated as the matrix
$$O_{X,K} = \begin{pmatrix} \psi_1(x_1) & \psi_2(x_1) & \cdots & \psi_K(x_1) \\ \vdots & \vdots & & \vdots \\ \psi_1(x_K) & \psi_2(x_K) & \cdots & \psi_K(x_K) \end{pmatrix} = O_X O_X^{\dagger} = I_K,$$
and that for a $g \in \mathcal{F}_K$, its representation in the basis $\mathcal{B}_K$ is $[g]_{\mathcal{B}_K} = (g(x_1), \dots, g(x_K))^T$; see (6), where we take $W = I$ for the sake of simplicity. Similarly, $(U^t \psi_1, \dots, U^t \psi_K)$ evaluated at $x_1, \dots, x_K$ yields the matrix $O_{Y,K} = O_Y O_X^{\dagger}$. Now, as in Section 2.1.2, $U_K = O_{X,K}^{\dagger} O_{Y,K} = O_Y O_X^{\dagger}$. Relation (27) is easily checked. □
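A quick numerical sanity check of these Rayleigh quotient identities on random data with $K < N$ (the dimensions and variable names below are chosen only for this illustration):
% Check U_K = O_X U_N O_X^dagger and U_N O_X^dagger = O_X^dagger U_K.
K = 5; N = 12;
OX = randn(K, N);  OY = randn(K, N);      % full row rank with probability 1
UN = pinv(OX) * OY;                       % N x N compression (primal method)
UK = OY * pinv(OX);                       % K x K compression (dual method)
norm(OX * UN * pinv(OX) - UK)             % ~ machine precision
norm(UN * pinv(OX) - pinv(OX) * UK)       % ~ machine precision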
If U K is nonsingular then the identification scheme should be recast in terms of log U K . In the basis B K we have then the following representations of the Rayleigh quotient Φ K U | F K t and its logarithm.
Corollary 1.
The matrix representations of the compressed $U^t$ and its logarithm (if defined) are as follows: for any $(g_1, \dots, g_K)^T \in \mathbb{C}^K$,
$$\Phi_K U^t|_{\mathcal{F}_K}\Big( (\psi_1(x)\ \cdots\ \psi_K(x)) \begin{pmatrix} g_1 \\ \vdots \\ g_K \end{pmatrix} \Big) = (\psi_1(x)\ \cdots\ \psi_K(x)) \Big( U_K \begin{pmatrix} g_1 \\ \vdots \\ g_K \end{pmatrix} \Big), \qquad \log\big(\Phi_K U^t|_{\mathcal{F}_K}\big)\Big( (\psi_1(x)\ \cdots\ \psi_K(x)) \begin{pmatrix} g_1 \\ \vdots \\ g_K \end{pmatrix} \Big) = (\psi_1(x)\ \cdots\ \psi_K(x)) \Big( \log(U_K) \begin{pmatrix} g_1 \\ \vdots \\ g_K \end{pmatrix} \Big).$$
Next, we consider function evaluation at y k = φ t ( x k ) , k = 1 , , K .
Proposition 4.
Let $f \in \mathcal{F}_N$ have the coefficients $f_1, \dots, f_N$ in the original basis of $\mathcal{F}_N$. Then
$$\begin{pmatrix} U^t f(x_1) \\ \vdots \\ U^t f(x_K) \end{pmatrix} = \begin{pmatrix} f(y_1) \\ \vdots \\ f(y_K) \end{pmatrix} = O_Y O_X^{\dagger} \begin{pmatrix} f(x_1) \\ \vdots \\ f(x_K) \end{pmatrix} + O_Y \big( I_N - O_X^{\dagger} O_X \big) \begin{pmatrix} f_1 \\ \vdots \\ f_N \end{pmatrix}. \qquad (28)$$
If $I_N \neq O_X^{\dagger} O_X$, then the second term on the right hand side in (28) is zero if and only if the $f_i$'s are of the form $(f_i)_{i=1}^N = O_X^{\dagger} (g_k)_{k=1}^K$ with arbitrary $g_k$'s. Furthermore, if $g = \sum_{i=1}^K g_i \psi_i \in \mathcal{F}_K \subset \mathcal{F}_N$, then $g_i = g(x_i)$ and
$$\begin{pmatrix} U^t g(x_1) \\ \vdots \\ U^t g(x_K) \end{pmatrix} = O_Y O_X^{\dagger} \begin{pmatrix} g(x_1) \\ \vdots \\ g(x_K) \end{pmatrix}, \qquad \begin{pmatrix} \log\big(\Phi_K U^t|_{\mathcal{F}_K}\big) g(x_1) \\ \vdots \\ \log\big(\Phi_K U^t|_{\mathcal{F}_K}\big) g(x_K) \end{pmatrix} = \log\big( O_Y O_X^{\dagger} \big) \begin{pmatrix} g(x_1) \\ \vdots \\ g(x_K) \end{pmatrix}. \qquad (29)$$
Proof. 
For (28), it suffices to write
$$\begin{pmatrix} f(y_1) \\ \vdots \\ f(y_K) \end{pmatrix} = O_Y \begin{pmatrix} f_1 \\ \vdots \\ f_N \end{pmatrix} = O_Y \big( O_X^{\dagger} O_X + I_N - O_X^{\dagger} O_X \big) \begin{pmatrix} f_1 \\ \vdots \\ f_N \end{pmatrix}.$$
For the first relation in (29), note that $g = \sum_{i=1}^K g_i \psi_i$ evaluated at $y_1, \dots, y_K$ reads
$$\begin{pmatrix} g(y_1) \\ \vdots \\ g(y_K) \end{pmatrix} = O_Y O_X^{\dagger} \begin{pmatrix} g_1 \\ \vdots \\ g_K \end{pmatrix}, \quad \text{where } g_i = g(x_i) \text{ because } O_{X,K} = I_K.$$
 □
Hence, using the last relation and $L = F \cdot \nabla \approx (1/t) \log\big(\Phi_K U^t|_{\mathcal{F}_K}\big)$ in $\mathcal{F}_K$ (evaluated at the snapshots $x_i$), we have
$$\begin{pmatrix} F \cdot \nabla g(x_1) \\ \vdots \\ F \cdot \nabla g(x_K) \end{pmatrix} \approx \frac{1}{t} \log\big( O_Y O_X^{\dagger} \big) \begin{pmatrix} g(x_1) \\ \vdots \\ g(x_K) \end{pmatrix}. \qquad (30)$$
If we choose $g(x) = x_j$, $j = 1, \dots, n$, respectively (where $x = (x_1, \dots, x_n)^T$), then $F \cdot \nabla g(x) = F_j(x)$ and we obtain approximate field values $\widetilde{F(x_i)}$ defined by
$$\begin{pmatrix} \widetilde{F(x_1)}^T \\ \vdots \\ \widetilde{F(x_K)}^T \end{pmatrix} = \frac{1}{t} \log\big( O_Y O_X^{\dagger} \big) \begin{pmatrix} x_1^T \\ \vdots \\ x_K^T \end{pmatrix}. \qquad (31)$$
Note however that (30), (31) require that g F K . Furthermore, the approximations of F are given only at the sequence x 1 , , x K .
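In Matlab, with the snapshots stacked as the rows of a $K \times n$ matrix (called X in this sketch, an assumed name), the identification (31) reduces to a few lines; the expression logm(OY/OX) is the one used in the experiments below.
% Dual reconstruction (31): approximate F at the snapshots x_1, ..., x_K.
% Assumes OX, OY (K x N, K < N), the snapshot matrix X (K x n) and the time lag t.
LK = logm(OY / OX) / t;     % (1/t) * logarithm of the K x K compression
Ftilde = LK * X;            % row k approximates F(x_k)'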

Global Identification of F

After identifying the values F ( x k ) , the idea is to use the ansatz
$$F(x) = \begin{pmatrix} F_1(x) \\ \vdots \\ F_n(x) \end{pmatrix}, \qquad F_j(x) = \sum_{k=1}^{N_F} \varphi_k^j \, h_k(x), \qquad (32)$$
where h 1 , , h N F are chosen from a dictionary of functions, possibly different from the basis used to lift the data and identify the compression of the infinitesimal generator. Then we obtain the sequence of least squares problems
$$\left\| \begin{pmatrix} h_1(x_1) & \cdots & h_{N_F}(x_1) \\ h_1(x_2) & \cdots & h_{N_F}(x_2) \\ \vdots & & \vdots \\ h_1(x_K) & \cdots & h_{N_F}(x_K) \end{pmatrix} \begin{pmatrix} \varphi_1^j \\ \varphi_2^j \\ \vdots \\ \varphi_{N_F}^j \end{pmatrix} - \begin{pmatrix} \widetilde{F}_j(x_1) \\ \widetilde{F}_j(x_2) \\ \vdots \\ \widetilde{F}_j(x_K) \end{pmatrix} \right\|_2^2 \longrightarrow \min_{\varphi_1^j, \dots, \varphi_{N_F}^j}, \qquad j = 1, \dots, n, \qquad (33)$$
which can also be equipped with a regularization term that promotes sparse solutions.
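A least squares sketch of this step, assuming the dictionary values are tabulated in a $K \times N_F$ matrix H and the identified values in a $K \times n$ matrix Ftilde (both names are ours):
% Global identification (33): fit the dictionary coefficients in the least squares sense.
Phi = H \ Ftilde;     % column j holds the coefficients of F_j in the dictionary
% A sparsity promoting variant could use, e.g., a column-wise lasso fit:
% for j = 1:size(Ftilde, 2), Phi(:, j) = lasso(H, Ftilde(:, j), 'Lambda', 1e-3); end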

6.2. A Numerical Example

As in Section 4, a simple but difficult example can be used to explore the numerical feasibility of the derived scheme. It has been shown in [8] that the method works well for the Lorenz system on small time intervals. In the following example, we increase the time domain and test the accuracy of the reconstruction of F . An ill-conditioned problem is obtained by taking the total degree of the polynomials to be 12. Although this may seem artificial, it is useful because it provides numerically difficult cases that expose the weaknesses of the computational scheme and excellent case studies for better understanding and further development.
Example 6.
In this example we run simulations of the Lorenz system with time resolution $\delta t = 10^{-3}$ on the intervals $[0, 0.1]$, $[0, 0.18]$ and $[0, 0.19]$. The basis functions are the monomials with total degree up to $m = 12$. We generate 12 trajectories with random initial conditions, and from each trajectory we sample three consecutive snapshots at ten randomly selected positions; the matrices $O_X$, $O_Y$ are thus $360 \times 455$. In the reconstruction Formula (31), the matrix logarithm is computed in Matlab as logm(OY/OX); in all three runs, logm() issued a warning that a non-principal value of the logarithm was computed. The reconstruction error is measured component-wise and snapshot-wise as
$$\epsilon_k^{(i)} = \frac{\big| \widetilde{F_i(x_k)} - F_i(x_k) \big|}{\| F(x_k) \|}, \qquad i = 1, \dots, n; \quad k = 1, \dots, K. \qquad (34)$$
In addition, we measure the total error of the tabulated values in the Frobenius norm as
$$\tau = \Big\| \big( \widetilde{F_i(x_k)} \big)_{k,i=1}^{K,n} - \big( F_i(x_k) \big)_{k,i=1}^{K,n} \Big\|_F \Big/ \Big\| \big( F_i(x_k) \big)_{k,i=1}^{K,n} \Big\|_F.$$
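In Matlab, with the exact and reconstructed values tabulated as $K \times n$ arrays (the names Ftrue and Ftilde are assumptions of this sketch), these error measures read:
% Error measures (34) and the total error tau.
epsk = abs(Ftilde - Ftrue) ./ vecnorm(Ftrue, 2, 2);   % epsilon_k^(i), one row per snapshot
tau  = norm(Ftilde - Ftrue, 'fro') / norm(Ftrue, 'fro');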
In Figure 9, we show the errors ϵ k ( i ) for the intervals [ 0 , 0.1 ] and [ 0 , 0.18 ] ; in the case of the time interval [ 0 , 0.19 ] , the method broke down and the reconstructed values were computed as NaN’s.
The results depend on the initial conditions used to generate the sample trajectories. In the above experiments, each initial condition is taken as a normally distributed $3 \times 1$ vector generated in Matlab using randn. If we generate the initial conditions as uniformly distributed inside the sphere of radius 0.1, centered at the origin, then for all three test intervals the error $\tau$ was $O(10^{-3})$. With such initialization, the same level of accuracy is then maintained for larger time intervals, up to $[0, 0.4]$; for $[0, 0.41]$ the error increased to $\tau \approx 0.1$ and for $[0, 0.42]$ the result was NaN.
This example shows that numerical (software) implementation of the dual method requires additional analysis and modifications, similar to the main method. In the next section we explore numerical linear algebra techniques that could facilitate a more robust implementation.

7. Subspace Selection

The approximation (31) is based on a particular $K$-dimensional subspace of $\mathcal{F}_N$, where (in the case of a monomial basis) the choice of the new basis functions precludes direct coefficient comparisons with (17), (19). Instead, the identification procedure follows the lines of (30)–(33).
We can set up a more general framework: independently of the ratio $N/K$ (and independently of the formulation—original or dual), we can seek other suitable subspaces of $\mathcal{F}_N$, not necessarily of dimension $K$. The selection criterion is numerical: both the subspace and its dimension $\hat{N}$ should be determined with respect to the numerical conditioning of the matrix representations at the finite sequence $x_1, \dots, x_K$. A basis $(\psi_1, \dots, \psi_{\hat{N}})$ of such an $\hat{N}$-dimensional subspace $\mathcal{F}_{\hat{N}}$ of $\mathcal{F}_N$ is obtained by applying an $N \times \hat{N}$ selection operator (matrix) $S$ of rank $\hat{N}$ to the row vector of the original basis functions. The tabulated values of $(\psi_1, \dots, \psi_{\hat{N}})$ at $x_1, \dots, x_K$ are easily computed as $O_{X,S} = O_X S$. Similarly, the tabulated values of $(U^t \psi_1, \dots, U^t \psi_{\hat{N}})$ are $O_{Y,S} = O_Y S$.
Certainly, $\hat{N} \le \min(K, N)$. If, e.g., $K < N$, we aim at $\hat{N} = K$, but we leave the option that a numerical algorithm decides whether that choice is feasible, depending on the level of ill-conditioning.
The subspace selection operator $S$ should ensure that, if needed, the constructed basis of $\mathcal{F}_{\hat{N}}$ contains $\ell < \hat{N}$ a priori selected basis functions, with indices $i_1, \dots, i_\ell$ (recall the identification procedure outlined in Section 3.2), and that $O_{X,S}$ is well conditioned. This implicitly restricts $\ell$ to be at most $K$, and in practice $\ell \ll K$. Furthermore, the remaining $\hat{N} - \ell$ basis functions should be selected among the original basis functions. In other words, we seek $S$ as a selection of $\hat{N}$ columns of the identity $I_N$. This is achieved by the following two step procedure:
  • (Optional) Define $S_0$ as the $N \times N$ permutation matrix that moves the selected functions to the leading positions, i.e., such that the functions with indices $i_1, \dots, i_\ell$ occupy the leading $\ell$ positions of the reordered basis.
  • Add to these functions a selection of $\hat{N} - \ell$ functions that are most linearly independent in the orthogonal complement of the span of the $\ell$ selected functions, as seen at the discrete points $x_1, \dots, x_K$.
The second step, which can be designated as basis pruning, is based on numerical rank revealing techniques.

7.1. Pruning the Basis of $\mathcal{F}_N$

Removing the basis functions that carry numerically redundant information on the set $x_1, \dots, x_K$ can be automated using the rank revealing QR factorization [23] as follows. First, compute the QR factorization of the columns corresponding to the $\ell$ selected functions, with an optional column pivoting
$$(O_X S_0)(:, 1{:}\ell)\, \Pi_1 = Q_1 \begin{pmatrix} R_{11} \\ 0 \end{pmatrix} = \begin{pmatrix} \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \end{pmatrix} \begin{pmatrix} * & * \\ 0 & * \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \qquad (36)$$
and then apply $Q_1^*$ to the remaining $N - \ell$ columns to obtain
$$Q_1^* O_X S_0 (\Pi_1 \oplus I_{N-\ell}) = \begin{pmatrix} R_{11} & \widetilde{R}_{12} \\ 0 & \widetilde{R}_{22} \end{pmatrix} = \begin{pmatrix} * & * & \times & \times & \times & \times & \times & \times \\ 0 & * & \times & \times & \times & \times & \times & \times \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \end{pmatrix}. \qquad (37)$$
(The structure of the computed matrices is illustrated in (36), (37) for $K = 5$, $N = 8$, $\ell = 2$.) Now, the columns of $\widetilde{R}_{22}$ are the coordinates of the projections of the trailing $N - \ell$ columns of $O_X S_0$ onto the $(K - \ell)$-dimensional orthogonal complement of the span of the leading $\ell$ columns (of $O_X S_0$, at this moment assumed linearly independent). A well conditioned selection of $\hat{N} - \ell$ columns can be computed by another column pivoted (rank revealing) QR factorization
$$\widetilde{R}_{22}\, \Pi_2 = \begin{pmatrix} + & + & + & + & + & + \\ + & + & + & + & + & + \\ + & + & + & + & + & + \end{pmatrix} \Pi_2 = Q_2 \begin{pmatrix} R_{22} & R_{23} \end{pmatrix} = Q_2 \begin{pmatrix} \star & \star & \star & x & x & x \\ 0 & \star & \star & x & x & x \\ 0 & 0 & \star & x & x & x \end{pmatrix}.$$
Altogether, we have the factorization
$$\begin{pmatrix} I_\ell & 0 \\ 0 & Q_2^* \end{pmatrix} Q_1^*\, O_X S_0 \begin{pmatrix} \Pi_1 & 0 \\ 0 & I_{N-\ell} \end{pmatrix} \begin{pmatrix} I_\ell & 0 \\ 0 & \Pi_2 \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ 0 & R_{22} & R_{23} \end{pmatrix} = \begin{pmatrix} * & * & \times & \times & \times & \times & \times & \times \\ 0 & * & \times & \times & \times & \times & \times & \times \\ 0 & 0 & \star & \star & \star & x & x & x \\ 0 & 0 & 0 & \star & \star & x & x & x \\ 0 & 0 & 0 & 0 & \star & x & x & x \end{pmatrix},$$
and the leading $\hat{N}$ columns of $O_X S_0 (I_\ell \oplus \Pi_2)$ are the desired selection. Note that in this formula we have not used the permutation $\Pi_1 \oplus I_{N-\ell}$ of the first $\ell$ columns (here assumed linearly independent, so that $R_{11}$ is nonsingular), in order to respect the requested ordering $i_1, \dots, i_\ell$ encoded in $S_0$.

7.2. Well Conditioned Selection by Basis Pruning—General Case

In Section 7.1 we assumed that it was indeed possible to select $\hat{N}$ linearly independent columns from $O_X$. This may not be the case; moreover, mere linear independence is not enough in finite precision numerical computation. We need a well-conditioned selection of the columns of $O_X$, i.e., a well conditioned matrix $O_{X,S}$, but also a well conditioned $O_{Y,S}$. The rank revealing pivoting (materialized in the permutation matrices $\Pi_1$, $\Pi_2$) will provide the relevant information.
If the initially selected $\ell$ functions are nearly linearly dependent on the supplied snapshots, but $\hat{\ell} < \ell$ of them can be considered well conditioned, then on the diagonal of $R_{11}$ we will see that $|(R_{11})_{\hat{\ell}\hat{\ell}}| > \mathrm{tol} \ge |(R_{11})_{\hat{\ell}+1,\hat{\ell}+1}|$. In that case, we set $(R_{11})(\hat{\ell}+1{:}\ell, \hat{\ell}+1{:}\ell) = 0$, and the submatrix $\widetilde{R}_{22}$ is now defined as the subarray at the positions $(\hat{\ell}+1{:}K, \hat{\ell}+1{:}N)$. Since its first column is now zero, the pivoting $\Pi_2$ will eliminate it from further selections (unless the entire $\widetilde{R}_{22}$ is zero). To illustrate, assume that in (37) the $(2,2)$ position carries a value $\epsilon$ that is smaller than a prescribed threshold. Then, we have
$$Q_1^* O_X S_0 (\Pi_1 \oplus I_{N-\ell}) = \begin{pmatrix} R_{11} & \widetilde{R}_{12} \\ 0 & \widetilde{R}_{22} \end{pmatrix} = \begin{pmatrix} * & * & \times & \times & \times & \times & \times & \times \\ 0 & \epsilon & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \end{pmatrix} \approx \begin{pmatrix} * & * & \times & \times & \times & \times & \times & \times \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \\ 0 & 0 & + & + & + & + & + & + \end{pmatrix}.$$
Altogether, this discussion can be summarized in Algorithm 3.
Algorithm 3  $[S, \hat{\ell}, \hat{r}, \Pi_1, \Pi_2] = \mathrm{Subspace\_Selection}(O_X, S_0, \ell, \hat{N}, \mathrm{tol})$
Input: 
$O_X \in \mathbb{C}^{K \times N}$, $S_0$, $\ell$, $\hat{N}$, tol
  1:
(Optional) Reorder the snapshots by simultaneous row permutation of O X and O Y ; see Remark 6.
  2:
Bring the selected functions forward to the leading positions: $O_X = O_X S_0$. Implement $S_0$ as a sequence of swaps to avoid excess data movement (in the case of large dimensions).
  3:
$[Q_1, R_1, \Pi_1] = \mathrm{qr}(O_X(:, 1{:}\ell))$ {Rank revealing QR factorization with column pivoting. Overwrite $R_1 = (R_{11}^T, 0)^T$ over the leading $\ell$ columns of $O_X$. See (36).}
  4:
Determine the numerical rank $\hat{\ell}$ of $R_{11}$ and, in the case $\hat{\ell} < \ell$, set $R_{11}(\hat{\ell}+1{:}\ell, \hat{\ell}+1{:}\ell) = 0$.
  5:
$O_X(:, \ell+1{:}N) = Q_1^* O_X(:, \ell+1{:}N)$.
  6:
$[Q_2, R_2, \Pi_2] = \mathrm{qr}(O_X(\hat{\ell}+1{:}K, \hat{\ell}+1{:}N))$. {Rank revealing QR factorization with column pivoting. $R_2 = (R_{22}, R_{23})$ overwrites $O_X(\hat{\ell}+1{:}K, \hat{\ell}+1{:}N)$.}
  7:
Determine the numerical rank $\hat{r}$ of $R_{22}$. Set $\widetilde{N} = \hat{\ell} + \hat{r}$.
  8:
$S = \big( S_0 (\Pi_1 \oplus I_{N-\ell}) (I_{\hat{\ell}} \oplus \Pi_2) \big)(:, 1{:}\min(\hat{N}, \widetilde{N}))$;
Output: 
$S$, $\hat{\ell}$, $\hat{r}$, $\Pi_1$, $\Pi_2$.
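A simplified Matlab sketch of this selection, written under the assumptions that the $\ell$ preselected columns are well conditioned (so that $\hat{\ell} = \ell$, cf. Remark 9) and that permutations are represented by index vectors; the function name and interface are ours, not a reference implementation.
function [S, idx] = subspace_selection_sketch(OX, sel, Nhat, tol)
% OX: K x N tabulated basis values; sel: row vector with the indices of the ell
% functions that must be kept; Nhat: target dimension (>= ell); tol: hard threshold.
[~, N] = size(OX);
ell  = numel(sel);
rest = setdiff(1:N, sel);                    % candidate columns for pruning
[Q1, ~] = qr(OX(:, sel), 0);                 % orthonormal basis of the kept columns
T = OX(:, rest) - Q1 * (Q1' * OX(:, rest));  % projection onto the orthogonal complement
[~, R2, p2] = qr(T, 0);                      % rank revealing QR with column pivoting
d    = abs(diag(R2));
rhat = sum(d > tol * max(d(1), eps));        % numerical rank of R_22
idx  = [sel, rest(p2(1:min(rhat, Nhat - ell)))];
S    = eye(N);
S    = S(:, idx);                            % OX*S = OX(:,idx), OY*S = OY(:,idx)
end
With such an $S$, the identification then proceeds with $O_{X,S} = O_X S$ and $O_{Y,S} = O_Y S$ in place of $O_X$ and $O_Y$, as in Example 7 below.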

7.3. Implementation Details

Remark 9.
Note that the case $\hat{\ell} < \ell$ triggers an exception if the selected ℓ functions are essential in the overall computation, as for example in (17), (19), (30), (31). In the examples used in this paper $\hat{\ell} = \ell$, so that no additional action is needed.
Remark 10.
The columns of $O_X$ can be severely ill-conditioned and the rank revealing QR factorization should be carefully implemented [33]. The thresholding strategy can vary from soft and mild to hard, depending on the concrete example, see [34]. Algorithm 3 can be used to determine the numerical rank, but also to ensure that the condition number of the selected columns of $O_X$ is below a specified value. This can be efficiently implemented using incremental condition estimators tailored for triangular matrices.
Remark 11.
If the column dimension $\min(\hat{N}, \widetilde{N})$ of $\hat{O}_X = O_X S$ and $\hat{O}_Y = O_Y S$ is at most $K$, then we can also apply the original approach from Section 2. Furthermore, the pruning scheme can also be directly applied to the original method.
Remark 12.
Suppose that $N \gg K$ and that $K$ itself is moderate or large. Then computational complexity is an issue and the algorithm can be modified as follows:
In Line 3, ℓ is expected to be small or moderate compared to $K$, so that this step can be efficiently implemented using LAPACK (the functions xGEQRF and xGEQP3) or ScaLAPACK using PxGEQPF (but using the most recent implementation [35]). In the second part of the algorithm, we need a well-conditioned submatrix of a $(K - \hat{\ell}) \times (N - \hat{\ell})$ submatrix of $Q_1^* O_X(:, \ell+1{:}N)$. To do that, we do not need to compute the whole matrix. We can apply the scheme from [36] and sample the columns of $Q_1^* O_X(:, \ell+1{:}N)$ until we form a well-conditioned $R_{22}$.

7.4. Numerical Experiments with the Dictionary Pruning Algorithm

Example 7.
(Continuation of Example 6) We use the same data as in Example 6, but instead of $O_X$ and $O_Y$ we use the $K$-column submatrices $O_{X,S}$, $O_{Y,S}$, selected by Algorithm 3 with the requirement that the first $n+1$ basis functions must be kept in the subspace $\mathcal{F}_K$. More precisely, we set $\hat{N} = K$, and the numerical rank is determined with tol = 0, so that $\widetilde{N} = \hat{N} = K$. The results in Figure 10 show a significant improvement.
Example 8.
The purpose of this example is to illustrate the robustness of the proposed algorithm: we take the sampling interval as large as $[0, 30]$ or $[0, 50]$, with the resolution of the numerical simulation $\delta t = 0.01$ and $\delta t = 0.1$, and the samples are taken, as before, at randomly selected time instances. For illustration, the time stamps of the snapshots are marked on the first generated trajectory (with $\delta t = 0.01$) and shown in the first row in Figure 11.
The accuracy is satisfactory, given the length of the interval ( [ 0 , 30 ] ), the discretization step of the simulation ( δ t = 0.01 , δ t = 0.1 ) and the number of samples. Next, we increase the interval to [ 0 , 50 ] ; the results are in Figure 12.
Now, we reduce the number of samples—from each trajectory we sample at five positions (instead of ten), giving a total of 180 instead of 360 snapshots $x_k$. The results are summarized in Figure 13.
In all examples, the rank revealing was conducted with a hard threshold, with no attempt in the direction of strong rank revealing, which could further improve the numerical accuracy. The details are omitted for the sake of brevity and will be available in our future work.

8. Concluding Remarks

In this work we provided the first steps towards a robust numerical software implementation of the method of [8] for identification of dynamical systems using the infinitesimal generator of the Koopman semigroup. Adversarial testing using polynomial bases revealed critical numerical issues, which we addressed in detail. We proposed two techniques: preconditioning and basis pruning. These form the basis of a new software implementation that will include other choices of basis functions, including data-driven/empirical constructions of the bases. This is the subject of our ongoing and planned future work, which also includes stochastic models.

Author Contributions

Conceptualization, Z.D., I.M. and R.M.; methodology, Z.D.; formal analysis, Z.D., I.M. and R.M.; software, Z.D.; writing—original draft preparation, Z.D.; writing—review and editing, Z.D., I.M. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the DARPA contract HR0011-18-9-0033.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors thank Alexandre Mauroy (University of Namur, Belgium) for his remarks and insights that helped us to improve the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Montáns, F.J.; Chinesta, F.; Gómez-Bombarelli, R.; Kutz, J.N. Data-driven modeling and learning in science and engineering. C. R. Mécanique 2019, 347, 845–855. [Google Scholar] [CrossRef]
  2. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937. Available online: https://www.pnas.org/content/113/15/3932.full.pdf (accessed on 20 June 2021). [CrossRef] [Green Version]
  3. Chartrand, R. Numerical differentiation of noisy, nonsmooth data. ISRN Appl. Math. 2011, 2011, 164564. [Google Scholar] [CrossRef] [Green Version]
  4. Messenger, D.A.; Bortz, D.M. Weak SINDy: Galerkin-based data-driven model selection. arXiv 2020, arXiv:2005.04339. [Google Scholar]
  5. Rudy, S.H.; Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Data-driven discovery of partial differential equations. Sci. Adv. 2017, 3. Available online: https://advances.sciencemag.org/content/3/4/e1602614.full.pdf (accessed on 20 June 2021). [CrossRef] [Green Version]
  6. Goyal, P.; Benner, P. Discovery of nonlinear dynamical systems using a Runge-Kutta inspired dictionary-based sparse regression approach. arXiv 2021, arXiv:2105.04869. [Google Scholar]
  7. Raissi, M.; Perdikaris, P.; Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  8. Mauroy, A.; Goncalves, J. Koopman-based lifting techniques for nonlinear systems identification. IEEE Trans. Autom. Control 2020, 65, 2550–2565. [Google Scholar] [CrossRef] [Green Version]
  9. Engel, K.J.; Nagel, R. One-Parameter Semigroups for Linear Evolution Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999; Volume 194. [Google Scholar]
  10. Budišić, M.; Mohr, R.; Mezić, I. Applied Koopmanism. Chaos Interdiscip. J. Nonlinear Sci. 2012, 22, 047510. [Google Scholar] [CrossRef] [Green Version]
  11. Lasota, A.; Mackey, M.C. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
  12. Mohr, R.; Drmač, Z.; Mezić, I.; Fonoberova, M. Composition Operator for Static Data; AIMDyn Tech Report; AIMDyn Inc.: Santa Barbara, CA, USA, 2019. [Google Scholar]
  13. Niemann, J.H.; Klus, S.; Schütte, C. Data-driven model reduction of agent-based systems using the Koopman generator. PLoS ONE 2021, 16, e0250970. [Google Scholar] [CrossRef]
  14. Klus, S.; Nüske, F.; Peitz, S.; Niemann, J.H.; Clementi, C.; Schütte, C. Data-driven approximation of the Koopman generator: Model reduction, system identification, and control. Phys. D Nonlinear Phenom. 2020, 406, 132416. [Google Scholar] [CrossRef] [Green Version]
  15. Metzner, P.; Horenko, I.; Schütte, C. Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time. Phys. Rev. E 2007, 76, 066702. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Siefert, M.; Kittel, A.; Friedrich, R.; Peinke, J. On a quantitative method to analyze dynamical and measurement noise. EPL (Europhys. Lett.) 2007, 61, 466. [Google Scholar] [CrossRef] [Green Version]
  17. Friedrich, R.; Peinke, J.; Sahimi, M.; Reza Rahimi Tabar, M. Approaching complexity by stochastic methods: From biological systems to turbulence. Phys. Rep. 2011, 506, 87–162. [Google Scholar] [CrossRef]
  18. Callaham, J.L.; Loiseau, J.C.; Rigas, G.; Brunton, S.L. Nonlinear stochastic modelling with Langevin regression. Proc. R. Soc. A Math. Phys. Eng. Sci. 2021, 477, 20210092. [Google Scholar] [CrossRef]
  19. Riseth, A.N.; Taylor-King, J.P. Operator fitting for parameter estimation of stochastic differential equations. arXiv 2017, arXiv:1709.05153. [Google Scholar]
  20. Björck, A. Numerical Methods in Matrix Computations; Springer: Berlin/Heidelberg, Germany, 2015; Volume 59. [Google Scholar] [CrossRef]
  21. Williams, M.O.; Kevrekidis, I.G.; Rowley, C.W. A data–driven approximation of the Koopman operator: Extending Dynamic Mode Decomposition. J. Nonlinear Sci. 2015, 25, 1307–1346. [Google Scholar] [CrossRef] [Green Version]
  22. Mezić, I.; Drmač, Z.; Črnjarić Žic, N.; Maćešić, S.; Fonoberova, M.; Mohr, R.; Avila, A.M.; Manojlović, I.; Andrejčuk, A. A Koopman operator-based prediction algorithm and its application to COVID-19 pandemic. Nat. Commun. submitted.
  23. Businger, P.A.; Golub, G.H. Linear least squares solutions by Householder transformations. Numer. Math. 1965, 7, 269–276. [Google Scholar] [CrossRef]
  24. Culver, W.J. On the existence and uniqueness of the real logarithm of a matrix. Proc. Am. Math. Soc. 1966, 17, 1146–1151. [Google Scholar] [CrossRef]
  25. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008; p. xx+425. [Google Scholar]
  26. Kinyon, M.; Sagle, A. Quadratic dynamical systems and algebras. J. Differ. Equations 1995, 117, 67–126. [Google Scholar] [CrossRef] [Green Version]
  27. Demmel, J.W. The geometry of ill-conditioning. J. Complex. 1987, 3, 201–229. [Google Scholar] [CrossRef] [Green Version]
  28. Rust, B.W. Truncating the Singular Value Decomposition for Ill-Posed Problems; Technical Report NISTIR 6131; Mathematical and Computational Sciences Division, National Institute of Standards and Technology, U.S. Department of Commerce, NIST: Gaithersburg, MD, USA, 1998.
  29. Drmač, Z.; Mezić, I.; Mohr, R. Data driven modal decompositions: Analysis and enhancements. SIAM J. Sci. Comput. 2018, 40, A2253–A2285. [Google Scholar] [CrossRef]
  30. Chandrasekaran, S.; Ipsen, I.C.F. On rank–revealing factorizations. SIAM J. Matrix Anal. Appl. 1994, 15, 592–622. [Google Scholar] [CrossRef] [Green Version]
  31. Powell, M.J.D.; Reid, J.K. On applying Householder transformations to linear least squares problems. In Information Processing 68, Proceedings of the International Federation of Information Processing Congress, Edinburgh, UK, 5–10 August 1968; IFIP: Amsterdam, The Netherlands, 1969; pp. 122–126. [Google Scholar]
  32. Cox, A.J.; Higham, N.J. Stability of Householder QR factorization for weighted least squares problems. In Numerical Analysis 1997, Proceedings of the 17th Dundee Biennial Conference, Dundee, UK, 24–27 June 1997; Griffiths, D.F., Higham, D.J., Watson, G.A., Eds.; Pitman Research Notes in Mathematics; Pitman: London, UK, 1998; Volume 380, pp. 57–73. [Google Scholar]
  33. Drmač, Z.; Bujanović, Z. On the failure of rank revealing QR factorization software—A case study. ACM Trans. Math. Softw. 2008, 35, 1–28. [Google Scholar] [CrossRef]
  34. Drmač, Z.; Šain Glibić, I. New numerical algorithm for deflation of infinite and zero eigenvalues and full solution of quadratic eigenvalue problems. arXiv 2019, arXiv:1904.05418. [Google Scholar]
  35. Bujanović, Z.; Drmač, Z. New robust ScaLAPACK routine for computing the QR factorization with column pivoting. arXiv 2019, arXiv:1910.05623. [Google Scholar]
  36. Drmač, Z.; Gugercin, S. A new selection operator for the discrete empirical interpolation method—improved a priori error bound and extensions. SIAM J. Sci. Comput. 2016, 38, A631–A648. [Google Scholar] [CrossRef]
Figure 1. (Example 2, m = 3.) First row: samples of $x_k$ from the first three trajectories, using two different sampling schemes. Second row: the corresponding values of $\log_{10} \epsilon_k$ defined in (24) for 12,000 randomly selected points in the box $[-20, 20] \times [-20, 20] \times [0, 50]$.
Figure 2. (Example 3, m = 9.) The values of $\log_{10} \epsilon_k$ defined in (24) for 12,000 randomly selected points in the box $[-20, 20] \times [-20, 20] \times [0, 50]$. Left panel: with m = 3 as in Example 1. Right panel: with m = 9.
Figure 3. (Example 3, m = 9 .) Left panel: the (computed) eigenvalues of the matrix representation of the computed compression U N of U t . The red cross at the origin indicates a cluster of eigenvalues. Right panel: zoomed neighborhood of the origin, showing many (in this case 44) absolutely small eigenvalues, quite a few of which are negative real. The matrix U N is computed as ( pinv ( O X ) O Y ) . If it is computed as ( O X \ O Y ) , instead of the cluster around zero, the eigenvalue zero appears with multiplicity 45.
Figure 4. (Example 3, m = 9.) The (computed) eigenvalues of the matrix $L_N$. Note that some of them are at the boundary of the strip $\{ z \in \mathbb{C} : -\pi/\delta t < \Im(z) < \pi/\delta t \}$, i.e., the eigenvalues of $\log U_N$ are at the boundary of $\{ z \in \mathbb{C} : -\pi < \Im(z) < \pi \}$, cf. Theorem 3. The right panel shows the distribution of the eigenvalues closer to the origin. Compare with Figure 3.
Figure 5. (Example 3, m = 9 .) Left panel: the first column of U N , computed in Matlab as pinv ( O X ) O Y . Its norm is U N ( : , 1 ) 2 5.0350 × 10 4 . Right panel: the first column of U N = O X \ O Y , with norm U N ( : , 1 ) 2 5.9611 × 10 4 . The true value of U N ( : , 1 ) should be e 1 = ( 1 , 0 , , 0 ) T .
Figure 6. (Example 3, m = 9 .) The sparsity structure of pinv ( O X ) O Y (numerical rank 176) and O X \ O Y (numerical rank 175). The backslash operator uses the rank revealing (column pivoted) QR factorization and, by truncation, returns a sparse solution.
Figure 7. The (computed) singular values of O X R 3025 × 220 and U N R 220 × 220 , and the column norms. The numerical rank of O X and U N is 176, which is considerably below the full rank 220. If U N is computed as O X \ O Y , the numerical rank is 175. The results are similar (with the numerical rank 163) if the number of snapshots is increased to 24,025 as explained in Example 3.
Figure 8. Left panel: The (computed) eigenvalues of $U_N \in \mathbb{R}^{220 \times 220}$ (×), and the eigenvalues of $\exp(\delta t\, L_N)$. The maximal relative difference between the matching eigenvalues is computed as $8.1 \times 10^{-9}$. Here, $U_N$ is computed as $U_N = \Pi R^{-1} Q^T O_Y$ without any truncation of $R$. The diagonal entries of $R$ span, in absolute value, the range between 1.9 × 10 16 and 1.0 × 10 1. This reveals the condition number of $U_N$, which is computed as $7.4 \times 10^{16}$ by the Matlab function cond(). Right panel: The values of $\log_{10} \epsilon_k$ defined in (24) for 12,000 randomly selected points in the box $[-20, 20] \times [-20, 20] \times [0, 50]$. Compare with Figure 3, Figure 4 and Figure 7.
Figure 9. The errors $\epsilon_k^{(i)}$ (34). Left panel: time interval $[0, 0.1]$. The total error is $\tau = 2.2259 \times 10^{-2}$. Right panel: time interval $[0, 0.18]$. The total error is $\tau = 4.6174 \times 10^{-2}$. Similar results are obtained using logm(Y*pinv(X)). On the other hand, while on the interval $[0, 0.19]$ logm(Y/X) fails, producing NaN's, logm(Y*pinv(X)) completed without NaN exceptions, albeit with the error $\tau = 5.1718 \times 10^{-2}$ and with $\epsilon_k^{(i)}$ similar to that in the right panel above.
Figure 10. (Example 7.) The errors $\epsilon_k^{(i)}$ (34). Left panel: time interval $[0, 0.1]$. The total error is $\tau = 1.1516 \times 10^{-2}$ ($1.2406 \times 10^{-2}$ for logm(Y*pinv(X))). Right panel: time interval $[0, 0.18]$. The total error is $\tau = 1.7873 \times 10^{-2}$ ($9.4987 \times 10^{-2}$ for logm(Y*pinv(X))). Compare this with Figure 9.
Figure 11. (Example 8.) First row: the time stamps of $x_1, \dots, x_{360}$, illustrated on the first out of 12 generated trajectories. Three consecutive snapshots, with time lag $\delta t = 0.01$, are taken at ten randomly selected and fixed time instances. Second row: The first plot shows the relative errors $\epsilon_k^{(i)}$ with $\delta t = 0.01$, and the second plot for $\delta t = 0.1$ (sampled at other randomly selected times). The total errors are $\tau = 1.4893 \times 10^{-1}$ and $\tau = 8.6613 \times 10^{-1}$, respectively.
Figure 12. (Example 8.) The first plot shows the relative errors $\epsilon_k^{(i)}$ with $\delta t = 0.01$, and the second plot for $\delta t = 0.1$ (sampled at other randomly selected times in $[0, 50]$). The total errors are $\tau = 1.8672 \times 10^{-1}$ and $\tau = 1.8278 \times 10^{0}$, respectively. In the second plot, we used the real parts of the computed approximations $\widetilde{F_i(x_k)}$, as a non-principal (non-real) value of the logarithm was computed.
Figure 13. (Example 8.) The first plot shows the five positions at a sample trajectory where three consecutive snapshots are taken. The matrix $O_X$ is $180 \times 455$. The reduced dimension $\widetilde{N}$ is 64. The second plot shows the relative errors $\epsilon_k^{(i)}$ with $\delta t = 0.01$. The total error is $\tau = 4.4319 \times 10^{-1}$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
