Article

Cost Function Approach for Dynamical Component Analysis: Full Recovery of Mixing and State Matrix

1 Institute of Mathematics, Julius-Maximilians-Universität, Campus Hubland Nord, Emil-Fischer-Straße 31, 97074 Würzburg, Germany
2 Center for Signal Analysis of Complex Systems, Ansbach University of Applied Sciences, Residenzstr. 8, 91522 Ansbach, Germany
* Author to whom correspondence should be addressed.
Automation 2024, 5(3), 360-372; https://doi.org/10.3390/automation5030022
Submission received: 21 June 2024 / Revised: 26 July 2024 / Accepted: 28 July 2024 / Published: 1 August 2024

Abstract

A reformulation of dynamical component analysis (DyCA) via an optimization-free approach is presented. The original cost function approach is converted into a numerical linear algebra problem, i.e., the computation of coupled singular-value decompositions. A simple algorithm is presented, together with numerical experiments, to document the feasibility of the approach. This methodology is able to fully recover the mixing and state matrices of multivariate signals from high-dimensional measured data.

1. Introduction

An important task in signal processing is the decomposition of a multivariate signal for the analysis of measured or simulated data leading to the possible detection of the relevant subspace or the sources of the signal. Recently, a new method—dynamical component analysis (DyCA)—based on modeling the signal via two coupled systems of ordinary differential equations (ODE) was introduced. One system is governed by time-invariant linear dynamics, whereas the second one is defined by an unknown non-linear vector field, assumed to be smooth. Its derivation and its features have been presented in depth (see [1,2]). The presented algorithm was nearly as simple as principal component analysis (PCA) or certain independent component analysis (ICA) approaches. The results obtained via DyCA, however, yield deeper insight into the underlying dynamics of the data. Moreover, as demonstrated by several examples in [2], typically, neither ICA nor PCA approaches are able to capture the linear/non-linear character of the underlying dynamics.
The present work, in particular, is partially based on two conference papers [3,4]. Moreover, our objective is to reformulate the original cost function approach for DyCA— formerly leading to a generalized eigenvalue, or more generally, to an invariant eigenspace problem—into an inverse-problem-type formulation, which allows for the recovery of the state and mixing matrices from high-dimensional matrix-valued time series.
This paper is organized as follows. First, the general problem is briefly reviewed; the cost function is discussed in detail; and, in particular, the critical points are analyzed. Second, we formulate an optimization-free algorithm, mainly based on solving coupled singular-value decompositions. Finally, we present numerical experiments to support our approach.

2. Problem Formulation

Consider a signal $Q = [q(t_1), \ldots, q(t_T)] \in \mathbb{R}^{N \times T}$ and its derivative with respect to time, denoted by $\dot{Q} \in \mathbb{R}^{N \times T}$.
Let $1 \le m \le n \le N$. Assume that $Q$ and $\dot{Q}$ are of the form
$$Q = W X, \qquad \dot{Q} = W \dot{X}, \tag{1}$$
where $W \in \mathbb{R}^{N \times n}$ is a constant matrix of rank $n = \operatorname{rank}(W)$, and $X = [x(t_1), \ldots, x(t_T)],\ \dot{X} = [\dot{x}(t_1), \ldots, \dot{x}(t_T)] \in \mathbb{R}^{n \times T}$ are samples of $x \colon [t_1, t_T] \to \mathbb{R}^n$ fulfilling the ODE
$$[I_m, 0]\, \dot{x}(t) = A\, x(t), \qquad [0, I_{n-m}]\, \dot{x}(t) = f(x(t)). \tag{2}$$
Here, $A \in \mathbb{R}^{m \times n}$ is a constant matrix and $f \colon \mathbb{R}^n \to \mathbb{R}^{n-m}$ is an unknown smooth function. Under these assumptions, we formulate the problem that will be addressed in the sequel.
Problem 1
(DyCA).  
  • Given: a signal $Q \in \mathbb{R}^{N \times T}$, its derivative $\dot{Q} \in \mathbb{R}^{N \times T}$, and $1 \le m \le n \le N$.
  • Find: estimates, in a least-squares sense, for $A \in \mathbb{R}^{m \times n}$, $W \in \mathbb{R}^{N \times n}$, and $X, \dot{X} \in \mathbb{R}^{n \times T}$ according to the above assumptions.
Defining $f(X) := [f(x_1), \ldots, f(x_T)] \in \mathbb{R}^{(n-m) \times T}$, we obtain the following via Equations (1) and (2):
$$Q = W X, \qquad \dot{Q} = W \begin{bmatrix} A X \\ f(X) \end{bmatrix}. \tag{3}$$
We will propose a method by which to solve Problem 1, assuming exact data.
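To fix ideas, the data model of Equations (1) and (2) can be sketched numerically. The following Python/NumPy snippet is an illustrative stand-in (the trajectory, the dimensions $N = 30$, $n = 3$, and the random seed are our own assumptions, not taken from the paper): it mixes a hypothetical low-dimensional trajectory into $\mathbb{R}^{30}$ and checks that $Q$ indeed lives on an $n$-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, T = 30, 3, 500

# Hypothetical low-dimensional trajectory x(t) and its derivative,
# sampled at T time steps (stand-ins for a solution of Eq. (2)).
t = np.linspace(0.0, 10.0, T)
X = np.vstack([np.sin(t), np.cos(t), np.sin(2.0 * t)])            # n x T
Xdot = np.vstack([np.cos(t), -np.sin(t), 2.0 * np.cos(2.0 * t)])

# Constant mixing matrix W of full column rank n.
W = rng.uniform(-0.5, 0.5, (N, n))

# Measured signal and its derivative according to Eq. (1).
Q, Qdot = W @ X, W @ Xdot                                         # N x T

assert np.linalg.matrix_rank(Q) == n   # Q spans an n-dimensional subspace
```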

3. Cost Function

We approach the DyCA problem by minimizing a suitable cost function. Similar to [5], we fit the part of the data corresponding to the linear part of the ODE by minimizing the cost
$$f \colon \mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n} \to \mathbb{R}, \quad (W, A) \mapsto \big\| [I_m, 0]\, \dot{X} - A X \big\|_F^2. \tag{4}$$
Here, $X$ and $\dot{X}$ depend implicitly on $W$.
To derive a more explicit expression for Equation (4), we rewrite Equation (3) by considering thin singular-value decompositions (SVDs) of $Q$ and $\dot{Q}$, respectively. In more detail, let
$$\theta_0 \Sigma_0 \Xi_0^\top = Q \overset{(3)}{=} W X \tag{5}$$
be a thin SVD of $Q$, where $\theta_0 \in \mathrm{St}_{N,n}$, $\Sigma_0 \in \mathbb{R}^{n \times n}$ is diagonal, and $\Xi_0 \in \mathrm{St}_{T,n}$. Analogously, let
$$\theta_2 \Sigma_2 \Xi_2^\top = \dot{Q} \overset{(3)}{=} W \begin{bmatrix} A X \\ f(X) \end{bmatrix} \tag{6}$$
be a thin SVD of $\dot{Q}$, where $\theta_2 \in \mathrm{St}_{N,n}$, $\Sigma_2 \in \mathbb{R}^{n \times n}$ is diagonal, and $\Xi_2 \in \mathrm{St}_{T,n}$. As usual (cf., e.g., [4]), the Stiefel manifold is denoted by
$$\mathrm{St}_{a,b} := \{ X \in \mathbb{R}^{a \times b} \mid X^\top X = I_b \}, \tag{7}$$
i.e., a differentiable submanifold of the vector space $\mathbb{R}^{a \times b}$ with $a \ge b$, consisting of rectangular matrices with orthonormal columns. Exploiting $\theta_i^\top \theta_i = I_n$ for $i \in \{0, 2\}$, we obtain via Equation (5)
$$(\theta_0^\top W)\, X = \Sigma_0 \Xi_0^\top, \tag{8}$$
( θ 0 W ) X = Σ 0 Ξ 0 ,
while Equation (6) yields
$$(\theta_2^\top W)\, \dot{X} = \Sigma_2 \Xi_2^\top. \tag{9}$$
For $i \in \{0, 2\}$, we set
$$G_i = G_i(W) = G_i(\theta_i, W) := \theta_i^\top W \in \mathbb{R}^{n \times n}. \tag{10}$$
Via the assumptions imposed in the formulation of the DyCA problem, we have, for $i \in \{0, 2\}$,
$$\operatorname{span}(W) = \operatorname{span}(\theta_i), \tag{11}$$
yielding $G_i = \theta_i^\top W \in GL(n)$; i.e., $G_i$ is invertible.
Hence, solving Equation (5) as well as Equation (6) for $X$ and $\dot{X}$, respectively, and substituting the result into Equation (4) yields the smooth cost
$$f \colon \mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n} \to \mathbb{R}, \quad (W, A) \mapsto f(W, A) := \big\| [I_m, 0]\, G_2^{-1} \Sigma_2 \Xi_2^\top - A\, G_0^{-1} \Sigma_0 \Xi_0^\top \big\|_F^2, \tag{12}$$
where $G_0 = G_0(W)$ and $G_2 = G_2(W)$ are given via Equation (10).
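The identities underlying Equation (12) are easy to check numerically. The sketch below (Python/NumPy; the toy trajectory, dimensions, and seed are our own illustrative assumptions) builds exact data $Q = WX$, computes a rank-$n$ thin SVD, and verifies Equation (8), i.e., that $X = G_0^{-1} \Sigma_0 \Xi_0^\top$ recovers the low-dimensional trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, T = 30, 3, 400

# Exact toy data Q = W X (trajectory and dimensions are assumptions).
t = np.linspace(0.0, 8.0, T)
X = np.vstack([np.sin(t), np.cos(t), np.sin(3.0 * t)])
W = rng.standard_normal((N, n))
Q = W @ X

# Rank-n thin SVD of Q: theta0 in St(N,n), Sigma0 diagonal, Xi0 in St(T,n).
U_, s_, Vt_ = np.linalg.svd(Q, full_matrices=False)
theta0, Sigma0, Xi0 = U_[:, :n], np.diag(s_[:n]), Vt_[:n].T

# G0 = theta0^T W is invertible, since span(W) = span(theta0); cf. Eq. (11).
G0 = theta0.T @ W
assert abs(np.linalg.det(G0)) > 1e-12

# Eq. (8): (theta0^T W) X = Sigma0 Xi0^T, hence X = G0^{-1} Sigma0 Xi0^T.
X_rec = np.linalg.solve(G0, Sigma0 @ Xi0.T)
assert np.allclose(X_rec, X)
```

The derivative $\dot{Q}$ is treated analogously via Equation (9).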
Remark 1.
Essentially, the cost function Equation (12) considered here is a reformulation of the one considered earlier (see [2,5] and several follow-up papers, in particular, e.g., [3,6]). There is, however, an important difference: the reformulation here takes, in some sense, the inverse-problem character explicitly into consideration.
Remark 2.
Strictly speaking, the cost $f$ defined in Equation (12) is not defined on the whole space $\mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n}$ but only on the subset given by $\mathcal{U} \times \mathbb{R}^{m \times n}$, where
$$\mathcal{U} = \big\{ W \in \mathbb{R}^{N \times n} \mid \theta_0^\top W \in GL(n) \text{ and } \theta_2^\top W \in GL(n) \big\} = \phi_0^{-1}\big(\mathbb{R} \setminus \{0\}\big) \cap \phi_2^{-1}\big(\mathbb{R} \setminus \{0\}\big), \tag{13}$$
where $\phi_i \colon \mathbb{R}^{N \times n} \to \mathbb{R}$ are the continuous functions defined for $i \in \{0, 2\}$ by
$$\phi_i(W) = \det(\theta_i^\top W). \tag{14}$$
Note that $\mathcal{U} \subseteq \mathbb{R}^{N \times n}$ is open by the second equality in Equation (13) and the continuity of $\phi_i$. Thus, the domain of $f$, namely, $\mathcal{U} \times \mathbb{R}^{m \times n}$, is an open subset of $\mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n}$.
Notation 1.
From now on, if not indicated otherwise, $\mathcal{U} \times \mathbb{R}^{m \times n} \subseteq \mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n}$ denotes the domain of $f$ as characterized in Remark 2.
In the sequel, through abuse of notation, we sometimes write $f \colon \mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n} \to \mathbb{R}$ instead of $f \colon \mathcal{U} \times \mathbb{R}^{m \times n} \to \mathbb{R}$.

4. Analysis of the Cost

4.1. Derivatives

Obviously, $f \colon \mathcal{U} \times \mathbb{R}^{m \times n} \to \mathbb{R}$ is a smooth function. To obtain candidates for points $(W, A) \in \mathcal{U} \times \mathbb{R}^{m \times n}$ where $f$ attains a minimum, we search for critical points of $f$.
As a preparation for computing the derivative of $f$, we recall the following well-known lemma.
Lemma 1.
The derivative of
$$\mathrm{inv} \colon GL(n) \to GL(n), \quad A \mapsto \mathrm{inv}(A) = A^{-1}, \tag{15}$$
evaluated at $A \in GL(n)$ in the direction $B \in \mathbb{R}^{n \times n}$, is given by
$$D\,\mathrm{inv}(A)\, B = -A^{-1} B A^{-1}. \tag{16}$$
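Lemma 1 can be sanity-checked by finite differences. The snippet below is purely illustrative (arbitrary test matrices, our own step size): it compares a forward difference of $\mathrm{inv}$ against the claimed derivative $-A^{-1} B A^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)   # safely invertible
B = rng.standard_normal((n, n))

A_inv = np.linalg.inv(A)
deriv = -A_inv @ B @ A_inv            # D inv(A) B according to Lemma 1

# First-order check: inv(A + h B) ~ inv(A) + h * deriv, error O(h).
h = 1e-6
fd = (np.linalg.inv(A + h * B) - A_inv) / h
assert np.linalg.norm(fd - deriv) < 1e-4
```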
Lemma 2.
Let $f \colon \mathcal{U} \times \mathbb{R}^{m \times n} \to \mathbb{R}$ be defined via Equation (12) and set
$$\Xi = \Xi_2^\top \Xi_0 \in \mathbb{R}^{n \times n}. \tag{17}$$
Moreover, let $(W, A) \in \mathcal{U} \times \mathbb{R}^{m \times n}$ and $(w, a) \in \mathbb{R}^{N \times n} \times \mathbb{R}^{m \times n}$. Then, the derivative of $f(\cdot, A) \colon \mathbb{R}^{N \times n} \to \mathbb{R}$ at $W \in \mathcal{U}$ in the direction $w \in \mathbb{R}^{N \times n}$ is given by
$$\begin{aligned}
D_1 f(W, A)\, w ={}& -2 \operatorname{tr}\!\Big( G_2^{-1} \theta_2^\top w\, G_2^{-1} \Sigma_2^2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] \Big) + 2 \operatorname{tr}\!\Big( G_2^{-1} \theta_2^\top w\, G_2^{-1} \Sigma_2 \Xi \Sigma_0 G_0^{-\top} A^\top [I_m, 0] \Big) \\
&+ 2 \operatorname{tr}\!\Big( G_2^{-1} \Sigma_2 \Xi \Sigma_0 G_0^{-\top} w^\top \theta_0 G_0^{-\top} A^\top [I_m, 0] \Big) - 2 \operatorname{tr}\!\Big( G_0^{-1} \theta_0^\top w\, G_0^{-1} \Sigma_0^2 G_0^{-\top} A^\top A \Big),
\end{aligned} \tag{18}$$
and the derivative of $f$ with respect to the second argument, i.e., the derivative of the function $f(W, \cdot) \colon \mathbb{R}^{m \times n} \to \mathbb{R}$ at $A \in \mathbb{R}^{m \times n}$ in the direction $a \in \mathbb{R}^{m \times n}$, reads
$$D_2 f(W, A)\, a = -2 \operatorname{tr}\!\Big( G_2^{-1} \Sigma_2 \Xi \Sigma_0 G_0^{-\top} a^\top [I_m, 0] \Big) + 2 \operatorname{tr}\!\Big( G_0^{-1} \Sigma_0^2 G_0^{-\top} a^\top A \Big). \tag{19}$$
Proof. 
Expanding
$$f(W, A) = \operatorname{tr}\!\Big( G_2^{-1} \Sigma_2^2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] \Big) - 2 \operatorname{tr}\!\Big( G_2^{-1} \Sigma_2 \Xi_2^\top \Xi_0 \Sigma_0 G_0^{-\top} A^\top [I_m, 0] \Big) + \operatorname{tr}\!\Big( G_0^{-1} \Sigma_0^2 G_0^{-\top} A^\top A \Big) \tag{20}$$
and using Lemma 1, we obtain Equations (18) and (19) via a tedious but straightforward calculation.    □
Using Lemma 2, we search for critical points of $f$. Obviously, $(W, A) \in \mathcal{U} \times \mathbb{R}^{m \times n}$ is a critical point of $f$ iff the two conditions
$$D_1 f(W, A)\, w = 0, \qquad D_2 f(W, A)\, a = 0 \tag{21}$$
hold for all $w \in \mathbb{R}^{N \times n}$ and $a \in \mathbb{R}^{m \times n}$.
Via Equation (18), we obtain $D_1 f(W, A)\, w = 0$ for all $w \in \mathbb{R}^{N \times n}$ iff
$$\begin{aligned}
0 ={}& -G_2^{-1} \Sigma_2^2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \theta_2^\top + G_2^{-1} \Sigma_2 \Xi \Sigma_0 G_0^{-\top} A^\top [I_m, 0]\, G_2^{-1} \theta_2^\top \\
&+ G_0^{-1} \Sigma_0 \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m \\ 0 \end{smallmatrix}\big] A\, G_0^{-1} \theta_0^\top - G_0^{-1} \Sigma_0^2 G_0^{-\top} A^\top A\, G_0^{-1} \theta_0^\top
\end{aligned} \tag{22}$$
is satisfied. Clearly, Equation (22) is equivalent to
$$G_2^{-1} \Big( \Sigma_2 \Xi \Sigma_0 G_0^{-\top} A^\top [I_m, 0] - \Sigma_2^2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] \Big) G_2^{-1} \theta_2^\top = G_0^{-1} \Big( \Sigma_0^2 G_0^{-\top} A^\top A - \Sigma_0 \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m \\ 0 \end{smallmatrix}\big] A \Big) G_0^{-1} \theta_0^\top. \tag{23}$$
Similarly, we obtain via Equation (19) that $D_2 f(W, A)\, a = 0$ is fulfilled for all $a \in \mathbb{R}^{m \times n}$ iff
$$A\, G_0^{-1} \Sigma_0^2 G_0^{-\top} = [I_m, 0]\, G_2^{-1} \Sigma_2 \Xi \Sigma_0 G_0^{-\top} \tag{24}$$
holds. Because $G_0, \Sigma_0 \in GL(n)$ are invertible, Equation (24) is equivalent to
$$A = [I_m, 0]\, G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0. \tag{25}$$
Moreover, Equation (25) implies
$$\begin{aligned}
A^\top [I_m, 0] &= G_0^\top \Sigma_0^{-1} \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big], \\
\big[\begin{smallmatrix} I_m \\ 0 \end{smallmatrix}\big] A &= \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0, \\
A^\top A &= G_0^\top \Sigma_0^{-1} \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0.
\end{aligned} \tag{26}$$
Plugging Equation (26) into Equation (23) yields
$$\begin{aligned}
& G_2^{-1} \Big( \Sigma_2 \Xi \Sigma_0 G_0^{-\top} G_0^\top \Sigma_0^{-1} \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] - \Sigma_2^2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] \Big) G_2^{-1} \theta_2^\top \\
&\quad = G_0^{-1} \Big( \Sigma_0^2 G_0^{-\top} G_0^\top \Sigma_0^{-1} \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0 - \Sigma_0 \Xi^\top \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0 \Big) G_0^{-1} \theta_0^\top = 0,
\end{aligned} \tag{27}$$
being equivalent to
$$G_2^{-1} \big( \Sigma_2 \Xi \Xi^\top \Sigma_2 - \Sigma_2^2 \big) G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \theta_2^\top = G_2^{-1} \Sigma_2 \big( \Xi \Xi^\top - I_n \big) \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] G_2^{-1} \theta_2^\top = 0. \tag{28}$$
Multiplying Equation (28) from the left by $\Sigma_2^{-1} G_2$, as well as from the right by $\theta_2 G_2$, and using the orthonormality property $\theta_2^\top \theta_2 = I_n$, yields
$$\big( \Xi \Xi^\top - I_n \big) \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] = 0. \tag{29}$$
Conversely, assume that Equation (29) is satisfied; then Equation (28) holds. Thus, Equation (28) is equivalent to Equation (29).
The above discussion is summarized in the next Theorem.
Theorem 1.
Let $(W, A) \in \mathcal{U} \times \mathbb{R}^{m \times n}$. Then, $(W, A)$ is a critical point of $f \colon \mathcal{U} \times \mathbb{R}^{m \times n} \to \mathbb{R}$, defined in Equation (12), iff the two equalities
$$\big( \Xi \Xi^\top - I_n \big) \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] = 0 \tag{30}$$
and
$$A = [I_m, 0]\, G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0 \tag{31}$$
hold. Here, for convenience, we used $G_0 := \theta_0^\top W$.

4.2. Critical Points

In this section, the critical points of the DyCA cost are determined by using the characterization of Theorem 1; i.e., we solve the equation
$$\big( \Xi \Xi^\top - I_n \big) \Sigma_2 G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] = 0 \tag{32}$$
for $G_2 \in GL(n)$. To this end, we define
$$U := \big( \Xi \Xi^\top - I_n \big) \Sigma_2. \tag{33}$$
Then, Equation (32) is equivalent to
$$U G_2^{-\top} \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] = 0. \tag{34}$$
Set $F = U G_2^{-\top}$ and partition $F = \big[\begin{smallmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{smallmatrix}\big]$, where $F_{11} \in \mathbb{R}^{m \times m}$, $F_{12} \in \mathbb{R}^{m \times (n-m)}$, $F_{21} \in \mathbb{R}^{(n-m) \times m}$, and $F_{22} \in \mathbb{R}^{(n-m) \times (n-m)}$. Then, Equation (34) yields
$$\big[\begin{smallmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{smallmatrix}\big] \big[\begin{smallmatrix} I_m & 0 \\ 0 & 0 \end{smallmatrix}\big] = \big[\begin{smallmatrix} F_{11} & 0 \\ F_{21} & 0 \end{smallmatrix}\big] = 0; \tag{35}$$
i.e., Equation (34) holds iff $U G_2^{-\top} = \big[\begin{smallmatrix} 0 & F_{12} \\ 0 & F_{22} \end{smallmatrix}\big]$ is fulfilled. Next, partition
$$U = \big[\begin{smallmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{smallmatrix}\big], \qquad G_2^{-\top} = \big[\begin{smallmatrix} \widetilde{G}_{11} & \widetilde{G}_{12} \\ \widetilde{G}_{21} & \widetilde{G}_{22} \end{smallmatrix}\big], \tag{36}$$
where
$$U_{11}, \widetilde{G}_{11} \in \mathbb{R}^{m \times m}, \quad U_{21}, \widetilde{G}_{21} \in \mathbb{R}^{(n-m) \times m}, \quad U_{12}, \widetilde{G}_{12} \in \mathbb{R}^{m \times (n-m)}, \quad U_{22}, \widetilde{G}_{22} \in \mathbb{R}^{(n-m) \times (n-m)}, \tag{37}$$
and consider
$$F = U G_2^{-\top} = \big[\begin{smallmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{smallmatrix}\big] \big[\begin{smallmatrix} \widetilde{G}_{11} & \widetilde{G}_{12} \\ \widetilde{G}_{21} & \widetilde{G}_{22} \end{smallmatrix}\big] = \big[\begin{smallmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{smallmatrix}\big]. \tag{38}$$
Clearly, through Equation (38), $F_{11} = 0$ and $F_{21} = 0$ are fulfilled iff
$$\big[\begin{smallmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{smallmatrix}\big] \big[\begin{smallmatrix} \widetilde{G}_{11} \\ \widetilde{G}_{21} \end{smallmatrix}\big] = 0 \tag{39}$$
is satisfied; i.e.,
$$\operatorname{span} \big[\begin{smallmatrix} \widetilde{G}_{11} \\ \widetilde{G}_{21} \end{smallmatrix}\big] \subseteq \ker(U). \tag{40}$$
Because $G_2 \in GL(n)$ is equivalent to $G_2^{-\top} \in GL(n)$, we obtain
$$\operatorname{rank} \big[\begin{smallmatrix} \widetilde{G}_{11} \\ \widetilde{G}_{21} \end{smallmatrix}\big] = m. \tag{41}$$
Thus, Equation (32) admits a solution iff $\dim(\ker(U)) \ge m$.

4.3. Construction of a Critical Point

Assuming $\dim \ker(U) \ge m$, we construct a solution of Equation (32). Let
$$U = R D Q^\top = [R_1, R_2] \big[\begin{smallmatrix} \widetilde{D} & 0 \\ 0 & 0 \end{smallmatrix}\big] [Q_1, Q_2]^\top \tag{42}$$
be an SVD of $U$, where $R_1, Q_1 \in \mathrm{St}_{n, n-m}$ and $R_2, Q_2 \in \mathrm{St}_{n, m}$, fulfilling $R_1^\top R_2 = 0$ and $Q_1^\top Q_2 = 0$. Moreover, $\widetilde{D} \in \mathbb{R}^{(n-m) \times (n-m)}$ and $\big[\begin{smallmatrix} \widetilde{D} & 0 \\ 0 & 0 \end{smallmatrix}\big] \in \mathbb{R}^{n \times n}$ are diagonal. Next, define $\big[\begin{smallmatrix} \widetilde{G}_{11} \\ \widetilde{G}_{21} \end{smallmatrix}\big] = Q_2$. Then,
$$U \big[\begin{smallmatrix} \widetilde{G}_{11} \\ \widetilde{G}_{21} \end{smallmatrix}\big] = U Q_2 = 0 \tag{43}$$
is satisfied because of Equation (42). Now, set
$$G_2^{-\top} = [Q_2, Q_1] \in O(n) \subseteq GL(n), \tag{44}$$
being equivalent to
$$G_2 = [Q_2, Q_1] \in O(n). \tag{45}$$
Then, $G_2$ is a solution of Equation (32) via Equation (43) combined with Equation (39).
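The construction above translates directly into code. The following sketch (Python/NumPy; the rank-deficient $U$ is fabricated purely for illustration) extracts $Q_2$ as the right singular vectors belonging to the zero singular values and checks $U Q_2 = 0$ together with the orthogonality of $G_2 = [Q_2, Q_1]$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2

# Fabricated U with rank n - m = 1, hence dim ker(U) = m.
U = np.outer(rng.standard_normal(n), rng.standard_normal(n))

# SVD with singular values in descending order; the last m right singular
# vectors (the columns of Q2) span ker(U), as in Eq. (42).
_, s, Vt = np.linalg.svd(U)
Q1, Q2 = Vt[:n - m].T, Vt[n - m:].T

G2 = np.hstack([Q2, Q1])              # G2 = [Q2, Q1] is orthogonal
assert np.allclose(G2.T @ G2, np.eye(n))

# G2 solves Eq. (32): since G2^{-T} = G2, U G2^{-T} [I_m; 0] = U Q2 = 0.
assert np.allclose(U @ Q2, 0.0)
```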
Remark 3.
 
  • Equation (32) has a solution iff $\dim \ker(U) \ge m$.
  • Assume $\dim(\ker(U)) = m$ and let $G_2 = [Q_2, Q_1] \in O(n)$ be the solution constructed above. Then, every other solution of Equation (32) is of the form $\widehat{G}_2 = G_2 \big[\begin{smallmatrix} A_1 & 0 \\ 0 & A_2 \end{smallmatrix}\big]$, where $A_1 \in GL(m)$ and $A_2 \in GL(n-m)$.

4.4. Recovering Mixing Matrix W and State Matrix A

Next, we show how $W \in \mathcal{U}$ can be recovered, assuming a $G_2 \in GL(n)$ satisfying Equation (32) is given. Recall that $W \in \mathcal{U}$ fulfills
$$G_2 = \theta_2^\top W, \tag{46}$$
where $\theta_2 \in \mathrm{St}_{N,n}$. Thus, given $G_2 \in O(n)$, the matrix
$$W = \theta_2 G_2 \tag{47}$$
is a solution of Equation (46) because of
$$\theta_2^\top W = \theta_2^\top \theta_2 G_2 = I_n G_2 = G_2. \tag{48}$$
Remark 4.
$W$ is not unique, since $G_2$ is not unique; moreover, let $Y \in \mathbb{R}^{N \times n}$ with $\theta_2^\top Y = 0$. Then, $\widehat{W} = \theta_2 G_2 + Y \in \mathbb{R}^{N \times n}$ also satisfies $\theta_2^\top \widehat{W} = G_2$.
Once $W$ is determined, we also obtain $A$ via Equation (31), namely,
$$A = [I_m, 0]\, G_2^{-1} \Sigma_2 \Xi \Sigma_0^{-1} G_0, \tag{49}$$
where $G_0 = \theta_0^\top W$ according to Equation (10).

5. Algorithm

The analysis of the cost function above leads to the following Algorithm 1 for solving Problem 1.
Algorithm 1 DyCA
Input: $Q, \dot{Q} \in \mathbb{R}^{N \times T}$, $1 \le m \le n \le N$.
  • Compute thin SVDs $Q = \theta_0 \Sigma_0 \Xi_0^\top$ and $\dot{Q} = \theta_2 \Sigma_2 \Xi_2^\top$.
  • Set $\Xi = \Xi_2^\top \Xi_0$.
  • Compute $U = (\Xi \Xi^\top - I_n)\, \Sigma_2$.
  • Compute an SVD $U = [R_1, R_2] \big[\begin{smallmatrix} \widetilde{D} & 0 \\ 0 & 0 \end{smallmatrix}\big] [Q_1, Q_2]^\top$.
  • Set $G_2 = [Q_2, Q_1]$.
  • Define $W = \theta_2 G_2$.
  • Define $A = [I_m, 0]\, G_2^\top \Sigma_2 \Xi \Sigma_0^{-1} (\theta_0^\top W)$, where $G_2^\top = G_2^{-1}$ since $G_2 \in O(n)$.
Output: $(W, A) \in \mathrm{St}_{N,n} \times \mathbb{R}^{m \times n}$.
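A compact Python/NumPy transcription of Algorithm 1 might look as follows; the function name `dyca`, the RK4 data generation, the seed, and all tolerances are our own choices rather than part of the paper. The smoke test checks the full-recovery claim on exactly mixed Rössler data: the span of the returned $W$ coincides with the span of the true mixing matrix.

```python
import numpy as np

def dyca(Q, Qdot, m, n):
    """Optimization-free DyCA (Algorithm 1): estimate (W, A) from Q, Qdot."""
    # Step 1: rank-n thin SVDs of Q and Qdot.
    U0, s0, V0t = np.linalg.svd(Q, full_matrices=False)
    theta0, Sigma0, Xi0 = U0[:, :n], np.diag(s0[:n]), V0t[:n].T
    U2, s2, V2t = np.linalg.svd(Qdot, full_matrices=False)
    theta2, Sigma2, Xi2 = U2[:, :n], np.diag(s2[:n]), V2t[:n].T
    # Steps 2 and 3: Xi = Xi2^T Xi0 and U = (Xi Xi^T - I_n) Sigma2.
    Xi = Xi2.T @ Xi0
    Umat = (Xi @ Xi.T - np.eye(n)) @ Sigma2
    # Steps 4 and 5: SVD of U; the last m right singular vectors span
    # ker(U), so G2 = [Q2, Q1] is orthogonal with U Q2 ~ 0.
    _, _, Vt = np.linalg.svd(Umat)
    G2 = np.hstack([Vt[n - m:].T, Vt[:n - m].T])
    # Steps 6 and 7: W = theta2 G2 and A via Eq. (31) (G2^{-1} = G2^T).
    W = theta2 @ G2
    A = np.hstack([np.eye(m), np.zeros((m, n - m))]) @ G2.T @ Sigma2 \
        @ Xi @ np.linalg.inv(Sigma0) @ (theta0.T @ W)
    return W, A

# Smoke test on exactly mixed Roessler data (RK4; dt, T, N = 30 assumed).
def roessler(x, a=0.15, b=0.20, c=10.0):
    return np.array([-x[1] - x[2], x[0] + a * x[1], b + x[2] * (x[0] - c)])

dt, T = 0.01, 4000
X = np.empty((3, T))
X[:, 0] = [1.0, 1.0, 1.0]
for k in range(T - 1):                      # classical Runge-Kutta step
    x = X[:, k]
    k1 = roessler(x)
    k2 = roessler(x + 0.5 * dt * k1)
    k3 = roessler(x + 0.5 * dt * k2)
    k4 = roessler(x + dt * k3)
    X[:, k + 1] = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
Xdot = np.stack([roessler(X[:, k]) for k in range(T)], axis=1)

rng = np.random.default_rng(4)
W_true = rng.uniform(-0.5, 0.5, (30, 3))
W, A = dyca(W_true @ X, W_true @ Xdot, m=2, n=3)

# The mixing subspace is fully recovered: span(W) = span(W_true).
P = W @ W.T                                  # W has orthonormal columns
P_true = W_true @ np.linalg.solve(W_true.T @ W_true, W_true.T)
assert np.allclose(P, P_true, atol=1e-8)
```

Note that $W$ itself is only determined up to the non-uniqueness described in Remarks 3 and 4, which is why the test compares orthogonal projectors rather than the matrices themselves.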

6. Applications

We now apply the proposed method to the Rössler attractor and the Lorenz system.

6.1. Rössler Attractor

We consider the Rössler attractor introduced in [7], described by the ODE
$$\dot{x}_1(t) = -x_2(t) - x_3(t), \qquad \dot{x}_2(t) = x_1(t) + a\, x_2(t), \qquad \dot{x}_3(t) = b + x_3(t)\big(x_1(t) - c\big), \tag{50}$$
where $a = 0.15$, $b = 0.20$, and $c = 10.0$. Accordingly, with
$$A = \begin{bmatrix} 0 & -1 & -1 \\ 1 & a & 0 \end{bmatrix} \in \mathbb{R}^{2 \times 3} \tag{51}$$
and
$$f \colon \mathbb{R}^3 \to \mathbb{R}, \quad x := \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \mapsto f(x) := b + x_3 (x_1 - c), \tag{52}$$
we rewrite Equation (50) as
$$\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} A\, x(t) \\ f(x(t)) \end{bmatrix} \tag{53}$$
or, equivalently,
$$[I_2, 0]\, \dot{x}(t) = A\, x(t), \qquad [0, I_1]\, \dot{x}(t) = f(x(t)). \tag{54}$$
Thus, Equation (50) is of the form of Equation (2), where $n = 3$ and $m = 2$. Hence, we may apply Algorithm 1 to solve Problem 1 if the low-dimensional dynamics of the signal satisfies the ODE Equation (50).
To illustrate the application of Algorithm 1, we perform a numerical experiment using MATLAB 2024a.
Using the notation from Problem 1, a three-dimensional signal X = [ x ( t 1 ) , , x ( t T ) ] R 3 × T is generated by integrating Equation (50) using the MATLAB function ode45. By evaluating the right-hand side of Equation (50) at the time steps t i , the derivative X ˙ = [ x ˙ ( t 1 ) , , x ˙ ( t T ) ] R 3 × T is computed. The mixing matrix W R N × 3 , where N = 30 , is generated by uniformly distributed random numbers in the interval ( 0.5 , 0.5 ) . We then define Q = W X and Q ˙ = W X ˙ and apply Algorithm 1 to the signal Q R 30 × T and its derivative Q ˙ R 30 × T , where n = 3 and m = 2 .
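The data-generation step just described can be reproduced outside MATLAB; the sketch below uses SciPy's `solve_ivp` (RK45) in place of `ode45`, with an assumed initial condition, time grid, and random seed. Its outputs `Q` and `Qdot` are exactly the inputs required by Algorithm 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.15, 0.20, 10.0

def roessler(t, x):
    # Right-hand side of Eq. (50).
    return [-x[1] - x[2], x[0] + a * x[1], b + x[2] * (x[0] - c)]

# Integrate Eq. (50); SciPy's RK45 plays the role of MATLAB's ode45.
t_eval = np.linspace(0.0, 100.0, 5001)
sol = solve_ivp(roessler, (0.0, 100.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)
X = sol.y                                             # 3 x T trajectory
# Derivative via the right-hand side, evaluated at the sample times.
Xdot = np.stack([roessler(tk, X[:, k])
                 for k, tk in enumerate(t_eval)], axis=1)

# Mixing matrix with uniform entries in (-0.5, 0.5), N = 30.
rng = np.random.default_rng(5)
W = rng.uniform(-0.5, 0.5, (30, 3))
Q, Qdot = W @ X, W @ Xdot                             # inputs to Algorithm 1
```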
Our results are illustrated in Figure 1, Figure 2 and Figure 3 below. Alongside the original trajectory, x ( t i ) , we also plot the reconstructed trajectory obtained via the DyCA, as well as a reconstruction of the signal by means of a thin SVD of Q.

6.2. Lorenz System

We also apply DyCA to the Lorenz system
$$\dot{x}_1(t) = a\big(x_2(t) - x_1(t)\big), \qquad \dot{x}_2(t) = x_1(t)\big(b - x_3(t)\big) - x_2(t), \qquad \dot{x}_3(t) = x_1(t)\, x_2(t) - c\, x_3(t), \tag{55}$$
where $a = 10$, $b = 28$, and $c = 8/3$. Accordingly, by defining $A = [-a, a, 0] \in \mathbb{R}^{1 \times 3}$ and
$$f \colon \mathbb{R}^3 \to \mathbb{R}^2, \quad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \mapsto f(x_1, x_2, x_3) := \begin{bmatrix} x_1 (b - x_3) - x_2 \\ x_1 x_2 - c\, x_3 \end{bmatrix}, \tag{56}$$
we rewrite Equation (55) as follows:
$$[I_1, 0]\, \dot{x}(t) = A\, x(t), \qquad [0, I_2]\, \dot{x}(t) = f(x(t)). \tag{57}$$
Thus, Equation (55) is of the form of Equation (2), where $n = 3$ and $m = 1$. Hence, we may apply Algorithm 1 to solve Problem 1, where the low-dimensional dynamics of the signal satisfies the ODE Equation (55).
We also indicate this via another numerical experiment. Analogously to the Rössler system discussed above, we create a mixing matrix W R N × n , where N = 30 , and we generate the signal Q and its time derivative by integrating Equation (55) using the MATLAB function ode45. Similar to Figure 1, Figure 2 and Figure 3, we present the results obtained for the Lorenz system in Figure 4, Figure 5 and Figure 6 below.
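As a quick consistency check of the $m = 1$ splitting, the illustrative snippet below verifies that the first component of the Lorenz vector field is exactly the linear form $A x$ with $A = [-a, a, 0]$; the test states and the seed are our own choices.

```python
import numpy as np

a, b, c = 10.0, 28.0, 8.0 / 3.0
A = np.array([[-a, a, 0.0]])      # state matrix of the linear part, m = 1

def lorenz(x):
    # Right-hand side of Eq. (55).
    return np.array([a * (x[1] - x[0]),
                     x[0] * (b - x[2]) - x[1],
                     x[0] * x[1] - c * x[2]])

# At every state x, the first component of xdot is exactly linear:
# [I_1, 0] xdot = A x, which is precisely the splitting DyCA exploits.
rng = np.random.default_rng(6)
for _ in range(5):
    x = 10.0 * rng.standard_normal(3)
    xdot = lorenz(x)
    assert np.isclose(xdot[0], (A @ x)[0])
```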

7. Outlook and Discussion

In this paper, we have discussed a reformulation of the so-called DyCA problem, putting the original cost function approach into perspective with respect to an inverse problem formulation. It is beyond the scope of this paper to discuss more advanced techniques from the vast area of numerics for inverse problems; in particular, in the presence of noise, one is ultimately interested in the inverse of the mixing matrix, a possibly ill-posed problem. For results in this direction, we refer the reader to forthcoming papers including real-world data, e.g., analyzing EEG data. So far, however, we have shown for the two examples (Lorenz and Rössler), where data were generated artificially, that our results are promising; in particular, for data corrupted by only a reasonable amount of noise, the algorithm works well.
Clearly, any questions related to scalability, usability, or complexity in the above context can be easily addressed via the vast body of existing literature on singular-value decomposition-based algorithms from the last 30 years, either from the numerical linear algebra community or from the pertinent signal processing literature.

Author Contributions

Conceptualization, K.H., M.S. and C.U.; Writing–original draft, K.H., M.S. and C.U.; Writing–review & editing, K.H., M.S. and C.U. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the German Federal Ministry of Education and Research (BMBF-Projekt, funding numbers: 05M20WWA and 05M20WBA Verbundprojekt 05M2020—DyCA).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No data available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DyCA: Dynamical component analysis
ODE: Ordinary differential equation
SVD: Singular-value decomposition

References

  1. Korn, K.; Seifert, B.; Uhl, C. Dynamical Component Analysis (DYCA) and Its Application on Epileptic EEG. In Proceedings of the ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1100–1104.
  2. Uhl, C.; Kern, M.; Warmuth, M.; Seifert, B. Subspace Detection and Blind Source Separation of Multivariate Signals by Dynamical Component Analysis (DyCA). IEEE Open J. Signal Process. 2020, 1, 230–241.
  3. Romberger, P.; Warmuth, M.; Uhl, C.; Hüper, K. Dynamical Component Analysis: Matrix Case and Differential Geometric Point of View. In Proceedings of the CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 385–394.
  4. Schlarb, M.; Hüper, K. Optimization on Stiefel Manifolds. In Proceedings of the CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 363–374.
  5. Seifert, B.; Korn, K.; Hartmann, S.; Uhl, C. Dynamical Component Analysis (DYCA): Dimensionality Reduction for High-Dimensional Deterministic Time-Series. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018; pp. 1–6.
  6. Paglia, C.; Stiehl, A.; Uhl, C. Identification of Low-Dimensional Nonlinear Dynamics from High-Dimensional Simulated and Real-World Data. In Proceedings of the CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 205–213.
  7. Rössler, O. An equation for continuous chaos. Phys. Lett. A 1976, 57, 397–398.
Figure 1. DyCA applied to a trajectory of the Rössler system: original signal.
Figure 2. DyCA applied to a trajectory of the Rössler system: projection via DyCA.
Figure 3. DyCA applied to a trajectory of the Rössler system: SVD-based projection.
Figure 4. DyCA applied to a trajectory of the Lorenz system: original signal.
Figure 5. DyCA applied to a trajectory of the Lorenz system: projection via DyCA.
Figure 6. DyCA applied to a trajectory of the Lorenz system: SVD-based projection.

Share and Cite

Hüper, K.; Schlarb, M.; Uhl, C. Cost Function Approach for Dynamical Component Analysis: Full Recovery of Mixing and State Matrix. Automation 2024, 5, 360-372. https://doi.org/10.3390/automation5030022
