Article

Uncertainty Analysis and Experimental Validation of Identifying the Governing Equation of an Oscillator Using Sparse Regression

1 Research Group System Reliability, Adaptive Structures, and Machine Acoustics SAM, Technical University of Darmstadt, 64287 Darmstadt, Germany
2 Fraunhofer Institute for Structural Durability and System Reliability LBF, 64289 Darmstadt, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(2), 747; https://doi.org/10.3390/app12020747
Submission received: 1 December 2021 / Revised: 20 December 2021 / Accepted: 28 December 2021 / Published: 12 January 2022
(This article belongs to the Special Issue Machine Learning for Noise and Vibration Engineering)

Abstract:
In recent years, the rapid growth of computing technology has enabled identifying mathematical models for vibration systems using measurement data instead of domain knowledge. Within this category, the method Sparse Identification of Nonlinear Dynamical Systems (SINDy) shows potential for interpretable identification. Therefore, in this work, a procedure of system identification based on the SINDy framework is developed and validated on a single-mass oscillator. To estimate the parameters in the SINDy model, two sparse regression methods are discussed. Compared with the Least Squares method with Sequential Threshold (LSST), which is the original estimation method from SINDy, the Least Squares method Post-LASSO (LSPL) shows better performance in numerical Monte Carlo Simulations (MCSs) of a single-mass oscillator in terms of sparseness, convergence, identified eigenfrequency, and coefficient of determination. Furthermore, the developed method SINDy-LSPL was successfully implemented with real measurement data of a single-mass oscillator with known theoretical parameters. The identified parameters using a sweep signal as excitation are more consistent and accurate than those identified using impulse excitation. In both cases, there exists a dependency of the identified parameter on the excitation amplitude that should be investigated in further research.

1. Introduction

Mathematical model building, especially the construction of governing equations, is a significant problem for a large variety of dynamic systems, such as vibration systems [1], biological systems [2], and fluid mechanical systems [3]. These models can be used for prediction, simulation, condition monitoring, optimization, etc. [4]. Traditional methods of model building are based on fundamental knowledge in one or more disciplines; this type of model building is described as “bottom–up” [5]. However, these methods work only with a priori knowledge. In recent years, the rapid growth of computational capacity has enabled researchers in several domains to build models with a “top–down” approach using acquired data instead of domain knowledge [5]. Many topics are related to this approach, such as System Identification (SI) and the development or improvement of structural models using input and output data from real measurements [6]. These methods are especially useful when first-principles knowledge is not yet available, for example, for the macroscopic behavior of advanced materials [7], in aerospace engineering [8], in biology [2,9,10], in economics [11], in music [12,13], etc. The models of vibration systems can be identified with various approaches, such as iterative model updating [14], the Polynomial NonLinear State Space (PNLSS) model [15], Volterra series [16], the Wiener and Hammerstein models [17,18], artificial neural networks [19,20], and equation-free model building [21]. On the downside, these methods usually build “black-box” or “grey-box” models, which means the theoretical understanding of the system is totally or partially absent from the model [4].
In the category of sparse regression, the SINDy framework (Sparse Identification of Nonlinear Dynamical Systems) shows promise in terms of interpretability. It is a technique to build the differential governing equation by extracting a few relevant terms from a library of candidate functions [22]. Polynomial terms of a time series $x(t)$ are assumed as candidate functions, and the Least Squares method with Sequential Threshold (LSST, also called STLS [23]) is suggested for the sparse regression [24]. In recent research, SINDy was applied to boundary value problems [25] and advanced materials [7]. Furthermore, SINDy is also applicable to Model Predictive Control (MPC) to reduce complexity and to enhance the interpretability of the integrated empirical model [26,27].
Many extensions of the SINDy framework have been developed for different benefits. First, the framework can be modified using an enlarged library of candidate functions, including trigonometric functions [24], partial differential equations [28,29], and rational functions [22]. Second, different methods of sparse regression can be applied. Methods such as the Least Absolute Shrinkage and Selection Operator (LASSO) [30] and compressed sensing [31,32,33] were used to enforce sparsity even before the development of SINDy. Recently, a broad range of regularized regression methods have been reformulated and generalized in the framework of Sparse Relaxed Regularized Regression (SR3) [34]. Throughout this work, LSST and the Least Squares method Post-LASSO (LSPL) [35] will be discussed. Third, different signal processing methods can be used to correct or transform the measurements in order to make the solved problem better conditioned, such as normalization [29], Hankel Alternative View Of Koopman (HAVOK) [36], and neural networks [37].
Since the governing equation of a discrete vibration system and SINDy share the same mathematical model structure, the SINDy framework has gained attention for identifying the governing equations of discrete vibration systems. The traditional governing equation of a discrete vibration system is an Ordinary Differential Equation (ODE) using factors related to the mass, m, stiffness, k, and damping coefficient, d, which have to be measured individually to build the governing equation. Nonlinear vibration systems can also be interpreted with similar concepts; for example, the Duffing oscillator has a cubic stiffness in relation to the displacement [38]. Thus, a SINDy model of a vibration system not only predicts the dynamic behavior but is also interpretable with fundamental concepts. Furthermore, the results of the identification can contribute to a deeper theoretical understanding of an unknown system. In summary, SINDy shows potential for identifying the governing equation of discrete vibration systems in a “top–down” and interpretable manner without individually acquiring the parameters m, k, or d via the domain knowledge of structural vibrations.
In this research, we explore the possibility of identifying the governing equation of an oscillator using the two sparse regression methods LSST and LSPL, conducting a numerical uncertainty analysis and an experimental validation. The methods LSST and LSPL are introduced in detail in Section 2. In Section 3, we test SINDy on a numerical model of a single-mass oscillator. In order to simulate the true use case, noise signals at different levels are artificially added to the training data. At the same time, the highest polynomial order of the library is varied from 1 to 5. The uncertainties of the results for both regression methods are obtained through Monte Carlo Simulation (MCS) with randomly generated parameters of the oscillator. LSST, the original sparse regression algorithm of SINDy, delivers less accurate sparsity and suffers more from overfitting at increased noise levels. A similar observation was made in a previous work, where approximately noise-free measurement data is stated as a requirement of SINDy [37]. To improve the robustness of SINDy against noise, we suggest the method LSPL as a replacement for the original LSST method. After the numerical uncertainty analysis, the modified SINDy method, SINDy-LSPL, is tested on a real test bench in Section 4. In Section 4 and Section 5, we define a complete workflow from the acquisition of the time series to the identification and evaluation. Overall, in both numerical and experimental tests, we review how SINDy-LSPL performs at discovering the governing equation of a simple vibration system, which is fundamental for further research dealing with complex systems with more than one Degree of Freedom (DoF). Detailed conclusions and an outlook for this research are given in Section 6.

2. Methods

The SINDy framework [24] combines a polynomial model of the dynamics with sparse regression. It is assumed that the equation of motion to be identified,
$$\dot{x}(t) = f(x(t)) = \left(\theta\!\left(x^T(t)\right)\,\Xi\right)^T, \tag{1}$$
has a set of polynomial candidate functions $\theta(x^T(t))$ regarding the state vector $x \in \mathbb{R}^{n_x}$ and a static coefficient matrix $\Xi \in \mathbb{R}^{n_\xi \times n_x}$. The $i$-th column of the matrix $\Xi$, denoted $\xi_i \in \mathbb{R}^{n_\xi}$, corresponds to the $i$-th dimension of the derived state vector $\dot{x}(t)$. The $n_\xi$-dimensional vector of candidate functions $\theta(x^T(t))$ includes the constant 1 and all powers of the vector $x(t)$ up to a certain order [24]. For example, $\theta([x_1, x_2])|_{Ord=2} = [1,\ x_1,\ x_2,\ x_1^2,\ x_1 x_2,\ x_2^2]$. The signals of $\dot{x}$ and $\theta(x^T)$ from measurement and calculation can be collected in the following matrices:
$$\dot{X} = \begin{bmatrix} \dot{x}^T(t_1) \\ \dot{x}^T(t_2) \\ \vdots \\ \dot{x}^T(t_{n_t}) \end{bmatrix}, \qquad \Theta(X) = \begin{bmatrix} \theta(x^T(t_1)) \\ \theta(x^T(t_2)) \\ \vdots \\ \theta(x^T(t_{n_t})) \end{bmatrix}. \tag{2}$$
Thus, the solving process of Equation (1) can be cast into a regression task [24]:
$$\dot{X} = \Theta(X)\,\Xi. \tag{3}$$
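As an illustration, the candidate library $\Theta(X)$ can be built for an arbitrary state dimension and polynomial order with a short routine. This is a minimal sketch in Python/NumPy; the function name `poly_library` is ours, not from any reference SINDy implementation.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_library(X, order):
    """Candidate matrix Theta(X): a constant column plus every
    monomial of the state variables up to the given polynomial order."""
    n_t, n_x = X.shape
    cols = [np.ones(n_t)]
    for deg in range(1, order + 1):
        for idx in combinations_with_replacement(range(n_x), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# For a 2-dimensional state and order 2 this reproduces the example
# [1, x1, x2, x1^2, x1*x2, x2^2] for each time sample.
```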
The model is established by identifying the coefficients because in most physical systems only a small subset of all candidate functions are relevant. The coefficient matrix Ξ is sparse in the high-dimensional search space. In this situation, the Least-Squares (LS) method
$$\hat{\xi}_i = \underset{\xi_i \in \mathbb{R}^{n_\xi}}{\operatorname{argmin}} \left\| \dot{X}_{*,i} - \Theta\,\xi_i \right\|_2^2 \tag{4}$$
$$= \left(\Theta^T \Theta\right)^{-1} \Theta^T \dot{X}_{*,i} \tag{5}$$
is not suitable for identification in practice, because parameters whose true value is zero will be estimated as non-zero values due to noise in the signals. At the same time, overfitting occurs [39].
As one method of sparse regression, the LSST approach promotes sparseness by zeroing the estimated coefficients of the LS method below a fixed threshold $\lambda > 0$. The candidate functions corresponding to non-zero coefficients are selected for the next iteration of LS regression [24]. After a certain number of iterations, usually 10, as suggested by Brunton et al. [24], the algorithm converges to a fixed point [40]. This process is presented in Figure 1a. The parameter $\lambda$ needs to be small enough to let the correct candidate functions be selected and large enough to avoid noise being picked up [40]. However, a suitable threshold cannot be found if the noise is so intense that a zero coefficient is estimated to be greater than a non-zero coefficient. This situation often appears when the coefficients are of significantly different orders of magnitude.
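The thresholding loop described above can be sketched for a single column of $\dot{X}$ as follows. This is a minimal Python/NumPy illustration; `lsst` and its argument names are our own, not a reference implementation.

```python
import numpy as np

def lsst(Theta, dx, lam, n_iter=10):
    """Sequentially thresholded least squares (LSST/STLS): zero every
    coefficient whose magnitude falls below lam, then refit LS on the
    remaining candidate functions; repeat for n_iter iterations."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < lam
        xi[small] = 0.0
        active = ~small
        if active.any():
            # least-squares refit restricted to the surviving candidates
            xi[active] = np.linalg.lstsq(Theta[:, active], dx, rcond=None)[0]
    return xi
```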
To overcome this issue, another sparse regression method was developed, which is based on LASSO. LASSO is an optimization method for estimating the coefficients ξ i in the objective function,
$$\hat{\xi}_i = \underset{\xi_i \in \mathbb{R}^{n_\xi}}{\operatorname{argmin}} \left\{ \frac{1}{2 n_y} \left\| \dot{X}_{*,i} - \Theta\,\xi_i \right\|_2^2 + \lambda \left\| \xi_i \right\|_1 \right\}. \tag{6}$$
Compared with the LS method, LASSO has an additional penalty term $\|\xi_i\|_1$, i.e., the $\ell_1$ norm of the optimized vector. The $\ell_1$ norm not only promotes sparsity but also keeps the objective function convex, so that it demands less computation time than the non-convex $\ell_0$ problem [41]. Following the Matlab convention [42], the factor $\frac{1}{2n_y}$ adjusts the scale of the $\ell_2$ norm of the residuum, and the hyperparameter $\lambda$ balances the weighting of the $\ell_1$ and $\ell_2$ norms. A larger value of $\lambda$ leads to more sparsity in the optimized vector $\hat{\xi}_i$ but also to an undesired increase of the Mean Square Error (MSE), and vice versa. Thus, an appropriate value of $\lambda$ must be determined so that sparsity is promoted as much as possible without a large increase of the MSE. To select the hyperparameter $\lambda$ automatically, cross-validation [43] and the 1SE principle [42] are used here. Cross-validation separates the whole data set into $n_{cv}$ small sets. In each training run, $n_{cv}-1$ sets are used for training and one for testing. Each small set is selected as the test set in turn, so that a distribution of the MSE is obtained to evaluate the quality of fit [43]. This distribution depends on the hyperparameter $\lambda$ of LASSO and can be computed by running LASSO with different values of $\lambda$. The largest $\lambda$ whose mean cross-validated MSE stays within one Standard Error (SE) of the minimum mean MSE over all tested hyperparameters is selected, because it tends to give a sparse coefficient vector without introducing too much fitting error. This selection rule is called the 1SE principle [42].
LASSO automates the construction of sparse models. However, the shrinkage of the estimated coefficients is undesirable, i.e., the absolute values of the coefficients are estimated to be smaller than the true values. To avoid this problem, the LS method can be performed after LASSO with the selected candidate functions. This method is called LS-Post-LASSO (LSPL) [35]. The procedure of LSPL is shown in Figure 1b.
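A minimal sketch of LSPL, assuming a fixed penalty $\lambda$ instead of the cross-validated 1SE selection described above; the helper names, the plain coordinate-descent solver, and the iteration counts are our choices, not taken from [35].

```python
import numpy as np

def lasso_cd(Theta, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO for the objective in Eq. (6),
    i.e. 1/(2*n_y) * ||y - Theta xi||_2^2 + lam * ||xi||_1.
    In practice lam would be chosen by cross-validation + 1SE rule."""
    n_y, n_f = Theta.shape
    xi = np.zeros(n_f)
    col_sq = (Theta ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n_f):
            # partial residual with the contribution of coordinate j removed
            r = y - Theta @ xi + Theta[:, j] * xi[j]
            rho = Theta[:, j] @ r
            # soft-thresholding; lam * n_y matches the 1/(2*n_y) scaling
            xi[j] = np.sign(rho) * max(abs(rho) - lam * n_y, 0.0) / col_sq[j]
    return xi

def lspl(Theta, y, lam):
    """LS-Post-LASSO: LASSO selects the sparsity pattern, then an
    ordinary LS refit on that support removes the shrinkage bias."""
    support = np.abs(lasso_cd(Theta, y, lam)) > 1e-10
    xi = np.zeros(Theta.shape[1])
    if support.any():
        xi[support] = np.linalg.lstsq(Theta[:, support], y, rcond=None)[0]
    return xi
```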

3. Numerical Uncertainty Analysis

Firstly, in order to explore the identification ability of the LSST and LSPL methods in the SINDy framework for vibrations, a simple vibration system, an oscillator with one DoF, was chosen for validation. The two methods were compared numerically in Monte Carlo Simulations (MCSs) with known parameters of the oscillator. In each MCS, the two methods identified 10,000 randomly generated oscillators with normally distributed parameters, namely the mass, m, damping coefficient, d, and stiffness, k, as shown in Table 1. The movement of an oscillator with the DoF q is governed by the following equation:
$$\begin{bmatrix} \dot{q} \\ \ddot{q} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{d}{m} \end{bmatrix} \begin{bmatrix} q \\ \dot{q} \end{bmatrix}. \tag{7}$$
In this simulation, the minimum absolute value of all coefficients was approximately $|\bar{d}/\bar{m}| = 1\ \mathrm{s}^{-1}$. Accordingly, the threshold 0.5 was selected as the hyperparameter $\lambda$ in the method LSST, so that no true non-zero parameter is falsely filtered out. Of course, such a predetermination of the value of $\lambda$ is impossible in a real identification case. In this study, the interest lies in the performance of the algorithm and not in the process of finding a value for $\lambda$. A plausible threshold value can be found using the optimization method detailed in other research [1].

3.1. Generation of the Training and Test Data Sets

The training data can be generated by solving an Initial Value Problem (IVP). Given the governing Equation (7) and an initial state $x(0) = [q(0), \dot{q}(0)]^T$, the signals of acceleration $\ddot{q}(t)$, velocity $\dot{q}(t)$, and displacement $q(t)$ can be calculated simultaneously with a differential equation solver, such as ode45 from Matlab [44].
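In Python, the same IVP can be solved with SciPy's `solve_ivp` (an RK45 solver comparable to Matlab's ode45). The parameter values below are assumed examples; in the MCS, m, d, and k are drawn randomly.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example parameters (the MCS draws m, d, k at random)
m, d, k = 1.0, 1.0, 1000.0

def rhs(t, x):
    """State-space right-hand side of Eq. (7) with x = [q, q_dot]."""
    q, qd = x
    return [qd, -(k / m) * q - (d / m) * qd]

t = np.linspace(0.0, 2.0, 2001)
sol = solve_ivp(rhs, (0.0, 2.0), [0.01, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
q, q_dot = sol.y
q_ddot = -(k / m) * q - (d / m) * q_dot  # acceleration directly from the ODE
```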
The noise in a real measurement was simulated by adding normally distributed random values $S_i(t)$ to the generated signals $\ddot{q}(t)$, $\dot{q}(t)$, and $q(t)$, as shown in the following equations:
$$S_1(t) \sim \mathcal{N}\!\left(0,\ \left(\epsilon_N \max(q(t))\right)^2\right), \tag{8}$$
$$S_2(t) \sim \mathcal{N}\!\left(0,\ \left(\epsilon_N \max(\dot{q}(t))\right)^2\right), \tag{9}$$
$$S_3(t) \sim \mathcal{N}\!\left(0,\ \left(\epsilon_N \max(\ddot{q}(t))\right)^2\right). \tag{10}$$
The noise factor $\epsilon_N$ regulates the intensity of a noise vector $S(t)$. In Table 1, five levels of noise are selected. The intensities of these five noise levels can also be interpreted as Signal-to-Noise Ratios (SNRs) of $\{\infty, 40.1, 34.1, 30.5, 28.0\}$ dB.
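A sketch of the noise model of Equations (8)–(10), interpreting max(·) as the peak absolute value of the clean signal (an assumption on our part; for a symmetric oscillation the two readings coincide):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(signal, eps_n, rng=rng):
    """Add zero-mean Gaussian noise whose standard deviation is
    eps_n times the peak absolute value of the clean signal."""
    sigma = eps_n * np.max(np.abs(signal))
    return signal + rng.normal(0.0, sigma, size=signal.shape)
```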
For training purposes, time series with the initial state $x(0)$ are generated. In the training data set, 10,000 examples of numerical oscillators were randomly generated for each noise level. Thus, the training data set contained 50,000 examples. Furthermore, two different initial states, $x_1^*(0)$ and $x_2^*(0)$, were used to generate two test data sets. Similar to the training data set, each test data set also had 50,000 examples. The values of these initial states are shown in Table 1.
In order to test the effect of overfitting in the polynomial model, the test initial states were selected to bring varied vibration amplitudes into consideration. As shown in Figure 2, compared with $x(0)$, the vibrations starting from the initial states $x_1^*(0)$ and $x_2^*(0)$ have approximately doubled and grown 20-fold in amplitude, respectively, according to the velocity signals.

3.2. Evaluation of the Results

The order of the identified polynomial model was varied from one to five, so that the two methods of sparse regression (LSST and LSPL) were applied with five different polynomial orders to data sets with five levels of noise intensity. For each method, 25 MCSs were thus executed in total for the evaluation. The identified models were analyzed regarding different aspects. First, the sparsity of the identified model was observed. The sparsity determines the complexity of a model: if a model is too complex or too simple, it can be overfitted or underfitted, respectively [45]. Second, the convergence of the reconstructed time series indicates the usability of the identified model. The free vibration of a damped system declines over time; an identified model can be discarded if the reconstructed signal diverges. Third, the eigenfrequency of a reconstruction characterizes the correctness of the numerical model of a vibration system. Lastly, the coefficient of determination $R^2$ denotes how well the reconstructed time series fits the measurement. In the following paragraphs, the results of the MCSs are evaluated.

3.2.1. Sparsity

Because the coefficients $\frac{k}{m}$ and $\frac{d}{m}$ to be identified appear only in the second row of Equation (7), the results corresponding to this second row were selected for the evaluation. That means that the true Number of Non-Zero Coefficients (NNZC) equals two, and the signal $\ddot{q}(t)$ is used to deliver the spectrum for further evaluation.
To compare the NNZC of the two methods, the results from the data set with $Ord = 3$ and $\epsilon_N = 0.005$ were used first as an example to present the distribution of the NNZC. As shown in Figure 3, 8847 out of 10,000 examples identified by LSST had six or seven non-zero coefficients. By contrast, 9606 of the models identified by the LSPL method had exactly two non-zero coefficients.
The median, 0.25 quantile, and 0.75 quantile were chosen to represent the NNZCs of all 25 MCSs with five different noise factors and five polynomial orders in Figure 4. The total numbers of candidate functions for each polynomial order, which are also the maximum possible values of the NNZC, are marked by horizontal lines in corresponding colors. The classic combination of mean value and standard deviation is not appropriate in this circumstance because the distribution of the NNZC is strongly asymmetric, as shown in Figure 3. On the left side of Figure 4, the NNZC of the LSST method shows a clear dependency on the noise factor $\epsilon_N$ and polynomial order $Ord$. Except for the identification with $Ord = 1$, more coefficients were identified at higher noise factors. From left to right, the curves for each order $Ord > 1$ increase steeply on the low-noise side and then flatten on the high-noise side, which indicates a saturating character.
On the contrary, the NNZC of the LSPL method does not rise substantially with the noise factor $\epsilon_N$; see Figure 4b. The medians of all MCSs do not deviate from the true number of coefficients. Only in the MCSs with $Ord \geq 3$ and $\epsilon_N \geq 0.0075$ does the 0.75 quantile rise, up to 3. Compared to LSST, the LSPL method identifies the sparsity of the numerical model with a significant advantage in precision, which means that LSPL has a lower risk of overfitting.

3.2.2. Convergence

To evaluate the quality of the reconstructed time series from the aspect of convergence, the divergence ratio $\eta_{div}$ was used, which is defined as the ratio between the number of divergent reconstructions $n_{div}$ and the total number of experiments $n_{all}$ in one MCS,
$$\eta_{div} = \frac{n_{div}}{n_{all}}. \tag{11}$$
The divergence ratio $\eta_{div}$ varies from 0 to 1. A lower $\eta_{div}$ means that a higher portion of the MCS reproduces the decaying character of the damped single-mass oscillator. The subsequent analysis is separated into two parts using the different initial values $x_1^*(0)$ and $x_2^*(0)$.
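A practical divergence check might look as follows; the amplitude blow-up limit is an assumed heuristic, since the exact divergence criterion is not spelled out here.

```python
import numpy as np

def divergence_ratio(reconstructions, limit=1e3):
    """Eq. (11) in practice: a reconstructed time series counts as
    divergent when its amplitude exceeds an assumed blow-up limit."""
    n_div = sum(np.max(np.abs(x)) > limit for x in reconstructions)
    return n_div / len(reconstructions)
```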
With the initial value $x_1^*(0)$ and the LSST method, the reconstructed signals converge up to the polynomial order $Ord = 4$ with a low divergence ratio $\eta_{div}$, as presented in Figure 5. A tendency can be identified that the divergence ratio $\eta_{div}$ rises with higher $Ord$ and $\epsilon_N$. The maximum value in this figure equals 0.24, from the simulation with $\epsilon_N = 0.01$ and $Ord = 4$. In Figure 6a, the divergence ratio $\eta_{div}$ with $Ord = 5$ increases proportionally with the noise factor up to 0.4, which makes the curves of the first four polynomial orders in this figure unrecognizable. On the contrary, with $x_1^*(0)$ and LSPL, $\eta_{div}$ equals zero in all MCSs; see Figure 6b.
The second test initial value $x_2^*(0)$ is more challenging for the identified model than $x_1^*(0)$ due to its ten times larger amplitude; see Figure 2. In Figure 6c,d, the divergence ratio $\eta_{div}$ of both methods up to a polynomial order of two remains zero. From $Ord = 3$, the divergence ratios of LSST increase to higher values. In particular, more than 80% of the reconstructions do not converge for $\epsilon_N \geq 0.0025$. On the other hand, the divergence ratio of LSPL rises up to only 7.2% for $\epsilon_N > 0.0025$.

3.2.3. Natural Frequency

In terms of the natural frequency of the single-mass oscillator, the difference between the identified and the true value is used to evaluate the correctness of a trained model. The true eigenfrequency of an oscillator from the training data set can be calculated using the following equation:
$$f_{true} = \frac{1}{2\pi}\sqrt{\frac{k}{m}}. \tag{12}$$
Since the mass, m, and stiffness, k, are randomly generated, the true eigenfrequencies of the training set show a symmetrical distribution in the histogram in Figure 7.
The eigenfrequencies $f_{rec}$ of the reconstructed data were obtained from the peaks of the spectra of the acceleration signals. The natural frequency difference reads
$$\Delta f = f_{rec} - f_{true}. \tag{13}$$
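Extracting $f_{rec}$ as the dominant peak of the acceleration spectrum can be sketched as follows; note that the estimate is quantized to the width of the frequency bins.

```python
import numpy as np

def peak_frequency(accel, fs):
    """Estimate f_rec as the dominant peak of the one-sided amplitude
    spectrum of the (reconstructed) acceleration signal."""
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```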
The distribution of $\Delta f$ with the initial state $x_1^*(0)$ was evaluated in order to identify the dependency of the natural frequency on the polynomial order of the candidate functions and the noise intensity, because most reconstructions of the signals with $x_1^*(0)$ succeed, as discussed in Section 3.2.2. In Figure 8, the natural frequencies of the reconstructed accelerations from both methods have similar distributions up to polynomial order 4. With stronger noise, the error in the natural frequency increases continuously without a significant change in the range between the quantiles. In the simulation with $\epsilon_N = 0.01$ and $Ord = 4$, the medians of $\Delta f$ for the LSST and LSPL methods are equal to −0.0035 and −0.0034, respectively. The results of the LSST method at polynomial order 5 differ markedly from the other groups of data, with a larger error in the median and a wider range between the 0.75 and 0.25 quantiles. Its median reaches −0.0158 with $\epsilon_N = 0.01$. This phenomenon corresponds well with Figure 6a,b regarding the divergence ratio of both methods with the initial state $x_1^*(0)$, which means the missing samples of divergent reconstructions cause an additional error in the distribution of the natural frequency.
In order to reduce the complexity of the detailed comparison of the natural frequency, in the following analysis only the results of the training data sets with $Ord = 3$ and $\epsilon_N = 0.005$ are shown as an example. The natural frequency differences $\Delta f$ of both methods show a similar distribution for the initial condition $x_1^*(0)$, as shown in Figure 9a,b. The maximum deviation corresponds to the resolution of the frequency vector after the fast Fourier transform, i.e., 0.01 Hz. With the initial value $x_2^*(0)$, the distribution of the natural frequencies is flatter and wider when using LSST instead of LSPL; see Figure 9c,d. Furthermore, the peak in Figure 9c is underestimated from the true value zero by 0.01 Hz. Despite the missing reconstructed signals with the LSST method and the initial state $x_2^*(0)$, it can be concluded that the LSPL method gives a more precise and more general prediction of the natural frequency of the oscillators than LSST, especially with a large initial state.

3.2.4. Coefficient of Determination R 2

The coefficient of determination R 2 describes the proportion of variability in training data that can be explained using a model [46]. To calculate R 2 with the noise-free training data x and the reconstructed time series x ^ , we use the following definition:
$$R^2 = 1 - \frac{RSS}{TSS}, \tag{14}$$
$$TSS = \sum_i \left(x_i - \bar{x}\right)^2, \tag{15}$$
$$RSS = \sum_i \left(x_i - \hat{x}_i\right)^2, \tag{16}$$
where TSS is the total sum of squares and RSS is the residual sum of squares. For any reconstruction that fits at least as well as the mean value, $R^2$ lies in the interval [0, 1]; the better a reconstruction fits the training data, the higher $R^2$ becomes. In the ideal case, $R^2$ is equal to 1.
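The definition above translates directly into a small helper (note that $R^2$ can drop below zero for a reconstruction worse than the mean, e.g., a divergent one):

```python
import numpy as np

def r_squared(x_true, x_rec):
    """Eqs. (14)-(16): coefficient of determination of a reconstruction
    against the noise-free reference signal."""
    tss = np.sum((x_true - np.mean(x_true)) ** 2)
    rss = np.sum((x_true - x_rec) ** 2)
    return 1.0 - rss / tss
```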
As in the training process, the analysis of the coefficient of determination $R^2$ is only applied to reconstructions using the same initial condition $x(0)$ as in the training. The distributions of $R^2$ in Figure 10a,b show group characteristics for the different polynomial orders. The first group, for both the LSST and LSPL methods with $Ord \in \{1, 2\}$, has similar medians and quantiles; for example, with $\epsilon_N = 0.01$, the medians are equal to 0.9985, and the 0.75 and 0.25 quantiles are 0.9993 and 0.9971, respectively. The second group of the LSST method contains two orders, $Ord \in \{3, 4\}$. The LSPL method, however, has three orders, $Ord \in \{3, 4, 5\}$, in its second group of $R^2$, with a higher median and smaller quantile ranges. The $R^2$ of the LSST method at the fifth polynomial order decreases more rapidly with increasing noise factor than the other groups of MCSs.
For the purpose of comparing the distribution of $R^2$ in detail, the boxplots for the example with $Ord = 3$ and $\epsilon_N = 0.005$ in Figure 11 and Table 2 indicate that LSPL has a slightly higher median value and a smaller interval between the quantiles than LSST. This means that the time series from the models trained by LSPL are more consistent with the noise-free true signals.

3.2.5. Comparison between LSST and LSPL

In comparison with the LSST method, LSPL tends to identify a sparser model and has a lower overfitting risk according to the analysis of sparsity. The model trained by LSPL generally has a lower divergence ratio $\eta_{div}$ than the one trained by LSST, which means LSPL delivers more usable models, especially for large initial amplitudes. In terms of the natural frequency, LSPL is advantageous when the initial amplitude is large, giving more accurate predictions with narrower distributions. The coefficient of determination $R^2$ indicates that the models trained by LSPL suffer less from noise interference in the training data. Thus, it can be concluded that the LSPL method is the more suitable regression method for identifying the governing equation of a single-mass oscillator in the SINDy framework.

4. The Test Bench

In this section, we introduce a test bench of a single-mass oscillator used to generate measurement time series for model building. The theoretical values of the parameters are calculated “bottom–up” from the known value of each basic element. These values will be compared with the “top–down” identified parameters, i.e., those obtained from SINDy.

4.1. Experimental Setup

The test bench shown in Figure 12a contains an isolated oscillator, an excitation, and measuring instruments. This single-mass oscillator was originally designed to represent a vehicle suspension leg, whose parameters, such as stiffness, are adjustable, in order to build a basis for further study of passive and active vibration isolation [47,48]. The moving mass and two Carbon Fiber Reinforced Polymer (CFRP) beams of the oscillator shown in Figure 12b are mounted on a rigid base that is suspended by four elastic ropes with low stiffness and a low damping coefficient, so that the natural frequency of the oscillator can be distinguished from the vibration of the suspension. The signal of the excitation is derived from a signal generator. Then, the signal is amplified via an amplifier and led to the shaker. The excitation force is brought to the rigid base of the oscillator by a coupling between the shaker and the base. The accelerometers in Figure 12c,d measure the accelerations at the moving mass and the base. The excitation force is measured using a force sensor between the shaker and the base. All signals are recorded simultaneously by a data acquisition system.

4.2. Governing Equation and Parameters

The stiffness of the oscillator is determined by the position of the clamps in the guide rails, and the material damping causes energy dissipation during the oscillation. The base is suspended by ropes and is therefore movable [47,48]. Thus, the oscillator can be simplified theoretically as in Figure 13. There are two DoFs in the oscillator: $x_{abs}$ for the moving mass and $x_{base}$ for the base. The parameters m, d, and k are the mass, damping coefficient, and stiffness, respectively.
The motion of the oscillator can be described mathematically by the following equation:
$$m\,\ddot{x}_{abs} + d\,(\dot{x}_{abs} - \dot{x}_{base}) + k\,(x_{abs} - x_{base}) = 0, \tag{17}$$
because the excitation acts only on the base. Introducing the relative DoF $x_{rel} = x_{abs} - x_{base}$, Equation (17) becomes:
$$m\,\ddot{x}_{abs} + d\,\dot{x}_{rel} + k\,x_{rel} = 0. \tag{18}$$
Dividing Equation (18) by m and setting $\omega_0^2 = \frac{k}{m}$ and $2D\omega_0 = \frac{d}{m}$ yields:
$$\ddot{x}_{abs} + 2D\omega_0\,\dot{x}_{rel} + \omega_0^2\,x_{rel} = 0. \tag{19}$$
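Once a model of this normalized form has been identified, the modal parameters follow directly from the coefficients. The numeric values below are hypothetical placeholders for illustration, not results from this work:

```python
import numpy as np

# Hypothetical identified coefficients, assuming a model of the form
# x_ddot_abs = xi_disp * x_rel + xi_vel * x_rel_dot was found.
xi_disp = -(2 * np.pi * 36.0) ** 2       # placeholder value, not a result
xi_vel = -2 * 0.01 * (2 * np.pi * 36.0)  # placeholder value, not a result

omega_0 = np.sqrt(-xi_disp)   # from omega_0^2 = -xi_disp
f_0 = omega_0 / (2 * np.pi)   # natural frequency in Hz
D = -xi_vel / (2 * omega_0)   # damping ratio from 2*D*omega_0 = -xi_vel
```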
The calculated values of the parameters are listed in Table 3. Since errors accumulate in the theoretical values through the calculation, the listed values may deviate to some degree from the true values. However, the numbers in Table 3 are useful for evaluating the order of magnitude of the identified parameters.

4.3. Excitation Signals

A vibration system can be excited over a wide frequency range with an impulse excitation. The shaker provides a suitable frequency range but not an ideal impulse excitation; for example, the force at the excitation point in Figure 14 shows multiple peaks. Because of the coupling between the shaker and the base, an interaction exists during the vibration. Fortunately, the imperfect impulse excitation by the shaker does not affect the results of the system identification, because only the kinematic data are relevant for SINDy.
Compared to the impulse excitation, the vibration excited by a sine sweep signal allows more data to be collected in a single measurement. Furthermore, the recorded signals have a larger amplitude, which is a benefit for the Signal-to-Noise Ratio (SNR). In this work, the frequency of the excitation ranges from 0 Hz up to 100 Hz in approximately 60 s, as shown in Figure 15.

5. Data Analysis and Results

Due to the advantages of LSPL over LSST mentioned in Section 3, LSPL was selected to experimentally identify the oscillator using the signals from the measurement. The identified models were then compared with the theoretical linear model in Equation (19) to evaluate the performance of this identification method.

5.1. System Identification with the SINDy-LSPL Method

The total force on the moving mass depends only on the relative kinematics between the mass and the base. With Newton's second law of motion, it can be assumed that the absolute acceleration of the moving mass $\ddot{x}_{\mathrm{abs}}$ is a function of the relative velocity $\dot{x}_{\mathrm{rel}}$ and displacement $x_{\mathrm{rel}}$. This assumption is also supported by Equation (19), the theoretical governing equation of the oscillator. For the practical implementation, the signals of the relative kinematics are collected in the matrix

$$ X_{\mathrm{rel}} = \begin{bmatrix} x_{\mathrm{rel}} & \dot{x}_{\mathrm{rel}} \end{bmatrix}. \tag{20} $$

Cast into a sparse regression, the problem to be solved reads

$$ \ddot{x}_{\mathrm{abs}} = \Theta ( X_{\mathrm{rel}} ) \, \xi. \tag{21} $$
In Equation (21), $\Theta(X_{\mathrm{rel}})$ denotes the matrix of polynomial candidate functions and $\xi$ the coefficient vector. The highest polynomial order of the models was set to $\mathrm{Ord} \in \{3, 5\}$ in order to validate the nonlinear identification with medium and high polynomial orders.
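A candidate library of this form might be assembled as follows. This is a sketch; the function name and the column ordering, which for $\mathrm{Ord} \geq 2$ places $x_{\mathrm{rel}}$, $\dot{x}_{\mathrm{rel}}$, and $\dot{x}_{\mathrm{rel}}^2$ in columns 2, 3, and 6, are assumptions chosen to be consistent with the coefficient indexing $\xi_2$, $\xi_3$, $\xi_6$ used in Section 5.3:

```python
import numpy as np

def poly_library(x_rel, xdot_rel, order):
    """Builds Theta(X_rel): all monomials x_rel^i * xdot_rel^j with
    total degree i + j <= order, constant term included."""
    cols, names = [], []
    for deg in range(order + 1):
        for i in range(deg, -1, -1):
            j = deg - i
            cols.append(x_rel ** i * xdot_rel ** j)
            names.append(f"x^{i} xdot^{j}")
    return np.column_stack(cols), names
```

For two state variables, this yields 10 candidate functions for $\mathrm{Ord} = 3$ and 21 for $\mathrm{Ord} = 5$.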

5.2. Reconstruction of the Signals in State Space

The absolute acceleration $\ddot{x}_{\mathrm{abs}}$ is obtained directly from the measurement. However, the relative velocity $\dot{x}_{\mathrm{rel}}$ and displacement $x_{\mathrm{rel}}$ need to be reconstructed from the relative acceleration $\ddot{x}_{\mathrm{rel}} = \ddot{x}_{\mathrm{abs}} - \ddot{x}_{\mathrm{base}}$ through integration. This problem can be solved with the integration property of the Fourier transform [49]

$$ \mathcal{F} \left\{ \int_{-\infty}^{t} f(\tau) \, \mathrm{d}\tau \right\} = \frac{1}{j \omega} F(\omega) + \pi F(0) \, \delta(\omega). \tag{22} $$

In Equation (22), $F(\omega) = \mathcal{F}\{f(t)\}$ is the Fourier transform of the function $f(t)$. In the present situation, $f(t)$ equals $\ddot{x}_{\mathrm{rel}}$ and $\dot{x}_{\mathrm{rel}}$ in the first and second integration, respectively. In the first integration, a non-zero amplitude at zero frequency in the spectrum $\mathcal{F}\{\ddot{x}_{\mathrm{rel}}\}$ would indicate a translation of the moving mass with constant acceleration. Since the oscillator vibrates only around the neutral position, theoretically $\mathcal{F}\{\ddot{x}_{\mathrm{rel}}\}|_{\omega=0}$ equals zero. For the same reason, $\mathcal{F}\{\dot{x}_{\mathrm{rel}}\}|_{\omega=0} = 0$ in the second integration. Furthermore, in order to avoid the drift effect, the spectrum below 10 Hz, i.e., $20\pi$ rad/s, is forced to zero [50,51], so that

$$ \dot{x}_{\mathrm{rel}} = \mathcal{F}^{-1} \left\{ \frac{1}{j \omega} \mathcal{F} \{ \ddot{x}_{\mathrm{rel}} \} \right\} \bigg|_{\omega > 20\pi~\mathrm{rad/s}}, \tag{23} $$

$$ x_{\mathrm{rel}} = \mathcal{F}^{-1} \left\{ \frac{1}{j \omega} \mathcal{F} \{ \dot{x}_{\mathrm{rel}} \} \right\} \bigg|_{\omega > 20\pi~\mathrm{rad/s}} = \mathcal{F}^{-1} \left\{ \left( \frac{1}{j \omega} \right)^{2} \mathcal{F} \{ \ddot{x}_{\mathrm{rel}} \} \right\} \bigg|_{\omega > 20\pi~\mathrm{rad/s}}. \tag{24} $$
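A sketch of one such frequency-domain integration step, assuming uniformly sampled signals; applying the function twice yields the displacement as in Equation (24):

```python
import numpy as np

def integrate_highpass(a, fs, f_cut=10.0):
    """One integration in the frequency domain (division by j*omega),
    with all spectral lines at or below f_cut zeroed to suppress drift."""
    spec = np.fft.fft(a)
    freq = np.fft.fftfreq(a.size, d=1.0 / fs)
    out = np.zeros_like(spec)
    keep = np.abs(freq) > f_cut          # zero the spectrum below f_cut
    out[keep] = spec[keep] / (1j * 2.0 * np.pi * freq[keep])
    return np.fft.ifft(out).real
```

For a pure 30 Hz tone sampled over an integer number of periods, one application recovers the velocity and a second application the displacement up to numerical precision.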
In experiments with impulse excitations, the amplitudes of the signals decay over time. To obtain a sufficient signal-to-noise ratio, only the portion of a signal with large amplitude is used for identification, i.e., the first half of the signal. For the sweep excitation, zeroing the spectrum below 10 Hz can distort the reconstructed velocity and displacement during the low-frequency part of the excitation. Therefore, only the data in the interval from 20% to 100% of the time span were used for identification. The selected portions of the signals are visualized in Figure 16.

5.3. Results Using Impulse Excitation

The identified coefficients and their deviations from five experiments using impulse excitation are summarized in Table 4. The models identified by LSPL have identical sparseness for polynomial orders 3 and 5. Except in the fifth experiment, the coefficients $\xi_2$, $\xi_3$, and $\xi_6$ take non-zero values, i.e., the factors of the candidate functions $x_{\mathrm{rel}}$, $\dot{x}_{\mathrm{rel}}$, and $\dot{x}_{\mathrm{rel}}^2$, respectively. In the fifth experiment, the model is even more compact because the candidate function $\dot{x}_{\mathrm{rel}}^2$ is excluded. The excitation magnitude has three levels, with maximum excitation forces of about 3.4 N, 6.7 N, and 13.1 N.
The coefficients $\xi_2$ and $\xi_3$ are consistent across the five experiments. Derived from Table 4, Table 5 shows coefficients of variation of 0.07% and 2% for $\xi_2$ and $\xi_3$, respectively. Compared with the theoretical values, the identified coefficients $\xi_2$ and $\xi_3$ show consistent offsets of approximately 8.3% and 28.3%, respectively, in Table 4.
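These statistics can be reproduced from the $\xi_2$ column of Table 4; for example:

```python
import numpy as np

# The five xi_2 values identified with impulse excitation (Table 4)
xi2 = np.array([-47360.0, -47270.0, -47300.0, -47305.0, -47325.0])
mean = xi2.mean()        # sample mean, -47,312
std = xi2.std(ddof=1)    # sample standard deviation, approx. 33.3
cv = std / abs(mean)     # coefficient of variation, approx. 0.07 %
```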
The coefficient ξ 6 varies by 84 % . It can be seen that the identified coefficient ξ 6 decreases to zero as the excitation force increases, as shown in Figure 17. The coefficient ξ 6 thus depends strongly on the setup of the experiment.

5.4. Results Using Sweep Signal

System identification with a sweep signal provides compact SINDy models, each with two identified parameters, $\xi_2$ and $\xi_3$. The excitations have three levels with force peaks of 5.4 N, 10.6 N, and 21.6 N. Under these different settings, the identified coefficients match well with the results from the experiments with impulse excitations, as shown in Table 6.
Relating the identified coefficients $\xi_2$ and $\xi_3$ to the excitation force shows that both decrease slightly with higher excitation strength. At the same time, the coefficient of determination $R^2$ decreases and the relative errors with respect to the theoretical values increase. Figure 18a,b shows the linear dependencies of the coefficients $\xi_2$ and $\xi_3$ on the maximum excitation strength $F_{\mathrm{max}}$. The lines through the data points were obtained by linear regression. The estimated linear relationships are

$$ \xi_2 = -\left( 46{,}263 + 52 \, F_{\mathrm{max}} \right) \, \mathrm{s}^{-2}, \tag{25} $$

$$ \xi_3 = -\left( 20.90 + 0.037 \, F_{\mathrm{max}} \right) \, \mathrm{s}^{-1}. \tag{26} $$
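The fits in Equations (25) and (26) can be reproduced from the values in Table 6:

```python
import numpy as np

# Identified coefficients over the maximum excitation force (Table 6)
F = np.array([5.4, 10.6, 21.6])                 # F_max in N
xi2 = np.array([-46556.0, -46790.0, -47385.0])  # in s^-2
xi3 = np.array([-21.1, -21.3, -21.7])           # in s^-1

slope2, intercept2 = np.polyfit(F, xi2, 1)  # approx. -52 and -46,263
slope3, intercept3 = np.polyfit(F, xi3, 1)  # approx. -0.037 and -20.90
```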
A possible explanation lies in the numerical integration, because zeroing a spectral line of a Fourier series corresponds to subtracting a sine signal with the same frequency, amplitude, and phase in the time domain. The time series of velocity and displacement can thus be compressed. Another potential reason is the absence of terms with higher polynomial orders. Verifying these hypotheses and improving the numerical integration can be investigated in further work. Overall, compared with the impulse excitation, the governing equation identified using sweep signals shows more consistent results.

6. Conclusions and Outlook

The SINDy framework has rapidly developed into different extensions and has been applied to various dynamic systems in the past few years. In this work, we introduce a sparse regression procedure for vibration systems based on the SINDy framework and the Least Squares Post-LASSO (LSPL) algorithm. In the numerical uncertainty analysis, we compared the two regression methods LSST and LSPL in terms of sparsity, convergence, oscillator eigenfrequency, and coefficient of determination in Monte Carlo Simulations. Based on a thorough analysis and comparison across these categories, the method SINDy-LSPL delivered a sparser model and a more accurate prediction of the dynamic behavior. We designed a complete workflow to identify the governing equation of a single-mass oscillator using SINDy-LSPL in experimental tests. The drift effect in the reconstructed velocity and displacement was accounted for by filtering out the low frequencies of the reconstructed signals. In this evaluation, we tested two different types of excitation signals, namely impulse and sweep signals. The results reveal that the form and strength of the excitation influence the identified result. A potential explanation of this phenomenon lies in the preprocessing of the training data. A detailed analysis should be addressed in further research.
In future work in relation to SINDy-LSPL, we could further optimize the conditioning of the regression problem by improving the signal processing technique in the pre-processing of measurement data, such as denoising of the signals, integral processes, and normalization of the data. On the other hand, the usage of SINDy-LSPL in continuous vibration systems and discrete systems with multiple DOF is also worth exploring. We will also consider integrating SINDy-LSPL into adaptive control algorithms.

Author Contributions

Conceptualization, Y.R. and C.A.; methodology, Y.R.; validation, Y.R.; resources, Y.R. and C.A.; writing—original draft preparation, Y.R.; writing—review and editing, C.A. and T.M.; supervision, C.A. and T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) grant number 57157498. The publication was funded by DFG and the Open Access Publishing Fund of Technical University of Darmstadt.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The original SINDy algorithm is provided in the supplemental material of [24].

Acknowledgments

The authors would like to thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding Y.R. within the Sonderforschungsbereich (SFB, Collaborative Research Center) 805 “Control of Uncertainties in Load-Carrying Structures in Mechanical Engineering”—project number: 57157498. The authors also thank the Open Access Publishing Fund of Technical University of Darmstadt for supporting the publication. The authors further thank Roland Platz for the possibility to perform experiments on the test stand.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Stender, M.; Oberst, S.; Hoffmann, N. Recovery of differential equations from impulse response time series data for model identification and feature extraction. Vibration 2019, 2, 25–46.
2. Mangan, N.M.; Askham, T.; Brunton, S.L.; Kutz, J.N.; Proctor, J.L. Model selection for hybrid dynamical systems via sparse regression. Proc. R. Soc. A 2019, 475, 20180534.
3. Loiseau, J.C.; Brunton, S.L. Constrained sparse Galerkin regression. J. Fluid Mech. 2018, 838, 42–67.
4. Unbehauen, H.; Bohn, C. Identifikation Dynamischer Systeme: Methoden zur Modellbildung Anhand von Messungen; Springer: Wiesbaden, Germany, 2016.
5. Oberst, S. Nonlinear Dynamics: Towards a paradigm change via evidence-based complex dynamics modelling. In Proceedings of the 2018 Noise and Vibration Emerging Methods (NOVEM), Ibiza, Spain, 7–9 May 2018.
6. Kerschen, G.; Worden, K.; Vakakis, A.F.; Golinval, J.C. Past, present and future of nonlinear system identification in structural dynamics. Mech. Syst. Signal Process. 2006, 20, 505–592.
7. Brunton, S.L.; Kutz, J.N. Methods for data-driven multiscale model discovery for materials. J. Phys. Mater. 2019, 2, 044002.
8. Noël, J.P.; Kerschen, G. Nonlinear system identification in structural dynamics: 10 more years of progress. Mech. Syst. Signal Process. 2017, 83, 2–35.
9. Mangan, N.M.; Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Inferring Biological Networks by Sparse Identification of Nonlinear Dynamics. IEEE Trans. Mol. Biol. Multi-Scale Commun. 2016, 2, 52–63.
10. Mormann, F.; Kreuz, T.; Andrzejak, R.G.; David, P.; Lehnertz, K.; Elger, C.E. Epileptic seizures are preceded by a decrease in synchronization. Epilepsy Res. 2003, 53, 173–185.
11. Rothman, P. Nonlinear Time Series Analysis of Economic and Financial Data; Dynamic Modeling and Econometrics in Economics and Finance; Springer: Boston, MA, USA, 1999; Volume 1.
12. Coca, A.E.; Correa, D.C.; Zhao, L. Computer-aided music composition with LSTM neural network and chaotic inspiration. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN 2013), Dallas, TX, USA, 4–9 August 2013; Angelov, P., Levine, D., Apolloni, B., Eds.; IEEE: Piscataway, NJ, USA, 2013; pp. 1–7.
13. Serrà, J.; Serra, X.; Andrzejak, R.G. Cross recurrence quantification for cover song identification. New J. Phys. 2009, 11, 093017.
14. Qin, S.; Zhang, Y.; Zhou, Y.L.; Kang, J. Dynamic Model Updating for Bridge Structures Using the Kriging Model and PSO Algorithm Ensemble with Higher Vibration Modes. Sensors 2018, 18, 1879.
15. Csurcsia, P.Z.; Troyer, T.D. An empirical study on decoupling PNLSS models illustrated on an airplane. IFAC-PapersOnLine 2021, 54, 673–678.
16. Wu, T.; Kareem, A. Vortex-induced vibration of bridge decks: Volterra series-based model. J. Eng. Mech. 2013, 139, 1831–1843.
17. Qi, C.; Lin, J.; Wu, Y.; Gao, F. A Wiener Model Identification for Creep and Vibration Linear and Hysteresis Nonlinear Dynamics of Piezoelectric Actuator. IEEE Sens. J. 2021, 21, 27570–27581.
18. Saleem, A.; Mesbah, M.; Al-Ratout, S. Nonlinear hammerstein model identification of amplified piezoelectric actuators (APAs): Experimental considerations. In Proceedings of the 2017 4th International Conference on Control, Decision and Information Technologies (CoDIT), Barcelona, Spain, 5–7 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 633–638.
19. Lu, X.; Liao, W.; Huang, W.; Xu, Y.; Chen, X. An improved linear quadratic regulator control method through convolutional neural network–based vibration identification. J. Vib. Control 2021, 27, 839–853.
20. Zhang, Y.; Miyamori, Y.; Mikami, S.; Saito, T. Vibration-based structural state identification by a 1-dimensional convolutional neural network. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 822–839.
21. Elmegaard, M.; Rübel, J.; Inagaki, M.; Kawamoto, A.; Starke, J. Equation-Free Continuation of Maximal Vibration Amplitudes in a Nonlinear Rotor-Bearing Model of a Turbocharger. In Proceedings of the 7th International Conference on Multibody Systems, Nonlinear Dynamics, and Control, San Diego, CA, USA, 30 August–2 September 2009; Parts A, B and C. Volume 4.
22. Kaheman, K.; Kutz, J.N.; Brunton, S.L. SINDy-PI: A robust algorithm for parallel implicit sparse identification of nonlinear dynamics. Proc. R. Soc. A 2020, 476, 20200279.
23. Chen, Z.; Sun, H. Sparse representation for damage identification of structural systems. Struct. Health Monit. 2021, 20, 1644–1656.
24. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937.
25. Shea, D.E.; Brunton, S.L.; Kutz, J.N. SINDy-BVP: Sparse identification of nonlinear dynamics for boundary value problems. Phys. Rev. Res. 2021, 3, 023255.
26. Kaiser, E.; Kutz, J.N.; Brunton, S.L. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proc. R. Soc. A 2018, 474, 20180335.
27. Fasel, U.; Kaiser, E.; Kutz, J.N.; Brunton, B.W.; Brunton, S.L. SINDy with Control: A Tutorial. arXiv 2021, arXiv:2108.13404.
28. Rudy, S.; Alla, A.; Brunton, S.L.; Kutz, J.N. Data-driven identification of parametric partial differential equations. SIAM J. Appl. Dyn. Syst. 2019, 18, 643–660.
29. Schaeffer, H. Learning partial differential equations via data discovery and sparse optimization. Proc. R. Soc. A 2017, 473, 20160446.
30. Tibshirani, R. Regression Shrinkage and Selection Via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
31. Baraniuk, R.G. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–120.
32. Brunton, S.L.; Tu, J.H.; Bright, I.; Kutz, J.N. Compressive sensing and low-rank libraries for classification of bifurcation regimes in nonlinear dynamical systems. SIAM J. Appl. Dyn. Syst. 2014, 13, 1716–1732.
33. Kutz, J.N.; Tu, J.H.; Proctor, J.L.; Brunton, S.L. Compressed sensing and dynamic mode decomposition. J. Comput. Dyn. 2016, 2, 165–191.
34. Zheng, P.; Askham, T.; Brunton, S.L.; Kutz, J.N.; Aravkin, A.Y. A Unified Framework for Sparse Relaxed Regularized Regression: SR3. IEEE Access 2019, 7, 1404–1423.
35. Belloni, A.; Chernozhukov, V. Least squares after model selection in high-dimensional sparse models. Bernoulli 2013, 19, 521–547.
36. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kaiser, E.; Kutz, J.N. Chaos as an intermittently forced linear system. Nat. Commun. 2017, 8, 19.
37. Champion, K.; Lusch, B.; Kutz, J.N.; Brunton, S.L. Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. USA 2019, 116, 22445–22451.
38. Kovacic, I.; Brennan, M.J. The Duffing Equation: Nonlinear Oscillators and Their Phenomena; Wiley: Oxford, UK, 2011.
39. Harrell, F.E. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis, 2nd ed.; Springer Series in Statistics; Springer: Cham, Switzerland, 2015.
40. Zhang, L.; Schaeffer, H. On the Convergence of the SINDy Algorithm. Multiscale Model. Simul. 2019, 17, 948–972.
41. Meng, F. Modeling of Moving Sound Sources Based on Array Measurements. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2018.
42. Ljung, L. System Identification Toolbox: User’s Guide; Version: Matlab 2019a; MathWorks, Inc.: Natick, MA, USA, 2019.
43. Fürnkranz, J.; Gamberger, D.; Lavrač, N. Foundations of Rule Learning; Cognitive Technologies; Springer: Berlin/Heidelberg, Germany, 2012.
44. Shampine, L.F.; Reichelt, M.W. The MATLAB ODE Suite. SIAM J. Sci. Comput. 1997, 18, 1–22.
45. Nelles, O. Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models; Springer: Berlin/Heidelberg, Germany, 2001.
46. Bonamente, M. Statistics and Analysis of Scientific Data, 2nd ed.; Graduate Texts in Physics; Springer: New York, NY, USA, 2017.
47. Platz, R.; Enss, G.C. Comparison of uncertainty in passive and active vibration isolation. In Model Validation and Uncertainty Quantification; Conference Proceedings of the Society for Experimental Mechanics Series; Atamturktur, H., Moaveni, B., Papadimitriou, C., Schoenherr, T., Eds.; Springer: Cham, Switzerland, 2015; Volume 3, pp. 15–25.
48. Lenz, J.; Platz, R. Quantification and Evaluation of Parameter and Model Uncertainty for Passive and Active Vibration Isolation. In Model Validation and Uncertainty Quantification; Barthorpe, R., Ed.; Springer International Publishing: Cham, Switzerland, 2020; Volume 3, pp. 135–147.
49. Phillips, C.L.; Parr, J.M.; Riskin, E.A. Signals, Systems, and Transforms, 5th ed.; Pearson: Boston, MA, USA, 2014.
50. Neitzel, F.; Schwanebeck, T.; Schwarz, W. Zur Genauigkeit von Schwingwegmessungen mit Hilfe von Beschleunigungs- und Geschwindigkeitssensoren. Allg. Vermess.-Nachrichten (AVN) 2007, 6, 202–211.
51. Hofmann, S. Numerische Integration von Beschleunigungssignalen. Mitteilungen Inst. für Maschinenwesen Tech. Univ. Clausthal 2013, 38, 103–114.
Figure 1. Flowchart of (a) LSST and (b) LSPL.
Figure 2. Comparison of the amplitudes of velocity signals with initial states $x(0)$, $x_1^*(0)$, and $x_2^*(0)$.
Figure 3. Distribution of NNZC of the methods (a) LSST and (b) LSPL, $\mathrm{Ord} = 3$, $\epsilon_N = 0.005$.
Figure 4. Median and quantile of NNZC with methods (a) LSST and (b) LSPL. The horizontal dotted lines mark the maximal numbers of candidate functions for each polynomial order.
Figure 5. Divergence ratio of LSST with $x_1^*(0)$ up to $\mathrm{Ord} = 4$.
Figure 6. Divergence ratios with the two methods (LSST and LSPL) and two initial states ($x_1^*(0)$ and $x_2^*(0)$): (a) $x_1^*(0)$, LSST; (b) $x_1^*(0)$, LSPL; (c) $x_2^*(0)$, LSST; and (d) $x_2^*(0)$, LSPL.
Figure 7. Distribution of the natural frequency from the training data set.
Figure 8. Median and quantile of error in frequency with the initial state $x_1^*(0)$ and the two methods (a) LSST and (b) LSPL.
Figure 9. Frequency variation with the two methods (LSST and LSPL) and two initial states ($x_1^*(0)$ and $x_2^*(0)$): (a) $x_1^*(0)$, LSST; (b) $x_1^*(0)$, LSPL; (c) $x_2^*(0)$, LSST; and (d) $x_2^*(0)$, LSPL.
Figure 10. Median and quantile of $R^2$ with the two methods (a) LSST and (b) LSPL.
Figure 11. Boxplot of $R^2$ of LSST and LSPL with $\mathrm{Ord} = 3$ and $\epsilon_N = 0.005$.
Figure 12. (a) Components of the test bench, (b) details of the oscillator, (c) accelerometers on the moving mass and (d) on the base.
Figure 13. Model of the oscillator.
Figure 14. Impulse excitation force no. 1.
Figure 15. (a) Time-domain signal and (b) spectrogram of the sweep excitation.
Figure 16. The selected intervals from measurements with (a) impulse excitation and (b) sweep signal excitation.
Figure 17. Dependency of $\xi_6$ on the magnitude of the excitation force.
Figure 18. Linear dependency of the identified coefficients (a) $\xi_2$ and (b) $\xi_3$ on the excitation force $F_{\mathrm{max}}$.
Table 1. Oscillator parameters in numerical uncertainty analysis.

| Parameter | Value |
| --- | --- |
| $m$ in kg | $\mathcal{N}(5, 0.05^2)$ |
| $d$ in kg/s | $\mathcal{N}(5, 0.5^2)$ |
| $k$ in kg/s$^2$ | $\mathcal{N}(2000, 66.7^2)$ |
| $\epsilon_N$ | $\{0, 0.0025, 0.005, 0.0075, 0.01\}$ |
| $x(0)$ | $[0; 1]$ |
| $x_1^*(0)$ | $[0.1; 0]$ |
| $x_2^*(0)$ | $[1; 0]$ |
Table 2. Distribution of $R^2$ of LSST and LSPL with $\mathrm{Ord} = 3$ and $\epsilon_N = 0.005$.

| | LSST | LSPL |
| --- | --- | --- |
| Median | 0.9996 | 0.9997 |
| Upper quartile | 0.9998 | 0.9999 |
| Lower quartile | 0.9992 | 0.9994 |
| Maximum | 1 | 1 |
| Minimum | 0.9936 | 0.9962 |
| Number of points | 10,000 | 10,000 |
| Number of outliers | 474 | 476 |
Table 3. Theoretical parameters of the oscillator, modified from [47,48].

| Parameter | Value | Description |
| --- | --- | --- |
| $m$ | 0.755 kg | moving mass |
| $k$ | $3.298 \times 10^4$ N/m | stiffness |
| $D$ | 4% | damping ratio |
| $\omega_0^2$ | 43,682 s$^{-2}$ | coefficient for $x_{\mathrm{rel}}$ in Equation (19) |
| $\omega_0$ | 209 s$^{-1}$ | angular eigenfrequency |
| $f_0$ | 33.3 Hz | eigenfrequency |
| $2 D \omega_0$ | 16.7 s$^{-1}$ | coefficient for $\dot{x}_{\mathrm{rel}}$ in Equation (19) |
Table 4. Identified coefficients using impulse excitation and their errors compared with the theoretical values from Table 3.

| No. | $F_{\mathrm{max}}$ in N | $\xi_2$ | $\xi_3$ | $\xi_6$ | $\|\xi_2 + \omega_0^2\| / \omega_0^2$ | $\|\xi_3 + 2D\omega_0\| / (2D\omega_0)$ | $R^2$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 3.39 | −47,360 | −21.6 | 1498.5 | 8.4% | 29.3% | 0.9957 |
| 2 | 3.36 | −47,270 | −22.0 | 2381.0 | 8.2% | 31.5% | 0.9957 |
| 3 | 6.62 | −47,300 | −21.4 | 777.7 | 8.3% | 28.4% | 0.9970 |
| 4 | 6.72 | −47,305 | −21.4 | 705.3 | 8.3% | 28.1% | 0.9957 |
| 5 | 13.11 | −47,325 | −20.8 | 0 | 8.3% | 24.4% | 0.9959 |
| Mean | – | −47,312 | −21.4 | 1072.5 | 8.3% | 28.3% | 0.9960 |
Table 5. Mean value, standard deviation, and coefficient of variation of the identified coefficients using impulse excitation.

| Coefficient | Mean $\bar{\xi}_i$ | Standard deviation $s(\xi_i)$ | Coefficient of variation $s(\xi_i)/\bar{\xi}_i$ |
| --- | --- | --- | --- |
| $\xi_2$ | −47,312 | 33.3 | 0.07% |
| $\xi_3$ | −21.4 | 0.43 | 2% |
| $\xi_6$ | 1072.5 | 903.66 | 84% |
Table 6. Identified coefficients using sweep signal excitation and their errors compared with the theoretical values.

| No. | $F_{\mathrm{max}}$ in N | $\xi_2$ | $\xi_3$ | $\|\xi_2 + \omega_0^2\| / \omega_0^2$ | $\|\xi_3 + 2D\omega_0\| / (2D\omega_0)$ | $R^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 5.4 | −46,556 | −21.1 | 6.6% | 26.6% | 0.9998 |
| 2 | 10.6 | −46,790 | −21.3 | 7.1% | 27.7% | 0.9997 |
| 3 | 21.6 | −47,385 | −21.7 | 8.5% | 29.7% | 0.9996 |
Share and Cite

Ren, Y.; Adams, C.; Melz, T. Uncertainty Analysis and Experimental Validation of Identifying the Governing Equation of an Oscillator Using Sparse Regression. Appl. Sci. 2022, 12, 747. https://doi.org/10.3390/app12020747
