Article

Data-Completion and Model Correction by Means of Evanescent Regularization

Chady Ghnatios, Di Jiang, Yves Tourbier, Alain Cimetière and Francisco Chinesta
1 PIMM Laboratory, Arts et Métiers Institute of Technology, CNRS, CNAM, HESAM Université, 151 Boulevard de l’Hôpital, 75013 Paris, France
2 Renault, 1 Avenue du Golf, 78084 Guyancourt, France
3 Pprime Institute, University of Poitiers, 2 Bd des Frères Lumière, 86360 Chasseneuil-du-Poitou, France
4 CNRS, CREATE Ltd., 1 Create Way, #08-01 CREATE Tower, Singapore 138602, Singapore
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9616; https://doi.org/10.3390/app13179616
Submission received: 26 June 2023 / Revised: 9 August 2023 / Accepted: 22 August 2023 / Published: 25 August 2023

Abstract: System components are often regarded only as parts of a whole system, especially when it comes to data-driven modeling. Thus, subsystem modeling is generally disregarded when building a data-driven response, especially since multiple subsystem outputs are never measured in real applications. However, subsystem knowledge and accurate modeling are of utmost importance when aiming to repair, tune or troubleshoot a system. This work proposes a holistic modeling of subsystems in an embedded-system setting. A hybrid modeling starting from the physics-based model is proposed, correcting or enhancing the model and predicting output variables, even when a measurement is never available for some of those variables. The process relies on the variables’ history and employs an adjoint-free neural ordinary differential equation technique, along with an evanescent regularization, to enhance the convergence of the unmeasurable variables. The updated model converges to the exact measurements, for both the measurable and the unmeasurable variables. Multiple examples are presented using synthetic data, to allow an easy evaluation of the hidden or unmeasurable variables. The relative error offered by the updated model is around 0.001% for the measurable quantities and 0.1% for the unmeasurable ones.

1. Introduction and Related Works

Data-driven modeling is gaining popularity in engineering, and novel applications appear on a daily basis. The combination of data-driven modeling with the more established physics-based modeling, to enhance diagnosis and prognosis by taking the best of both paradigms, is also an active research topic [1]. Multiple data-driven approaches combined with physical modeling are used and referred to as hybrid models [2].
Hybrid modeling has enhanced the predictive ability of the available models [3]. However, when a typical physics-based model is not accurate enough and requires a correction through a data-driven approach, industrial practitioners will ask to investigate the origin of the error. Such an investigation has been hindered until now by the fact that many intermediate variables linking/connecting subsystems are never measured, nor are they measurable. Thus, component-by-component data-driven modeling becomes tricky to achieve, unless a methodology for data completion is available.
Multiple works exist on the completion of missing data [4]; however, these works focus on data imputation, when data are missing from a data set. Data can be missing completely at random, missing at random, or missing not at random. In the present work, by contrast, the data related to a given output are never measured, and are probably unmeasurable. Unmeasurable variables have recently been considered by the scientific community, with several applications in the engineering field [5]. In [5], the authors used physical modeling and simulation to complement a data-driven approach with the missing unmeasurable data; however, no data-driven correction methodology was available to update these unmeasurable data. In [6], the authors aimed to build the most suitable differential equation of a physical domain by using a pool of multiple linear and non-linear operators. The discovered physical equations can later be used to predict missing data in regions of the same domain where the data are not measured. This is suitable in the case of continuum mechanics and continuous quantities in a domain, obeying a certain differential equation in space and time. However, this approach is not suitable when the learned quantities are not related to each other through differentiation of the inputs. In another work [7], the authors removed or hid the unmeasurable quantities by substituting the equations of the unmeasurable parameters into the equations of the measurable ones, without needing to compute the unmeasurable values.
Multiple unmeasurable variables are listed in the literature, and workarounds are being created to overcome the resulting limitations [8]. The present work aims at addressing the problem of non-observed quantities of interest in a dynamical system. The method relies on the use of Takens’ theorem. In fact, Takens’ theorem affirms that the state of a system can be reconstructed from previous historical observations of its outputs [9]. Thus, considering the historical evolution of a system, one can predict existing dependencies, even if not all the inputs are observable, using mutual information [10].
This work proposes a holistic framework for model updating and correction through a data-driven methodology, which also updates the unmeasurable variables, even though they are never observed. To the best of our knowledge, this problem has not been addressed before. The starting point of the proposed methodology is the existence of prior knowledge, a model, which requires enhancement or correction. In Section 2, the available simulation is improved based on measured real data, available only for the measurable quantities of interest. In Section 3, a hybrid modeling of the available simulation, enhancing the results, is built. The methodology is demonstrated on an academic use case to clearly illustrate the adopted approach, without any loss of generality.

2. Correction of the Available Model

This section starts by introducing a simple model, used to explain the proposed methodology. Details on the approach and the results are given afterwards.

2.1. Methods

This section revisits some numerical methods and describes those used in this work.

2.1.1. Dynamical Model

We consider that a model is available, coming from system modeling software, for example. For the sake of simplicity, we consider:
$$\frac{d\mathbf{y}}{dt} = \mathbf{A}\,\mathbf{y} \qquad (1)$$
where $\mathbf{A}$ is a non-singular and diagonalizable matrix with negative eigenvalues, to ensure stability. Being diagonalizable with negative eigenvalues is not a limitation of the proposed methodology; however, these properties allow us to define sound physical models. The simulated problem is supposed to be less accurate than the exact one, involving the matrix $\mathbf{B}$ such that:
$$\mathbf{B} = \mathbf{A} + \boldsymbol{\epsilon}_{\mathbf{A}} \qquad (2)$$
with $\boldsymbol{\epsilon}_{\mathbf{A}}$ small enough. Thus, the modeling and its simulation consider:
$$\frac{d\mathbf{y}}{dt} = \mathbf{B}\,\mathbf{y} \qquad (3)$$
The discrete counterparts of Equations (1) and (3) lead to $\mathbf{y}^e$ and $\mathbf{y}^s$, the reference and simulated solutions, respectively. Thus, using a simple explicit Euler integration scheme in time, one can write at the $i$-th time step:
$$\begin{aligned}
\mathbf{y}^e_{i+1} &= \mathbf{y}^e_i + \Delta t\,\mathbf{A}\,\mathbf{y}^e_i \\
\mathbf{y}^s_{i+1} &= \mathbf{y}^s_i + \Delta t\,\mathbf{B}\,\mathbf{y}^s_i
\end{aligned} \qquad (4)$$
where $\Delta t$ is the time step.
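To make the setting concrete, a minimal Python sketch of this pair of integrations might look as follows. The $2 \times 2$ matrices, the initial condition and the reading of Equation (2) as a proportional perturbation of $\mathbf{A}$ are illustrative assumptions of this sketch; the actual $5 \times 5$ matrices used in the paper are given in Appendix A.

```python
import numpy as np

# Illustrative 2x2 stand-ins for A and B; the 5x5 matrices used in the
# paper are listed in Appendix A. Equation (2) is read here as a small
# proportional perturbation of A (an assumption made for this sketch).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
B = A + 0.1 * A

dt, n_t = 0.01, 1000
y_e = np.empty((n_t + 1, 2))   # reference trajectory, Equation (1)
y_s = np.empty((n_t + 1, 2))   # simulated trajectory, Equation (3)
y_e[0] = y_s[0] = [1.0, 1.0]

# Explicit Euler scheme of Equation (4)
for i in range(n_t):
    y_e[i + 1] = y_e[i] + dt * A @ y_e[i]
    y_s[i + 1] = y_s[i] + dt * B @ y_s[i]
```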
The model update and correction algorithm takes as input a partial measurement of $\mathbf{y}$, named $\mathbf{y}^m$, and aims to predict both $\mathbf{y}^m$, of dimension $N_m$, and $\mathbf{y}^{nm}$, of dimension $N_{nm}$, the non-measurable part of $\mathbf{y}$, such that $N_m + N_{nm} = N$, the dimension of the quantities of interest $\mathbf{y}$. First, the measurable and non-measurable parts of the model are separated such that:
$$\begin{pmatrix} \mathbf{y}^{s,m}_{i+1} \\ \mathbf{y}^{s,nm}_{i+1} \end{pmatrix} =
\begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix} +
\Delta t \begin{pmatrix} \mathbf{B}^{m,m} & \mathbf{B}^{m,nm} \\ \mathbf{B}^{nm,m} & \mathbf{B}^{nm,nm} \end{pmatrix}
\begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix} \qquad (5)$$
Identifying the components of matrix $\mathbf{B}$ in the previous equation seems a tricky issue because, at each time step $t_i$, the solution involves the hidden variables at the previous time step $t_{i-1}$. Therefore, one could think that there are too many unknowns to be identified: the components of $\mathbf{B}$ and the hidden state at each time step, a number that exceeds the number of available equations, the latter being the number of observable states at each time step.
However, by using recurrence, the solution at any time step depends only on the initial one, which consists of some observable and some hidden values. Thus, the number of equations (involving the observable state at each time step) should be large enough (when considering a sufficient number of time steps, which is usually the case) to compute the matrix and the hidden state at the initial time only.
Since this work also aims at identifying a correction for both the measurable and non-measurable outputs, substituting $\mathbf{y}^{s,nm}$ by means of the Schur complement is not the most suitable approach. Instead, a second time iteration shows that the measurable and non-measurable quantities are coupled:
$$\begin{aligned}
\begin{pmatrix} \mathbf{y}^{s,m}_{i+2} \\ \mathbf{y}^{s,nm}_{i+2} \end{pmatrix}
&= \begin{pmatrix} \mathbf{y}^{s,m}_{i+1} \\ \mathbf{y}^{s,nm}_{i+1} \end{pmatrix}
+ \Delta t \begin{pmatrix} \mathbf{B}^{m,m} & \mathbf{B}^{m,nm} \\ \mathbf{B}^{nm,m} & \mathbf{B}^{nm,nm} \end{pmatrix}
\begin{pmatrix} \mathbf{y}^{s,m}_{i+1} \\ \mathbf{y}^{s,nm}_{i+1} \end{pmatrix} \\
&= \begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix}
+ \left[ 2\,\Delta t \begin{pmatrix} \mathbf{B}^{m,m} & \mathbf{B}^{m,nm} \\ \mathbf{B}^{nm,m} & \mathbf{B}^{nm,nm} \end{pmatrix}
+ \Delta t^2 \begin{pmatrix} \mathbf{B}^{m,m} & \mathbf{B}^{m,nm} \\ \mathbf{B}^{nm,m} & \mathbf{B}^{nm,nm} \end{pmatrix}^{\!2} \right]
\begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix} \\
&= \begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix}
+ 2\,\Delta t \begin{pmatrix} \mathbf{B}^{m,m} & \mathbf{B}^{m,nm} \\ \mathbf{B}^{nm,m} & \mathbf{B}^{nm,nm} \end{pmatrix}
\begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix}
+ \Delta t^2 \begin{pmatrix}
\mathbf{B}^{m,m}\mathbf{B}^{m,m} + \mathbf{B}^{m,nm}\mathbf{B}^{nm,m} &
\mathbf{B}^{m,m}\mathbf{B}^{m,nm} + \mathbf{B}^{m,nm}\mathbf{B}^{nm,nm} \\
\mathbf{B}^{nm,m}\mathbf{B}^{m,m} + \mathbf{B}^{nm,nm}\mathbf{B}^{nm,m} &
\mathbf{B}^{nm,m}\mathbf{B}^{m,nm} + \mathbf{B}^{nm,nm}\mathbf{B}^{nm,nm}
\end{pmatrix}
\begin{pmatrix} \mathbf{y}^{s,m}_{i} \\ \mathbf{y}^{s,nm}_{i} \end{pmatrix}
\end{aligned} \qquad (6)$$
From Equation (6), one can note that after a second iteration of the integration scheme, all the non-measurable quantities $\mathbf{y}^{s,nm}$, as well as the parts of the modeling matrix $\mathbf{B}$ multiplying them, appear in the equation of the measurable quantities $\mathbf{y}^{s,m}$. Thus, an optimization problem can be formulated on the measurable quantities only, while calculating a correction of the model matrix $\mathbf{B}$ for both the measurable and non-measurable quantities. This is only possible by taking into consideration the history of the measurable quantities $\mathbf{y}^{s,m}$.
The optimization problem can be formulated under the following form:
$$\left(\mathbf{B}, \mathbf{y}^{s,nm}_0\right) = \underset{\substack{\mathbf{B} \in \mathbb{R}^{N^2} \\ \mathbf{y}^{s,nm}_0 \in \mathbb{R}^{N_{nm}}}}{\operatorname{arg\,min}} \; \sum_{i=1}^{n_t} \sum_{j=1}^{N_m} \left( y^{e,j}_i - y^{s,j}_i \right)^2 \qquad (7)$$
where $\mathbf{y}^{s,nm}_0$ is the initial condition of the non-measurable variables, and $n_t$ is the number of time steps performed in the integration scheme. Selecting several values of $n_t$ can generate as many equations as required for the problem’s solution. However, the loss function defined in Equation (7) is highly non-linear and involves high powers of the problem’s unknowns. To improve the convergence of the optimization problem using a gradient descent algorithm, a regularization is required. We propose to use the evanescent regularization, revisited in Section 2.1.2.

2.1.2. The Evanescent Regularization

The evanescent regularization is a regularization technique initially developed to address the Cauchy problem while ensuring convergence of the quantities of interest [11]. The authors in [12] used the evanescent regularization to compute the solution inside a given domain while knowing the solution only on the boundary of the domain. In that case, the inverse problem had observable or “measurable” degrees of freedom on the boundary, and unmeasurable ones inside the domain. The use of the evanescent regularization has been demonstrated to lead to a slow convergence of the unmeasurable quantities of interest, as well as a strong convergence of the measurable ones, towards the exact solution of the problem [13]. The evanescent regularization is also used when extra boundary conditions are available on one part of the boundary, while totally missing on another part [11].
Thus, the loss function $\mathcal{L}$ is reformulated in the following form:
$$\mathcal{L} = \sum_{i=1}^{n_t} \sum_{j=1}^{N_m} \left( y^{e,j}_i - y^{s,j}_i \right)^2
+ \lambda_1 \left\| \mathbf{B}^{m,m} - \hat{\mathbf{B}}^{m,m} \right\|^2
+ \lambda_2 \left\| \mathbf{B}^{m,nm} - \hat{\mathbf{B}}^{m,nm} \right\|^2
+ \lambda_3 \left\| \mathbf{y}^{s,nm}_0 - \hat{\mathbf{y}}^{s,nm}_0 \right\|^2 \qquad (8)$$
with $\lambda_1$, $\lambda_2$ and $\lambda_3$ the evanescent regularization parameters. The hatted quantities $\hat{(\cdot)}$ are the best estimates of the regularized outputs of the optimization problem, which can be taken as the known values from the previous iteration of the gradient descent algorithm. In fact, the evanescent regularization improves the convergence of both sets of quantities, with a faster convergence of the measurable quantities of interest. Thus, the regularized optimization problem is reformulated as:
$$\left(\mathbf{B}, \mathbf{y}^{s,nm}_0\right) = \underset{\substack{\mathbf{B} \in \mathbb{R}^{N^2} \\ \mathbf{y}^{s,nm}_0 \in \mathbb{R}^{N_{nm}}}}{\operatorname{arg\,min}} \; \mathcal{L} \qquad (9)$$
To solve the optimization problem defined in Equation (9), the gradients can always be computed numerically, at the expense of dramatically increasing the computation time of the gradient descent algorithm. It is therefore preferable to use an integration scheme with automatic gradient differentiation through time steps, such as the Neural Ordinary Differential Equations (Neural ODE) methods [14]. This will allow a fast and reliable gradient calculation through automatic differentiation and will accelerate the convergence of the optimization algorithm.
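For illustration, the regularized loss of Equation (8) can be assembled in a few lines. The following PyTorch sketch is not from the original paper: the function name, the argument layout and the default coefficients are our own assumptions, with the state vector ordered so that its first $N_m$ components are the measurable ones.

```python
import torch

def evanescent_loss(y_traj, y_meas, B, B_hat, y0_nm, y0_nm_hat,
                    n_m, lam1=1e-5, lam2=1e-5, lam3=1.0):
    """Loss of Equation (8): misfit on the measurable outputs plus the
    evanescent terms pulling B and the hidden initial condition towards
    their previous estimates (the hatted quantities).
    y_traj: (n_t, N) predicted states; y_meas: (n_t, n_m) measurements."""
    misfit = ((y_traj[:, :n_m] - y_meas) ** 2).sum()
    reg = (lam1 * ((B[:n_m, :n_m] - B_hat[:n_m, :n_m]) ** 2).sum()
           + lam2 * ((B[:n_m, n_m:] - B_hat[:n_m, n_m:]) ** 2).sum()
           + lam3 * ((y0_nm - y0_nm_hat) ** 2).sum())
    return misfit + reg
```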

2.1.3. Time Integration and Adjoint-Free Neural ODE

The use of a neural Ordinary Differential Equation (ODE) with the adjoint method involves the integration of another differential equation, backward in time, to compute the gradients. This integration can impose very small time steps and dramatically increase the computation time. Therefore, in this work, this issue is alleviated by proposing an adjoint-free neural ODE. This adjoint-free neural ODE uses a neural network that estimates the derivatives of the quantities of interest $\mathbf{y}$, while computing their gradients with respect to the neural network training variables exactly, through automatic differentiation. In this case, the matrix $\mathbf{B}$ appearing in Equation (9) is interpreted as the weights of a single-layer dense neural network with no bias.
To illustrate the adjoint-free neural ODE, the derivative of the quantities of interest is written as:
$$\frac{d\mathbf{y}}{dt} = h(t, \mathbf{y}; \mathbf{p}) \qquad (10)$$
where $h(t, \mathbf{y}; \mathbf{p})$ is a neural network and $\mathbf{p}$ denotes any input parameters. The feed-forward evaluation is possible through explicit integration using:
$$\begin{aligned}
\mathbf{y}_{i+1} &= \mathbf{y}_i + \Delta t\, h(t_i, \mathbf{p}_i, \mathbf{y}_i) \\
\mathbf{y}_{i+2} &= \mathbf{y}_{i+1} + \Delta t\, h(t_{i+1}, \mathbf{p}_{i+1}, \mathbf{y}_{i+1}) \\
&= \mathbf{y}_i + \Delta t\, h(t_i, \mathbf{p}_i, \mathbf{y}_i) + \Delta t\, h\!\left(t_{i+1}, \mathbf{p}_{i+1}, \mathbf{y}_i + \Delta t\, h(t_i, \mathbf{p}_i, \mathbf{y}_i)\right) \\
&\;\;\vdots
\end{aligned} \qquad (11)$$
The differentiation of $\mathbf{y}_{i+n_t}$ with respect to the trainable parameters $\boldsymbol{\theta}$ of $h$ can be performed in a straightforward manner by means of the chain rule, using the property $(f \circ g)'(\boldsymbol{\theta}) = f'(g(\boldsymbol{\theta})) \cdot g'(\boldsymbol{\theta})$. Thus, if $\boldsymbol{\theta}$ is the vector of the neural network trainable variables, one can write $(h \circ h)'(\boldsymbol{\theta}) = h'(h(\boldsymbol{\theta})) \cdot h'(\boldsymbol{\theta})$. The gradients of $\mathbf{y}_{i+n_t}$ are therefore easily computed using the chain rule, while $h'(\boldsymbol{\theta})$ is obtained through automatic differentiation.
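In practice, no chain rule needs to be written by hand: an automatic differentiation framework propagates the gradients through the unrolled integration steps. A minimal, self-contained check of this mechanism, using an arbitrary random matrix and a dummy scalar loss (both ours, for demonstration only), could read:

```python
import torch

n, dt, n_t = 5, 0.01, 100
B = torch.randn(n, n, requires_grad=True)  # plays the role of the single dense layer, no bias
y = torch.ones(n)

for _ in range(n_t):          # unrolled explicit Euler steps of Equation (11)
    y = y + dt * (B @ y)      # h(t, y) = B y in the linear case

loss = (y ** 2).sum()         # dummy scalar loss, for the check only
loss.backward()               # chain rule through all n_t steps, automatically
print(B.grad.shape)           # torch.Size([5, 5])
```

Every Euler step is recorded in the computational graph, so a single backward call yields the gradient of the final state with respect to $\mathbf{B}$, exactly as the unrolling of Equation (11) suggests.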
The neural ODE can therefore compute several values of $\mathbf{y}_{i+n_t}$, to be used in the loss function and the optimization problem defined in Equation (9), which can be rewritten in its general form:
$$\left(\boldsymbol{\theta}, \mathbf{y}^{s,nm}_0\right) = \underset{\substack{\boldsymbol{\theta} \in \mathbb{R}^{N^2} \\ \mathbf{y}^{s,nm}_0 \in \mathbb{R}^{N_{nm}}}}{\operatorname{arg\,min}} \; \mathcal{L} \qquad (12)$$
with $\boldsymbol{\theta}$ the model parameters to optimize, which in our selected example are the values of the matrix $\mathbf{B}$, of dimensionality $N^2$ ($\boldsymbol{\theta} \equiv \mathbf{B}$). The final algorithm for updating the simulation model and training the model weights is illustrated in Algorithm 1. The ADAM stochastic optimization algorithm [15] is used for the gradient descent in our examples.
Algorithm 1 Algorithm used for optimizing the simulation model and identifying the initial condition of the unmeasurable parameters, using the adjoint-free neural ODE and the evanescent regularization
  • Initialize $\boldsymbol{\theta}$ as $\hat{\boldsymbol{\theta}}$, $\mathbf{y}^{s,nm}_0$ as $\hat{\mathbf{y}}^{s,nm}_0$
  • while $\mathcal{L} > \epsilon$ do
  •    while $i < n_t$ do
  •      Compute $\mathbf{y}_i$ using the adjoint-free neural ODE integration, Equation (11)
  •    end while
  •    Compute $\mathcal{L}$
  •    Evaluate $\partial \mathcal{L} / \partial \boldsymbol{\theta}$ and $\partial \mathcal{L} / \partial \mathbf{y}^{s,nm}_0$
  •    $\left(\boldsymbol{\theta}, \mathbf{y}^{s,nm}_0\right) \leftarrow \mathrm{ADAM}\!\left(\boldsymbol{\theta}, \mathbf{y}^{s,nm}_0;\; \partial \mathcal{L} / \partial \boldsymbol{\theta},\; \partial \mathcal{L} / \partial \mathbf{y}^{s,nm}_0\right)$
  • end while
  • return $\boldsymbol{\theta}$; $\mathbf{y}^{s,nm}_0$
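A possible PyTorch realization of Algorithm 1 for the linear case ($\boldsymbol{\theta} \equiv \mathbf{B}$) is sketched below. The function name, the learning rate, the epoch budget and the layout of the measured data are our own assumptions, not prescriptions of the paper:

```python
import torch

def correct_model(B_sim, y0_m, y_meas, n_m, dt=0.01, n_t=1000,
                  lam=(1e-5, 1e-5, 1.0), n_epochs=2000, tol=1e-8):
    """Sketch of Algorithm 1. B_sim: (N, N) simulation matrix;
    y0_m: (n_m,) measured initial condition; y_meas: (n_t, n_m)
    measured histories of the observable outputs."""
    B = B_sim.clone().requires_grad_(True)                 # initialize theta as theta_hat
    y0_nm = torch.zeros(B_sim.shape[0] - n_m, requires_grad=True)  # hidden initial condition
    opt = torch.optim.Adam([B, y0_nm], lr=1e-3)
    B_hat, y0_nm_hat = B_sim.clone(), y0_nm.detach().clone()

    for _ in range(n_epochs):
        y = torch.cat([y0_m, y0_nm])
        misfit = 0.0
        for i in range(n_t):                               # adjoint-free forward pass, Equation (11)
            y = y + dt * (B @ y)
            misfit = misfit + ((y[:n_m] - y_meas[i]) ** 2).sum()
        L = (misfit                                        # evanescent loss, Equation (8)
             + lam[0] * ((B[:n_m, :n_m] - B_hat[:n_m, :n_m]) ** 2).sum()
             + lam[1] * ((B[:n_m, n_m:] - B_hat[:n_m, n_m:]) ** 2).sum()
             + lam[2] * ((y0_nm - y0_nm_hat) ** 2).sum())
        if L.item() < tol:                                 # while L > epsilon
            break
        opt.zero_grad()
        L.backward()                                       # gradients via automatic differentiation
        opt.step()                                         # ADAM update of (theta, y0_nm)
        B_hat = B.detach().clone()                         # evanescent update of the hatted values
        y0_nm_hat = y0_nm.detach().clone()
    return B.detach(), y0_nm.detach()
```

Note how the hatted quantities are refreshed after every gradient step, so the regularization penalizes only the deviation from the previous iterate, as described in Section 2.1.2.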

2.2. Results: Model Correction

In this section, for the sake of simplicity, a simple model with $N = 5$ is considered (given in Appendix A), together with a single dense neural network layer connecting the inputs and the outputs. No input parameters $\mathbf{p}$ are used in this case; time is the only model input. The trainable parameters $\boldsymbol{\theta}$ and $\mathbf{y}^{s,nm}_0$ are initialized with the values coming from the available simulation. The initial curves are shown in Figure 1, for the exact model (in solid lines) and the simulated one (in dashed lines). In this section, only the last output is assumed to be unmeasurable.
The initial condition of the unmeasurable variable is deliberately set to a wrong value, $y^{s,nm}_0 = 0.8$, while the exact initial conditions are considered for the measurable ones, $\mathbf{y}^{s,m}_0 = \mathbf{1}$. The training is performed on a time domain $t \in [0, 10]$ s with $n_t = 1000$ time steps and $\Delta t = 0.01$ s. The training with the adjoint-free neural ODE is performed on a portable PC within half an hour. The results after the training are illustrated in Figure 2 for the measurable and non-measurable variables. The values of the regularization coefficients used in this section are $\lambda_1 = 10^{-5}$, $\lambda_2 = 10^{-5}$, and $\lambda_3 = 1$.
The results illustrated in Figure 2 highlight the ability of the network to correct the existing simulation even though neither the initial value nor the output history of the last quantity of interest is available. The mean relative errors are:
  • Relative error on $y^{s,nm}_0$, the initial condition of the non-measurable quantity: 0.49%
  • Relative error on the measurable quantities $\mathbf{y}^m$: 0.01%
  • Relative error on the unmeasurable quantities $\mathbf{y}^{nm}$: 0.43%
This highlights the predictive ability of the method, and shows that the proposed approach is able to correct an existing simulation using the time history of only some of the quantities of interest.

3. A Hybrid Modeling Approach

3.1. A Single Unmeasurable Quantity

When it comes to hybrid modeling, the approach is similar to the one previously proposed, using the same methods described in Section 2.1. However, this approach assumes the existence of another model $\mathbf{C}$, existing in parallel to the simulation and integrated simultaneously. In the linear case, for example, the hybrid model solution $\mathbf{y}^h_{i+1}$ at any time step can be written as:
$$\begin{aligned}
\mathbf{y}^e_{i+1} &= \mathbf{y}^e_i + \Delta t\,\mathbf{A}\,\mathbf{y}^e_i \\
\mathbf{y}^s_{i+1} &= \mathbf{y}^s_i + \Delta t\,\mathbf{B}\,\mathbf{y}^s_i \\
\mathbf{y}^h_{i+1} &= \mathbf{y}^h_i + \Delta t\,\mathbf{B}\,\mathbf{y}^h_i + \Delta t\,\mathbf{C}\,\mathbf{y}^h_i
\end{aligned} \qquad (13)$$
In the non-linear situation, the hybrid model would be written as:
$$\frac{d\mathbf{y}^h}{dt} = h(t, \mathbf{p}, \mathbf{y}^h) + g(t, \mathbf{p}, \mathbf{y}^h) \qquad (14)$$
where $g$ is the correction term and $h$ is the initial simulation model, assumed non-trainable in this case. In the hybrid modeling, the loss function is changed to include an $L_2$-norm regularization with respect to the parameters of the trainable model $g(t, \mathbf{p}, \mathbf{y})$, because the correction is expected to be small with respect to $\mathbf{B}$. For instance, the new loss function $\mathcal{J}$ is written in the linear case as:
$$\begin{aligned}
\mathcal{J} = \sum_{i=1}^{n_t} \sum_{j=1}^{N_m} \left( y^{e,j}_i - y^{h,j}_i \right)^2
&+ \lambda_1 \left\| \mathbf{C}^{m,m} - \hat{\mathbf{C}}^{m,m} \right\|^2
+ \lambda_2 \left\| \mathbf{C}^{m,nm} - \hat{\mathbf{C}}^{m,nm} \right\|^2 \\
&+ \lambda_3 \left\| \mathbf{y}^{h,nm}_0 - \hat{\mathbf{y}}^{h,nm}_0 \right\|^2
+ \lambda_4 \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij}^2
+ \lambda_5 \left\| \mathbf{C}^{m,nm} - \hat{\mathbf{C}}^{m,m} \right\|^2
\end{aligned} \qquad (15)$$
where the $\lambda_4$ sum runs over all of the components of $\mathbf{C}$ and enforces sparsity in the correction, even if a stronger enforcement could be obtained by using the $L_1$ norm. The new loss function can be generalized to the non-linear case using:
$$\begin{aligned}
\mathcal{J} = \sum_{i=1}^{n_t} \sum_{j=1}^{N_m} \left( y^{e,j}_i - y^{h,j}_i \right)^2
&+ \lambda_1 \left\| \boldsymbol{\theta}^{m,m} - \hat{\boldsymbol{\theta}}^{m,m} \right\|^2
+ \lambda_2 \left\| \boldsymbol{\theta}^{m,nm} - \hat{\boldsymbol{\theta}}^{m,nm} \right\|^2 \\
&+ \lambda_3 \left\| \mathbf{y}^{h,nm}_0 - \hat{\mathbf{y}}^{h,nm}_0 \right\|^2
+ \lambda_4 \sum_{k=1}^{N_\theta} \theta_k^2
+ \lambda_5 \left\| \boldsymbol{\theta}^{m,nm} - \hat{\boldsymbol{\theta}}^{m,m} \right\|^2
\end{aligned} \qquad (16)$$
with $\boldsymbol{\theta}$ being the trainable parameters of the non-linear model $g$, having a total number of $N_\theta$ trainable parameters.
The exact error matrix $\mathbf{C}$ used in this example is shown in Appendix A. The last term, multiplying $\lambda_5$, couples the amplitudes of the equations computing the measurable part and the non-measurable part, to reduce the possibility of a scale bias. The optimization problem is now written as:
$$\left(\boldsymbol{\theta}, \mathbf{y}^{h,nm}_0\right) = \underset{\substack{\boldsymbol{\theta} \in \mathbb{R}^{N^2} \\ \mathbf{y}^{h,nm}_0 \in \mathbb{R}^{N_{nm}}}}{\operatorname{arg\,min}} \; \mathcal{J} \qquad (17)$$
where $\boldsymbol{\theta}$ are the parameters to optimize. The training is performed on a standard laptop within a few minutes, even when starting from a very wrong initial condition $y^{h,nm}_0$. The results are illustrated in Figure 3 for both the measurable and the unmeasurable quantities of interest, when performing the training over only 100 time steps, with an initial guess $y^{h,nm}_0 = 0.5$, while 1 was the correct initial condition in this example. The values of the regularization coefficients used in this example are $\lambda_1 = 10^{-3}$, $\lambda_2 = 10^{-3}$, $\lambda_3 = 10^{-3}$, $\lambda_4 = 10^{-4}$, and $\lambda_5 = 10^{-2}$.
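For completeness, the hybrid forward pass of Equation (13) and the penalty terms of Equation (15) can be sketched as follows. All names are illustrative, and the implementation of the $\lambda_5$ term as a coupling between the Frobenius norms of the two blocks is our reading of the scale-coupling described above:

```python
import torch

def hybrid_step(y, B, C, dt):
    # One Euler step of the hybrid model, Equation (13): frozen simulation
    # matrix B plus the trainable correction C.
    return y + dt * (B @ y) + dt * (C @ y)

def hybrid_penalty(C, C_hat, y0_nm, y0_nm_hat, n_m,
                   lam=(1e-3, 1e-3, 1e-3, 1e-4, 1e-2)):
    """Regularization terms of Equation (15); the data misfit over the
    measurable outputs is assembled as in the previous sketches."""
    return (lam[0] * ((C[:n_m, :n_m] - C_hat[:n_m, :n_m]) ** 2).sum()
            + lam[1] * ((C[:n_m, n_m:] - C_hat[:n_m, n_m:]) ** 2).sum()
            + lam[2] * ((y0_nm - y0_nm_hat) ** 2).sum()
            + lam[3] * (C ** 2).sum()                  # L2 penalty on all components of C
            # lambda_5 amplitude coupling between the measurable and
            # non-measurable blocks (our reading of the term):
            + lam[4] * (C[:n_m, n_m:].norm() - C[:n_m, :n_m].norm()) ** 2)
```

In training, only $\mathbf{C}$ and the hidden initial condition carry gradients, while $\mathbf{B}$ stays frozen, which is what makes the correction explainable as a separate, additive model.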
The mean relative errors in the updated model are found to be:
  • Relative error on $y^{h,nm}_0$, the initial condition of the non-measurable quantity: 0.15%
  • Relative error on the measurable quantities $\mathbf{y}^m$: 0.0093%
  • Relative error on the unmeasurable quantities $\mathbf{y}^{nm}$: 0.104%
We can conclude that the hybrid model outperforms the direct simulation correction, with a lower number of time steps. In fact, increasing the number of available time steps increases the information that the machine learning algorithm can use to correct the simulation. On the other hand, increasing the number of integrated time steps increases the computation time. The approach illustrated in Section 2.2 did correct the simulation, but even with 1000 available time steps, the results were less accurate than those of the hybrid model with only 100 available time steps.

3.2. A More Complex Case

This section addresses a more complex example, where the last two variables of interest in our model are unmeasurable. This leaves 40% of our model hidden or unmeasurable. Using the same loss function given in Equation (17), with the regularization coefficients $\lambda_1 = 10^{-3}$, $\lambda_2 = 10^{-3}$, $\lambda_3 = 10^{-3}$, $\lambda_4 = 10^{-4}$ and $\lambda_5 = 10^{-2}$, the results presented in Figure 4 again prove an excellent accuracy. The initialization of the two unmeasurable quantities was set to $[0.5; 1.5]$ instead of $1$, a 50% relative error with respect to the correct initial conditions.
Figure 4 showcases the ability of the proposed methodology to correct the model with nearly half of the variables unmeasurable. The final mean relative errors for the updated model are found to be:
  • Mean relative error on $\mathbf{y}^{h,nm}_0$, the initial condition of the non-measurable quantities: 0.67%
  • Mean relative error on the measurable quantities $\mathbf{y}^m$: 0.051%
  • Mean relative error on the unmeasurable quantities $\mathbf{y}^{nm}$: 0.3%

4. Conclusions

In this work, a systematic approach to update or correct any simulation by using an intrusive correction, operating at the level of the simulation components, is proposed. This approach preserves the simulation structure and is thus fully explainable. The evanescent regularization, as well as an adjoint-free neural ODE, are leveraged to achieve the correction. The work also shows that hybrid modeling with an $L_2$-norm regularization of the cost function outperforms a direct correction of the model, even when using a lower number of available simulation and experimental time steps.

Author Contributions

Conceptualization, C.G., F.C. and Y.T.; methodology, C.G. and A.C.; software, C.G. and D.J.; validation, D.J.; formal analysis, C.G., F.C. and D.J.; writing—original draft preparation, C.G.; writing—review and editing, Y.T. and F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data generator is included in the article.

Acknowledgments

The authors acknowledge the ESI, SKF and RTE Chairs at ENSAM, the support of Renault Group, and the French ANRT through its CIFRE programme. This research is also part of the DesCartes programme and is supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The data involved in the numerical examples addressed in this paper were generated using the matrices $\mathbf{A}$ and $\mathbf{B}$ given by:
$$\mathbf{A} = \begin{pmatrix}
4.35611932 & 1.22951563 & 1.83881934 & 0.54265555 & 0.01757481 \\
1.15562685 & 3.69112028 & 0.49813577 & 0.0742603 & 0.20472346 \\
1.79339231 & 0.37881996 & 4.76496262 & 0.99545275 & 1.8469943 \\
0.65640466 & 0.03439998 & 1.15462889 & 3.33237733 & 0.67350006 \\
0.32680216 & 0.47521164 & 1.4571903 & 0.90412792 & 1.2567266
\end{pmatrix}$$
and
$$\mathbf{B} = \begin{pmatrix}
4.84013258 & 1.63964011 & 1.30937905 & 0.1723914 & 0.1572111 \\
1.63964011 & 4.10124475 & 0.03130452 & 0.44452445 & 0.06508717 \\
1.30937905 & 0.03130452 & 5.29440291 & 0.6251886 & 1.98663059 \\
0.1723914 & 0.44452445 & 0.6251886 & 3.70264148 & 0.53386377 \\
0.1572111 & 0.06508717 & 1.98663059 & 0.53386377 & 1.39636289
\end{pmatrix}$$
The considered time integration is the simplest explicit Euler scheme, with a time step $\Delta t = 0.01$ s. The induced error $\mathbf{C}$ is the difference between $\mathbf{A}$ and $\mathbf{B}$:
$$\mathbf{C} = \begin{pmatrix}
0.4840 & 0.4101 & 0.5294 & 0.3703 & 0.1396 \\
0.4840 & 0.4101 & 0.5294 & 0.3703 & 0.1396 \\
0.4840 & 0.4101 & 0.5294 & 0.3703 & 0.1396 \\
0.4840 & 0.4101 & 0.5294 & 0.3703 & 0.1396 \\
0.4840 & 0.4101 & 0.5294 & 0.3703 & 0.1396
\end{pmatrix}$$

References

  1. Chinesta, F.; Cueto, E. Empowering engineering with data, machine learning and artificial intelligence: A short introductive review. Adv. Model. Simul. Eng. Sci. 2022, 9, 21. [Google Scholar] [CrossRef]
  2. Ghnatios, C. A hybrid modeling combining the proper generalized decomposition approach to data-driven model learners, with application to nonlinear biphasic materials. Comptes Rendus Mécanique 2021, 349, 259–273. [Google Scholar] [CrossRef]
  3. Ghnatios, C.; Kestelyn, X.; Denis, G.; Champaney, V.; Chinesta, F. Learning Data-Driven Stable Corrections of Dynamical Systems-Application to the Simulation of the Top-Oil Temperature Evolution of a Power Transformer. Energies 2023, 16, 5790. [Google Scholar] [CrossRef]
  4. Sun, Y.; Li, J.; Xu, Y.; Zhang, T.; Wang, X. Deep learning versus conventional methods for missing data imputation: A review and comparative study. Expert Syst. Appl. 2023, 227, 120201. [Google Scholar] [CrossRef]
  5. Shourangiz-Haghighi, A.; Tavakolpour-Saleh, A. A neural network-based scheme for predicting critical unmeasurable parameters of a free piston Stirling oscillator. Energy Convers. Manag. 2019, 196, 623–639. [Google Scholar] [CrossRef]
  6. Chen, Z.; Liu, Y.; Sun, H. Physics-informed learning of governing equations from scarce data. Nat. Commun. 2021, 12, 6136. [Google Scholar] [CrossRef] [PubMed]
  7. Ding, B.; Pan, H. Output feedback robust model predictive control with unmeasurable model parameters and bounded disturbance. Chin. J. Chem. Eng. 2016, 24, 1431–1441. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Wang, S.; Heinrich, M.K.; Wang, X.; Dorigo, M. 3D hybrid formation control of an underwater robot swarm: Switching topologies, unmeasurable velocities, and system constraints. ISA Trans. 2023, 136, 345–360. [Google Scholar] [CrossRef] [PubMed]
  9. Takens, F. Detecting Strange Attractors in Turbulence. In Dynamical Systems and Turbulence; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1981; Volume 898, pp. 366–381. [Google Scholar]
  10. Fraser, A.; Swinney, H. Independent coordinates for strange attractors from mutual information. Phys. Rev. A 1986, 33, 1134–1140. [Google Scholar] [CrossRef] [PubMed]
  11. Delvare, F.; Cimetière, A. The Evanescent Regularization Method for Elliptic Cauchy Problems. In Inverse Problems and Computational Mechanics; Editura Academiei: Bucharest, Romania, 2011; Volume 1, pp. 101–124. [Google Scholar]
  12. Ghnatios, C.; Xu, G.; Leygue, A.; Visonneau, M.; Chinesta, F.; Cimetière, A. On the space separated representation when addressing the solution of PDE in complex domains. Discret. Contin. Dyn. Syst. S 2016, 9, 475–500. [Google Scholar] [CrossRef]
  13. Caillé, L. Méthodes de Régularisation Évanescente pour la Complétion de Données. Ph.D. Thesis, Université de Caen Normandie, Caen, France, 2018. [Google Scholar]
  14. Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018; Volume 32, pp. 1–18. [Google Scholar]
  15. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
Figure 1. Exact solution and simulation of the considered model with $N = 5$.
Figure 2. Final output of the trained neural network for both the measurable and unmeasurable outputs.
Figure 3. Final output of the trained hybrid modeling using a neural network for the correction models. The shown results are for both the measurable and unmeasurable outputs.
Figure 4. Final output of the trained hybrid modeling using a neural network for the correction models with two unmeasurable quantities $\hat{y}_4$ and $\hat{y}_5$. The shown results are for both the measurable and unmeasurable outputs.
