
Machine Learning for Prediction, Data Assimilation, and Uncertainty Quantification of Dynamical Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 17,051

Special Issue Editors


Prof. John Harlim
Guest Editor
Department of Mathematics, Department of Meteorology, Institute for CyberScience, The Pennsylvania State University, University Park, PA 16802, USA
Interests: data assimilation; data-driven modelling; stochastic computational methods; machine learning

Prof. Dimitrios Giannakis
Co-Guest Editor
Courant Institute of Mathematical Sciences, New York University, New York, NY 10012-1185, USA
Interests: geometric data analysis; spectral analysis of dynamical systems; statistical forecasting; climate dynamics

Prof. Tyrus Berry
Co-Guest Editor
Department of Mathematical Sciences, George Mason University, 4400 University Drive, MS 3F2, Fairfax, VA 22030, USA
Interests: manifold learning; non-parametric and semiparametric modeling; dynamical systems; filtering; data assimilation; control

Special Issue Information

Dear Colleagues,

Modeling dynamical systems is ubiquitous across a wide variety of applications in the sciences. As the capability to collect data advances, an important and growing challenge is to extract relevant information from large datasets in ways that improve modeling. Recent empirical results suggest that various machine learning algorithms are effective tools for approximating the solution operator of the underlying dynamics without relying on parametric modeling assumptions, instead leveraging the available datasets to learn the dynamics. While these empirical successes are an important first step, they naturally raise many practical and theoretical questions.

In this Special Issue, we particularly welcome contributions that address the following problems (other contributions relevant to the topic are also welcome):

  1. Capabilities and Limitations of Purely Data-Driven Models: A theoretical or thorough empirical study to understand the extent to which model-free (or nonparametric) techniques can be used to improve the prediction of high-dimensional complex dynamical systems. One possible line of inquiry is how criteria from information theory could be used to determine such limitations.
  2. Leveraging Information from Partial Models: Another pertinent question, particularly in scenarios with intrinsically high-dimensional dynamics, is how to efficiently augment a partial or imperfect first-principles (parametric) model with a data-driven model, in order to correct model error or to model unresolved degrees of freedom (see the sketch after this list). Prominent applications include subgrid-scale modeling and closure schemes for complex systems.
  3. Data Assimilation: High-quality prediction requires powerful data assimilation techniques to determine accurate initial conditions. How can one leverage machine learning to improve data assimilation? For example, an issue that is particularly prevalent in the data assimilation community is estimating non-stationary second-order statistics, and this may be an opportunity for model-free methods. Conversely, how can one leverage modern data assimilation methods to improve model-free forecasting techniques?
  4. Uncertainty Quantification: It is crucially important to provide confidence in a prediction through reliable uncertainty quantification (UQ). How can one leverage machine learning in this context? Methods for improving UQ in parametric modeling with machine learning, or for evaluating UQ techniques applied to model-free methods, are welcome.
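To make the hybrid-modeling question in item 2 concrete, the following minimal sketch (all model forms, coefficients, and data are hypothetical, not drawn from any submission) augments an imperfect parametric forecast with a data-driven correction of its one-step residual:

```python
# Hypothetical sketch: an imperfect parametric forecast model augmented with
# a data-driven correction of its model error, learned by ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

def imperfect_model(x, dt=0.01):
    """Partial first-principles forecast: here, a damped linear surrogate."""
    return x + dt * (-0.5 * x)

# Training pairs (x_n, x_{n+1}) that stand in for data from the true dynamics.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
X_next = X + 0.01 * (-0.5 * X + 0.1 * np.sin(X))  # surrogate for the true flow map

# Learn the residual between the data and the parametric forecast.
correction = Ridge(alpha=1e-3).fit(X, X_next - imperfect_model(X))

def hybrid_forecast(x):
    """Parametric forecast plus learned model-error correction."""
    return imperfect_model(x) + correction.predict(x[None, :])[0]

print(hybrid_forecast(np.array([0.1, -0.2, 0.3])))
```

The same pattern extends to subgrid-scale closures, where the learned correction acts only on the resolved variables.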

Prof. Dimitrios Giannakis
Prof. John Harlim
Prof. Tyrus Berry
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • model-free prediction
  • nonparametric models
  • model error
  • machine learning
  • kernel methods
  • data assimilation
  • Bayesian inference
  • uncertainty quantification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

21 pages, 1218 KiB  
Article
Bayesian Update with Importance Sampling: Required Sample Size
by Daniel Sanz-Alonso and Zijian Wang
Entropy 2021, 23(1), 22; https://doi.org/10.3390/e23010022 - 26 Dec 2020
Cited by 5 | Viewed by 2263
Abstract
Importance sampling is used to approximate Bayes’ rule in many computational approaches to Bayesian inverse problems, data assimilation and machine learning. This paper reviews and further investigates the required sample size for importance sampling in terms of the χ²-divergence between target and proposal. We illustrate through examples the roles that dimension, noise-level and other model parameters play in approximating the Bayesian update with importance sampling. Our examples also facilitate a new direct comparison of standard and optimal proposals for particle filtering.
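As background for readers new to the technique, here is a minimal self-contained sketch of a self-normalized importance sampling Bayesian update; the model, observation values, and effective-sample-size diagnostic are illustrative and not taken from the paper:

```python
# Illustrative sketch (not the paper's code): self-normalized importance
# sampling for a 1D Gaussian Bayesian update, with the prior as proposal.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                         # sample size
y, noise_std = 1.5, 0.5            # assumed observation and noise level

theta = rng.standard_normal(N)     # draws from the N(0, 1) prior (proposal)
log_w = -0.5 * ((y - theta) / noise_std) ** 2   # Gaussian log-likelihood
w = np.exp(log_w - log_w.max())
w /= w.sum()                       # self-normalized importance weights

posterior_mean = np.sum(w * theta)

# Common diagnostic: effective sample size 1 / sum(w_i^2). Its degradation
# relative to N is controlled by the chi-squared divergence between target
# and proposal, the quantity the paper uses to characterize the required N.
ess = 1.0 / np.sum(w**2)
print(f"posterior mean ~ {posterior_mean:.3f}, ESS = {ess:.0f} of {N}")
```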

22 pages, 2705 KiB  
Article
Data-Driven Model Reduction for Stochastic Burgers Equations
by Fei Lu
Entropy 2020, 22(12), 1360; https://doi.org/10.3390/e22121360 - 30 Nov 2020
Cited by 13 | Viewed by 2372
Abstract
We present a class of efficient parametric closure models for 1D stochastic Burgers equations. Casting it as statistical learning of the flow map, we derive the parametric form by representing the unresolved high wavenumber Fourier modes as functionals of the resolved variable’s trajectory. The reduced models are nonlinear autoregression (NAR) time series models, with coefficients estimated from data by least squares. The NAR models can accurately reproduce the energy spectrum, the invariant densities, and the autocorrelations. Taking advantage of the simplicity of the NAR models, we investigate maximal space-time reduction. Reduction in space dimension is unlimited, and NAR models with two Fourier modes can perform well. The NAR model’s stability limits time reduction, with a maximal time step smaller than that of the K-mode Galerkin system. We report a potential criterion for optimal space-time reduction: the NAR models achieve minimal relative error in the energy spectrum at the time step where the K-mode Galerkin system’s mean Courant–Friedrichs–Lewy (CFL) number agrees with that of the full model.
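A minimal sketch of the general NAR idea follows; the data, lag count, and quadratic feature set are illustrative stand-ins, not the paper's construction for the Burgers equation:

```python
# Illustrative sketch of a nonlinear autoregression (NAR) reduced model,
# with coefficients fit from data by least squares.
import numpy as np

rng = np.random.default_rng(2)
u = 0.01 * np.cumsum(rng.standard_normal((5000, 2)), axis=0)  # resolved modes (stand-in)
p = 2                                                          # number of lags

rows, targets = [], []
for n in range(p - 1, len(u) - 1):
    lags = u[n - p + 1 : n + 1].ravel()             # last p states
    rows.append(np.concatenate([lags, u[n] ** 2]))  # linear + quadratic features
    targets.append(u[n + 1])
Phi, Y = np.array(rows), np.array(targets)

# Least-squares estimate of the NAR coefficients.
coeffs, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

def nar_step(history):
    """One-step prediction from the last p resolved states, shape (p, 2)."""
    feats = np.concatenate([history.ravel(), history[-1] ** 2])
    return feats @ coeffs
```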

21 pages, 686 KiB  
Article
Data-Driven Corrections of Partial Lotka–Volterra Models
by Rebecca E. Morrison
Entropy 2020, 22(11), 1313; https://doi.org/10.3390/e22111313 - 18 Nov 2020
Cited by 4 | Viewed by 2601
Abstract
In many applications of interacting systems, we are only interested in the dynamic behavior of a subset of all possible active species. For example, this is true in combustion models (many transient chemical species are not of interest in a given reaction) and in epidemiological models (only certain subpopulations are consequential). Thus, it is common to use greatly reduced or partial models in which only the interactions among the species of interest are known. In this work, we explore the use of an embedded, sparse, and data-driven discrepancy operator to augment these partial interaction models. Preliminary results show that the model error caused by severe reductions—e.g., elimination of hundreds of terms—can be captured with sparse operators, built with only a small fraction of that number. The operator is embedded within the differential equations of the model, which allows the action of the operator to be interpretable. Moreover, it is constrained by available physical information and calibrated over many scenarios. These qualities of the discrepancy model—interpretability, physical consistency, and robustness to different scenarios—are intended to support reliable predictions under extrapolative conditions.
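The structure of such an embedded discrepancy model can be illustrated with a hypothetical two-species reduced system; the coefficients below, including the sparse correction D, are invented stand-ins rather than calibrated values from the paper:

```python
# Hypothetical sketch of an embedded discrepancy operator: a reduced
# two-species Lotka-Volterra model whose right-hand side is augmented by
# a sparse linear correction D @ x.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 0.7])            # growth rates of the retained species
A = np.array([[-0.5, -0.3],
              [-0.2, -0.4]])        # retained interaction coefficients
D = np.array([[-0.05, 0.00],
              [0.00, -0.08]])       # sparse embedded discrepancy (stand-in)

def reduced_rhs(t, x):
    # Partial model plus interpretable, physically constrained correction.
    return x * (r + A @ x) + D @ x

x0 = np.array([0.5, 0.8])
sol = solve_ivp(reduced_rhs, (0.0, 20.0), x0, dense_output=True)
print("state at t = 20:", sol.y[:, -1])
```

Because the correction lives inside the differential equations rather than on the outputs, its action on each retained species remains interpretable.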

24 pages, 1162 KiB  
Article
Can Short and Partial Observations Reduce Model Error and Facilitate Machine Learning Prediction?
by Nan Chen
Entropy 2020, 22(10), 1075; https://doi.org/10.3390/e22101075 - 24 Sep 2020
Cited by 2 | Viewed by 2814
Abstract
Predicting complex nonlinear turbulent dynamical systems is an important and practical topic. However, due to the lack of a complete understanding of nature, the ubiquitous model error may greatly affect the prediction performance. Machine learning algorithms can overcome the model error, but they are often impeded by inadequate and partial observations in predicting nature. In this article, an efficient and dynamically consistent conditional sampling algorithm is developed, which incorporates the conditional path-wise temporal dependence into a two-step forward-backward data assimilation procedure to sample multiple distinct nonlinear time series conditioned on short and partial observations using an imperfect model. The resulting sampled trajectories succeed in reducing the model error and greatly enrich the training data set for machine learning forecasts. For a rich class of nonlinear and non-Gaussian systems, the conditional sampling is carried out by solving a simple stochastic differential equation, which is computationally efficient and accurate. The sampling algorithm is applied to create massive training data of multiscale compressible shallow water flows from highly nonlinear and indirect observations. The resulting machine learning prediction significantly outperforms the imperfect model forecast. The sampling algorithm also facilitates the machine learning forecast of a highly non-Gaussian climate phenomenon using extremely short observations.
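The forward-backward flavor of such a procedure can be illustrated in the simplest possible setting, a scalar linear-Gaussian model, where trajectories conditioned on the observations can be drawn exactly by Kalman filtering followed by backward simulation. All parameters below are assumed, and the paper treats a far richer nonlinear, non-Gaussian class:

```python
# Hedged sketch of a forward-backward conditional sampler for a scalar
# linear-Gaussian state-space model (a simplified stand-in).
import numpy as np

rng = np.random.default_rng(3)
a, q, r = 0.9, 0.1, 0.2   # dynamics, model-noise and obs-noise variances (assumed)
T = 100

# Synthetic truth and noisy (partial) observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x_true + np.sqrt(r) * rng.standard_normal(T)

# Forward step: Kalman filter means and variances.
m, P = np.zeros(T), np.zeros(T)
m[0], P[0] = 0.0, 1.0
for t in range(1, T):
    mp, Pp = a * m[t - 1], a**2 * P[t - 1] + q       # predict
    K = Pp / (Pp + r)                                # Kalman gain
    m[t], P[t] = mp + K * (y[t] - mp), (1 - K) * Pp  # update

# Backward step: draw one trajectory conditioned on all observations.
x = np.zeros(T)
x[-1] = m[-1] + np.sqrt(P[-1]) * rng.standard_normal()
for t in range(T - 2, -1, -1):
    J = a * P[t] / (a**2 * P[t] + q)                 # backward gain
    mean = m[t] + J * (x[t + 1] - a * m[t])
    var = P[t] * (1 - J * a)
    x[t] = mean + np.sqrt(var) * rng.standard_normal()
```

Repeating the backward step with fresh noise yields multiple distinct conditioned trajectories, which is the sense in which such samplers enrich a training set.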

22 pages, 1381 KiB  
Article
Kernel-Based Approximation of the Koopman Generator and Schrödinger Operator
by Stefan Klus, Feliks Nüske and Boumediene Hamzi
Entropy 2020, 22(7), 722; https://doi.org/10.3390/e22070722 - 30 Jun 2020
Cited by 37 | Viewed by 6188
Abstract
Many dimensionality and model reduction techniques rely on estimating dominant eigenfunctions of associated dynamical operators from data. Important examples include the Koopman operator and its generator, but also the Schrödinger operator. We propose a kernel-based method for the approximation of differential operators in reproducing kernel Hilbert spaces and show how eigenfunctions can be estimated by solving auxiliary matrix eigenvalue problems. The resulting algorithms are applied to molecular dynamics and quantum chemistry examples. Furthermore, we exploit that, under certain conditions, the Schrödinger operator can be transformed into a Kolmogorov backward operator corresponding to a drift-diffusion process and vice versa. This allows us to apply methods developed for the analysis of high-dimensional stochastic differential equations to quantum mechanical systems.
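A much-simplified sketch of the kernel construction for a one-dimensional deterministic system dx/dt = b(x) (the paper also treats diffusion processes and the Schrödinger operator): eigenfunctions are obtained from an auxiliary generalized matrix eigenvalue problem built from two kernel matrices. The drift, kernel bandwidth, and regularization below are illustrative choices:

```python
# Hedged sketch: kernel-based generator approximation for dx/dt = b(x) in 1D.
# Eigenfunctions phi = sum_j v_j k(., x_j) satisfy the sampled eigenproblem
# G1 v = lambda * G0 v, with (G0)_ij = k(x_i, x_j) and
# (G1)_ij = (L k(., x_j))(x_i) = b(x_i) * d/dx_i k(x_i, x_j).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)
x = rng.uniform(-2.0, 2.0, 300)           # sample points
b = -x                                    # illustrative drift: b(x) = -x
sigma = 0.5                               # Gaussian kernel bandwidth

diff = x[:, None] - x[None, :]
G0 = np.exp(-diff**2 / (2 * sigma**2))        # Gram matrix
G1 = b[:, None] * (-diff / sigma**2) * G0     # generator applied to kernel sections

# Regularized generalized eigenvalue problem; for b(x) = -x the generator
# spectrum is 0, -1, -2, ... (results are sensitive to sigma and the shift).
lam, V = eig(G1, G0 + 1e-6 * np.eye(len(x)))
order = np.argsort(-lam.real)
print("leading generator eigenvalues:", np.round(lam.real[order][:4], 2))
```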
