Article

Non-Intrusive Inference Reduced Order Model for Fluids Using Deep Multistep Neural Network

by Xuping Xie 1,*,†, Guannan Zhang 1,† and Clayton G. Webster 1,2,†
1 Computation and Applied Mathematics, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
2 Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(8), 757; https://doi.org/10.3390/math7080757
Submission received: 15 June 2019 / Revised: 11 August 2019 / Accepted: 12 August 2019 / Published: 19 August 2019
(This article belongs to the Special Issue Machine Learning in Fluid Dynamics: Theory and Applications)

Abstract

In this effort, we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamics problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams–Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space for a given supervised learning task. Moreover, our approach is non-intrusive, so it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a two-dimensional flow past a circular cylinder at Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projection-based approaches.

1. Introduction

The full order model (FOM) of realistic engineering applications in fluid dynamics often represents a large-scale dynamic system. High-fidelity computational fluid dynamics (CFD) simulations of the FOM are so computationally expensive that they place a heavy burden on computational resources, even with modern CFD software and supercomputers with thousands of cores. Consequently, the FOM is often impractical and prohibitive for time-critical applications such as system identification, flow control, and design optimization.
Reduced order modeling in fluid dynamics constructs an accurate low-dimensional approximation to the full system with orders-of-magnitude reduction in computational cost. The first use of a reduced order model (ROM) in fluid dynamics was by Lumley [1] for studying the intensity of turbulence and coherent structures. Many recent successful applications of ROMs to fluid problems can be found in [2,3,4,5,6,7,8,9].
The Galerkin projection based reduced order model (GP-ROM) is one of the most popular methods and has been widely used in practice. In an offline stage, the GP-ROM first constructs a reduced space and then uses the Galerkin projection of the FOM operators to obtain a low-dimensional nonlinear dynamical system, i.e., the ROM dynamics. The reduced space is often generated by proper orthogonal decomposition (POD), also known as principal component analysis. In an online stage, the obtained reduced dynamics can be used to approximate the full system efficiently for various applications, such as long-term prediction and flow control. However, the projection step requires that the full model operators be available in order to obtain the ROM dynamics. This limits the applicability of projection based model reduction in situations where the full model is unknown [10,11]. More importantly, the computational cost of assembling the reduced operators (tensors obtained by projecting the FOM operators) scales with the large dimension of the underlying high-dimensional FOM. For this reason, the GP-ROM is efficient only for problems where the reduced operators must be constructed just once.
On the other hand, the GP-ROM generates inaccurate approximations for highly non-stationary (nonlinear) fluids, e.g., turbulence. In the literature, a common explanation for this failure is that the Galerkin projection does not preserve the stability properties of the full model. A deeper reason is that the low-dimensional space used in the Galerkin projection cannot resolve the nonlinear interactions of the fluid system [12,13,14]. This results in a projection-based stability error that makes the GP-ROM fail in nonlinear fluid applications. Balajewicz et al. [15] studied low-dimensional modelling of shear flows; Ballarin et al. [16] applied the POD-Galerkin method to the parametrized steady incompressible Navier–Stokes equations (NSE); Xie et al. [17] introduced a filtered ROM method for 2D cylinder flow.

1.1. Related Work

Closure modeling. Numerous stabilization strategies, known as closure modeling, have been devised to address the instability problem. The fundamental idea of closure modeling is to model the information lost when the low-dimensional space is generated through POD truncation. This truncation keeps the first few POD modes, which extract the most dominant structures of the full system, and discards the remaining modes. Closure models generally fall into two common approaches. One physically models the effect of the discarded POD modes by adding artificial viscosity to the reduced system [6,15,16,18]. The other mathematically models the ROM dynamics by solving a related optimization problem [14,17]. Another data-driven approach is POD with a trained radial basis function (RBF) network to estimate various physical parameters with little prior knowledge of the system; the POD-RBF approach achieves good numerical robustness and stability, see [19,20,21]. The most recent development in closure modeling is to apply a neural network to approximate the lost information [22].
Differential equations learning. Bridging numerical differential equations and deep neural networks has gained enormous attention recently. Chang et al. [23] proposed a dynamical systems view of residual networks (ResNets). Lu et al. [24] first introduced the linear multistep network architecture to analyze ResNets on classification tasks. Sparse regression, Gaussian processes, and multistep neural networks have been applied to the data-driven discovery of dynamic systems [25,26,27,28,29]. More recently, an ordinary differential equation network (ODE-net) was introduced for supervised learning [30]. In this work, we focus on the traditional projection based reduced order modeling process and improve it using state-of-the-art deep learning methods.

1.2. Our Approach

In this paper, we propose a novel non-intrusive learning reduced order model framework for fluid dynamic systems. The new framework provides a general concept for learning the optimal reduced dynamic system from data. Inspired by the successful development of learning differential equations with deep networks, we apply the linear multistep neural network (LMNet) to learn the reduced order model (LMNet-ROM). Unlike closure modeling and the existing non-intrusive model reduction methods, we focus on a different perspective. First, we neither use the Galerkin projection nor model the closure problem; instead, the optimal reduced dynamic system, which can address the instability issue, is learned for a given supervised learning task. Second, the new model does not approximate reduced operators, whereas other non-intrusive models use interpolation or regression methods to infer reduced operators [10,11]. Moreover, our viewpoint easily enables us to answer a common question: what is the best ROM dynamic system to approximate the full system for a given low-dimensional subspace? We demonstrate that the new LMNet-ROM outperforms the GP-ROM in full order model approximation and long-term prediction. The main contributions can be summarized as follows:
  • A novel non-intrusive learning reduced order model framework for fluid dynamics, which is applicable to general nonlinear dynamical systems with sophisticated legacy codes.
  • Our framework overcomes the instability issue of the projection based model reduction, and provides accurate approximation and long-term prediction of the full system.
  • The learning process of our approach is more computationally efficient than the construction of reduced operators in the classic projection based methods.

2. Reduced Order Modeling

In this section, we present the Galerkin projection based reduced order modeling framework for fluid dynamic systems. The classical Navier–Stokes equations (NSE) are often used as a mathematical model in fluid dynamics:
$$ u_t - \mathrm{Re}^{-1} \Delta u + u \cdot \nabla u + \nabla p = 0, \qquad (1) $$
$$ \nabla \cdot u = 0, \qquad (2) $$
where $u$ is the velocity, $p$ the pressure, and $\mathrm{Re}$ the Reynolds number. We use the initial condition $u(x, 0) = u_0(x)$ and (for simplicity) homogeneous Dirichlet boundary conditions: $u(x, t) = 0$. For convenience, we use $u_t = f(u, \mathrm{Re}, p)$ as the general notation for the NSE in the rest of the paper.
Reduced Space. Proper orthogonal decomposition (POD) is the dimension reduction method that we use to generate the reduced space. It starts with the data matrix $U = [u^0, u^1, \ldots, u^s] \in \mathbb{R}^{D_h \times (s+1)}$, collected from numerical solutions or experimental observations of the full system (1) at $s+1$ different time instances. The POD method seeks a low-dimensional space $X_r$ that approximates the data $U$ optimally with respect to the $L^2$-norm. It formulates the following eigenvalue problem:
$$ U U^\top \varphi_i = \lambda_i \varphi_i, \qquad i = 1, 2, \ldots, d, \qquad (3) $$
where $d$ is the rank of $U U^\top$, and $\lambda_i$ and $\varphi_i$ are the eigenvalues and POD basis functions, respectively. The reduced space is given after truncation as $\Phi_r := \{ \varphi_1, \ldots, \varphi_r \} \in \mathbb{R}^{D_h \times r}$.
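To make this step concrete, the following is a minimal sketch of the POD construction in NumPy. It exploits the fact that the left singular vectors of $U$ are exactly the eigenvectors of $U U^\top$ in Equation (3), so a thin SVD avoids forming the $D_h \times D_h$ matrix; the snapshot file name is a hypothetical placeholder, not the paper's actual implementation.

```python
import numpy as np

def pod_basis(U, r):
    """Compute the first r POD modes of the snapshot matrix U (D_h x (s+1)).

    The left singular vectors of U are the eigenvectors of U U^T in
    Equation (3), with eigenvalues lambda_i = sigma_i^2.
    """
    Phi, sigma, _ = np.linalg.svd(U, full_matrices=False)
    lam = sigma**2                      # eigenvalues lambda_i of U U^T
    return Phi[:, :r], lam[:r]

# Illustrative usage (snapshot file is hypothetical):
# U = np.load("snapshots.npy")          # shape (D_h, s+1)
# Phi_r, lam = pod_basis(U, r=8)
```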
Galerkin projection-ROM (GP-ROM). For a given space $\Phi_r$, the GP-ROM seeks an approximation of the velocity field in the span of the low-dimensional space,
$$ u \approx u_r(x, t) \equiv \sum_{j=1}^{r} a_j(t) \varphi_j(x), \qquad (4) $$
where $\{ a_j(t) \}_{j=1}^{r}$ are the sought time-varying coefficients. The GP-ROM is obtained by projecting the FOM onto the POD space: for $i = 1, \ldots, r$,
$$ \left( \frac{\partial u_r}{\partial t}, \varphi_i \right) = \left( f(u_r, \mathrm{Re}), \varphi_i \right). \qquad (5) $$
Here, $(\cdot, \cdot)$ is the $L^2$ inner product. Note that the POD basis functions are weakly divergence-free, i.e., $\nabla \cdot \varphi_i = 0$, so the pressure term does not appear in the ROM projection Equation (5), as $(p, \nabla \cdot \varphi_i) = 0$. The solution of the GP-ROM is determined by the following nonlinear dynamic system:
$$ \dot{a} = L a + a^\top N a, \qquad (6) $$
where $L$ and $N$ are the ROM operators obtained by projection:
$$ L_{im} = \frac{1}{\mathrm{Re}} \left( \Delta \varphi_m, \varphi_i \right), \qquad N_{imn} = -\left( \varphi_m \cdot \nabla \varphi_n, \varphi_i \right). \qquad (7) $$
Note that the reduced system (6) efficiently approximates the full NSE model, since the dimension $r$ is generally very small ($\mathcal{O}(10)$) compared to the high dimension $D_h \sim \mathcal{O}(10^5)$ of $u$.
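For illustration, here is a sketch of how the reduced system (6) could be advanced in time once the operators are assembled. The routine `assemble_rom_operators` is hypothetical, and SciPy's `solve_ivp` stands in for whichever time integrator is actually used; only the right-hand side follows directly from (6) and (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

def gp_rom_rhs(t, a, L, N):
    """Right-hand side of the GP-ROM system (6): a' = L a + a^T N a.

    L has shape (r, r); N has shape (r, r, r), so that
    (a^T N a)_i = sum_{m,n} N[i, m, n] * a[m] * a[n].
    """
    return L @ a + np.einsum('imn,m,n->i', N, a, a)

# Illustrative usage (operator assembly routine is hypothetical):
# L, N = assemble_rom_operators(Phi_r)      # projections as in Eq. (7)
# sol = solve_ivp(gp_rom_rhs, (0.0, 5.0), a0, args=(L, N),
#                 t_eval=np.linspace(0.0, 5.0, 2501))
```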
ROM closure models. The ROM closure models can be generally written as the following dynamic system:
$$ \dot{a} = L a + a^\top N a + \tau, \qquad (8) $$
where $\tau$ is an artificial term that models the effect of the discarded POD modes using various approaches; see, e.g., [17,22,31,32,33]. We note that the dynamics of the closure model (8) are more accurate than the GP-ROM dynamics (6) in approximating the full model; relevant studies can be found in [2,12,13,14]. The challenge of closure modeling is that $\tau$ is unknown, i.e., it has no explicit formula. Therefore, closure models are empirical modifications of the GP-ROM from a physical or mathematical perspective.

3. Learning Reduced Order Model

In this section, we present the architecture for learning the reduced order model with deep neural networks. In contrast to the standard GP-ROM framework, we learn the reduced dynamical system from the data without intrusively using the ROM operators (e.g., $L$, $N$).

3.1. ROM Dynamics

We consider the low-dimensional ROM dynamic system as a general function,
$$ \dot{a} = f_r(a). \qquad (9) $$
We claim that this function $f_r$ is a general representation of the ROM dynamics, including systems (6) and (8). Our goal is to learn the ROM dynamics (9) from a given set of temporal data and return a closed-form model that can be used to accurately approximate and predict the full system.
Consider a given data-set of snapshot solutions of the NSE (1), $U = [u^0, \ldots, u^s] \in \mathbb{R}^{D_h \times (s+1)}$, at time steps $t_0, \ldots, t_s$. The best approximation of the snapshot data by the POD space is given by $u^j = \sum_{i=1}^{r} b_i^j \varphi_i$ for $j = 0, \ldots, s$. The reduced dynamics (9) seeks coefficients $a$ such that
$$ u_r^j = \sum_{i=1}^{r} a_i^j \varphi_i \approx u^j = \sum_{i=1}^{r} b_i^j \varphi_i. \qquad (10) $$
This indicates that the optimal solution of the $r$-dimensional ROM dynamic system is given by the full model data $b^j$, such that
$$ \dot{b} = f_r(b). \qquad (11) $$
This provides a framework in which the ROM dynamics can be learned from the data-set $B = [b^0, \ldots, b^s] \in \mathbb{R}^{r \times (s+1)}$, i.e., the time-varying coefficients of the FOM data. The training data-set can be computed as follows:
$$ B = \Phi_r^\top W U. \qquad (12) $$
This formula is derived from the $L^2$ projection of the data $U$ onto the low-dimensional space $\Phi_r$. Here, $W \in \mathbb{R}^{D_h \times D_h}$ is the weight matrix of the $L^2$ inner product; we use finite element weights in this work.
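A minimal sketch of Equation (12), assuming the snapshots, POD basis, and finite element mass matrix are available as (possibly sparse) arrays; all names are illustrative.

```python
import numpy as np

def training_coefficients(U, Phi_r, W=None):
    """Project the FOM snapshots onto the POD space: B = Phi_r^T W U, Eq. (12).

    W is the weight matrix of the L2 inner product (here, the finite
    element mass matrix); if omitted, the Euclidean inner product is used.
    """
    WU = U if W is None else W @ U
    return Phi_r.T @ WU                 # shape (r, s+1)

# B = training_coefficients(U, Phi_r, W)   # W from the FE assembly (assumed)
```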

3.2. Linear Multistep Network (LMNet)

Motivated by differential equation learning, we adopt the linear multistep network architecture [24,34] to construct a structured nonlinear regression model that can learn the reduced dynamics. In this work, we only consider the implicit multistep method, the Adams–Moulton (AM) scheme [35], as it has better stability properties. The $K$-step AM method is defined as follows:
$$ a_n = \sum_{i=1}^{K} \left( \alpha_i a_{n-i} + \beta_i \Delta t\, f_r(a_{n-i}) \right) + \beta_0 \Delta t\, f_r(a_n). \qquad (13) $$
We discretize the ROM dynamic system (9) by using the AM scheme (13) with a neural network. The parameters of this neural network can be learned by minimizing the mean squared error loss function:
$$ MSE := \frac{1}{N - K + 1} \sum_{n=K}^{N} \left| L_n \right|^2, \qquad (14) $$
where $N$ is the total number of time instances in the ROM dynamic system and $K$ denotes the number of steps of the AM method. $L_n$ is the local truncation error from the Taylor expansion of the $K$-step AM method (13):
$$ L_n = \sum_{i=0}^{K} \left( \alpha_i a_{n-i} - \Delta t\, \beta_i f_r(a_{n-i}) \right). \qquad (15) $$
The derivation of the loss function follows the truncation error of the multistep method, which can be obtained via a Taylor series; we provide the formula in Appendix A, and further references can be found in [35,36]. The goal of this neural network is to approximate the function $f_r$: the input of the network is the coefficients $a$, and the output is $f_r(a)$, representing the time derivative. This neural network architecture uses a family of implicit linear multistep methods to discretize the ROM dynamic system (9). The implicit scheme comes with the last term $\beta_0 \Delta t\, f_r(a_n)$, where $\beta_0 \neq 0$. In the neural network training, the parameters $\alpha_i, \beta_i$ are given for a chosen $K$-step AM method. Also, we do not need to solve for the implicit term, since the coefficients $a_n$ are known (i.e., they are the training data), which makes the implementation simple. The target function is trained by minimizing the cost function (14). Another advantage of this nonlinear regression is that we do not have to approximate the temporal gradients [11,26], since the time derivatives are discretized by the AM method. Our approach is different from the method in PolyNet [37]. PolyNet focused on the inception of the residual neural network block and can be interpreted as an approximation of one step of the backward Euler scheme, i.e., $a_n = (I - \Delta t f)^{-1} a_{n-1} = \left[ I + \Delta t f + (\Delta t f)^2 + \cdots + (\Delta t f)^n + \cdots \right] a_{n-1}$, where $I$ denotes the identity mapping. The authors used a second order approximation in their paper [37].
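The following sketch illustrates the training setup for the one-step ($K = 1$) case, i.e., the trapezoidal AM scheme with $\alpha_1 = 1$ and $\beta_0 = \beta_1 = 1/2$, in TensorFlow (Section 4 states the model was trained in TensorFlow). The layer sizes match Section 4.2, but the activation function, optimizer, learning rate, and iteration count are assumptions.

```python
import tensorflow as tf

r = 8   # ROM dimension, as in Section 4.2

# Fully connected network f_r: R^r -> R^r with one hidden layer of 128
# neurons (architecture from Section 4.2; tanh activation is an assumption).
f_net = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='tanh', input_shape=(r,)),
    tf.keras.layers.Dense(r),
])

def am1_loss(B, dt):
    """MSE of the 1-step Adams-Moulton residual (Eqs. (14)-(15) with K = 1).

    B has shape (N+1, r): the coefficients from Eq. (12), transposed so
    that rows are time instances. The trapezoidal-rule residual is
    L_n = a_n - a_{n-1} - dt/2 * (f_r(a_n) + f_r(a_{n-1})).
    """
    a_prev, a_next = B[:-1], B[1:]
    res = a_next - a_prev - 0.5 * dt * (f_net(a_next) + f_net(a_prev))
    return tf.reduce_mean(tf.reduce_sum(res**2, axis=1))

# Training loop sketch (optimizer and step count are assumptions):
# B_train = tf.constant(training_coefficients(U, Phi_r, W).T, dtype=tf.float32)
# opt = tf.keras.optimizers.Adam(1e-3)
# for step in range(20000):
#     with tf.GradientTape() as tape:
#         loss = am1_loss(B_train, dt=0.002)
#     grads = tape.gradient(loss, f_net.trainable_variables)
#     opt.apply_gradients(zip(grads, f_net.trainable_variables))
```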

3.3. LMNet-ROM

We use the trained neural network as the ROM dynamic system (9) to approximate the full NSE system. We emphasize that the novelty of our approach is the non-intrusive learning of the reduced system, whereas the GP-ROM and closure models require the use of FOM operators. Figure 1 shows the flowchart of the LMNet-ROM and GP-ROM frameworks.
We outline the Algorithm 1 of the framework in the following:
Algorithm 1 Linear multistep network reduced order model learning (LMNet-ROM).
 Compute the reduced POD space from the NSE data via (3)
 Compute the training dataset B via (12)
 Train the neural network using loss function (14)
 The LMNet-ROM for the NSE is obtained from the trained low-dimensional dynamic system:
$$ \dot{a} = f_r^{Net}(a). \qquad (16) $$
We claim that the learned reduced dynamics (16) approximates the full model better than systems (6) and (8), since it is learned optimally from the FOM data. The new framework, see Figure 1, only requires input data from a system and does not use any FOM operator, so it can be applied generally to reduced order modeling of any fluid dynamical system. The main offline computational cost of the LMNet-ROM is training the neural network, whereas for the GP-ROM it is the construction of the ROM operators in (7). In the numerical experiments, we show that the offline computation of our model is faster than that of the GP-ROM.
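To illustrate the online stage of Algorithm 1, the trained network can be advanced in time with any standard ODE solver; the sketch below uses a classical explicit fourth-order Runge–Kutta step, which is an assumption, since the choice of online integrator is not specified here.

```python
import numpy as np

def lmnet_rom_predict(f_net, a0, dt, n_steps):
    """Advance the learned ROM dynamics (16), a' = f_net(a), with RK4."""
    eval_f = lambda x: f_net(x[None, :].astype(np.float32)).numpy()[0]
    a = np.asarray(a0, dtype=np.float32)
    traj = [a]
    for _ in range(n_steps):
        k1 = eval_f(a)
        k2 = eval_f(a + 0.5 * dt * k1)
        k3 = eval_f(a + 0.5 * dt * k2)
        k4 = eval_f(a + dt * k3)
        a = a + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(a)
    return np.stack(traj)               # shape (n_steps + 1, r)

# a_traj = lmnet_rom_predict(f_net, B[:, 0], dt=0.002, n_steps=2500)
# u_r = Phi_r @ a_traj.T                # reconstruct the velocity field, Eq. (4)
```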

4. Numerical Experiment

In this section, we present preliminary numerical results to demonstrate the advantages of our new model. The test case is a 2D channel flow past a circular cylinder at $\mathrm{Re} = 100$. It is a benchmark problem that has been widely used as a numerical test in fluid dynamics, see [8,17,38,39,40].

4.1. Implementation Details

The domain is a $2.2 \times 0.41$ rectangular channel with a cylinder of radius $0.05$ centered at $(0.2, 0.2)$, see Figure 2. No-slip boundary conditions are prescribed on the walls and on the cylinder, and the inflow and outflow profiles are given by [17,40]: $u_1(0, y, t) = u_1(2.2, y, t) = \frac{6}{0.41^2} y (0.41 - y)$, $u_2(0, y, t) = u_2(2.2, y, t) = 0$. The kinematic viscosity is $\nu = 10^{-3}$, there is no forcing, and the flow starts from rest.
The velocity snapshots of the NSE (1) were generated by a finite element method with approximately 103,000 degrees of freedom, which gives a fully resolved solution. The computed lift and drag coefficients agree well with the reference values in [41]: $c_{d,\max} = 3.2261$, $c_{l,\max} = 1.0040$.
A total of 2500 snapshots $U$ were collected over $T = [0, 5]$ at every time step $\Delta t = 0.002$. The LMNet-ROM was built and trained in TensorFlow.

4.2. Full Order Model Approximation

After obtaining the trained neural network, we used the ROM dynamic system (16) to approximate the full NSE model (1). The following average $L^2$ error formula is used to quantify the accuracy of the model:
$$ E = \frac{1}{s+1} \sum_{j=0}^{s} \left( \int_\Omega \left| u^j - u_r^j \right|^2 \, d\Omega \right)^{1/2}. $$
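A small helper for evaluating this metric from the snapshot matrices, under the same assumptions as the earlier sketches; $W$ is the finite element mass matrix from Equation (12), and the Euclidean norm is used if it is omitted.

```python
import numpy as np

def average_l2_error(U, U_rom, W=None):
    """Average L2 error between the FOM snapshots U and the ROM
    reconstruction U_rom, both of shape (D_h, s+1)."""
    E = U - U_rom
    WE = E if W is None else W @ E
    sq = np.einsum('ij,ij->j', E, WE)   # squared norm of each snapshot
    return np.mean(np.sqrt(sq))

# err = average_l2_error(U, Phi_r @ a_traj.T, W)
```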
We first evaluated the model with different numbers of layers and neurons. The dimension of the ROM dynamics is fixed at $r = 8$ with the one-step Adams–Moulton method. Table 1 provides a crude estimate: increasing the network width (256 neurons) may cause over-fitting, whereas decreasing it (64 neurons) may not be enough to reach good accuracy. Also, the network depth (number of hidden layers) has a positive effect on the performance of the model. To reduce the numerical effort, we used one hidden layer and 128 neurons for the neural network in the rest of our evaluations. We emphasize that, to fully understand the model's sensitivity with respect to the network architecture, a systematic study involving regularization, batch normalization, and dropout is needed in future research.
We also tested the LMNet-ROM with different numbers of steps in the training. Table 2 lists the error between the new model and the exact data; for comparison, the GP-ROM result is included in the last row. Table 2 shows that the LMNet-ROM consistently provides more accurate results as the number of steps of the Adams–Moulton (AM) method increases. An intuitive explanation is that the stability property of the AM method helps to regularize the network and eventually achieves good calibration. A large number of steps $K$ requires a high computational cost for training the network. To balance output accuracy and training cost, we used $K = 1$ for the rest of our numerical tests.
Noisy data. The above numerical tests were carried out on deterministic data. In some situations, however, the measurements may contain noise. We study the robustness of the new method with respect to noisy data by adding Gaussian noise to the data-set for both models. Table 3 lists the error comparison between the LMNet-ROM and the GP-ROM for different noise levels. The results show that the LMNet-ROM cannot maintain good performance when the noise magnitude is high (≥1%), while the GP-ROM does. The reason is that the learned dynamical system depends entirely on the data, which makes it vulnerable to noise interference. The GP-ROM, however, relies on the FOM operators, making it less sensitive to noise than the LMNet-ROM. Further approaches should be investigated to improve this behavior. As for deterministic data, the LMNet-ROM is better than the GP-ROM in full system approximation.
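The exact scaling of the Gaussian noise is not specified here; the following is one plausible sketch of the perturbation, with `level` denoting the relative noise magnitude (e.g., 0.01 for the 1% case of Table 3). Both the RMS-based scaling and the seed handling are assumptions.

```python
import numpy as np

def add_gaussian_noise(U, level, seed=0):
    """Perturb the snapshot matrix with zero-mean Gaussian noise whose
    standard deviation is `level` times the RMS value of the data."""
    rng = np.random.default_rng(seed)
    scale = level * np.sqrt(np.mean(U**2))
    return U + scale * rng.standard_normal(U.shape)

# U_noisy = add_gaussian_noise(U, level=0.01)
```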

4.3. Long-Term Prediction

In this section, we make a thorough study of the long-term predictability of the new LMNet-ROM. The solutions of our new model and the GP-ROM were computed from the dynamic systems (16) and (6), respectively. We use direct numerical simulation (DNS) to denote the exact solution (data) of the NSE. We used snapshot data collected from the time interval $T = [0, 3]$ to generate the POD space and train the neural network. We then ran the reduced systems (16) and (6) over $T = [0, 5]$ to make the prediction. Figure 3 plots the phase portraits of the first few coefficients, $a_2, a_3, a_4$, from both models and the DNS data. The red line, which depicts the result from the LMNet-ROM, closely mimics the DNS data, whereas the portraits from the GP-ROM show a small deviation. This behavior indicates that the dynamics of the LMNet-ROM predict future states better than the GP-ROM with the given information. Note that $a_1$ is constant and not meaningful to discuss, since the first POD mode $\varphi_1$ represents the mean flow.
We also looked at the prediction of the time evolution of the energy ($E(t_j) = \frac{1}{2} \| u^j \|^2_{L^2}$), the vorticity reconstruction, and the drag coefficient. Figure 4 shows the long-term prediction of the energy evolution and drag. The main observation is that the LMNet-ROM performs much better than the GP-ROM. The energy and drag generated by the GP-ROM deteriorate severely as time evolves, which means the prediction is not accurate. Clearly, the prediction from the LMNet-ROM is impressively good, as it is stable and close to the DNS, see Figure 4. Vorticity describes the local spinning motion of a fluid system. Figure 5 plots the vorticity reconstructed from the velocity field around the cylinder at the end time $T = 5$. As depicted in Figure 5, the LMNet-ROM correctly predicts the vortex street behind the cylinder, while the GP-ROM does not. The above results are presented for dimension $r = 8$, but similar results were found for $r = 4, 6$. Overall, the long-term predictability of the LMNet-ROM is much better than that of the GP-ROM.

4.4. LMNet-ROM vs. Closure Models

In this section, we present a preliminary numerical comparison between the LMNet-ROM and closure models. Given the wide class of stabilization closure models and the limited availability of open source implementations, a thorough comparison between our new model and all other models is not practical. Therefore, we use the two most recent models with open source implementations, the data-driven filtered ROM (DDF-ROM) and the evolve-then-filter ROM (EF-ROM), for the numerical comparison. The EF-ROM uses a two-step regularization strategy to improve the GP-ROM [33]: it first uses a forward Euler discretization to evolve the GP-ROM dynamic system (6), and then applies a ROM filter to smooth the solution for regularization purposes; more details can be found in [33]. The DDF-ROM solves an optimization problem to approximate $\tau$ in system (8) [17]. Table 4 lists the average $L^2$ errors of the three models for different dimensions. It surprisingly indicates that the closure models based on mathematical methods cannot outperform the LMNet-ROM. This result is impressive, given that the LMNet-ROM is learned through the neural network from the data without acquiring any FOM information.
We note that the goal of both the stabilization closure models and our new model is to provide accurate reduced dynamics that approximate the full system, approached from different perspectives. The former generally model the artificial term $\tau$ in system (8) physically or mathematically, whereas the latter uses data to learn the dynamics (16). We claim that the non-intrusive learning framework is general enough to be applied to any nonlinear fluid system, since it only requires the training data.

4.5. Computational Cost

In this section, we discuss the computational efficiency of the proposed LMNet-ROM. The main computational cost of reduced order models lies in the offline stage, since the cost of solving a small ODE system in the online stage is negligible. The FOM simulation (DNS) time is used as a benchmark to evaluate the performance of each model. The computation was carried out on a 64-bit Linux system with a single 2.70 GHz CPU. The DNS CPU time is 36,828.53 s. Table 5 lists the CPU time of each ROM and the associated speed-up factor. The LMNet-ROM time is reported only for the one-step AM method with one hidden layer and 128 neurons, given that this network architecture achieves good accuracy in the previous tests. The results in Table 5 reveal that the LMNet-ROM is more efficient than the other models. This is a significant advantage of our new method, since reducing the computational cost while maintaining good accuracy is the primary goal of reduced order modeling.

5. Conclusions and Outlook

In this paper, we proposed a novel learning reduced order model framework for the numerical simulation of fluid flows. This framework is based on the recent development of the linear multistep network architecture. We numerically studied the LMNet-ROM in the simulation of a 2D flow past a cylinder. The numerical results demonstrate that the LMNet-ROM is significantly more accurate than the GP-ROM in full system approximation and long-term prediction. Furthermore, we compared the new model with the two most recent stabilization closure models, the EF-ROM and the DDF-ROM. The results show that our new method outperforms both closure models. Overall, the LMNet-ROM beats the aforementioned models in both accuracy and computational efficiency, which makes it a promising and encouraging approach for model reduction in fluid dynamics. The main advantage of our approach is its fully data-driven process, while maintaining good numerical accuracy. The model does not require storing the ROM operators, whereas the other ROM methods do. However, the LMNet-ROM's potential still needs to be explored further. We outline some research directions that could be pursued.
Probably the most important next step is to study parametrized system prediction with the LMNet-ROM. The current neural network is trained on the data-set given by one parameter value ($\mathrm{Re}$) of the full order model of the NSE. How does the LMNet-ROM predict systems with different parameter values, initial conditions, or boundary conditions? Parametrized system prediction is a challenging problem in engineering applications. We hope to provide a systematic investigation with the new LMNet-ROM in later research.
Another important research direction is to improve the robustness of the model with respect to noisy data. Table 3 shows the drawback of this model for high-magnitude noise (≥1%). We plan to address this issue by improving the neural network architecture. Also, regularization for preventing over-fitting needs to be fully studied in future research.
Finally, the generality of the LMNet-ROM is worth investigating. Although we constructed and tested the LMNet-ROM in a fluid dynamics setting, the framework can be applied to any type of nonlinear partial differential equation (PDE) that is amenable to reduced order modeling. The only input needed by the LMNet-ROM framework is the data from the FOM of a system, see Algorithm 1. The procedure is not restricted to the particular physical system modeled by the given nonlinear PDE. Since the LMNet-ROM is built by fully data-driven learning, we expect it to be successful in the numerical simulation of general mathematical models (e.g., from elasticity or bioengineering).

Author Contributions

Investigation, X.X.; methodology X.X. and G.Z.; project administration, C.G.W.; writing—original draft, X.X.; writing—review and editing, G.Z. and C.G.W.

Funding

This work is supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences, Division of Materials Sciences and Engineering; the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under contract number ERKJ314; and the National Science Foundation, Division of Mathematical Sciences, Computational Mathematics program under contract numbers DMS1620280 and DMS1620027.

Acknowledgments

We thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FOM      Full order model
ROM      Reduced order model
LMNet    Linear multistep neural network
POD      Proper orthogonal decomposition
CFD      Computational fluid dynamics
DNS      Direct numerical simulation
GP-ROM   Galerkin projection reduced-order model
EF-ROM   Evolve-then-filter reduced-order model
DDF-ROM  Data-driven filtered reduced-order model
NSE      Navier–Stokes equations
AM       Adams–Moulton

Appendix A. Loss Function

The loss function is the truncation error of the multistep method, which can be derived from a Taylor series:
$$
\begin{aligned}
a_{n-1} &= a(t_n) - \Delta t\, a'(t_n) + \frac{\Delta t^2}{2!} a''(t_n) - \frac{\Delta t^3}{3!} a'''(t_n) + \frac{\Delta t^4}{4!} a^{(4)}(t_n) - \cdots \\
a_{n-2} &= a(t_n) - 2 \Delta t\, a'(t_n) + \frac{2^2 \Delta t^2}{2!} a''(t_n) - \frac{2^3 \Delta t^3}{3!} a'''(t_n) + \frac{2^4 \Delta t^4}{4!} a^{(4)}(t_n) - \cdots \\
a_{n-3} &= a(t_n) - 3 \Delta t\, a'(t_n) + \frac{3^2 \Delta t^2}{2!} a''(t_n) - \frac{3^3 \Delta t^3}{3!} a'''(t_n) + \frac{3^4 \Delta t^4}{4!} a^{(4)}(t_n) - \cdots \\
&\;\;\vdots \\
a_{n-K} &= a(t_n) - K \Delta t\, a'(t_n) + \frac{K^2 \Delta t^2}{2!} a''(t_n) - \frac{K^3 \Delta t^3}{3!} a'''(t_n) + \frac{K^4 \Delta t^4}{4!} a^{(4)}(t_n) - \cdots
\end{aligned}
$$
Similarly,
$$
\begin{aligned}
f_r(a_{n-1}) &= a'(t_n - \Delta t) = a'(t_n) - \Delta t\, a''(t_n) + \frac{\Delta t^2}{2!} a'''(t_n) - \frac{\Delta t^3}{3!} a^{(4)}(t_n) + \cdots \\
f_r(a_{n-2}) &= a'(t_n - 2 \Delta t) = a'(t_n) - 2 \Delta t\, a''(t_n) + \frac{2^2 \Delta t^2}{2!} a'''(t_n) - \frac{2^3 \Delta t^3}{3!} a^{(4)}(t_n) + \cdots \\
f_r(a_{n-3}) &= a'(t_n - 3 \Delta t) = a'(t_n) - 3 \Delta t\, a''(t_n) + \frac{3^2 \Delta t^2}{2!} a'''(t_n) - \frac{3^3 \Delta t^3}{3!} a^{(4)}(t_n) + \cdots \\
&\;\;\vdots \\
f_r(a_{n-K}) &= a'(t_n - K \Delta t) = a'(t_n) - K \Delta t\, a''(t_n) + \frac{K^2 \Delta t^2}{2!} a'''(t_n) - \frac{K^3 \Delta t^3}{3!} a^{(4)}(t_n) + \cdots
\end{aligned}
$$
Substituting these expansions into the truncation error $L_n$ yields a convenient formula:
$$
\begin{aligned}
L_n &= \sum_{i=0}^{K} \left[ \alpha_i a_{n-i} - \Delta t\, \beta_i f_r(a_{n-i}) \right] \\
&= \Delta t \left[ \sum_{i=0}^{K} i \alpha_i - \sum_{i=0}^{K} \beta_i \right] a'(t_n) + \Delta t^2 \left[ \sum_{i=0}^{K} \frac{i^2}{2!} \alpha_i - \sum_{i=0}^{K} i \beta_i \right] a''(t_n) \\
&\quad + \Delta t^3 \left[ \sum_{i=0}^{K} \frac{i^3}{3!} \alpha_i - \sum_{i=0}^{K} \frac{i^2}{2!} \beta_i \right] a'''(t_n) + \Delta t^4 \left[ \sum_{i=0}^{K} \frac{i^4}{4!} \alpha_i - \sum_{i=0}^{K} \frac{i^3}{3!} \beta_i \right] a^{(4)}(t_n) + \cdots \\
&= \sum_{l=1}^{\infty} \Delta t^l \left[ \sum_{i=0}^{K} \left( \alpha_i \frac{i^l}{l!} - \beta_i \frac{i^{l-1}}{(l-1)!} \right) \right] a^{(l)}(t_n)
\end{aligned}
$$
Here, $a^{(l)}$ denotes the $l$-th derivative, whereas $a_n$ denotes the solution at time step $n$. For the $K$-step method, $\alpha_0 = 1$ and $\alpha_K \neq 0$. The coefficients in the above equation are not uniquely defined, since multiplying through by a constant defines the same method; usually the coefficients are normalized such that $\sum_{i=1}^{K} \alpha_i = 1$.
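As a sanity check on this expansion, the truncation error of the one-step (trapezoidal) AM scheme can be reproduced symbolically. The sketch below, a SymPy computation under the series above, recovers the classical leading term $-\frac{\Delta t^3}{12} a'''(t_n)$.

```python
import sympy as sp

dt = sp.symbols('dt', positive=True)
# Symbols standing for the derivatives a^(0)(t_n), ..., a^(4)(t_n):
d = sp.symbols('a0 a1 a2 a3 a4')

def taylor(shift, start):
    """Taylor series of a^(start)(t_n + shift), truncated after dt^4 terms."""
    return sum(shift**l / sp.factorial(l) * d[start + l]
               for l in range(5 - start))

# 1-step Adams-Moulton (trapezoidal rule): a_n = a_{n-1} + dt/2 (f_n + f_{n-1}),
# with f_r = a'. Truncation error in the sense of Equation (15):
a_nm1 = taylor(-dt, 0)              # a_{n-1} expanded about t_n
f_n, f_nm1 = d[1], taylor(-dt, 1)   # f_r(a_n) and f_r(a_{n-1})
Ln = sp.expand(d[0] - a_nm1 - dt / 2 * (f_n + f_nm1))
print(sp.collect(Ln, dt))   # -> -a3*dt**3/12 + a4*dt**4/24, i.e., O(dt^3)
```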

References

  1. Lumley, J.L. The structure of inhomogeneous turbulent flows. Atmos. Turbul. Radio Wave Propag. 1967, 2, 166–178. [Google Scholar]
  2. Noack, B.R.; Morzynski, M.; Tadmor, G. Reduced-Order Modelling for Flow Control; Springer: Berlin/Heidelberg, Germany, 2011; Volume 528. [Google Scholar]
  3. Obinata, G.; Anderson, B.D. Model Reduction for Control System Design; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  4. Carlberg, K.; Farhat, C.; Cortial, J.; Amsallem, D. The GNAT method for nonlinear model reduction: Effective implementation and application to computational fluid dynamics and turbulent flows. J. Comput. Phys. 2013, 242, 623–647. [Google Scholar] [CrossRef] [Green Version]
  5. Rowley, C.W.; Dawson, S.T. Model reduction for flow analysis and control. Ann. Rev. Fluid Mech. 2017, 49, 387–417. [Google Scholar] [CrossRef]
  6. Amsallem, D.; Farhat, C. Stabilization of projection-based reduced-order models. Int. J. Numer. Meth. Eng. 2012, 91, 358–377. [Google Scholar] [CrossRef]
  7. Xie, X.; Wells, D.; Wang, Z.; Iliescu, T. Approximate deconvolution reduced order modeling. Comput. Methods Appl. Mech. Eng. 2017, 313, 512–534. [Google Scholar] [CrossRef] [Green Version]
  8. Kutz, J.N.; Brunton, S.L.; Brunton, B.W.; Proctor, J.L. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; SIAM: Philadelphia, PA, USA, 2016; Volume 149. [Google Scholar]
  9. San, O.; Maulik, R. Extreme learning machine for reduced order modeling of turbulent geophysical flows. Phys. Rev. E 2018, 97, 042322. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Xiao, D.; Fang, F.; Buchan, A.; Pain, C.; Navon, I.; Muggeridge, A. Non-intrusive reduced order modelling of the Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 2015, 293, 522–541. [Google Scholar] [CrossRef]
  11. Peherstorfer, B.; Willcox, K. Data-driven operator inference for nonintrusive projection-based model reduction. Comput. Methods Appl. Mech. Eng. 2016, 306, 196–215. [Google Scholar] [CrossRef] [Green Version]
  12. Noack, B.R.; Stankiewicz, W.; Morzyński, M.; Schmid, P.J. Recursive dynamic mode decomposition of transient and post-transient wake flows. J. Fluid Mech. 2016, 809, 843–872. [Google Scholar] [CrossRef] [Green Version]
  13. Loiseau, J.C.; Brunton, S.L. Constrained sparse Galerkin regression. J. Fluid Mech. 2018, 838, 42–67. [Google Scholar] [CrossRef] [Green Version]
  14. Carlberg, K.; Barone, M.; Antil, H. Galerkin v. discrete-optimal projection in nonlinear model reduction. arXiv 2015, arXiv:1504.03749. [Google Scholar]
  15. Balajewicz, M.J.; Dowell, E.H.; Noack, B.R. Low-dimensional modelling of high-Reynolds-number shear flows incorporating constraints from the Navier–Stokes equation. J. Fluid Mech. 2013, 729, 285–308. [Google Scholar] [CrossRef]
  16. Ballarin, F.; Manzoni, A.; Quarteroni, A.; Rozza, G. Supremizer stabilization of POD–Galerkin approximation of parametrized steady incompressible Navier–Stokes equations. Int. J. Numer. Meth. Eng. 2015, 102, 1136–1161. [Google Scholar] [CrossRef]
  17. Xie, X.; Mohebujjaman, M.; Rebholz, L.; Iliescu, T. Data-driven filtered reduced order modeling of fluid flows. SIAM J. Sci. Comput. 2018, 40, B834–B857. [Google Scholar] [CrossRef]
  18. Protas, B.; Noack, B.R.; Östh, J. Optimal nonlinear eddy viscosity in Galerkin models of turbulent flows. J. Fluid Mech. 2015, 766, 337–367. [Google Scholar] [CrossRef] [Green Version]
  19. Ostrowski, Z.; Białecki, R.; Kassab, A. Solving inverse heat conduction problems using trained POD-RBF network inverse method. Inverse Probl. Sci. Eng. 2008, 16, 39–54. [Google Scholar] [CrossRef]
  20. Rogers, C.A.; Kassab, A.J.; Divo, E.A.; Ostrowski, Z.; Bialecki, R.A. An inverse POD-RBF network approach to parameter estimation in mechanics. Inverse Probl. Sci. Eng. 2012, 20, 749–767. [Google Scholar] [CrossRef] [Green Version]
  21. Xiao, D.; Fang, F.; Pain, C.; Hu, G. Non-intrusive reduced-order modelling of the Navier–Stokes equations based on RBF interpolation. Int. J. Numer. Methods Fluids 2015, 79, 580–595. [Google Scholar] [CrossRef]
  22. San, O.; Maulik, R. Neural network closures for nonlinear model order reduction. Adv. Comput. Math. 2018, 44, 1717–1750. [Google Scholar] [CrossRef] [Green Version]
  23. Chang, B.; Meng, L.; Haber, E.; Tung, F.; Begert, D. Multi-level residual networks from dynamical systems view. arXiv 2017, arXiv:1710.10348. [Google Scholar]
  24. Lu, Y.; Zhong, A.; Li, Q.; Dong, B. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv 2017, arXiv:1710.10121. [Google Scholar]
  25. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937. [Google Scholar] [CrossRef] [Green Version]
  26. Rudy, S.H.; Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Data-driven discovery of partial differential equations. Sci. Adv. 2017, 3, e1602614. [Google Scholar] [CrossRef]
  27. Raissi, M.; Karniadakis, G.E. Hidden physics models: Machine learning of nonlinear partial differential equations. J. Comput. Phys. 2018, 357, 125–141. [Google Scholar] [CrossRef] [Green Version]
  28. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Machine learning of linear differential equations using Gaussian processes. J. Comput. Phys. 2017, 348, 683–693. [Google Scholar] [CrossRef] [Green Version]
  29. Wang, Z.; Xiao, D.; Fang, F.; Govindan, R.; Pain, C.C.; Guo, Y. Model identification of reduced order fluid dynamics systems using deep learning. Int. J. Numer. Methods Fluids 2018, 86, 255–268. [Google Scholar] [CrossRef]
  30. Chen, T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. arXiv 2018, arXiv:1806.07366. [Google Scholar]
  31. Parish, E.J.; Duraisamy, K. A paradigm for data-driven predictive modeling using field inversion and machine learning. J. Comput. Phys. 2016, 305, 758–774. [Google Scholar] [CrossRef] [Green Version]
  32. Gouasmi, A.; Parish, E.J.; Duraisamy, K. A priori estimation of memory effects in reduced-order models of nonlinear systems using the Mori–Zwanzig formalism. Proc. R. Soc. A 2017, 473, 20170385. [Google Scholar] [CrossRef] [PubMed]
  33. Wells, D.; Wang, Z.; Xie, X.; Iliescu, T. An evolve-then-filter regularized reduced order model for convection-dominated flows. Int. J. Numer. Methods Fluids 2017, 84, 598–615. [Google Scholar] [CrossRef]
  34. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Multistep Neural Networks for Data-driven Discovery of Nonlinear Dynamical Systems. arXiv 2018, arXiv:1801.01236. [Google Scholar]
  35. Ascher, U.M.; Petzold, L.R. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations; SIAM: Philadelphia, PA, USA, 1998; Volume 61. [Google Scholar]
  36. Embree, M. Numerical Analysis Lecture Notes. February 2018. Available online: http://www.math.vt.edu/people/embree/math5466/nanotes.pdf (accessed on 15 June 2019).
  37. Zhang, X.; Li, Z.; Change Loy, C.; Lin, D. Polynet: A pursuit of structural diversity in very deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 718–726. [Google Scholar]
  38. Schäfer, M.; Turek, S. The benchmark problem “flow around a cylinder”. Flow Simul. High-Perform. Comput. II 1996, 52, 547–566. [Google Scholar]
  39. Brunton, S.; Tu, J.; Bright, I.; Kutz, J. Compressive sensing and low-rank libraries for classification of bifurcation regimes in nonlinear dynamical systems. SIAM J. Appl. Dyn. Syst. 2014, 13, 1716–1732. [Google Scholar] [CrossRef]
  40. Mohebujjaman, M.; Rebholz, L.G.; Xie, X.; Iliescu, T. Energy balance and mass conservation in reduced order models of fluid flows. J. Comput. Phys. 2017, 346, 262–277. [Google Scholar] [CrossRef]
  41. Caiazzo, A.; Iliescu, T.; John, V.; Schyschlowa, S. A numerical investigation of velocity-pressure reduced order models for incompressible flows. J. Comput. Phys. 2014, 259, 598–616. [Google Scholar] [CrossRef]
Figure 1. Flowchart of projection based model reduction and the new non-intrusive learning reduced order modeling framework.
Figure 2. Channel flow around a cylinder domain.
Figure 3. Phase portraits of the coefficients a 2 , a 3 , a 4 from linear multistep neural network (LMNet) to learn the reduced order model (LMNet-ROM) (red), Galerkin projection (GP)-ROM (green) and direct numerical simulation (DNS) data (blue) with dimension r = 8 .
Figure 4. Plots of the time evolution of energy E ( t j ) (left) and drag (right). The solutions are generated from LMNet-ROM and GP-ROM with dimension r = 8 .
Figure 5. Vorticity prediction plots from the solution of LMNet-ROM (right) and GP-ROM (middle) with dimension r = 8 . The exact data (DNS) is plotted on the left.
Table 1. Average L 2 error between trajectories of the learned reduced order model with dimension r = 8 and the exact data for the different number of hidden layers and neurons.
Layers \ Neurons    64              128             256
1                   5.62 × 10^-2    7.22 × 10^-3    4.31 × 10^-1
2                   4.01 × 10^-3    5.75 × 10^-3    2.95 × 10^-2
3                   1.64 × 10^-2    4.04 × 10^-3    3.11 × 10^-3
Table 2. Average L 2 error between the new model and exact data for the different number of steps and dimensions.
K / Model    r = 4           r = 6           r = 8
1            1.88 × 10^-3    3.05 × 10^-3    7.22 × 10^-3
2            2.83 × 10^-4    6.04 × 10^-4    1.45 × 10^-3
3            2.67 × 10^-4    6.22 × 10^-4    7.14 × 10^-3
4            3.79 × 10^-4    6.20 × 10^-4    6.37 × 10^-4
GP-ROM       1.66 × 10^-1    7.30 × 10^-2    1.92 × 10^-2
Table 3. Average L 2 error for different noise magnitudes.
Noise \ Model    GP-ROM          LMNet-ROM
0.0%             1.92 × 10^-2    7.22 × 10^-3
0.5%             1.93 × 10^-2    2.44 × 10^-3
1%               2.05 × 10^-2    2.20 × 10^-2
5%               2.92 × 10^-2    8.76 × 10
Table 4. Average L 2 error from different models.
Model \ Dimension    r = 4           r = 6           r = 8
EF-ROM               1.23 × 10^-1    7.31 × 10^-2    1.84 × 10^-2
DDF-ROM              2.27 × 10^-1    1.14 × 10^-2    1.22 × 10^-2
LMNet-ROM            1.91 × 10^-3    3.57 × 10^-3    7.22 × 10^-3
Table 5. Offline cost (second) and speed up factor from each reduced order model (ROM) with dimension r = 8 .
Model        Cost         Speed-Up Factor (DNS/ROM)
GP-ROM       855.52 s     43.05
EF-ROM       867.20 s     42.47
DDF-ROM      6373.97 s    5.78
LMNet-ROM    445.25 s     82.71
