Article

Control and Optimization of Interfacial Flows Using Adjoint-Based Techniques

by Alexandru Fikl 1, Vincent Le Chenadec 1,2,* and Taraneh Sayadi 1,3
1 Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
2 Multi Scale Modelling and Simulation Laboratory, Gustave Eiffel University, CNRS UMR 8208, F-77474 Marne-la-Vallée, France
3 Jean-le-Rond d’Alembert Institute, CNRS UMR 7190, Sorbonne University, 75005 Paris, France
* Author to whom correspondence should be addressed.
Fluids 2020, 5(3), 156; https://doi.org/10.3390/fluids5030156
Submission received: 16 July 2020 / Revised: 3 September 2020 / Accepted: 4 September 2020 / Published: 10 September 2020
(This article belongs to the Special Issue Advances in Numerical Methods for Multiphase Flows)

Abstract: The applicability of adjoint-based gradient computation is investigated in the context of interfacial flows. Emphasis is set on the approximation of the transport of a characteristic function in a potential flow by means of an algebraic volume-of-fluid method. A class of optimisation problems with tracking-type functionals is proposed. Continuous (differentiate-then-discretize) and discrete (discretize-then-differentiate) adjoint-based gradient computations are formulated and compared in a one-dimensional configuration, the latter being ultimately used to perform optimisation in two dimensions. The gradient is used in truncated Newton and steepest descent optimisers, and the algorithms are shown to recover optimal solutions. These validations raise a number of open questions, which are finally discussed with directions for future work.

1. Introduction

Two-phase flows are central to a wide range of environmental and technological applications [1,2,3,4]. Energy conversion systems, for example, often rely on the combustion of high-energy-density liquid fuels to meet weight and volume restrictions. The flows encountered in such systems are often turbulent, and therefore encompass a vast range of spatial and temporal scales. In addition to multiple scales, these flows contain multiple physical and chemical phenomena such as capillarity, combustion, flow separation, radiation, and pollutant formation. These phenomena are of a different nature, and their intricate coupling makes their understanding and prediction particularly challenging. As a consequence of the remarkable advances in computational sciences realized over the past decades, complex physical processes can now be simulated with an astonishing degree of fidelity and accuracy. In the case of two-phase flows, in particular, simulation and modeling have evolved from simplistic or back-of-the-envelope estimates (integral balances, analytical estimates) to petascale and soon exascale computing applications, generating unprecedented amounts of information.
While simulating complex flows with an increasing degree of confidence is of great importance, the ability to extract relevant optimization and control strategies dedicated to improving their performance and efficiency is just as crucial. Access to such optimization strategies would have vast implications in many areas such as coating, crop protection, inkjet printing, and energy conversion devices [5,6]. From an environmental perspective, taking the example of energy conversion devices, such capabilities would have a large impact on reducing pollutants and emissions (NOx, soot, etc.), from the oceans all the way to the atmosphere, leading to cleaner and more efficient sources of energy.
Optimisation techniques fall into two broad categories: (i) gradient-based methods and (ii) derivative-free techniques. Examples of derivative-free algorithms include pattern search methods, approximation models, response surfaces and evolutionary/heuristic global algorithms. While efficient derivative-free algorithms based on the surrogate management framework [7] and artificial neural networks [8] have been used for optimization in fluid mechanics, mainly in the area of aerodynamic shape optimization, they can require many function evaluations (for training purposes, for example). When detailed simulations of interfacial flows are concerned, each function evaluation requires a full (commonly unsteady) CFD computation. In such situations, gradient-based methods are at an advantage.
Common approaches to extracting gradient information are analytical differentiation and finite differences. As far as the analytical determination of the gradient is concerned, the subtleties associated with the numerical approach, such as the meshing process, the discretization, and the complexity of the governing equations, especially when regions with steep gradients or discontinuities (interfaces) are present, make deriving an analytical gradient expression close to impossible, so this route suffers from limited applicability. In the case of finite differences, the numerous parameters involved in the modeling make this approach prohibitively expensive for large problems, and it can also easily be overwhelmed by numerical noise. The adjoint-based methodology allows the gradient to be determined at a cost comparable to a single function evaluation, regardless of the number of parameters, and is therefore a suitable alternative. Using this approach, the gradient is computed by solving algebraic or differential equations for the problem’s Lagrange multipliers (adjoint variables), and is in turn used in standard optimization algorithms that rely on Jacobian information (such as the conjugate-gradient family). The use of adjoint methods for design and optimization has been an active area of research, which started with the pioneering work of Pironneau [9] with applications in fluid mechanics and was later extended to aeronautical shape optimization by Jameson and co-workers [10,11,12]. Ever since these groundbreaking studies, adjoint-based methods have been widely used in fluid mechanics, especially in the fields of acoustics and thermo-acoustics [13,14]. These areas provide a suitable application for adjoint-based methods, which are inherently linear, since they too are dominated by linear dynamics. Recently, nonlinear problems have also been tackled, within the context of optimal control of separation on a realistic high-lift airfoil or wing, enhancement of mixing efficiency, minimal turbulence seeds and shape optimization, to name a few [15,16,17,18]. The presence of discontinuities due to interfaces, together with the unsteady nature of relevant multiphase flow configurations, constitutes another step in complexity.
As far as interfacial flows are concerned, significant research efforts have been dedicated to the sensitivity analysis of thermal-hydraulic models for light-water reactor simulations, led by Cacuci [19,20]. The formalism developed by Cacuci and co-workers is based upon the two-fluid methodology, which requires a large number of closure assumptions (especially related to the interface topology) and lacks predictability in the absence of experimental data. A second research effort can be found in the motion planning of crystal growth and molten metal solidification, where, by adjusting the heat deposition on a sample, one controls the geometry of the solid-liquid interface [21,22]. Transport phenomena in solidification are limited to heat diffusion. While interface motion due to the energy balance raises a significant number of challenges (Stefan flow), convective, viscous and capillary effects are absent. The underlying mathematical challenges, even in the low-Reynolds and low-Weber number ranges, remain widely unexplored. Likewise, adjoint-based methods have also been applied for optimization purposes in multi-phase flows through porous media (Darcy flow [23]). Adjoint-based methods have, however, not yet been applied to high-fidelity simulations of unsteady interfacial flows, and this remains an open question that needs addressing.
The proposed study, therefore, focuses on the control of an interface in a potential flow for unity density and viscosity ratios in the absence of surface tension. The emphasis is set on (i) the solution of the characteristic function equation using an algebraic Volume-of-Fluid method (as opposed to the level set method in all documented studies) and (ii) the formulation of both continuous and discrete adjoint problems.
The manuscript is structured as follows: Section 2 describes the optimisation problem and the equations governing the evolution of the flow (primal problem), as well as the continuous adjoint equations (dual problem). Section 3 presents the discretization of the aforementioned problem (functional, primal and dual problems). The use of the adjoint-based gradient computation is implemented in a steepest descent framework in Section 4, ultimately applied in Section 5 where the results of one-dimensional and two-dimensional optimisations are presented and discussed. Finally, conclusions and perspective of this work are proposed in Section 6.

2. Continuous Governing Equations

Traditionally, adjoint equations are derived from the continuous primal (forward) equations by means of the Lagrange multiplier theory, and are used to obtain first-order sensitivity information for chosen cost functionals. Note that the resulting adjoint operators are then implemented using the differentiate-then-discretize approach, otherwise referred to as the continuous adjoint method. In this work, however, the alternative discretize-then-differentiate approach is selected, where the spatially discretised primal equations are numerically processed and differentiated to produce the associated adjoint code (discrete adjoint method). The continuous form of the adjoint equations, however, remains important since it clarifies the overall structure of the problem, and in particular how the gradient is computed and updated throughout the iterative optimiser. Therefore, in this section, the equations governing the primal (direct/forward) and the dual (adjoint/backward) problems, together with a consistent set of boundary conditions, are presented. The discrete adjoint approach, used to generate the results of Section 5, is presented subsequently in Section 3.
The problem of interest involves controlling the movement of the interface by means of boundary conditions on an auxiliary velocity field. In the general case, the evolution of the velocity field is governed by the incompressible Navier-Stokes equations. For the current application, a simplified potential flow is considered, where the inflow/outflow velocity on chosen boundaries is used as a control variable. In generic terms, the optimization problem may be summarized as follows
$$\min_{h \in H_{ad}} J(h) \quad \text{subject to the flow's governing equations},$$
where the objective functional J, the convex set of admissible control variables H_ad and the flow's governing equations are described in the remainder of this section, together with the adjoint-based gradient computation.

2.1. Objective Functional

The objective function is constructed using tracking-type functionals of the following form
$$J(\alpha, h) = \frac{\gamma_1}{2} \int_0^T \left\| S\big(\alpha(t, \cdot), \phi(t, \cdot)\big) \right\|_{\Omega}^2 dt + \frac{\gamma_2}{2} \left\| S\big(\alpha(T, \cdot), \phi(T, \cdot)\big) \right\|_{\Omega}^2 + \frac{\gamma_3}{2} \left\| h \right\|_{\Gamma_N}^2,$$
where,
$$S : (\alpha, \phi) \mapsto \alpha (1 - \alpha)\, \phi,$$
and ϕ is a distance function whose zero level set matches the desired interface location, h denotes the controller, and  α the phase indicator. In order to express the norms appearing in the above equation, volume and surface inner products are defined, respectively, as follows,
$$\langle f, g \rangle_{\Omega} \triangleq \int_{\Omega} f(x)\, g(x)\, dV, \qquad \langle f, g \rangle_{\Gamma} \triangleq \int_{\Gamma} f(x)\, g(x)\, dS,$$
with induced norms $\|f\|_{\cdot}^2 = \langle f, f \rangle_{\cdot}$. This form of the objective functional is close to the one proposed by Bernauer et al. in the context of Stefan flow [22]. The first term on the right-hand side of Equation (2) tracks the evolution of the interface over time, the second tracks its value at the final time, and the third is a regularization term for the control variable.

2.2. Primal (Forward) Problem

The two-phase Navier-Stokes equations, supplemented by a scalar advection equation characterizing the interface location (the characteristic function equation), form the so-called one-fluid formulation [24]. Following this approach, the evolution of interface is defined by the linear advection equation of an indicator (Heaviside) function
$$\begin{cases} \dfrac{\partial \alpha}{\partial t} + u \cdot \nabla \alpha = 0, & (t, x) \in [0, T] \times \Omega, \\[4pt] \alpha(0, x) = \alpha_0(x), & x \in \Omega, \\[4pt] \alpha(t, x) = \alpha_D(t, x), & (t, x) \in [0, T] \times \Gamma, \end{cases}$$
where u is the velocity field, α_0 is the initial condition and α_D denotes the Dirichlet boundary conditions prescribed on the inflowing boundary Γ = { x ∈ ∂Ω | u · n ≤ 0 }. The domain Ω can be separated into two subdomains occupied by light (Ω_l) and dark (Ω_d) fluids, shown in Figure 1, such that
$$\alpha(x) = \begin{cases} 0, & x \in \Omega_l, \\ 1, & x \in \Omega_d. \end{cases}$$
The velocity field driving the interface advection is assumed to be irrotational and incompressible. Therefore, there exists a velocity potential φ such that u = ∇φ. The velocity potential must satisfy the following Laplace equation
$$\begin{cases} \Delta \varphi = 0, & x \in \Omega, \\ \varphi = g(x), & x \in \Gamma_D, \\ \nabla \varphi \cdot n = h(x), & x \in \Gamma_N, \end{cases}$$
where Γ_D and Γ_N are disjoint subsets of the domain boundary ∂Ω, over which Dirichlet and Neumann boundary conditions are prescribed, respectively. The Neumann boundary data h(x) are used as the control quantity, whereas the Dirichlet boundary conditions are given and fixed. The case Γ_D = ∅ is also considered, where all the domain boundaries are controlled.
The constraint set H_ad for the Neumann boundary conditions on the velocity potential is assumed to be a given Hilbert space. If Γ_D = ∅, h must also satisfy the necessary condition for the existence of a solution to Equation (7)
$$\int_{\partial \Omega} h(x)\, dS = 0.$$
The above constraint is convex, so the resulting set H_ad is also convex. Additional assumptions for the existence of a solution to the optimization problem (1) are given, e.g., in Gunzburger [25]. Uniqueness is outside the scope of the current work, but can be expected, since all the equations and constraints in question are linear.

2.3. Dual (Adjoint) Problem

The theory of Lagrange multipliers is employed to derive the continuous adjoint equations: an augmented cost functional (Lagrangian) is constructed with the constraints provided by Equations (5) and (7). For the optimization problem (Equation (1)), the Lagrangian is given by
$$\begin{aligned} \mathcal{L} = J &- \int_0^T \!\! \int_{\Omega} \alpha^* \left( \frac{\partial \alpha}{\partial t} + \nabla \varphi \cdot \nabla \alpha \right) dV\, dt - \int_{\Omega} \varphi^* \, \Delta \varphi \, dV - \int_{\Omega} \alpha_0^* \big( \alpha(0, \cdot) - \alpha_0 \big)\, dV \\ &- \int_0^T \!\! \int_{\Gamma} \alpha_D^* \big( \alpha - \alpha_D \big)\, dS\, dt - \int_{\Gamma_D} \varphi_D^* \big( \varphi - g \big)\, dS - \int_{\Gamma_N} \varphi_N^* \big( \nabla \varphi \cdot n - h \big)\, dS, \end{aligned}$$
where α*, α_0*, α_D*, φ*, φ_D* and φ_N* are the associated Lagrange multipliers (adjoint variables). In order to compute the gradient of J (required by the gradient-based optimizer used in this work), the variations with respect to all variables but h are set to zero.
By construction, the primal problem (Equations (5) and (7), which also include the initial and boundary conditions) is recovered by setting the variations with respect to the adjoint variables to zero. By setting the variations with respect to α and φ to zero, the adjoint equations are obtained: Firstly, the advection equation for the adjoint volume fraction
$$\begin{cases} -\dfrac{\partial \alpha^*}{\partial t} - \nabla \cdot \left( \alpha^* \nabla \varphi \right) = \gamma_1 R(\alpha, \phi), & (t, x) \in [0, T] \times \Omega, \\[4pt] \alpha^*(T, x) = \gamma_2 R\big( \alpha(T, x), \phi(T, x) \big), & x \in \Omega, \\[4pt] \alpha^*(t, x) = 0, & (t, x) \in [0, T] \times \partial\Omega \setminus \Gamma, \end{cases}$$
where the source term comes from the derivative of the cost functional and is given by
$$R = S \frac{\partial S}{\partial \alpha},$$
with S as defined in Equation (3). It should be noted that the boundary conditions are only defined on the outflow boundaries of the adjoint equations. Since the adjoint is solved backward in time, boundaries outflowing in the primal problem turn into inflowing boundaries in the dual one.
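For the tracking term used here, the chain rule applied to the S of Equation (3) gives the source explicitly:

$$R(\alpha, \phi) = S \frac{\partial S}{\partial \alpha} = \alpha (1 - \alpha) \phi \cdot (1 - 2\alpha) \phi = (1 - 2\alpha)\, \alpha (1 - \alpha)\, \phi^2.$$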
Secondly, the adjoint velocity potential satisfies a Poisson equation
$$\begin{cases} \Delta \varphi^* = \displaystyle\int_0^T \nabla \cdot \left( \alpha^* \nabla \alpha \right) dt, & x \in \Omega, \\[4pt] \varphi^* = 0, & x \in \Gamma_D, \\[4pt] \nabla \varphi^* \cdot n = \displaystyle\int_0^T \alpha^* \, \nabla \alpha \cdot n \, dt, & x \in \Gamma_N. \end{cases}$$
We can see that the boundary conditions on the adjoint velocity potential φ* automatically satisfy the condition for the existence and uniqueness of a solution when Γ_D = ∅. In addition, unless the interface intersects the boundary of the computational domain, the Neumann boundary condition on the adjoint velocity potential is homogeneous.
The variation with respect to the only remaining variable, h, yields the gradient, which is obtained by repeatedly applying integration by parts and matching terms where appropriate, resulting in
$$\frac{dJ}{dh} = \gamma_3\, h - \varphi^* \big|_{\Gamma_N}.$$
This gradient is used in the gradient-based iterative algorithm, and vanishes once an extremum is reached; this is often referred to as the optimality condition.

3. Discrete Governing Equations

In this section, the discretisations of the primal and dual problems are presented. The main difference between the equations presented in Section 2 and those in this section is that the variables in the previous section belong to an infinite-dimensional Hilbert function space, whereas in this section the variables belong to a subspace of ℝ^n. As mentioned previously, the discretize-then-differentiate approach described here is the one that was ultimately used to produce the results in Section 5.

3.1. Objective Functional

The discrete form of the cost functional (2) is defined as
$$J_h(h) = \frac{\gamma_1}{2} \sum_{n=1}^{N} \left\| S(\alpha^n, \phi^n) \right\|_{\Omega_h}^2 \Delta t^n + \frac{\gamma_2}{2} \left\| S(\alpha^N, \phi^N) \right\|_{\Omega_h}^2 + \frac{\gamma_3}{2} \left\| h \right\|_{\Gamma_N^h}^2,$$
where N denotes the total number of timesteps. The required inner products are then constructed as follows,
$$(f, g)_{\Omega_h} \triangleq f^T V g, \qquad (f, g)_{\Gamma_h} \triangleq f^T A g,$$
where V and A denote the “volume” of the cells and the “area” of the cell faces intersecting the boundary, respectively (these definitions depend on the dimensionality of the problem). As such, V and A are positive-definite diagonal weight matrices, allowing the definition of non-degenerate associated norms.
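As an illustration of how these weighted norms enter the computation, the following sketch (ours, with illustrative names; a one-dimensional grid and stored time histories are assumed) evaluates the three terms of the discrete functional.

```python
import numpy as np

def discrete_cost(alpha_hist, phi_hist, h, dt, cell_vol, face_area, gamma):
    """Evaluate the discrete tracking functional J_h (illustrative sketch).

    alpha_hist, phi_hist : arrays of shape (N + 1, n_cells) holding the volume
        fraction and the desired signed-distance samples at every timestep.
    h         : control values on the Neumann boundary faces.
    cell_vol  : diagonal of V (cell volumes); face_area : diagonal of A.
    gamma     : the weights (gamma_1, gamma_2, gamma_3).
    """
    g1, g2, g3 = gamma
    S = alpha_hist * (1.0 - alpha_hist) * phi_hist        # S(alpha, phi) at every step
    norm_sq = (S**2 * cell_vol).sum(axis=1)               # ||S||_{Omega_h}^2 per step
    J_track = 0.5 * g1 * np.sum(norm_sq[1:] * dt)         # distributed-in-time term
    J_final = 0.5 * g2 * norm_sq[-1]                      # final-time term
    J_reg   = 0.5 * g3 * np.sum(face_area * h**2)         # control regularization
    return J_track + J_final + J_reg
```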

3.2. The Thinc Scheme

For the discretization of the advection Equation (5), a variant of the algebraic Volume-of-Fluid (VOF) method, referred to as the THINC scheme [26], is used. The main advantages of algebraic Volume-of-Fluid methods, as opposed to geometric VOF, are their simplicity and their differentiability, a requirement in the context of gradient-based optimization.
Here, the one-dimensional version of the THINC scheme presented in Xiao et al. [26] is briefly described. Standard finite volume notation is used, where Ω_i is a cell of length V_i = Δx_i centered at x_i. This cell is bounded by the two points x_{i±1/2} that form the boundary ∂Ω_i. The discrete dependent field consists of the cell-averaged indicator function values, also referred to as volume fractions,
$$\alpha_i(t) = \frac{1}{V_i} \int_{\Omega_i} \alpha(t, x)\, dx.$$
For divergence-free velocity fields, Equation (5) may be recast in conservative form. Upon integrating over a cell Ω i , the semi-discrete equation is obtained
$$\frac{d \alpha_i}{d t} + \frac{(u \alpha)_{i+1/2} - (u \alpha)_{i-1/2}}{\Delta x_i} = 0.$$
Finally, integration between t n and t n + 1 yields
$$\alpha_i^{n+1} - \alpha_i^n + f_{i+1/2}^{n+1/2} - f_{i-1/2}^{n+1/2} = 0,$$
where α_i^n = α_i(t^n) is as defined in Equation (17) and f_{i±1/2}^{n+1/2} represents the flux through the face x_{i±1/2} between t^n and t^{n+1}.
While classical finite volume methods leverage polynomial approximations of the dependent field to approximate the flux functions, the THINC scheme relies on a parametrization more suited to indicator functions, which over any given cell Ω i reads
$$\phi_i^n : x \mapsto \frac{1}{2} \left[ 1 + \gamma_i^n \tanh\!\left( \beta \left( \frac{x - x_i}{\Delta x_i} - \delta_i^n \right) \right) \right],$$
or in non-dimensional form,
$$\tilde{\phi}_i^n : \xi \mapsto \frac{1}{2} \left[ 1 + \gamma_i^n \tanh\!\left( \beta \left( \xi - \delta_i^n \right) \right) \right],$$
where
$$\xi = \frac{x - x_i}{\Delta x_i}.$$
The global parameter β and the local parameters γ_i^n and δ_i^n are determined as follows (a short numerical sketch is given after this list):
  • β is the non-dimensional slope steepness, defining the sharpness of the interface. Larger β values imply sharper interfaces. Note that β is often expressed in terms of two other parameters as follows,
    $$\beta = \eta^{-1} \operatorname{arctanh}(1 - 2\epsilon),$$
    where η represents the number of cells for the hyperbolic tangent to transition from −1 + ϵ to 1 − ϵ;
  • γ_i^n is the interface direction,
    $$\gamma_i^n = \operatorname{sgn}\!\left( \alpha_{i+1}^n - \alpha_{i-1}^n \right),$$
    where sgn denotes the signum function;
  • δ_i^n is the non-dimensional intercept or jump location, which defines the 0.5 level set of ϕ_i^n and is imposed by the volume conservation constraint
    $$\alpha_i^n = \frac{1}{V_i} \int_{\Omega_i} \phi_i^n(x)\, dx,$$
    which can be inverted explicitly to give
    $$\delta_i^n = \frac{1}{2 \beta} \ln\!\left[ \frac{\sinh\!\big( \beta (1 - \omega_i^n)/2 \big)}{\sinh\!\big( \beta (1 + \omega_i^n)/2 \big)} \right],$$
    where ω_i^n = γ_i^n (2 α_i^n − 1).
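The following sketch (ours; variable names are illustrative) evaluates these parameters cell by cell, clipping the volume fraction so that the intercept stays finite, as discussed further below.

```python
import numpy as np

def thinc_parameters(alpha, eta=12.5, eps=0.01):
    """Per-cell THINC parameters following the formulas above (illustrative sketch).

    alpha : 1D array of cell-averaged volume fractions.
    Returns beta (scalar) together with gamma and delta for the interior cells.
    """
    beta = np.arctanh(1.0 - 2.0 * eps) / eta              # global slope steepness
    gamma = np.sign(alpha[2:] - alpha[:-2])               # interface direction
    gamma[gamma == 0.0] = 1.0                             # arbitrary choice on flat data
    a = np.clip(alpha[1:-1], eps, 1.0 - eps)              # keep the intercept finite
    omega = gamma * (2.0 * a - 1.0)
    delta = np.log(np.sinh(0.5 * beta * (1.0 - omega))
                   / np.sinh(0.5 * beta * (1.0 + omega))) / (2.0 * beta)
    return beta, gamma, delta
```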
The THINC approximation of the flux f_{i−1/2}^{n+1/2} then consists of
$$f_{i-1/2}^{n+1/2} = f_{i-1}^{n,+} + f_i^{n,-},$$
where
$$f_{i-1}^{n,+} = \int_{1/2 - \lambda_{i-1/2}^{n,+}}^{1/2} \tilde{\phi}_{i-1}^n(\xi)\, d\xi = \frac{\lambda_{i-1/2}^{n,+}}{2} + \frac{\gamma_{i-1}^n}{2 \beta} \ln\!\left[ \frac{\cosh\!\big( \beta ( \delta_{i-1}^n - 0.5 ) \big)}{\cosh\!\big( \beta ( \delta_{i-1}^n - 0.5 + \lambda_{i-1/2}^{n,+} ) \big)} \right],$$
$$f_{i}^{n,-} = \int_{-1/2 - \lambda_{i-1/2}^{n,-}}^{-1/2} \tilde{\phi}_{i}^n(\xi)\, d\xi = \frac{\lambda_{i-1/2}^{n,-}}{2} - \frac{\gamma_{i}^n}{2 \beta} \ln\!\left[ \frac{\cosh\!\big( \beta ( \delta_{i}^n + 0.5 + \lambda_{i-1/2}^{n,-} ) \big)}{\cosh\!\big( \beta ( \delta_{i}^n + 0.5 ) \big)} \right],$$
and
$$\lambda_{i-1/2}^{n,+} = \max\!\left( \frac{u_{i-1/2}^n \, \Delta t^{n+1/2}}{\Delta x_{i-1}},\; 0 \right), \qquad \lambda_{i-1/2}^{n,-} = \min\!\left( \frac{u_{i-1/2}^n \, \Delta t^{n+1/2}}{\Delta x_{i}},\; 0 \right).$$
Away from interfaces (α → 0 or α → 1), the intercept value (Equation (25)) grows unbounded, which can trigger numerical instabilities. The THINC flux value itself, however, remains bounded, and the issue can be resolved by simply switching to the first-order approximations presented below away from the interface.
On the one hand, it can be shown that
$$f_{i-1}^{n,+} = \alpha_{i-1}^n \exp\!\left( \gamma_{i-1}^n \beta \left( 1 - \lambda_{i-1/2}^{n,+} \right) \right) \frac{\sinh\!\big( \beta \lambda_{i-1/2}^{n,+} \big)}{\sinh \beta} + O\!\left( \big( \alpha_{i-1}^n \big)^2 \right)$$
as α_{i−1}^n → 0, and
$$f_{i-1}^{n,+} = \lambda_{i-1/2}^{n,+} - \left( 1 - \alpha_{i-1}^n \right) \exp\!\left( -\gamma_{i-1}^n \beta \left( 1 - \lambda_{i-1/2}^{n,+} \right) \right) \frac{\sinh\!\big( \beta \lambda_{i-1/2}^{n,+} \big)}{\sinh \beta} + O\!\left( \big( 1 - \alpha_{i-1}^n \big)^2 \right)$$
as α_{i−1}^n → 1. On the other hand,
$$f_{i}^{n,-} = \alpha_{i}^n \exp\!\left( -\gamma_{i}^n \beta \left( 1 + \lambda_{i-1/2}^{n,-} \right) \right) \frac{\sinh\!\big( \beta \lambda_{i-1/2}^{n,-} \big)}{\sinh \beta} + O\!\left( \big( \alpha_{i}^n \big)^2 \right)$$
as α_i^n → 0, and
$$f_{i}^{n,-} = \lambda_{i-1/2}^{n,-} - \left( 1 - \alpha_{i}^n \right) \exp\!\left( \gamma_{i}^n \beta \left( 1 + \lambda_{i-1/2}^{n,-} \right) \right) \frac{\sinh\!\big( \beta \lambda_{i-1/2}^{n,-} \big)}{\sinh \beta} + O\!\left( \big( 1 - \alpha_{i}^n \big)^2 \right)$$
as α_i^n → 1.
Given these first-order approximations of the THINC flux away from interfaces, one simply needs to revert to the appropriate limit whenever α < ϵ or 1 − ϵ < α to construct a numerically stable flux.
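A sketch of the resulting stabilized face-flux contribution is given below (our reconstruction of the formulas above, not the authors' code); the closed-form log-cosh expression is used near the interface, and the first-order expansions are substituted whenever the upwind volume fraction lies within ϵ of 0 or 1.

```python
import numpy as np

def thinc_face_flux(a_up, gamma, delta, lam, beta, eps=0.01):
    """One upwind-side contribution to the THINC face flux (illustrative sketch).

    a_up         : volume fraction of the upwind cell,
    gamma, delta : THINC parameters of that cell,
    lam          : signed CFL fraction, lambda^+ (>= 0) or lambda^- (<= 0),
    beta         : slope steepness.
    """
    s = np.sinh(beta * lam) / np.sinh(beta)
    if lam >= 0.0:                               # f^+ : upwind cell on the left
        if a_up < eps:
            return a_up * np.exp(gamma * beta * (1.0 - lam)) * s
        if a_up > 1.0 - eps:
            return lam - (1.0 - a_up) * np.exp(-gamma * beta * (1.0 - lam)) * s
        return 0.5 * lam + gamma / (2.0 * beta) * np.log(
            np.cosh(beta * (delta - 0.5)) / np.cosh(beta * (delta - 0.5 + lam)))
    else:                                        # f^- : upwind cell on the right
        if a_up < eps:
            return a_up * np.exp(-gamma * beta * (1.0 + lam)) * s
        if a_up > 1.0 - eps:
            return lam - (1.0 - a_up) * np.exp(gamma * beta * (1.0 + lam)) * s
        return 0.5 * lam - gamma / (2.0 * beta) * np.log(
            np.cosh(beta * (delta + 0.5 + lam)) / np.cosh(beta * (delta + 0.5)))
```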

3.3. Primal (Forward) Problem

Following spatial and temporal integration, the advection equation (Equation (5)) is discretized as follows
$$\alpha^{n+1} = \alpha^n + \Delta t \, V^{-1} A(\alpha^n, \varphi),$$
where the discrete operator A can be further broken into terms pertaining to each dimension, e.g., in two dimensions,
$$A(\alpha^n, \varphi) = X(\alpha^n, \varphi) + Y(\alpha^n, \varphi).$$
This notation generalizes the one used in the previous section (Equation (18)) by defining more general operators, for example in the x direction
$$X_{ij}(\alpha^n, \varphi) = -\Big[ A_{i+1/2,j} \, f_{i+1/2,j}^n(\alpha^n, \varphi) - A_{i-1/2,j} \, f_{i-1/2,j}^n(\alpha^n, \varphi) \Big] + \alpha_{ij}^n \Big[ A_{i+1/2,j} \big( \varphi_{i+1,j} - \varphi_{i,j} \big) - A_{i-1/2,j} \big( \varphi_{i,j} - \varphi_{i-1,j} \big) \Big],$$
where the numerical flux can be chosen as the THINC flux described in the previous section or any other (first-order upwind, etc.). The operators in the other spatial directions are defined similarly. It should be noted that, with this generalization, no assumption is made regarding the incompressibility of the velocity field: this condition is satisfied through the velocity potential, and its accuracy depends entirely on the accuracy of the Poisson solver and of the approximation used.
Away from the boundaries, the discrete Laplace equation in two dimensions simply takes the following form,
$$\Delta y_j \left( \frac{\varphi_{i+1,j} - \varphi_{i,j}}{\Delta x_{i+1/2}} - \frac{\varphi_{i,j} - \varphi_{i-1,j}}{\Delta x_{i-1/2}} \right) + \Delta x_i \left( \frac{\varphi_{i,j+1} - \varphi_{i,j}}{\Delta y_{j+1/2}} - \frac{\varphi_{i,j} - \varphi_{i,j-1}}{\Delta y_{j-1/2}} \right) = 0,$$
with classical treatments of the Dirichlet and Neumann boundary conditions used where appropriate. Overall, the system of equations can be rewritten as
$$L \varphi = b(h),$$
where b is the vector of boundary conditions, defined as a function of the control variable h. As in the continuous case, if Γ_D = ∅, the Laplace operator L is singular (with a one-dimensional null space spanned by the constant vector, denoted 1), so the right-hand side must satisfy
$$\big\langle b(h), 1 \big\rangle_{\Omega_h} = 0.$$
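The following sketch (ours, using SciPy's sparse tools; the names and the boundary treatment are illustrative) assembles a five-point Laplacian with Neumann closures and shows one way of handling the singular all-Neumann case by enforcing the zero-mean compatibility condition above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d_neumann(nx, ny, dx, dy):
    """Five-point Laplacian with homogeneous Neumann closures on every boundary
    (illustrative sketch; the boundary data enters through the right-hand side b(h))."""
    def lap_1d(n, d):
        main = -2.0 * np.ones(n)
        main[0] = main[-1] = -1.0             # mirrored ghost cells (Neumann closure)
        off = np.ones(n - 1)
        return sp.diags([off, main, off], [-1, 0, 1]) / d**2
    return (sp.kron(sp.identity(ny), lap_1d(nx, dx))
            + sp.kron(lap_1d(ny, dy), sp.identity(nx)))

def solve_singular(L, b):
    """Solve L phi = b when Gamma_D is empty: L is singular with null space span{1},
    so b is first projected onto zero mean and a zero-mean solution is returned."""
    b = b - b.mean()                          # enforce <b(h), 1> = 0
    phi = spla.lsqr(L.tocsc(), b)[0]          # minimum-norm least-squares solution
    return phi - phi.mean()                   # fix the arbitrary additive constant
```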

3.4. Dual (Adjoint) Problem

The Lagrangian of the discrete problem can be assembled, and the calculus of variations performed, to derive the adjoint of the advection equation,
$$\alpha^{*,n} = \alpha^{*,n+1} + \Delta t^n \, V^{-1} \left( \frac{\partial A}{\partial \alpha^n}(\alpha^n, \varphi) \right)^{\!T} \alpha^{*,n+1} + \Delta t^n \, R(\alpha^n, \phi^n),$$
where R denotes a pointwise evaluation of the function R defined in Equation (11). The final condition (the adjoint equation is marched backward in time) is
$$\alpha^{*,N} = R(\alpha^N, \phi^N).$$
The adjoint velocity potential equation then becomes,
$$L \varphi^* = \sum_{n=0}^{N-1} \Delta t^{n+1} \left( \frac{\partial A}{\partial \varphi}(\alpha^n, \varphi) \right)^{\!T} \alpha^{*,n+1}.$$
Finally, the gradient can now be obtained by differentiating with respect to h , which yields
$$\frac{d J_h}{d h} = \gamma_3 \, h + A^{-1} \left( \frac{d b}{d h}(h) \right)^{\!T} \varphi^* + \sum_{n=0}^{N} \Delta t^n \left( \frac{\partial A}{\partial h}(\alpha^n, \varphi, h) \right)^{\!T} \alpha^{*,n+1}.$$
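A sketch of the backward march is given below (ours; dA_dalpha_T and R are placeholders for the transposed flux Jacobian product and the discrete source term, which would come from differentiating whichever numerical flux is used).

```python
def adjoint_sweep(alpha_hist, phi_d_hist, dt, v_inv, dA_dalpha_T, R):
    """Backward (adjoint) march for the discrete advection equation (sketch).

    alpha_hist  : forward states alpha^0 ... alpha^N stored during the primal solve,
    dA_dalpha_T : callable (alpha, a_star) -> (dA/dalpha)^T a_star,
    R           : callable giving the pointwise source R(alpha, phi).
    Returns the list of adjoint volume fractions alpha^{*,0} ... alpha^{*,N}.
    """
    N = len(alpha_hist) - 1
    a_star = [None] * (N + 1)
    a_star[N] = R(alpha_hist[N], phi_d_hist[N])            # final condition
    for n in range(N - 1, -1, -1):                         # march backward in time
        a_star[n] = (a_star[n + 1]
                     + dt[n] * v_inv * dA_dalpha_T(alpha_hist[n], a_star[n + 1])
                     + dt[n] * R(alpha_hist[n], phi_d_hist[n]))
    return a_star
```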

4. Optimisation Algorithm

In this work, a simple steepest descent algorithm is used to iteratively update the controller until the optimality condition is satisfied. More sophisticated algorithms (e.g., a preconditioned nonlinear conjugate gradient method or quasi-Newton methods such as the BFGS algorithm) can equally be used. The steps of the optimisation procedure are given in Algorithm 1.
Algorithm 1: Optimization algorithm for Equation (1)
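Algorithm 1 is reproduced in the journal as a figure; the sketch below (ours, with a fixed step size and illustrative names) outlines a plain steepest-descent loop of the kind described in this section.

```python
import numpy as np

def steepest_descent(h0, cost_and_grad, step=1.0e-2, tol=1.0e-6, max_iter=200,
                     project=None):
    """Plain steepest-descent loop (illustrative sketch of the procedure above).

    cost_and_grad(h) : solves the primal and dual problems and returns (J_h, dJ_h/dh),
    project          : optional projection onto the admissible set H_ad, e.g., onto
                       zero-mean controls when all of the boundary is controlled.
    """
    h, J = h0.copy(), np.inf
    for _ in range(max_iter):
        J, g = cost_and_grad(h)
        if project is not None:
            g = project(g)                   # keep the update inside H_ad
        if np.linalg.norm(g) < tol:          # optimality condition: gradient vanishes
            break
        h = h - step * g                     # descend along the negative gradient
    return h, J
```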

5. Results

In this section, the results of the discrete adjoint methodology applied to one- and two-dimensional flow configurations are presented and described. The one-dimensional configuration is included for validation purposes. Once validated, the optimisation is performed in a two-dimensional setting and appropriate control strategies are extracted.

5.1. Validation: One-Dimensional Droplets

In the one-dimensional setting, the divergence-free condition imposes a spatially constant velocity field, which matches the control variable h ∈ ℝ (renamed c in the rest of this section). Therefore, the optimisation problem can be simplified accordingly, with the new cost functional reading
$$J(\alpha, c) = \frac{\gamma_1}{2} \int_{\Omega} \left| \phi_d(T) \right|^2 \big( \alpha(T) \left( 1 - \alpha(T) \right) \big)^2 \, dx + \frac{\gamma_2}{2} c^2.$$
As mentioned, the velocity potential and the optimization of boundary conditions are not required here, and hence not included. The remainder of the one-dimensional system is as described previously. The gradient is therefore given by
$$\nabla_c J = \gamma_2 \, c - \int_0^T \!\! \int_{\Omega} \alpha^* \frac{\partial \alpha}{\partial x} \, dx \, dt,$$
and in discrete form,
$$\nabla_c J_h = \gamma_2 \, c - \Delta t \sum_n \sum_i \alpha_i^{*,n+1} \left( \frac{\partial f_{i+1/2}}{\partial c} - \frac{\partial f_{i-1/2}}{\partial c} \right).$$
This completes the definition of the optimization problem. The THINC flux approximation is then used to obtain α_i^n, for all n and i. Using the computed volume fraction, the adjoint equations are solved and, finally, the above cost function and gradient are computed. A truncated Newton algorithm (as implemented in the NumPy/SciPy suite) is used in lieu of the steepest descent algorithm described in Section 4.
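A minimal sketch of this workflow using SciPy's truncated Newton implementation is given below (ours; cost_and_grad is a placeholder for the primal and adjoint solves described above).

```python
import numpy as np
from scipy.optimize import minimize

def optimize_velocity(cost_and_grad, c0=0.1):
    """Minimize the one-dimensional functional with a truncated Newton method (sketch).

    cost_and_grad(c) is a placeholder assumed to run the THINC forward solve and the
    discrete adjoint sweep, returning (J_h(c), [dJ_h/dc]) with the gradient as a
    one-element array.
    """
    result = minimize(lambda c: cost_and_grad(c[0]),
                      x0=np.array([c0]),     # initial guess c^(0) = 0.1
                      jac=True,              # the adjoint supplies the gradient
                      method="TNC")          # truncated Newton
    return result.x[0]                       # expected to approach the optimal c = 1.0
```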
The desired interface is given by the following signed distance function (analytical expression):
$$\phi(t, x) = \begin{cases} x - (-0.8 + ct), & x < -0.7 + ct, \\ (-0.6 + ct) - x, & x \in [-0.7 + ct, \, -0.45 + ct], \\ x - (-0.3 + ct), & x \in [-0.45 + ct, \, -0.2 + ct], \\ (-0.1 + ct) - x, & x > -0.2 + ct, \end{cases}$$
where c is the velocity at which the desired interface moves. Additionally, Ω = [−1.5, 1.5], the final time is T = 1, the desired velocity is c = 1, and the CFL number is set to 0.9. The initial condition for the color function is given by mollifying the above signed distance function:
$$\alpha(0, x) = \frac{1}{2} \left[ 1 + \tanh\!\left( \frac{\beta}{\Delta x} \, \phi(0, x) \right) \right],$$
which can be seen in Figure 2. The parameters of the THINC scheme are η = 12.5 and ϵ = 0.01 (which gives β ≈ 0.18). Finally, the initial guess for starting the optimization algorithm is c^(0) = 0.1, with weights γ_1 = 1.0 and γ_2 = 0.0.
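As a quick arithmetic check of the quoted steepness, the formula for β in Section 3.2 gives

$$\beta = \frac{\operatorname{arctanh}(1 - 2\epsilon)}{\eta} = \frac{\operatorname{arctanh}(0.98)}{12.5} \approx \frac{2.30}{12.5} \approx 0.18,$$

consistent with the value reported above (the same formula with η = 1.0 in Section 5.2 gives β ≈ 2.3).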
The solutions of the state and adjoint equations at time t = T, for the initial guess c^(0) = 0.1 and the optimal value c = 1.0, are shown in Figure 3. As expected, for c = 1.0, the adjoint variable is uniformly zero, indicating an optimal solution. In the case c^(0) = 0.1, the adjoint solutions computed by the continuous and discrete approaches are in good agreement. The adjoint variable assumes non-zero values only in the vicinity of the interface, indicating the direction in which the interface should evolve in order to reduce the objective functional.
The performance of the truncated Newton algorithm is displayed in Figure 4. Figure 4a shows that the value of the objective function decreases monotonically until the optimal value is reached at the sixth iteration. At this point, the value of the cost function does not decrease any further. Note that this value is not exactly zero: this is expected given the diffuse nature of the interface and the averaged information used in the definition of the tracking-type functional (Equation (14)). On the other hand, the residual (calculated by comparison with the optimal c, Figure 4b) also reaches machine zero after six iterations, as a consequence of the symmetry of the problem.

5.2. Two-Dimensional Tests

Having established that the proposed optimization problem is well-posed and that the correct optimal solution can be obtained in a one-dimensional setting, a two-dimensional configuration is considered. The spatial domain is set to Ω = [−0.5, 0.5] × [−0.5, 0.5] and the focus is placed on the solution at the final time T = 0.4, with the following weights:
$$\gamma_1 = 10^{-4}, \qquad \gamma_2 = 10^{4}, \qquad \gamma_3 = 10^{-4}.$$
A large weight (γ_2) is attributed to the solution at time T, in order to ensure that the contribution of the corresponding adjoint variable dominates the calculated gradient. This has been shown to improve the convergence speed of the steepest descent algorithm.
The desired interface contains a single droplet moving uniformly in the x direction. The signed distance function for such an interface is given by:
$$\phi(t, x) = r - \left\| c(t) - x \right\|,$$
where r = 0.15 is the droplet radius and
$$c(t) = \big( -0.35 + u t, \; v t \big)$$
is the droplet center, moving with velocity u = (u, v) = (1, 0). The desired velocity field has a uniform magnitude of 1, and the timestep is selected under the assumption that the velocity magnitude remains lower than 2 at all times, resulting in a CFL number of 0.9.
The initial condition for the advection equation is given as,
$$\alpha(0, x) = \frac{1}{2} \left[ 1 + \tanh\!\left( \frac{\beta}{\Delta x} \, \phi(0, x) \right) \right].$$
The parameters of the mollifier are a slope thickness of η = 1.0 and a cutoff value of ϵ = 0.01, which give a slope steepness of β ≈ 2.3.
The boundary conditions on the velocity potential are set as,
$$\Gamma_N = \{ -0.5 \} \times [-0.5, 0.5], \qquad \Gamma_D = \partial\Omega \setminus \Gamma_N,$$
such that the left boundary of the domain is controlled with Neumann boundary conditions and all the remaining boundaries are uncontrolled with Dirichlet boundary conditions. The expected exact solution for the velocity potential is:
$$\varphi(x, y) = u \, x + v \, y.$$
The Dirichlet boundary conditions can be computed from the above solution. The initial guess for the remaining Neumann boundary conditions, which are to be optimized, is given by a sinusoidal variation around the known optimal solution h = 1 , resulting in
$$h^{(0)}(y) = 1 + \cos(2 \pi y).$$
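For concreteness, the following sketch (ours; the grid resolution is an assumption, as it is not stated in the text) sets up the desired signed distance, the mollified initial volume fraction, and the initial Neumann guess for this test case.

```python
import numpy as np

nx = ny = 128                                              # assumed resolution
x = np.linspace(-0.5, 0.5, nx, endpoint=False) + 0.5 / nx  # cell centers
y = np.linspace(-0.5, 0.5, ny, endpoint=False) + 0.5 / ny
X, Y = np.meshgrid(x, y)
dx = 1.0 / nx

r, center0 = 0.15, (-0.35, 0.0)                            # droplet radius and center
phi0 = r - np.hypot(X - center0[0], Y - center0[1])        # signed distance, > 0 inside

eta, eps = 1.0, 0.01
beta = np.arctanh(1.0 - 2.0 * eps) / eta                   # ~ 2.3
alpha0 = 0.5 * (1.0 + np.tanh(beta / dx * phi0))           # mollified indicator function

h0 = 1.0 + np.cos(2.0 * np.pi * y)                         # initial Neumann guess on x = -0.5
```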
The initial distribution of the Neumann boundary condition, as well as the location of the interface, is shown in Figure 5. Figure 5a shows the initial location of the interface (dashed lines). The initial guess for the Neumann boundary condition, defined on the left vertical boundary of the domain, is shown in Figure 5b. The droplet is expected to travel left to right from its initial location (dashed lines in the figure). Due to the sinusoidal distribution of the velocity at the left boundary, the velocity at mid-height is higher than at the bottom and the top, resulting in a squeezed droplet, identified by the contours of volume fraction in Figure 5a. The resulting interface is, of course, different from the desired interface (solid line in the figure), hence the need for an optimization procedure. Note that the figure only shows the initial and final locations of the droplet, but the evolution of the interface at all times is accounted for in the objective function, defined by Equation (14).
As opposed to the one-dimensional problem, the linearization of the THINC flux (and therefore its transpose, Equation (40)) in the two-dimensional setting gives rise to discontinuities away from the interface, where the interface direction γ changes sign. For one-dimensional advection, these changes typically occur at the center of a structure (droplet or bubble), and were mitigated by sharpening the interface sufficiently. In the two-dimensional setting, however, a similar strategy cannot be devised: in regions where the interface is almost grid-aligned, the interface direction may change within a single cell, regardless of the steepness parameter. In this precise instance, the THINC scheme is no longer differentiable. As a consequence, neither the gradient nor the adjoint equation is defined. In the case of a spherical structure, this occurs at the top, bottom, left and right of the droplet or bubble, and does not disappear with increasing mesh resolution. Interestingly, this phenomenon does not occur in the direct vicinity of the interface, but away from it.
Alternative strategies were explored to alleviate this bottleneck, including the use of a mollifier with compact support and hybridisation with a first-order upwind flux away from the interface [27]. However, none of the undertaken strategies fully addressed the limitation of the THINC scheme, except for a differentiate-then-discretize approach, consisting of a THINC discretization (and a multidimensional extension such as THINC/SW [26]) for the direct problem and a first-order upwind flux for the linearization. This approach, while acceptable as far as contact discontinuities (including interfaces) are concerned, has been shown to lead to divergent solutions in the presence of shocks [28]. Therefore, in this work, the discretize-then-differentiate principle of Section 5.1 is employed, together with a first-order upwind scheme to approximate the volume fraction fluxes, instead of the THINC scheme used in the one-dimensional example.
Figure 6 shows the volume fraction and the velocity field produced after the extraction of the control strategy. Figure 6a shows that the volume fraction is now symmetric around the location of the desired interface (solid lines in the figure). On the other hand, this figure also demonstrates a limitation of the algorithm: due to the use of a highly diffusive (upwind) scheme for the convection of the interface, the volume fraction is smeared around the interface. Nevertheless, the distribution of the velocity potential in Figure 6b shows that the expected uniform velocity on the left vertical boundary is achieved through the control process. The uniform velocity allows the droplet to travel from the left to the right of the domain without changing shape, conforming to the desired interface.
The performance of the optimization algorithm is presented in Figure 7. The evolution of the cost functional is reported for two terms: the distributed part corresponding to γ_1 (denoted by J_1) and the final-time part corresponding to γ_2 (denoted by J_2). Based on the set values of the γ's (Equation (49)), these two terms are the main contributors to the overall value of the cost functional. The figure shows that both terms decrease during the optimisation process. As expected with a steepest descent algorithm, progress is slow, but a minimum is ultimately reached. The minimum value is small but not zero. The origin of this discrepancy can be attributed to two main factors: (i) the interface is mollified, and only averaged characteristic values are used in the numerical functional (in this sense, the numerical and desired interfaces will never identically match) and (ii) the upwind scheme is severely diffusive, which magnifies this effect. Both factors can be seen in Figure 6, illustrating the distribution of the volume fraction, as mentioned before.
The adjoint variables themselves are given in Figure 8. These figures show that the final state of the adjoint volume fraction is, as expected, zero around the zero level set of the desired interface, and takes large values outside of it, where the upwind scheme has diffused the solution. Similarly, for the adjoint velocity potential, the larger values are found along the x axis, where the adjoint volume fraction is non-zero. The adjoint variables can, to some extent, be used to analyze the gradient. One important hint that the gradient is correct is given by the adjoint velocity potential in Figure 8a: large negative values are present at the center of the left boundary of the domain. This is indeed the expected result, since that is where the largest error in the initial guess of the control h resides, i.e., at the top of the cosine bump. Towards the end, the value of the adjoint velocity potential becomes much more uniform on the left boundary, albeit not zero. The adjoint velocity potential cannot become exactly zero on this boundary due to the existing diffusion, which reduces the overall accuracy of the result. Similar to the one-dimensional problem, good convergence is achieved, validating the previous analysis and formulation.

6. Conclusions and Future Work

This study explored the feasibility of controlling interfaces in the context of algebraic volume-of-fluid methods. A potential flow in which two fluids of equal densities and viscosities interact across an interface without surface tension is considered. A discretize-then-differentiate approach is applied to the THINC scheme and found to correctly predict the gradient of a tracking-type functional, which in turn is successfully minimized using a truncated Newton algorithm. As expected from the theory of hyperbolic systems, this one-dimensional application confirms that, as contact discontinuities, interfaces can be controlled with readily available algebraic volume-of-fluid transport schemes. This is in contrast with genuinely non-linear waves (shocks), for which only temporally and spatially consistent adjoints with tailored numerical fluxes are known to yield converging adjoint solutions [28,29].
The linearization of the algebraic Volume-of-Fluid employed here, however, gives rise to singularities away from interfaces, where interface direction changes occur. In one-dimensional configurations, this lack of differentiability of the THINC scheme can be circumvented. This is not the case in dimensions two and higher. A first order upwind scheme is therefore chosen instead, and shown to effectively provide gradient information in a motion planning optimisation problem.
The aforementioned lack of differentiability raises the question of whether algebraic Volume-of-Fluid methods such as THINC are best suited for gradient computation. Along the same lines, the binary information content of the volume fraction field (presence of the dark or light fluid) also raises challenges when it comes to formulating meaningful (and convex) tracking-type functionals. This can arguably be attributed to the simplicity of the potential flow model employed here, but prior to focusing future efforts on more physical models, the use of front-tracking techniques will be explored in future related work.
The addition of more physics will bring more relevant functionals along with more challenges, since the resulting flows will also undergo topological changes. As a matter of fact, since phenomena such as ligament and sheet break-up are not well-posed in the sharp-interface limit (the subcontinuum physics is missing), the first step in this direction will explore the continuous and discrete adjoint formulations up until breakup/coalescence.
Finally, the optimiser itself, which has a large impact on the rate of convergence, was found to be a critical part of the algorithm. Improvements including line search, preconditioning, and Newton-like methods will be evaluated against derivative-free methods.

Author Contributions

Conceptualization, A.F., V.L.C. and T.S.; methodology, A.F., V.L.C. and T.S.; software, A.F.; validation, A.F.; writing—original draft preparation, A.F., V.L.C. and T.S.; writing—review and editing, V.L.C. and T.S.; visualization, A.F.; supervision, V.L.C. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

Taraneh Sayadi kindly acknowledges financial support from Deutsche Forschungsgemeinschaft (DFG) through SFB/TRR 129.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mortazavi, M.; Le Chenadec, V.; Moin, P.; Mani, A. Direct numerical simulation of a turbulent hydraulic jump: Turbulence statistics and air entrainment. J. Fluid Mech. 2016, 797, 60–94. [Google Scholar] [CrossRef]
  2. Habla, F.; Fernandes, C.; Maier, M.; Densky, L.; Ferrás, L.L.; Rajkumar, A.; Carneiro, O.S.; Hinrichsen, O.; Nóbrega, J.M. Development and validation of a model for the temperature distribution in the extrusion calibration stage. Appl. Therm. Eng. 2016, 100, 538–552. [Google Scholar] [CrossRef]
  3. Fernandes, C.; Faroughi, S.A.; Carneiro, O.S.; Nóbrega, J.M.; McKinley, G.H. Fully-resolved simulations of particle-laden viscoelastic fluids using an immersed boundary method. J. Non Newton. Fluid Mech. 2019, 266, 80–94. [Google Scholar] [CrossRef] [Green Version]
  4. de Campos Galuppo, W.; Magalhães, A.; Ferrás, L.L.; Nóbrega, J.M.; Fernandes, C. New boundary conditions for simulating the filling stage of the injection molding process. Eng. Comput. 2020. [Google Scholar] [CrossRef]
  5. Khaliq, M.H.; Gomes, R.; Fernandes, C.; Nóbrega, J.M.; Carneiro, O.S.; Ferrás, L.L. On the use of high viscosity polymers in the fused filament fabrication process. Rapid Prototyp. J. 2017, 23, 727–735. [Google Scholar] [CrossRef]
  6. Le Chenadec, V.; Pitsch, H. A Conservative Framework For Primary Atomization Computation and Application to the Study of Nozzle and Density Ratio Effects. At. Sprays 2013, 23, 1139–1165. [Google Scholar] [CrossRef]
  7. Marsden, A.; Feinstein, J.; Taylor, C. A computational framework for derivative-free optimization of cardiovascular geometries. Comput. Methods Appl. Mech. Eng. 2008, 197, 1890–1905. [Google Scholar] [CrossRef]
  8. Pierret, S.; Coelho, R.; Kato, H. Multidisciplinary and multiple operating points shape optimization of three-dimensional compressor blades. Struct. Multidiscip. Optim. 2007, 33, 61–70. [Google Scholar]
  9. Pironneau, O. On optimum design in fluid mechanics. J. Fluid Mech. 1974, 64, 97–110. [Google Scholar] [CrossRef]
  10. Jameson, A. Aerodynamic design via control theory. J. Sci. Comput. 1988, 3, 233–260. [Google Scholar] [CrossRef] [Green Version]
  11. Jameson, A.; Martinelli, L.; Pierce, N.A. Optimum Aerodynamic Design Using the Navier-Stokes Equations. Theor. Comp. Fluid Dyn. 1998, 10, 213–237. [Google Scholar] [CrossRef] [Green Version]
  12. Reuther, J.J.; Jameson, A.; Alonso, J.J.; Rimlinger, M.J.; Saunders, D. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers, Part 2. J. Aircr. 1999, 36, 61–74. [Google Scholar] [CrossRef]
  13. Juniper, M. Triggering in the horizontal Rijke tube: Non-normality, transient growth and bypass transition. J. Fluid Mech. 2010, 667, 272–308. [Google Scholar] [CrossRef] [Green Version]
  14. Lemke, M.; Reiss, J.; Sesterhenn, J. Adjoint-based analysis of thermoacoustic coupling. AIP Conf. Proc. Am. Inst. Phys. 2013, 1558, 2163–2166. [Google Scholar]
  15. Nemili, A.; Özkaya, E.; Gauger, N.; Kramer, F.; Thiele, F. Discrete adjoint based optimal active control of separation on a realistic high-lift configuration. In New Results in Numerical and Experimental Fluid Mechanics X: Contributions to the 19th STAB/DGLR Symposium Munich, Germany, 2014; Springer: Berlin/Heidelberg, Germany, 2016; pp. 237–246. [Google Scholar]
  16. Schmidt, S.; Ilic, C.; Schulz, V.; Gauger, N. Three-dimensional large-scale aerodynamic shape optimization based on shape calculus. AIAA J. 2013, 51, 2615–2627. [Google Scholar] [CrossRef] [Green Version]
  17. Rabin, S.; Caulfield, C.; Kerswell, R. Designing a more nonlinearly stable laminar flow via boundary manipulation. J. Fluid Mech. 2014, 738, R1. [Google Scholar] [CrossRef] [Green Version]
  18. Foures, D.; Caulfield, C.; Schmid, P. Optimal mixing in two-dimensional plane Poiseuille flow at finite Péclet number. J. Fluid Mech. 2014, 748, 241–277. [Google Scholar] [CrossRef] [Green Version]
  19. Cacuci, D.G.; Ionescu-Bujor, M. Adjoint sensitivity analysis of the RELAP5 MOD3.2 two-fluid thermal-hydraulic code system—I: Theory. Nucl. Sci. Eng. 2000, 136, 59–84. [Google Scholar] [CrossRef]
  20. Ionescu-Bujor, M.; Cacuci, D.G. Adjoint sensitivity analysis of the RELAP5 MOD3.2 two-fluid thermal-hydraulic code system—II: Applications. Nucl. Sci. Eng. 2000, 136, 85–121. [Google Scholar] [CrossRef]
  21. Bernauer, M.K.; Herzog, R. Implementation of an X-FEM solver for the classical two-phase Stefan problem. J. Sci. Comput. 2012, 52, 271–293. [Google Scholar] [CrossRef]
  22. Bernauer, M.K.; Herzog, R. Optimal control of the classical two-phase Stefan problem in level set formulation. SIAM J. Sci. Comput. 2011, 33, 342–363. [Google Scholar] [CrossRef]
  23. Jansen, J.D. Adjoint-based optimization of multi-phase flow through porous media—A review. Comput. Fluids 2011, 46, 40–51. [Google Scholar]
  24. Kataoka, I. Local instant formulation of two-phase flow. Int. J. Multiph. Flow 1986, 12, 745–758. [Google Scholar] [CrossRef]
  25. Gunzburger, M.D. Perspectives in Flow Control and Optimization; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003. [Google Scholar]
  26. Xiao, F.; Ii, S.; Chen, C. Revisit to the THINC Scheme: A Simple Algebraic VOF Algorithm. J. Comput. Phys. 2011, 230, 7086–7092. [Google Scholar] [CrossRef] [Green Version]
  27. Fikl, A. Adjoint-Based Optimization for Hyperbolic Balance Laws in the Presence of Discontinuities. Ph.D. Thesis, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 2016. [Google Scholar]
  28. Fikl, A.; Le Chenadec, V.; Sayadi, T.; Schmid, P. A comprehensive study of adjoint-based optimization of non-linear systems with application to Burgers’ equation. In Proceedings of the 46th AIAA Fluid Dynamics Conference, Washington, DC, USA, 13–17 June 2016. [Google Scholar]
  29. Giles, M.B. Discrete Adjoint Approximations with Shocks. In Hyperbolic Problems: Theory, Numerics, Applications; Hou, T.Y., Tadmor, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
Figure 1. Example geometry and boundary layout.
Figure 2. Initial condition of the advection equation (dashed) and the signed distance function to the desired interface (solid).
Figure 3. Solution at time t = T for the initial guess c ( 0 ) = 0.1 and the optimal solution c = 1.0 .
Figure 4. Cost functional and velocity values at each function call in the optimization algorithm.
Figure 5. Volume fraction and velocity potential of the uncontrolled solution: The dashed circle represents the initial location of the interface and the solid circle represents the desired interface location at T = 0.4 .
Figure 6. Volume fraction and velocity potential of the controlled solution: The dashed circle represents the initial location of the interface and the solid circle represents the desired interface location at T = 0.4 .
Figure 7. Evolution of the two terms of the cost functional as a function of the descent iteration.
Figure 8. Adjoint volume fraction: The solid circle represents the desired interface location.
