Article

Approximation and Uncertainty Quantification of Systems with Arbitrary Parameter Distributions Using Weighted Leja Interpolation

by Dimitrios Loukrezis 1,2,*,† and Herbert De Gersem 1,2,†
1 Institute for Accelerator Science and Electromagnetic Fields (TEMF), Technische Universität Darmstadt, 64289 Darmstadt, Germany
2 Centre for Computational Engineering, Technische Universität Darmstadt, 64289 Darmstadt, Germany
* Author to whom correspondence should be addressed.
Current address: Schlossgartenstraße 8, 64289 Darmstadt, Germany.
Algorithms 2020, 13(3), 51; https://doi.org/10.3390/a13030051
Submission received: 14 December 2019 / Revised: 20 February 2020 / Accepted: 22 February 2020 / Published: 26 February 2020

Abstract:
Approximation and uncertainty quantification methods based on Lagrange interpolation are typically abandoned in cases where the probability distributions of one or more system parameters are not normal, uniform, or closely related distributions, due to the computational issues that arise when one wishes to define interpolation nodes for general distributions. This paper examines the use of the recently introduced weighted Leja nodes for that purpose. Weighted Leja interpolation rules are presented, along with a dimension-adaptive sparse interpolation algorithm, to be employed in the case of high-dimensional input uncertainty. The performance and reliability of the suggested approach are verified by four numerical experiments, where the respective models feature extreme value and truncated normal parameter distributions. Furthermore, the suggested approach is compared with a well-established polynomial chaos method and found to be either comparable or superior in terms of approximation and statistics estimation accuracy.

1. Introduction

Studying physical systems often involves assessing the influence of random parameters upon a system’s behavior and outputs. Such uncertainty quantification (UQ) studies are becoming increasingly popular in a variety of technical domains. Of particular importance to industrial scientists and engineers are so-called non-intrusive UQ methods, where the term “non-intrusive” signifies that the often complex and typically proprietary simulation software used to model physical systems is incorporated into UQ studies without any modification.
Sampling methods such as (quasi-) Monte Carlo (MC) [1] or Latin hypercube sampling (LHS) [2] are the most common approaches to non-intrusive UQ, as they are easy to implement, straightforwardly parallelizable, and can be applied to problems with a large number of parameters under relatively mild assumptions regarding the regularity of the underlying mathematical model. However, their slow convergence rates render them unattractive or even intractable in cases where high accuracy is demanded and single simulations are highly time-consuming.
Computationally efficient alternatives are spectral UQ methods [3,4], which approximate the functional dependence of a system’s quantity of interest (QoI) on its input parameters. The most popular black-box methods of this category employ global polynomial approximations by aid of either Lagrange interpolation schemes [5,6] or generalized polynomial chaos (gPC) [7], where the latter approach is typically based on least squares (LS) regression [8,9,10] or pseudo-spectral projection methods [11,12,13], or, less commonly, on interpolation [14]. Assuming a smooth input-output dependence, spectral UQ methods provide fast convergence rates, in some cases even of exponential order. However, their performance decreases rapidly for an increasing number of input parameters due to the curse of dimensionality [15]. To overcome this bottleneck, state-of-the-art approaches rely on algorithms for the adaptive construction of sparse approximations [8,9,10,16,17,18].
Adaptive sparse approximation algorithms for Lagrange interpolation methods typically employ nested sequences of univariate interpolation nodes. While not strictly necessary [19], nested node sequences are helpful for the efficient construction of sparse grid interpolation algorithms. Nested node sequences are readily available for the well studied cases of uniformly or normally distributed random variables (RVs) [20,21], but not for more general probability measures. In principle, one could derive nested interpolation (or quadrature) rules tailored to an arbitrary probability density function (PDF) [22,23], however, this is a rather cumbersome task for which dedicated analyses are necessary each and every time a new PDF is considered.
In contrast, the main requirement in gPC approximations, either full or sparse, is to find a complete set of polynomials that are orthogonal to each other with respect to the input parameter PDFs. In the case of arbitrary PDFs, such polynomials can be numerically constructed [24,25,26]. As a result, despite the fact that, according to a number of studies, Lagrange interpolation methods enjoy an advantage over gPC-based methods with respect to the error-cost ratio [27,28,29], they are abandoned whenever the input parameter PDFs are not uniform, normal, or closely related distributions, e.g., log-uniform or log-normal. Consequently, the literature lacks comparisons between Lagrange interpolation and gPC-based approximations for other probability measures.
This work investigates the use of Lagrange interpolation methods based on the recently introduced weighted Leja node sequences [30] for the approximation of systems with arbitrarily distributed random parameters. Weighted Leja sequences provide interpolation and quadrature nodes tailored to an arbitrary PDF in a seamless way. Moreover, being nested by definition, they allow for the efficient construction of sparse grids, thus enabling high-dimensional UQ studies. We note that this paper focuses on continuous PDFs of arbitrary shapes and does not consider data-driven approaches which define a PDF through its moments without any further assumptions about it [31,32].
Due to a number of desirable properties, e.g., granularity, interpolation and quadrature stability, and nestedness, weighted Leja nodes have been receiving increasing attention in the context of approximation and UQ [16,30,33,34,35,36]. Nonetheless, with the exceptions of [34,37,38], where beta distributions are considered for the parameters, the use of Leja nodes has been limited to uniform and log-normal parameter distributions. The contribution of this paper is exactly to start filling this gap with the application of weighted Leja interpolation to possibly high-dimensional systems with arbitrary parameter PDFs.
To that end, we present here the use of weighted Leja rules in the context of sparse Lagrange interpolation for approximation and UQ purposes in a comprehensive and self-consistent way. Section 2 summarizes the main theory related to weighted Leja interpolation. In Section 2.1 we first present a general overview of the UQ setting in which Leja interpolation will be applied. Then, in Section 2.2 we recall the basics of univariate Lagrange interpolation, both standard and hierarchical. Unweighted and weighted Leja rules along with their properties are presented in Section 2.3, while in Section 2.4 the extension of Lagrange interpolation to higher dimensions, along with a dimension-adaptive, Leja-based scheme for the construction of sparse approximations and its use for UQ purposes, is presented. Numerical experiments which verify the suitability of the Leja interpolation scheme for physical systems with arbitrarily distributed stochastic input parameters are presented in Section 3. In the same section, the adaptive weighted Leja interpolation UQ method is compared against a well-known adaptive gPC algorithm based on least angle regression (LAR). We conclude with a discussion on the overall findings of this work and on possibilities for further research in Section 4.

2. Theory and Methodology

2.1. Stochastic Parametric Models

Without loss of generality, let us assume that the behavior of the physical system under consideration can be modeled by a set of parametric partial differential equations (PDEs). The general form of the corresponding mathematical model is given by:
$$\mathcal{D}_{\boldsymbol{y}}(u) = 0, \tag{1}$$
where $\mathcal{D} = \mathcal{D}_{\boldsymbol{y}}$ is a differential operator, $\boldsymbol{y} \in \Xi \subseteq \mathbb{R}^N$ an $N$-dimensional parameter vector, $u = u_{\boldsymbol{y}}$ the parameter-dependent solution of Equation (1), and $\Xi$ a measurable set called the image space. Typically, we are interested in a certain quantity, that is the QoI, which is computed by post-processing the solution $u = u_{\boldsymbol{y}}$. For simplicity, we assume that the QoI is a real scalar, however, the methodology discussed in Section 2.4 can be applied to complex or vector-valued QoIs with only minor modifications. We further assume that the functional dependence between the parameter vector and the QoI is given by the map $g : \boldsymbol{y} \mapsto g(\boldsymbol{y})$, which is assumed to be deterministic, i.e., the exact same output $g(\boldsymbol{y})$ is observed each and every time the QoI is evaluated for the same parameter vector $\boldsymbol{y}$.
Parametric problems in the form of Equation (1) arise in both deterministic and stochastic settings. An example of the former case is, e.g., an optimization study, while an example of the latter is, e.g., a UQ study as considered in this work. In the stochastic setting, the parameter vector $\boldsymbol{y}$ corresponds to a realization of an $N$-dimensional RV $\boldsymbol{Y} = \left(Y_1, Y_2, \dots, Y_N\right)$, alternatively called a random vector. Assuming that the RVs $Y_n$, $n = 1, \dots, N$, are mutually independent, the joint PDF is given as a product of univariate PDFs, such that $\varrho(\boldsymbol{y}) = \prod_{n=1}^{N} \varrho_n(y_n)$. While the case of dependent RVs is not rigorously addressed here, we mention the option of using generalized Nataf or Rosenblatt transformations to transform the dependent RVs into independent RVs [39,40]. The sparse interpolation method presented in Section 2.4 can then be applied to the transformed RVs.
Due to its dependence on stochastic input parameters, the deterministic map g results now in a random output. In other words, the input uncertainty propagates through the deterministic model and renders the QoI uncertain. The main goal of UQ studies is to quantify this output uncertainty by computing statistics, such as moments, event probabilities, or sensitivity metrics. Denoting the expectation operator by $\mathbb{E}[\cdot]$, any statistic can be written in the general form:
$$\mathbb{E}\left[\phi(g)\right] = \int_{\Xi} \phi\left(g(\boldsymbol{y})\right) \varrho(\boldsymbol{y})\, \mathrm{d}\boldsymbol{y}, \tag{2}$$
where the functional $\phi$ corresponds to the statistic of interest, e.g., $\phi(g) = g$ for the expected value of the QoI and $\phi(g) = \left(g - \mathbb{E}[g]\right)^2$ for its variance.
The integral of Equation (2) cannot in general be computed analytically. Therefore, one relies on numerical integration methods, such as MC integration or quadrature. Considering the case of MC integration, if the original input-output map g corresponds to an expensive numerical model, then it is desirable to replace it by an inexpensive albeit accurate surrogate model $\tilde{g} \approx g$. In this work, this surrogate is obtained by interpolating g. To capture accurately the behavior of $g(\boldsymbol{y})$, given that $\boldsymbol{y}$ are realizations of a random vector, the interpolation must be based on nodes tailored to the PDF $\varrho(\boldsymbol{y})$. The surrogate model can also be used for tasks other than computing statistics, e.g., for optimization. Moreover, by integrating the interpolant, one can derive a quadrature scheme for the computation of Equation (2). Similar to the interpolation, nodes tailored to the PDF are crucial for the convergence of the quadrature. Finally, in the case of a high-dimensional random vector $\boldsymbol{Y}$, sparse quadrature and interpolation schemes must be used in order to mitigate the effect of the curse of dimensionality. Such sparse schemes most commonly rely on nested univariate node sequences (see Section 2.2).
As pointed out in Section 1, there exist interpolation nodes that meet the aforementioned requirements for the well-studied cases of uniform and normal PDFs, e.g., Clenshaw–Curtis and Genz–Keister nodes, respectively [20,21]. We note that only a finite number of Genz–Keister nodes are available, thus imposing an additional limitation on the achievable accuracy. However, constructing suitable interpolation and quadrature rules for more exotic or even arbitrary PDFs, while in principle possible [22,23], remains a computational challenge. This problem is here circumvented with the use of weighted Leja nodes, discussed in Section 2.3, which are easy to compute, by definition nested, and tailored to a continuous PDF.

2.2. Univariate Interpolation Schemes

Let us assume a univariate input-output map $g(y)$, where $y$ denotes a realization of a single RV $Y$. We let the non-negative integer $i \in \mathbb{Z}_{\geq 0}$ denote the interpolation level, $Z_i = \{y_{i,j}\}_{j=1}^{m(i)}$ the level-$i$ grid of interpolation nodes, and $m : \mathbb{Z}_{\geq 0} \to \mathbb{Z}_{>0}$, $m(0) = 1$, a strictly monotonically increasing “level-to-nodes” function, relating the interpolation level to the grid size. Based on these three elements, in the following subsections we define Lagrange and hierarchical interpolation rules, as well as related quadrature schemes.

2.2.1. Univariate Lagrange Interpolation

The level-i univariate Lagrange interpolation is defined as:
$$\mathcal{I}_i g(y) = \sum_{j=1}^{m(i)} g\left(y_{i,j}\right) l_{i,j}(y), \tag{3}$$
where l i , j are univariate Lagrange polynomials, defined on the interpolation grid Z i as:
$$l_{i,j}(y) = \prod_{\substack{k=1 \\ k \neq j}}^{m(i)} \frac{y - y_{i,k}}{y_{i,j} - y_{i,k}}.$$
The accuracy of the interpolation depends crucially on the choice of nodes, e.g., Gauss–Legendre and Gauss–Hermite quadrature nodes are popular choices for RVs following uniform or normal distributions, respectively. In particular, the error between $g$ and $\mathcal{I}_i g$ is related to the best polynomial approximation error $\epsilon_{\mathrm{best}}$ through [6,16]:
$$\left\| g - \mathcal{I}_i g \right\|_{L^\infty} \leq \left(1 + \mathbb{L}_i\right) \epsilon_{\mathrm{best}},$$
where $\mathbb{P}_i$ denotes the space of Lagrange polynomials with degree up to $m(i) - 1$, in which the best approximation is sought, and $\mathbb{L}_i$ the Lebesgue constant:
$$\mathbb{L}_i = \sup_{g \neq 0} \frac{\left\| \mathcal{I}_i g \right\|_{L^\infty}}{\left\| g \right\|_{L^\infty}}.$$
Therefore, assuming that the interpolation error decreases for increasing interpolation levels, the interpolation nodes should be associated with a Lebesgue constant that grows at a slower rate than the rate at which the best approximation error decays.
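For illustration, a minimal Python sketch of the Lagrange rule of Equation (3) is given below. The function names and the choice of Gauss–Hermite nodes (suitable for a normally distributed input) are illustrative assumptions and not part of the text above.

```python
# Minimal sketch of the level-i Lagrange interpolant of Equation (3).
import numpy as np

def lagrange_basis(nodes, j, y):
    """Evaluate the Lagrange polynomial l_{i,j}(y) on the grid Z_i = nodes."""
    val = np.ones_like(np.asarray(y, dtype=float))
    for k, yk in enumerate(nodes):
        if k != j:
            val *= (y - yk) / (nodes[j] - yk)
    return val

def lagrange_interpolant(g, nodes, y):
    """Evaluate I_i g(y) = sum_j g(y_{i,j}) * l_{i,j}(y)."""
    return sum(g(yj) * lagrange_basis(nodes, j, y) for j, yj in enumerate(nodes))

# Example: interpolate g(y) = exp(-y^2) on five Gauss-Hermite (probabilists') nodes.
nodes, _ = np.polynomial.hermite_e.hermegauss(5)
y_eval = np.linspace(-2.0, 2.0, 5)
print(lagrange_interpolant(lambda y: np.exp(-y**2), nodes, y_eval))
```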

2.2.2. Hierarchical Univariate Interpolation

We further demand that the interpolation nodes form sequences of nested grids for increasing interpolation levels, such that $Z_{i-1} \subset Z_i$, $\forall i > 0$. In that case, we may define a hierarchical interpolation scheme as follows. We first define the operator $\Delta_i$ as the difference between two consecutive univariate interpolants, i.e.,
$$\Delta_i = \mathcal{I}_i - \mathcal{I}_{i-1}, \tag{4}$$
where $\mathcal{I}_{-1} g(y) = 0$. The interpolation in Equation (3) can now be given as a sum of interpolant differences, such that:
$$\mathcal{I}_i g(y) = \sum_{k=0}^{i} \Delta_k g(y). \tag{5}$$
The hierarchy of the formula in Equation (5) is obvious if we consider two consecutive interpolation levels, $i-1$ and $i$, in which case the level-$i$ interpolation reads:
$$\mathcal{I}_i g(y) = \mathcal{I}_{i-1} g(y) + \sum_{j:\, y_{i,j} \in Z_i \setminus Z_{i-1}} \left( g\left(y_{i,j}\right) - \mathcal{I}_{i-1} g\left(y_{i,j}\right) \right) l_{i,j}(y) \tag{6a}$$
$$\mathcal{I}_i g(y) = \mathcal{I}_{i-1} g(y) + \sum_{j:\, y_{i,j} \in Z_i \setminus Z_{i-1}} s_{i,j}\, l_{i,j}(y), \tag{6b}$$
where the interpolation coefficients $s_{i,j}$, given by:
$$s_{i,j} = g\left(y_{i,j}\right) - \mathcal{I}_{i-1} g\left(y_{i,j}\right), \quad y_{i,j} \in Z_i \setminus Z_{i-1}, \tag{7}$$
are called hierarchical surpluses. In the particular case of $m(i) = i + 1$, which can in fact be accomplished using the Leja sequences discussed in Section 2.3, a single grid point is added for each new interpolation level. Then, the index $j$ can be dropped and Equation (6) simplifies to $\mathcal{I}_i g(y) = \mathcal{I}_{i-1} g(y) + s_i\, l_i(y)$.
The obvious advantage of using nested grids and the hierarchical Equations (5) or (6) is that at each new level $i$ the QoI must be evaluated only at the new nodes $y_{i,j} \in Z_i \setminus Z_{i-1}$. Moreover, univariate hierarchical interpolation rules constitute the backbone of the sparse adaptive multivariate interpolation algorithms discussed in Section 2.4.
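To make the hierarchical construction of Equations (5)–(7) concrete for the granular case $m(i) = i + 1$, the following Python sketch adds one node and one surplus per level. The node sequence used here is an arbitrary placeholder; in practice, the Leja nodes of Section 2.3 would be employed.

```python
# Hierarchical univariate interpolation with m(i) = i + 1: one node and one
# hierarchical surplus s_i per level, cf. Equations (5)-(7).
import numpy as np

def hierarchical_l(nodes, i, y):
    """Hierarchical polynomial l_i(y) = prod_{k<i} (y - y_k) / (y_i - y_k)."""
    val = np.ones_like(np.asarray(y, dtype=float))
    for k in range(i):
        val *= (y - nodes[k]) / (nodes[i] - nodes[k])
    return val

def build_surpluses(g, nodes):
    """s_i = g(y_i) - I_{i-1} g(y_i), computed level by level on the nested grid."""
    surpluses = []
    for i, yi in enumerate(nodes):
        prev = sum(s * hierarchical_l(nodes, k, yi) for k, s in enumerate(surpluses))
        surpluses.append(g(yi) - prev)
    return surpluses

def interpolant(surpluses, nodes, y):
    """I_i g(y) = sum_k s_k l_k(y), cf. Equation (6b) with the index j dropped."""
    return sum(s * hierarchical_l(nodes, k, y) for k, s in enumerate(surpluses))

nodes = np.array([0.0, 1.0, -1.0, 0.5, -0.5])       # placeholder nested sequence
s = build_surpluses(np.sin, nodes)
print(interpolant(s, nodes, nodes))                  # reproduces sin at the nodes
```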

2.2.3. Interpolatory Univariate Quadrature

Considering the UQ goal of estimating specific statistics given by Equation (2), we may use an available interpolation $\mathcal{I}_i g$ to derive an appropriate quadrature scheme. For example, assuming a univariate interpolation given by:
$$\mathcal{I}_i g(y) = \sum_{j=1}^{m(i)} s_{i,j}\, l_{i,j}(y),$$
the expected value of the QoI can be estimated as:
$$\mathbb{E}[g] \approx \mathbb{E}\left[\mathcal{I}_i g\right] = \sum_{j=1}^{m(i)} s_{i,j}\, w_{i,j}, \tag{8}$$
where $w_{i,j}$ are quadrature weights given by:
$$w_{i,j} = \mathbb{E}\left[l_{i,j}\right] = \int_{\Xi} l_{i,j}(y)\, \varrho(y)\, \mathrm{d}y.$$
Quadrature schemes for the estimation of statistics other than the expected value can be derived in a similar way, by applying the expectation operator along with the associated functional ϕ .
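A short sketch of the resulting interpolatory quadrature is given below; the truncated normal input PDF and the four-node grid are illustrative assumptions, and the weights of Equation (8) are obtained by numerically integrating each Lagrange polynomial against the PDF.

```python
# Quadrature weights w_{i,j} = E[l_{i,j}] for an assumed truncated normal PDF,
# obtained by numerical integration; combining them with the coefficients
# s_{i,j} as in Equation (8) yields an estimate of E[g].
import numpy as np
from scipy import integrate, stats

dist = stats.truncnorm(a=-2.0, b=2.0, loc=0.0, scale=1.0)   # rho(y) on [-2, 2]
nodes = np.array([0.0, 1.0, -1.0, 0.5])                     # some interpolation grid Z_i

def lagrange_basis(nodes, j, y):
    val = 1.0
    for k, yk in enumerate(nodes):
        if k != j:
            val *= (y - yk) / (nodes[j] - yk)
    return val

weights = [integrate.quad(lambda y, j=j: lagrange_basis(nodes, j, y) * dist.pdf(y),
                          -2.0, 2.0)[0]
           for j in range(len(nodes))]
print(weights, sum(weights))   # the weights sum to E[1] = 1
```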

2.3. Leja Interpolation Nodes

As pointed out in Section 2.1 and Section 2.2, the univariate interpolation nodes employed in Lagrange interpolation schemes must satisfy certain requirements. First, for the interpolation to become more accurate with increasing levels, the Lebesgue constant associated with the interpolation nodes must increase at a rate lower than the decrease in the polynomial approximation error [6,16]. Second, the choice of nodes must result in accurate quadrature schemes, similar to Equation (8), to be used in the computation of statistics. Third, interpolation nodes which form nested grids can be used to derive hierarchical interpolation schemes, which result in reduced computations per added interpolation level. Additionally, as discussed in Section 2.4, sparse interpolations depend crucially on hierarchical univariate interpolation rules. Finally and most importantly in the context of this work, the interpolation must be constructed with respect to arbitrary weight functions, i.e., input PDFs. All requirements are addressed here by using weighted Leja sequences [30], discussed in the following.

2.3.1. Unweighted Leja Nodes

A sequence of standard, unweighted Leja nodes [41] $\{y_j\}_{j \geq 0}$, $y_j \in [-1, 1]$, is defined by solving the optimization problem:
$$y_j = \underset{y \in [-1,1]}{\arg\max}\ \prod_{k=0}^{j-1} \left| y - y_k \right|, \tag{9}$$
for each node $y_j$, $j \geq 1$. The initial node $y_0 \in [-1, 1]$ can be chosen arbitrarily, therefore Leja sequences are not unique. Using simple scaling rules, the unweighted Leja sequences defined in Equation (9) are directly applicable to Lagrange interpolation involving constant weight functions, equivalently, uniform PDFs in the UQ context. The extension to non-uniform PDFs is discussed in Section 2.3.2. With respect to the remaining node requirements, it has been shown that the Lebesgue constant associated with unweighted Leja nodes increases subexponentially [42]. Regarding Leja-based quadrature rules, an extensive discussion can be found in [30]. Considering uniform PDFs, a number of works [30,36,37] show that Leja-based quadrature schemes are sufficiently accurate, albeit suboptimal compared to more standard choices, e.g., based on Clenshaw–Curtis nodes. Finally, the optimization problem of Equation (9) results in nested node sequences irrespective of the employed level-to-nodes function $m(i)$. This is an additional advantage of using Leja nodes, since the nested node sequences can be as granular as the user wishes. In this work, we use $m(i) = i + 1$ to get the minimum of one extra node per interpolation level, i.e., $\#\left(Z_i \setminus Z_{i-1}\right) = 1$. Another popular choice is to use $m(i) = 2i + 1$ [36], which is typically employed for the construction of symmetric Leja sequences, where $y_0 = 0$, $y_1 = 1$, $y_j$ is given as in Equation (9) for odd $j > 1$, and $y_j = -y_{j-1}$ for even $j$. In all cases, Leja sequences are more granular compared to other nested node sequences, e.g., the level-to-nodes function $m(i) = 2^i + 1$ must be used for nested Clenshaw–Curtis nodes.
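For illustration, the following Python sketch carries out the greedy optimization of Equation (9); approximating the continuous maximization by a search over a fine candidate grid, as well as the starting node $y_0 = 1$, are assumptions of this sketch.

```python
# Greedy computation of unweighted Leja nodes on [-1, 1], cf. Equation (9).
import numpy as np

def leja_nodes(num_nodes, n_candidates=10001):
    candidates = np.linspace(-1.0, 1.0, n_candidates)
    nodes = [1.0]                                    # arbitrary initial node y_0
    for _ in range(num_nodes - 1):
        objective = np.ones_like(candidates)
        for yk in nodes:
            objective *= np.abs(candidates - yk)     # prod_k |y - y_k|
        nodes.append(candidates[np.argmax(objective)])
    return np.array(nodes)

print(leja_nodes(6))   # nested by construction: the first m nodes coincide for any m
```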

2.3.2. Weighted Leja Nodes

In more general UQ settings, e.g., for the arbitrary input distributions considered in this work, the definition of Leja sequences must be adjusted according to the input PDF $\varrho(y)$, which acts as a weight function. To that end, we employ the definition of weighted Leja sequences given in [30]. Using this formulation, weighted Leja sequences are constructed by incorporating the PDF $\varrho(y)$ into the optimization problem of Equation (9), which is transformed into:
$$y_j = \underset{y \in \Xi}{\arg\max}\ \varrho(y) \prod_{k=0}^{j-1} \left| y - y_k \right|. \tag{10}$$
We note that other formulations exist as well [43,44], however, the form given in Equation (10) is preferred due to the fact that the resulting Leja nodes are asymptotically distributed as weighted Gauss quadrature nodes [30]. While this is a promising property, it is not necessarily sufficient to produce an accurate interpolation rule. In some cases, e.g., for weight functions resembling the Gaussian PDF [44], it can be shown that weighted Leja nodes are also associated with a subexponentially growing Lebesgue constant. While more general theoretical results are currently not available, weighted Leja interpolation has been found to perform very well in practice [33,37], as also demonstrated in this paper by the results of the numerical experiments presented in Section 3. The other beneficial properties of Leja sequences, i.e., granularity and nestedness, are preserved in the weighted case. Moreover, since the interpolation rule is now tailored to the specific weight function that is the PDF, the corresponding quadrature rules will be tailored to that PDF as well. For a more detailed discussion on the properties of weighted Leja sequences, see [30].
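The same greedy candidate-grid search can be reused for Equation (10) by multiplying the objective with the PDF. In the sketch below, the Gumbel distribution and the quantile-based candidate range are illustrative assumptions.

```python
# Weighted Leja nodes for an arbitrary continuous PDF, cf. Equation (10).
import numpy as np
from scipy import stats

def weighted_leja_nodes(dist, num_nodes, n_candidates=10001):
    # candidate grid over a wide quantile range of the PDF (implementation choice)
    candidates = np.linspace(dist.ppf(1e-6), dist.ppf(1.0 - 1e-6), n_candidates)
    nodes = [candidates[np.argmax(dist.pdf(candidates))]]   # start at the mode
    for _ in range(num_nodes - 1):
        objective = dist.pdf(candidates)                    # rho(y) * prod_k |y - y_k|
        for yk in nodes:
            objective = objective * np.abs(candidates - yk)
        nodes.append(candidates[np.argmax(objective)])
    return np.array(nodes)

gumbel = stats.gumbel_r(loc=0.0, scale=2.0)
print(weighted_leja_nodes(gumbel, 10))   # nodes cluster where the PDF is large
```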

2.4. Sparse Adaptive Leja Interpolation

The multivariate interpolation is based on suitable combinations of the univariate interpolation rules discussed in Section 2.2. We denote with $\boldsymbol{i} = \left(i_1, i_2, \dots, i_N\right)$ a multi-index of interpolation levels per input RV, with $\Lambda$ a multi-index set, and with $\Delta_{\boldsymbol{i}}$ the tensor-product difference operator given by:
$$\Delta_{\boldsymbol{i}} = \Delta_{1,i_1} \otimes \Delta_{2,i_2} \otimes \cdots \otimes \Delta_{N,i_N},$$
where the univariate difference operators $\Delta_{n,i_n}$ are defined as in Equation (4), to be recalled in the following subsections. In Section 2.4.1, we discuss interpolation on generalized sparse grids. A dimension-adaptive interpolation scheme based on Leja sequences is presented in Section 2.4.2. Post-processing multivariate interpolations for UQ purposes is discussed in Section 2.4.3.

2.4.1. Generalized Sparse Grid Interpolation

Given a multi-index set Λ , the interpolation-based approximation of the multivariate input-output map g ( y ) reads:
$$\mathcal{I}_\Lambda g(\boldsymbol{y}) = \sum_{\boldsymbol{i} \in \Lambda} \Delta_{\boldsymbol{i}} g(\boldsymbol{y}). \tag{11}$$
We note that Equation (11) is not in general interpolatory, i.e., the interpolation property $\mathcal{I}_\Lambda g\left(\boldsymbol{y}^*\right) = g\left(\boldsymbol{y}^*\right)$, where $\boldsymbol{y}^*$ is a multivariate interpolation node, does not necessarily hold. The interpolation property holds only if $\Lambda$ is a downward-closed set and the underlying univariate interpolation rules are based on nested nodes [6]. Downward-closed sets, also known as monotone or lower sets, satisfy the property:
$$\boldsymbol{i} \in \Lambda \implies \boldsymbol{i} - \boldsymbol{e}_n \in \Lambda, \quad \forall n = 1, 2, \dots, N \ \text{with} \ i_n > 0, \tag{12}$$
where $\boldsymbol{e}_n$ denotes the unit vector in the $n$th dimension. If, additionally, the multi-index set $\Lambda$ satisfies some sparsity constraint, then Equation (11) is a sparse interpolation formula, while the corresponding sparse grid is given by:
$$Z_\Lambda = \bigcup_{\boldsymbol{i} \in \Lambda} Z_{\boldsymbol{i}} = \bigcup_{\boldsymbol{i} \in \Lambda} Z_{1,i_1} \times Z_{2,i_2} \times \cdots \times Z_{N,i_N}.$$
A commonly used sparsity constraint is:
$$\Lambda = \left\{ \boldsymbol{i} : \sum_{n=1}^{N} i_n \leq i_{\max} \right\},$$
resulting in the so-called isotropic sparse grids [5,6]. Compared to full isotropic tensor grids, i.e., those given by:
$$\Lambda = \left\{ \boldsymbol{i} : \max_{n=1,\dots,N} i_n \leq i_{\max} \right\},$$
the complexity of which is $\mathcal{O}\left( m\left(i_{\max}\right)^N \right)$, isotropic sparse grids delay significantly the curse of dimensionality, since their complexity is reduced to $\mathcal{O}\left( m\left(i_{\max}\right) \left( \log m\left(i_{\max}\right) \right)^{N-1} \right)$ [45]. Nonetheless, isotropic sparse grids still grow exponentially with respect to the number of parameters.
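For illustration, the short Python sketch below generates both types of multi-index sets and compares their cardinalities; the values $N = 5$ and $i_{\max} = 4$ are arbitrary example choices.

```python
# Isotropic sparse (total-degree) versus full tensor multi-index sets for Equation (11).
import itertools
from math import comb

def full_tensor_set(N, i_max):
    """{ i : max_n i_n <= i_max }"""
    return list(itertools.product(range(i_max + 1), repeat=N))

def total_degree_set(N, i_max):
    """{ i : sum_n i_n <= i_max }, downward-closed by construction."""
    return [i for i in itertools.product(range(i_max + 1), repeat=N) if sum(i) <= i_max]

def is_downward_closed(index_set):
    """Check the property of Equation (12)."""
    s = set(index_set)
    return all(tuple(i[k] - (k == n) for k in range(len(i))) in s
               for i in s for n in range(len(i)) if i[n] > 0)

N, i_max = 5, 4
print(len(full_tensor_set(N, i_max)), (i_max + 1) ** N)         # 3125
print(len(total_degree_set(N, i_max)), comb(N + i_max, i_max))  # 126
print(is_downward_closed(total_degree_set(N, i_max)))           # True
```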
In most practical cases, the influence of the input parameters upon the QoI is anisotropic. For example, the QoI may be more sensitive to some parameters compared to others, in the sense that variations in the first set of parameters result in comparably greater variations in the QoI. Accordingly, the functional dependence of the QoI on a certain parameter might be significantly more difficult to approximate compared to another, e.g., a highly nonlinear versus a linear relation.
The interpolation of Equation (11) can be constructed such that this anisotropy is represented in its terms, equivalently in the multi-indices comprising the set Λ [16,17]. This is accomplished via an anisotropic refinement of the parameter space according to the contribution of each parameter to the interpolation accuracy. Compared to isotropic schemes, anisotropic interpolations typically result in great computational savings for the same approximation accuracy, or, equivalently, in greater interpolation accuracy for an equal cost. Since, in most cases, the parameter anisotropy cannot be a priori estimated in an accurate way [46], anisotropic interpolations are usually constructed using greedy, adaptive algorithms [18,47,48]. For the particular case of Leja nodes, such an algorithm is discussed in Section 2.4.2. Leja-based, adaptive, anisotropic sparse-grid interpolation algorithms are also available in [16,30].

2.4.2. Adaptive Anisotropic Leja Interpolation

We will base the adaptive construction of anisotropic sparse interpolations on the dimension-adaptive algorithm first presented in [47] for quadrature purposes. Variants of this algorithm appear in a number of later works [16,30,36,37,48]. In this work, we assume that all underlying univariate interpolation rules employ Leja nodes, presented in Section 2.3, along with the level-to-nodes function $m(i) = i + 1$. Our approach is depicted in Algorithm 1, while a detailed presentation follows.
Algorithm 1: Dimension-adaptive weighted Leja interpolation.
Let us assume that a sparse interpolation $\mathcal{I}_\Lambda$, based on a downward-closed multi-index set $\Lambda$ and on univariate Leja sequences, is given as in Equation (11). We define the set of admissible multi-indices, i.e., those that satisfy the downward-closedness property of Equation (12) if added to $\Lambda$, as:
$$\Lambda^{\mathrm{adm}} = \left\{ \boldsymbol{i} : \boldsymbol{i} \notin \Lambda \ \text{and} \ \Lambda \cup \{\boldsymbol{i}\} \ \text{is downward-closed} \right\}.$$
Due to the fact that all univariate Leja rules employ the level-to-nodes function $m(i) = i + 1$, each multi-index $\boldsymbol{i} \in \Lambda^{\mathrm{adm}}$ is associated with a single new interpolation node $\boldsymbol{y}_{\boldsymbol{i}}$, such that $\{\boldsymbol{y}_{\boldsymbol{i}}\} = Z_{\boldsymbol{i}} \setminus Z_\Lambda$. Therefore, should an admissible multi-index $\boldsymbol{i} \in \Lambda^{\mathrm{adm}}$ be added to $\Lambda$, we obtain the hierarchical interpolation scheme:
$$\mathcal{I}_{\Lambda \cup \{\boldsymbol{i}\}}\, g(\boldsymbol{y}) = \mathcal{I}_\Lambda g(\boldsymbol{y}) + s_{\boldsymbol{i}}\, L_{\boldsymbol{i}}(\boldsymbol{y}),$$
where, similarly to Equation (7), the hierarchical surpluses $s_{\boldsymbol{i}}$ are given by:
$$s_{\boldsymbol{i}} = g\left(\boldsymbol{y}_{\boldsymbol{i}}\right) - \mathcal{I}_\Lambda g\left(\boldsymbol{y}_{\boldsymbol{i}}\right), \quad \boldsymbol{i} \in \Lambda^{\mathrm{adm}},$$
and $L_{\boldsymbol{i}}$ are multivariate Lagrange polynomials given by:
$$L_{\boldsymbol{i}}(\boldsymbol{y}) = \prod_{n=1}^{N} l_{n,i_n}(y_n),$$
where, due to the use of nested Leja sequences with the level-to-nodes function $m(i) = i + 1$, the univariate Lagrange polynomials can now be defined hierarchically [16], such that:
$$l_{n,i_n}(y_n) = \prod_{k=0}^{i_n - 1} \frac{y_n - y_{n,k}}{y_{n,i_n} - y_{n,k}}.$$
Similar to [47], we expect that coefficients of small magnitude correspond to terms with relatively small contributions to the available interpolation, which can thus be omitted. Accordingly, terms whose coefficients have large magnitudes are expected to enhance the accuracy of the interpolation and should therefore be added. Based on this argument, we define for each multi-index $\boldsymbol{i} \in \Lambda^{\mathrm{adm}}$ the contribution indicator:
$$\eta_{\boldsymbol{i}} = \left| s_{\boldsymbol{i}} \right|.$$
Naturally, the multi-index set Λ should be enriched with the admissible multi-index:
$$\boldsymbol{i}^* = \underset{\boldsymbol{i} \in \Lambda^{\mathrm{adm}}}{\arg\max}\ \eta_{\boldsymbol{i}},$$
that is, the index corresponding to the maximum contribution. This procedure can be continued iteratively until a simulation budget $B$ is reached, or until the total contribution of the admissible set falls below a tolerance $\epsilon$. At any given step of the algorithm, the total number of simulations, equivalently, model evaluations or function calls, is equal to $\# Z_{\Lambda \cup \Lambda^{\mathrm{adm}}}$. After the termination of the iterations, the final interpolation is constructed using all multi-indices in $\Lambda \cup \Lambda^{\mathrm{adm}}$, since the hierarchical surpluses corresponding to the admissible nodes have already been computed.
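The complete greedy loop can be sketched compactly in Python. The code below is a simplified, self-contained illustration in the spirit of Algorithm 1 and not the authors' DALI implementation: the univariate weighted Leja nodes are computed by a candidate-grid search, the stopping rule uses only the simulation budget, and all names, defaults, and the toy model at the end are assumptions.

```python
# Simplified sketch of dimension-adaptive weighted Leja interpolation.
import numpy as np
from scipy import stats

def weighted_leja(dist, num_nodes, n_cand=2000):
    """1D weighted Leja nodes, cf. Equation (10), via a candidate-grid argmax."""
    cand = np.linspace(dist.ppf(1e-6), dist.ppf(1.0 - 1e-6), n_cand)
    nodes = [cand[np.argmax(dist.pdf(cand))]]
    for _ in range(num_nodes - 1):
        obj = dist.pdf(cand)
        for yk in nodes:
            obj = obj * np.abs(cand - yk)
        nodes.append(cand[np.argmax(obj)])
    return np.array(nodes)

def hierarchical_basis(idx, y, nodes_1d):
    """L_i(y) = prod_n l_{n,i_n}(y_n), with hierarchical univariate polynomials."""
    val = 1.0
    for n, i_n in enumerate(idx):
        zn = nodes_1d[n]
        for k in range(i_n):
            val *= (y[n] - zn[k]) / (zn[i_n] - zn[k])
    return val

def adaptive_leja(g, dists, budget, max_level=20):
    N = len(dists)
    nodes_1d = [weighted_leja(d, max_level + 1) for d in dists]
    node_of = lambda idx: np.array([nodes_1d[n][idx[n]] for n in range(N)])
    surplus = {}                                        # multi-index -> s_i
    interp = lambda y: sum(s * hierarchical_basis(i, y, nodes_1d)
                           for i, s in surplus.items())

    def admit(idx):                                     # one model evaluation per index
        surplus[idx] = g(node_of(idx)) - interp(node_of(idx))

    old, active = set(), set()                          # Lambda and Lambda_adm
    root = (0,) * N
    admit(root); active.add(root)
    while active and len(surplus) < budget:
        i_star = max(active, key=lambda i: abs(surplus[i]))   # largest eta_i = |s_i|
        active.remove(i_star); old.add(i_star)
        for n in range(N):                              # forward neighbours of i_star
            j = tuple(i_star[k] + (k == n) for k in range(N))
            downward_closed = all(tuple(j[k] - (k == m) for k in range(N)) in old
                                  for m in range(N) if j[m] > 0)
            if j[n] <= max_level and j not in surplus and downward_closed \
                    and len(surplus) < budget:
                admit(j); active.add(j)
    return interp                                       # uses Lambda and Lambda_adm

# toy usage: a 3-parameter model with Gumbel and truncated normal inputs
dists = [stats.gumbel_r(), stats.truncnorm(-2, 2), stats.gumbel_r(1.0, 0.5)]
model = lambda y: np.exp(-0.1 * y[0]) / (3.0 + y[1]) + 0.05 * y[2] ** 2
surrogate = adaptive_leja(model, dists, budget=60)
y_test = np.array([d.rvs(random_state=1) for d in dists])
print(surrogate(y_test), model(y_test))
```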

2.4.3. Post-Processing

Once available, the sparse interpolation can be used as an inexpensive surrogate model that can replace the original model in computationally demanding tasks, such as sampling-based estimation of statistics. Specific statistics can also be computed directly from the interpolation terms. One possibility is to first transform the Lagrange basis into a gPC basis [49] and then use the well-known formulas of gPC approximations for the estimation of moments or sensitivity indices [8,9]. Another approach is to apply the expectation operator along with the functional corresponding to the statistic directly upon the interpolation, so as to derive an appropriate quadrature scheme, similar to the univariate case presented in Section 2.2.3. For example, assuming a Leja-based interpolation given by:
$$\mathcal{I}_\Lambda g(\boldsymbol{y}) = \sum_{\boldsymbol{i} \in \Lambda} s_{\boldsymbol{i}}\, L_{\boldsymbol{i}}(\boldsymbol{y}),$$
the expected value of the QoI can be estimated as:
$$\mathbb{E}[g] \approx \mathbb{E}\left[\mathcal{I}_\Lambda g\right] = \sum_{\boldsymbol{i} \in \Lambda} s_{\boldsymbol{i}}\, w_{\boldsymbol{i}},$$
where the quadrature weights $w_{\boldsymbol{i}}$ are given as products of the respective univariate weights, i.e.,
$$w_{\boldsymbol{i}} = \mathbb{E}\left[L_{\boldsymbol{i}}\right] = \mathbb{E}\left[ \prod_{n=1}^{N} l_{n,i_n} \right] = \prod_{n=1}^{N} \mathbb{E}\left[l_{n,i_n}\right] = \prod_{n=1}^{N} w_{n,i_n}. \tag{13}$$
Clearly, analytical formulas similar to Equation (13) can be derived for other statistics as well.
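A minimal sketch of the weight computation in Equation (13) reads as follows; the input distributions and the univariate node sequences are placeholder assumptions.

```python
# Multivariate weight w_i as a product of univariate weights w_{n,i_n} = E[l_{n,i_n}],
# each obtained by numerically integrating the hierarchical polynomial against
# the n-th input PDF, cf. Equation (13).
import numpy as np
from scipy import integrate, stats

def hierarchical_l(nodes, i, y):
    val = 1.0
    for k in range(i):
        val = val * (y - nodes[k]) / (nodes[i] - nodes[k])
    return val

def univariate_weight(dist, nodes, i):
    lo, hi = dist.ppf(1e-9), dist.ppf(1.0 - 1e-9)
    return integrate.quad(lambda y: hierarchical_l(nodes, i, y) * dist.pdf(y), lo, hi)[0]

dists = [stats.truncnorm(-3, 3), stats.gumbel_r()]               # placeholder inputs
nodes_1d = [np.array([0.0, 1.5, -1.5, 0.7]), np.array([0.4, 3.0, -1.0, 1.5])]

multi_index = (2, 1)
w_i = np.prod([univariate_weight(dists[n], nodes_1d[n], multi_index[n]) for n in (0, 1)])
print(w_i)   # multiply by the surplus s_i and sum over Lambda to estimate E[g]
```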

3. Results

In this section, we consider models with relatively atypical input distributions, for which the application of interpolation-based UQ methods is not straightforward. In particular, we consider input RVs following truncated normal and Gumbel distributions, the latter also known as the generalized extreme value distribution of type 1 or the log-Weibull distribution. We denote the truncated normal distribution with $\mathcal{TN}\left(\mu, \sigma^2, l, u\right)$, where $\mu$ and $\sigma^2$ refer to the mean value and the variance of a normal distribution, while $l$ and $u$ are the lower and upper truncation limits. The Gumbel distribution is denoted with $\mathcal{G}(\mu, \beta)$, where $\mu$ is the location parameter and $\beta$ the scale parameter. Dedicated nested interpolation nodes are not readily available with respect to the corresponding input PDFs, therefore, interpolation-based UQ methods have not been considered so far in the literature for such random inputs. Here, we compute Leja nodes tailored to truncated normal and Gumbel distributions simply by applying Equation (10) with weight functions that are equal to the corresponding PDFs. Exemplarily, Figure 1 shows the first 10 computed Leja points with respect to two truncated normal and two Gumbel distributions, each with different distribution parameters.
Based on the numerical results presented in this section, we demonstrate that weighted Leja interpolation can reliably be used for such atypical input PDFs, in terms of both approximation and UQ. For further verification, we compare the performance of the weighted Leja interpolation against gPC approximations with numerically constructed orthogonal polynomials [24]. All models feature multiple input parameters, hence, we use adaptive algorithms resulting in sparse approximations. The dimension-adaptive Algorithm 1 based on weighted Leja nodes is implemented in our in-house developed DALI (Dimension-Adaptive Leja Interpolation) software [14,37]. The sparse gPC approximations are constructed with a well-established, degree-adaptive algorithm based on least angle regression (LAR) [9] and implemented in the UQLab [50] software. Quasi-random experimental designs based on Sobol sequences are used in the LAR-gPC approach, while numerically constructed orthogonal polynomials are used to tackle the input PDFs [24].

3.1. Error Metrics

Let us assume that an interpolation or gPC-based polynomial approximation $\tilde{g} \approx g$ is available. The following errors are used to estimate the performance of $\tilde{g}$ for approximation and UQ purposes.
First, we want to estimate the performance of $\tilde{g}$ in terms of approximation accuracy, i.e., its suitability to act as a surrogate model and reliably replace the input-output map $g$, equivalently, the original computational model. To that end, we use a validation sample $\{\boldsymbol{y}_q\}_{q=1}^{Q}$, which is randomly drawn from the joint input PDF $\varrho(\boldsymbol{y})$, and compute the root-mean-square (RMS) validation error:
$$\epsilon_{\mathrm{RMS}} = \sqrt{ \frac{1}{Q} \sum_{q=1}^{Q} \left( \tilde{g}\left(\boldsymbol{y}_q\right) - g\left(\boldsymbol{y}_q\right) \right)^2 }. \tag{14}$$
From an interpretation point of view, $\epsilon_{\mathrm{RMS}}$ indicates the average (expected) approximation accuracy of $\tilde{g}$, while at the same time being sensitive to outliers by penalizing large errors. In all numerical experiments, the size of the validation sample is set to $Q = 10^5$.
Secondly, we want to estimate the performance of $\tilde{g}$ in terms of UQ quality, i.e., estimate the accuracy of the statistics provided by post-processing the approximation. For that purpose, we use the expected value of the QoI as a representative statistic and compute the relative error:
$$\epsilon_{\mathrm{rel},\mathbb{E}} = \frac{\left| \mathbb{E}[g] - \mathbb{E}[\tilde{g}] \right|}{\left| \mathbb{E}[g] \right|}, \tag{15}$$
where $\mathbb{E}[g]$ is a reference value and $\mathbb{E}[\tilde{g}]$ is an estimate computed by directly post-processing the terms of $\tilde{g}$. In all numerical experiments, the reference expected value is computed with a quasi-MC integration scheme based on Sobol sequences, where the size of the quasi-MC sample is equal to $10^8$.
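For clarity, the two error metrics can be sketched as follows; the model, the "surrogate", and the input distributions are toy placeholders, and a plain MC sample is used here instead of the quasi-MC reference and the post-processed $\mathbb{E}[\tilde{g}]$ described above.

```python
# Sketch of the error metrics of Equations (14) and (15) with toy placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dists = [stats.gumbel_r(), stats.truncnorm(-2, 2)]

g       = lambda y: np.exp(-y[..., 0]) + y[..., 1] ** 2                      # "model"
g_tilde = lambda y: np.exp(-y[..., 0]) + y[..., 1] ** 2 + 1e-3 * y[..., 1]   # "surrogate"

Q = 10**5                                     # validation sample from the joint PDF
sample = np.column_stack([d.rvs(size=Q, random_state=rng) for d in dists])

eps_rms = np.sqrt(np.mean((g_tilde(sample) - g(sample)) ** 2))     # Equation (14)

E_ref = np.mean(g(sample))                    # reference mean (plain MC in this sketch)
eps_rel_E = abs(E_ref - np.mean(g_tilde(sample))) / abs(E_ref)     # Equation (15)
print(eps_rms, eps_rel_E)
```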

3.2. Accuracy versus Costs

In the following numerical experiments, the performance of both polynomial approximations is measured by the respective error-cost relation. In most engineering applications, the cost of running simulations for different sets of parameter values typically outweighs any other numerical operation necessary for the construction of the approximation. In this context, the cost of a method refers solely to the number of simulations, equivalently, solver calls or model evaluations, that are needed until the sought approximation accuracy is reached. This notion of cost is also used in this section, despite the fact that computationally inexpensive models are employed. In particular, in each numerical experiment and for both methods, we compute 19 polynomial approximations corresponding to increasing simulation budgets, i.e., for $B = 10, 20, \dots, 100, 200, \dots, 1000$, and compare the errors of Equations (14) and (15) for the same costs.
We should note that regression-based gPC methods, such as the LAR-gPC [9] method employed here for comparison purposes, are significantly more expensive than interpolation-based methods in terms of the neglected costs. This is due to the fact that the solution of possibly expensive LS problems is necessary. On the other hand, regression-based gPC methods allow one to pre-compute the QoI for different parameter values, i.e., run the required simulations offline before the algorithm for the construction of the gPC approximation is used. Moreover, this offline task is embarrassingly parallelizable. On the contrary, as can easily be observed from Algorithm 1, dimension-adaptive sparse interpolation is based on sequential computations and therefore does not allow the use of parallelization to that extent. Although not addressed here, those differences must be taken into account by practitioners, considering the problem and the computational resources at hand.

3.3. Borehole Model

We consider the 8-dimensional parametric function:
$$g(\boldsymbol{y}) = \frac{2 \pi T_u \left( H_u - H_l \right)}{\ln\left( r / r_w \right) \left( 1 + \dfrac{T_u}{T_l} + \dfrac{2 L T_u}{\ln\left( r / r_w \right) r_w^2 K_w} \right)}, \tag{16}$$
which models the water flow through a borehole [51]. The water flow is given in m³/y and the parameter vector is $\boldsymbol{y} = \left( r_w, r, T_u, H_u, T_l, H_l, L, K_w \right)$, where $r_w$ and $r$ respectively denote the radius of the borehole and the radius of influence (in m), $T_u$ and $T_l$ the transmissivity of the upper and lower aquifer (in m²/y), $H_u$ and $H_l$ the potentiometric head of the upper and lower aquifer (in m), $L$ the length of the borehole (in m), and $K_w$ the hydraulic conductivity of the borehole (in m/y).
Regarding the parameter distributions, we follow the setting given in [51], where specific value ranges and probability distributions are given for each parameter, and use this information to recast all parameters such that they follow truncated normal distributions. In this numerical experiment, the input parameters are modeled as follows; a short code sketch of this setup is given after the list.
  • The parameter $r_w$ originally follows the normal distribution $\mathcal{N}\left(\mu = 0.1, \sigma^2 = 0.0161812^2\right)$. The distribution is now truncated to the range $[0.05, 0.15]$;
  • The parameter $r$ originally follows the log-normal distribution $\mathcal{LN}\left(\mu_{\mathrm{LN}} = 7.71, \sigma_{\mathrm{LN}} = 1.0056\right)$. Therefore, the parameter $r$ has a mean value equal to $\exp\left(\mu_{\mathrm{LN}} + \sigma_{\mathrm{LN}}^2 / 2\right)$ and a variance equal to $\left(\exp\left(\sigma_{\mathrm{LN}}^2\right) - 1\right) \exp\left(2 \mu_{\mathrm{LN}} + \sigma_{\mathrm{LN}}^2\right)$. The corresponding truncated normal distribution is defined by these mean and variance values, as well as by the truncation range $[100, 50000]$;
  • The remaining parameters are originally uniformly distributed, such that $T_u \in [63070, 115600]$, $H_u \in [990, 1110]$, $T_l \in [63.1, 116]$, $H_l \in [700, 820]$, $L \in [1120, 1680]$, and $K_w \in [9855, 12045]$. Assuming a uniform distribution with support in $[a, b]$, the corresponding truncated normal distribution is given as $\mathcal{TN}\left(\mu = \frac{a+b}{2}, \sigma^2 = \frac{(b-a)^2}{12}, l = a, u = b\right)$, i.e., the mean value and variance of the normal distribution correspond to those of the original uniform distribution, while the truncation limits coincide with the uniform distribution’s support boundaries.
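The following Python sketch implements the borehole model of Equation (16) and the input distributions listed above; the helper names are assumptions, and note that scipy.stats.truncnorm expects the truncation limits in units of the standard deviation, i.e., $(l - \mu)/\sigma$ and $(u - \mu)/\sigma$.

```python
# Borehole model of Equation (16) with the truncated normal inputs described above.
import numpy as np
from scipy import stats

def borehole(y):
    rw, r, Tu, Hu, Tl, Hl, L, Kw = y
    log_ratio = np.log(r / rw)
    return (2.0 * np.pi * Tu * (Hu - Hl)
            / (log_ratio * (1.0 + Tu / Tl + 2.0 * L * Tu / (log_ratio * rw**2 * Kw))))

def truncnorm_from(mu, sigma, lo, hi):
    return stats.truncnorm((lo - mu) / sigma, (hi - mu) / sigma, loc=mu, scale=sigma)

def uniform_as_truncnorm(a, b):
    return truncnorm_from(0.5 * (a + b), np.sqrt((b - a) ** 2 / 12.0), a, b)

mean_r = np.exp(7.71 + 1.0056**2 / 2.0)                           # lognormal mean of r
var_r = (np.exp(1.0056**2) - 1.0) * np.exp(2.0 * 7.71 + 1.0056**2)
dists = [truncnorm_from(0.1, 0.0161812, 0.05, 0.15),              # r_w
         truncnorm_from(mean_r, np.sqrt(var_r), 100.0, 50000.0),  # r
         uniform_as_truncnorm(63070.0, 115600.0),                 # T_u
         uniform_as_truncnorm(990.0, 1110.0),                     # H_u
         uniform_as_truncnorm(63.1, 116.0),                       # T_l
         uniform_as_truncnorm(700.0, 820.0),                      # H_l
         uniform_as_truncnorm(1120.0, 1680.0),                    # L
         uniform_as_truncnorm(9855.0, 12045.0)]                   # K_w

rng = np.random.default_rng(0)
y_sample = np.array([d.rvs(random_state=rng) for d in dists])
print(borehole(y_sample))   # one model evaluation (water flow in m^3/y)
```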
The borehole model of Equation (16) is approximated by both the dimension-adaptive weighted Leja interpolation and the degree-adaptive LAR-gPC expansion. Figure 2 shows the error-cost relation for both approximations, with respect to the errors in Equations (14) and (15). Regarding the $\epsilon_{\mathrm{RMS}}$ error, Leja interpolation outperformed the LAR-gPC method by approximately one order of magnitude over the whole cost-range. Leja interpolation was also advantageous with respect to the $\epsilon_{\mathrm{rel},\mathbb{E}}$ error, again always outperforming the LAR-gPC method and stagnating much sooner. We note that this error stagnation was observed due to the fact that both methods reached the accuracy of the quasi-MC estimate. Therefore, for the case of the borehole model, the weighted Leja interpolation approach was not only found to perform well for the considered truncated normal input PDFs, but was also superior to the LAR-gPC method for both considered error metrics.

3.4. Steel Column Limit State Function

We consider the 10-dimensional parametric limit state function:
$$g(\boldsymbol{y}) = F_s - P_t \left( \frac{1}{2 B D} + \frac{F_0\, E_b}{B D H \left( E_b - P_t \right)} \right), \tag{17}$$
which relates the reliability of a steel column to its cost [51,52]. On the right-hand side of Equation (17), $P_t = P_d + P_1 + P_2$ is the total load and $E_b = \frac{\pi^2 E B D H^2}{2 L^2}$ is known as the Euler buckling load. Thereby, the parameter vector $\boldsymbol{y}$ consists of ten parameters, namely the yield stress $F_s$ (in MPa), the dead weight load $P_d$ (in N), the variable loads $P_1$ and $P_2$ (in N), the flange breadth $B$ (in mm), the flange thickness $D$ (in mm), the profile height $H$ (in mm), the initial deflection $F_0$ (in mm), Young’s modulus $E$ (in MPa), and the column length $L$ (in mm).
In the original setting given in [51,52], $P_1$ and $P_2$ are modeled with Gumbel distributions, $E$ follows a Weibull distribution, $F_s$, $B$, $D$, and $H$ are log-normally distributed, and $P_d$ and $F_0$ are normally distributed. In this numerical experiment, we replace all log-normal and normal distributions by truncated normal distributions and use the Gumbel distribution to model the remaining three parameters. The corresponding distributions are given in Table 1. With the exception of the parameter $L$, for which the truncation limits are set to $\mu \pm 4\sigma$, all other truncated normal distributions have support in $[\mu - 3\sigma, \mu + 3\sigma]$. For all parameters that follow a truncated normal distribution, $\mu$ and $\sigma^2$ coincide with the original mean and variance values given in [51,52]. Accordingly, the location and scale parameters of the Gumbel distributions are chosen such that the original mean values and variances given in [51,52] are preserved.
We approximate the steel column function of Equation (17) using the dimension-adaptive weighted Leja interpolation and the degree-adaptive, LAR-based gPC method. For both approximations and for both error metrics in Equations (14) and (15), the error-cost relation is presented in Figure 3. Once more, the Leja interpolation method performs very well for the given input PDFs. However, contrary to the results of Section 3.3, no advantage is observed over the LAR-gPC method. With respect to the $\epsilon_{\mathrm{RMS}}$ error, the performance of both methods is comparable. This can probably be attributed to a reduced regularity of the QoI defined in Equation (17), in comparison to the QoI given in Equation (16). The Leja interpolation method has a slight edge for costs greater than 200 simulations, however, the difference is minor. With the exception of costs greater than 800 simulations, the LAR-gPC method is superior to the Leja interpolation in terms of the $\epsilon_{\mathrm{rel},\mathbb{E}}$ error. Overall, the weighted Leja interpolation is again able to yield accurate approximations and statistics and it may be regarded as competitive to the LAR-gPC method.

3.5. Meromorphic Function

We consider the 16-dimensional meromorphic function:
$$g(\boldsymbol{y}) = \frac{1}{1 + \boldsymbol{w} \cdot \boldsymbol{y}}, \tag{18}$$
where $\boldsymbol{w}$ is a vector of positive weights. The weight vector in Equation (18) is given by $\boldsymbol{w} = \frac{\hat{\boldsymbol{w}}}{2 \left\| \hat{\boldsymbol{w}} \right\|_1}$, where $\hat{\boldsymbol{w}} = \left( 1, 0.5, 0.1, 0.05, 0.01, \dots, 5 \cdot 10^{-8} \right)$ [10]. The input vector $\boldsymbol{y}$ takes values in the image space of the random vector $\boldsymbol{Y} = \left( Y_1, Y_2, \dots, Y_{16} \right)$, where the single RVs are given as:
$$Y_n \sim \begin{cases} \mathcal{TN}\left(\mu = 0, \sigma^2 = 1, l = 0, u = 3\right), & n = 1, 3, \dots, 15, \\ \mathcal{TN}\left(\mu = 0, \sigma^2 = 1, l = -3, u = 0\right), & n = 2, 4, \dots, 16. \end{cases}$$
While a normal distribution truncated exactly at its (untruncated) mean value is not very likely to be encountered in practical applications, the selected truncation limits result in very atypical PDFs which fit very well within the concept of arbitrary input PDFs.
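The function and its inputs can be sketched as follows in Python; the normalization $\boldsymbol{w} = \hat{\boldsymbol{w}} / (2 \|\hat{\boldsymbol{w}}\|_1)$ follows the reconstruction above, and since the intermediate entries of $\hat{\boldsymbol{w}}$ are abbreviated in the text, the geometrically decaying filler values used here are placeholders only.

```python
# Sketch of the meromorphic function of Equation (18) with placeholder weights.
import numpy as np
from scipy import stats

# placeholder for w_hat: decays from 1 to 5e-8 over 16 entries, as the abbreviated
# list in the text suggests; the exact intermediate values are not reproduced here
w_hat = np.geomspace(1.0, 5e-8, 16)
w = w_hat / (2.0 * np.linalg.norm(w_hat, 1))   # keeps the denominator away from zero

def g(y):
    return 1.0 / (1.0 + w @ y)

# inputs: standard normals truncated to [0, 3] (odd n) and [-3, 0] (even n)
dists = [stats.truncnorm(0, 3) if n % 2 == 0 else stats.truncnorm(-3, 0)
         for n in range(16)]
rng = np.random.default_rng(0)
y = np.array([d.rvs(random_state=rng) for d in dists])
print(g(y))
```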
Figure 4 shows the error-cost relations corresponding to the error metrics of Equations (14) and (15) for both approximations of the meromorphic function in Equation (18), i.e., using the dimension-adaptive weighted Leja interpolation algorithm and the degree-adaptive LAR-gPC algorithm. The results resemble those of Section 3.3, i.e., the Leja interpolation method had a clear advantage over the LAR-gPC approximation with respect to both error metrics. Regarding the $\epsilon_{\mathrm{RMS}}$ error, the difference between the two approximations was typically greater than one order of magnitude, while it increased for an increasing computational budget. Regarding the $\epsilon_{\mathrm{rel},\mathbb{E}}$ error, the two methods showed a comparable performance for costs up to 100 simulations, however, the Leja interpolation was again clearly superior as the simulation budget increased. Hence, in this numerical experiment, the weighted Leja interpolation was able to accurately approximate a model with very atypical input PDFs, and provide accurate estimates of statistics. Moreover, it was found to have a clear edge over the well-established LAR-gPC method.

3.6. Dielectric Inset Waveguide

We consider a rectangular waveguide with a dispersive dielectric inset, similar to the one investigated in [53]. A 2D illustration of the waveguide is depicted in Figure 5, where $w$ denotes its width, $\ell$ is the length of the dielectric material, and $d$ is a vacuum offset. The waveguide is further extended in the $y$ direction by its height $h$, which is not shown in Figure 5. The dielectric material has permittivity $\varepsilon = \varepsilon_0 \varepsilon_r$ and permeability $\mu = \mu_0 \mu_r$, where the subscripts “0” and “r” refer to the absolute value of the material property in vacuum and to its relative value for the given dielectric material, respectively. The relative material values are given by Debye relaxation models of second order [54], such that:
$$\varepsilon_r = \varepsilon_\infty + \frac{\varepsilon_{s,1} - \varepsilon_\infty}{1 + \imath \omega \tau_{\varepsilon,1}} + \frac{\varepsilon_{s,2} - \varepsilon_\infty}{1 + \imath \omega \tau_{\varepsilon,2}}, \qquad \mu_r = \mu_\infty + \frac{\mu_{s,1} - \mu_\infty}{1 + \imath \omega \tau_{\mu,1}} + \frac{\mu_{s,2} - \mu_\infty}{1 + \imath \omega \tau_{\mu,2}},$$
where $\tau_{\varepsilon/\mu,\,1/2}$ are relaxation time constants, the subscript “$\infty$” refers to a very high frequency value of the relative material property, the subscript “s” to a static value of the relative material property, and $\imath$ denotes the imaginary unit. The waveguide’s input and output boundaries are denoted with $\Gamma_{\mathrm{in}}$ and $\Gamma_{\mathrm{out}}$, respectively. The remaining waveguide walls are assumed to be perfect electric conductors (PECs) and the corresponding boundary is denoted with $\Gamma_{\mathrm{PEC}}$.
We assume that the input port boundary Γ in is excited by an incoming plane wave. We further assume that the excitation coincides with the fundamental transverse electric (TE) mode and that all other propagation modes can be neglected, e.g., are quickly attenuated. Under these assumptions, the electric field E can be computed by solving the boundary value problem:
$$\nabla \times \left( \mu^{-1} \nabla \times \boldsymbol{E} \right) - \omega^2 \varepsilon \boldsymbol{E} = 0, \quad \text{in } D, \tag{19a}$$
$$\boldsymbol{n}_{\Gamma_{\mathrm{PEC}}} \times \boldsymbol{E} = 0, \quad \text{on } \Gamma_{\mathrm{PEC}}, \tag{19b}$$
$$\boldsymbol{n}_{\Gamma_{\mathrm{in}}} \times \left( \nabla \times \boldsymbol{E} \right) + \imath k_z\, \boldsymbol{n}_{\Gamma_{\mathrm{in}}} \times \left( \boldsymbol{n}_{\Gamma_{\mathrm{in}}} \times \boldsymbol{E} \right) = \boldsymbol{U}^{\mathrm{i}}, \quad \text{on } \Gamma_{\mathrm{in}}, \tag{19c}$$
$$\boldsymbol{n}_{\Gamma_{\mathrm{out}}} \times \left( \nabla \times \boldsymbol{E} \right) + \imath k_z\, \boldsymbol{n}_{\Gamma_{\mathrm{out}}} \times \left( \boldsymbol{n}_{\Gamma_{\mathrm{out}}} \times \boldsymbol{E} \right) = 0, \quad \text{on } \Gamma_{\mathrm{out}}, \tag{19d}$$
where $D$ is the computational domain, $\omega = 2 \pi f$ the angular frequency, $f$ the frequency, $\boldsymbol{U}^{\mathrm{i}}$ the incoming plane wave, $\boldsymbol{n}_{\Gamma_{\mathrm{in}} / \Gamma_{\mathrm{out}} / \Gamma_{\mathrm{PEC}}}$ are outwards-pointing normal vectors, and $\boldsymbol{k} = \left( 0, 0, k_z \right)$ the wavevector. The QoI is the reflection coefficient at the input port $\Gamma_{\mathrm{in}}$, $r = E_{\Gamma_{\mathrm{in}}} / E_{\Gamma_{\mathrm{in}}}^{+} \in [0, 1]$. Usually, Equation (19) is solved numerically, e.g., using the finite element method (FEM). For this simple model, a semi-analytical solution exists for the reflection coefficient $r$. Therefore, errors due to spatial discretization can be neglected.
Equation (19) is transformed into a stochastic problem by recasting all geometrical and Debye material model parameters, as well as the frequency, to RVs. In the nominal configuration, the parameter values are $\bar{f} = 6$ GHz, $\bar{w} = 30$ mm, $\bar{h} = 3$ mm, $\bar{\ell} = 7$ mm, $\bar{d} = 1$ mm, $\bar{\varepsilon}_{s,1} = 2$, $\bar{\varepsilon}_{s,2} = 2.2$, $\bar{\varepsilon}_{\infty} = 1$, $\bar{\mu}_{s,1} = 2$, $\bar{\mu}_{s,2} = 3$, $\bar{\mu}_{\infty} = 1$, $\bar{\tau}_{\varepsilon,1} = 1$, $\bar{\tau}_{\varepsilon,2} = 1.1$, $\bar{\tau}_{\mu,1} = 1$, and $\bar{\tau}_{\mu,2} = 2$. Each parameter is now assumed to follow a truncated normal distribution, such that $Y_n \sim \mathcal{TN}\left(\bar{y}_n, \left(0.01 \bar{y}_n\right)^2, 0.95\, \bar{y}_n, 1.05\, \bar{y}_n\right)$, $n = 1, \dots, 15$, i.e., the standard deviation of the Gaussian distribution is equal to 1% of the nominal parameter value and a maximum deviation of ±5% from the nominal parameter value is imposed.
The comparison between the dimension-adaptive Leja interpolation and the degree-adaptive LAR-gPC for both error metrics in Equations (14) and (15) is presented in Figure 6. Similarly to the results obtained in Section 3.3 and Section 3.5, the Leja interpolation method was significantly superior to the LAR-gPC approximation. For both error metrics, the difference was typically one order of magnitude, with an increasing tendency for larger simulation budgets.

4. Discussion

In this work, we investigated the use of a sparse interpolation method based on weighted Leja node sequences in order to address the problems of approximation and uncertainty quantification for models with input random variables characterized by PDFs of arbitrary shapes. For that purpose, we applied a weighted Leja-based, dimension-adaptive interpolation algorithm to four models featuring 8, 10, 16, and 15 parameters, respectively. Truncated normal and extreme value input distributions have been used to model the random input parameters. The suggested approach has also been compared to a well-established adaptive gPC method, which uses numerically constructed orthogonal polynomials to address arbitrary input PDFs.
The numerical results presented in Section 3 indicated that the weighted Leja interpolation performed very well in all four numerical experiments, both in terms of approximation as well as of statistics estimation accuracy. Moreover, the suggested approach showed either superior or equivalent performance compared to the gPC method used for comparison purposes. The comparison results could be seen as an extension of those reported in [27,28], where uniform, normal, and log-normal distributions were considered, to the case of arbitrary input PDFs. Overall, we may conclude that the approach presented in this work poses a viable and reliable alternative to arbitrary gPC methods [24,25,26].
An obvious limitation of this work is the assumption that the input RVs can be described by continuous PDFs. Data-driven approaches, such as those presented in [31,32], are able to construct gPC approximations based only on moment values computed from a dataset, without any further assumption regarding the underlying PDF, which might rely on subjective interpretations of the data and result in the introduction of severe biases. The extension of interpolation-based UQ methods to problems where the input PDFs are not explicitly given in closed form would be an interesting path for further research, to be pursued in a later study.

Author Contributions

Conceptualization, D.L.; Data curation, D.L.; Funding acquisition, H.D.G.; Investigation, D.L.; Methodology, D.L.; Project administration, H.D.G.; Resources, H.D.G.; Software, D.L.; Supervision, H.D.G.; Validation, D.L.; Writing – original draft, D.L.; Writing – review & editing, H.D.G. All authors have read and agreed to the published version of the manuscript.

Funding

The first author would like to acknowledge the support of the German Federal Ministry for Education and Research (Bundesministerium für Bildung und Forschung -BMBF) through the research contract 05K19RDB.

Acknowledgments

Both authors acknowledge the support of the Graduate School of Excellence Computational Engineering at the Technische Universität Darmstadt. The authors would like to thank the two anonymous reviewers for their comments and suggestions, which contributed significantly to the improvement of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Caflisch, R. Monte Carlo and quasi-Monte Carlo methods. Acta Numer. 1998, 7, 1–49. [Google Scholar] [CrossRef] [Green Version]
  2. Loh, W.L. On latin hypercube sampling. Ann. Stat. 1996, 24, 2058–2080. [Google Scholar] [CrossRef]
  3. Le Maître, O.P.; Knio, O.M. Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  4. Xiu, D. Numerical Methods for Stochastic Computations: A Spectral Method Approach; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  5. Babuška, I.; Nobile, F.; Tempone, R. A stochastic collocation method for elliptic partial differential equations with random input data. SIAM Rev. 2010, 2, 317–355. [Google Scholar] [CrossRef] [Green Version]
  6. Barthelmann, V.; Novak, E.; Ritter, K. High dimensional polynomial interpolation on sparse grids. Adv. Comput. Math. 2000, 12, 273–288. [Google Scholar] [CrossRef]
  7. Xiu, D.; Karniadakis, G.E. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 2002, 24, 619–644. [Google Scholar] [CrossRef]
  8. Blatman, G.; Sudret, B. An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probab. Eng. Mech. 2010, 25, 183–197. [Google Scholar] [CrossRef]
  9. Blatman, G.; Sudret, B. Adaptive sparse polynomial chaos expansion based on least angle regression. J. Comput. Phys. 2011, 230, 2345–2367. [Google Scholar] [CrossRef]
  10. Migliorati, G. Adaptive polynomial approximation by means of random discrete least squares. In Numerical Mathematics and Advanced Applications - ENUMATH 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 547–554. [Google Scholar]
  11. Conrad, P.R.; Marzouk, Y.M. Adaptive Smolyak Pseudospectral Approximations. SIAM J. Sci. Comput. 2013, 35, A2643–A2670. [Google Scholar] [CrossRef]
  12. Constantine, P.G.; Eldred, M.S.; Phipps, E.T. Sparse pseudospectral approximation method. Comput. Methods Appl. Mech. Eng. 2012, 229, 1–12. [Google Scholar] [CrossRef] [Green Version]
  13. Le Maître, O.P.; Knio, O.M.; Najm, H.N.; Ghanem, R.G. A Stochastic Projection Method for Fluid Flow: I. Basic Formulation. J. Comput. Phys. 2001, 173, 481–511. [Google Scholar] [CrossRef] [Green Version]
  14. Loukrezis, D.; De Gersem, H. Adaptive Sparse Polynomial Chaos Expansions via Leja Interpolation. arXiv 2019, arXiv:1911.08312. [Google Scholar]
  15. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  16. Chkifa, A.; Cohen, A.; Schwab, C. High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math. 2014, 14, 601–633. [Google Scholar] [CrossRef] [Green Version]
  17. Nobile, F.; Tempone, R.; Webster, C.G. An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 2008, 46, 2411–2442. [Google Scholar] [CrossRef]
  18. Stoyanov, M.; Webster, C.G. A dynamically adaptive sparse grids method for quasi-optimal interpolation of multidimensional functions. Comput. Math. Appl. 2016, 71, 2449–2465. [Google Scholar] [CrossRef] [Green Version]
  19. Ernst, O.; Sprungk, B.; Tamellini, L. Convergence of sparse collocation for functions with countably many Gaussian random variables. SIAM J. Numer. Anal. 2018, 56, 877–905. [Google Scholar] [CrossRef] [Green Version]
  20. Clenshaw, C.W.; Curtis, A.R. A method for numerical integration on an automatic computer. Numer. Math. 1960, 2, 197–205.
  21. Genz, A.; Keister, B. Fully symmetric interpolatory rules for multiple integrals over infinite regions with Gaussian weight. J. Comput. Appl. Math. 1996, 71, 299–309.
  22. Burkardt, J. The Truncated Normal Distribution, 2014. Department of Scientific Computing, Florida State University. Available online: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf (accessed on 25 February 2020).
  23. Sommariva, A. Fast construction of Fejér and Clenshaw–Curtis rules for general weight functions. Comput. Math. Appl. 2013, 65, 682–693.
  24. Soize, C.; Ghanem, R. Physical systems with random uncertainties: Chaos representations with arbitrary probability measure. SIAM J. Sci. Comput. 2004, 26, 395–410.
  25. Wan, X.; Karniadakis, G.E. Beyond Wiener-Askey expansions: Handling arbitrary PDFs. J. Sci. Comput. 2006, 27, 455–464.
  26. Wan, X.; Karniadakis, G.E. Multi-element generalized polynomial chaos for arbitrary probability measures. SIAM J. Sci. Comput. 2006, 28, 901–928.
  27. Eldred, M. Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design. In Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 17th AIAA/ASME/AHS Adaptive Structures Conference, 11th AIAA, Palm Springs, CA, USA, 4–7 May 2009.
  28. Eldred, M.; Burkardt, J. Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification. In Proceedings of the 47th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Orlando, FL, USA, 5–8 January 2009.
  29. Ng, L.W.T.; Eldred, M. Multifidelity uncertainty quantification using non-intrusive polynomial chaos and stochastic collocation. In Proceedings of the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, 20th AIAA/ASME/AHS Adaptive Structures Conference, 14th AIAA, Honolulu, HI, USA, 23–26 April 2012.
  30. Narayan, A.; Jakeman, J.D. Adaptive Leja sparse grid constructions for stochastic collocation and high-dimensional approximation. SIAM J. Sci. Comput. 2014, 36, A2952–A2983.
  31. Ahlfeld, R.; Belkouchi, B.; Montomoli, F. SAMBA: Sparse approximation of moment-based arbitrary polynomial chaos. J. Comput. Phys. 2016, 320, 1–16.
  32. Oladyshkin, S.; Nowak, W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliab. Eng. Syst. Saf. 2012, 106, 179–190.
  33. Ernst, O.; Sprungk, B.; Tamellini, L. On expansions and nodes for sparse grid collocation of lognormal elliptic PDEs. arXiv 2019, arXiv:1906.01252.
  34. Georg, N.; Loukrezis, D.; Römer, U.; Schöps, S. Uncertainty quantification for nanoplasmonics with adjoint-based Leja adaptive collocation and conformal maps. arXiv 2019, arXiv:1807.07485.
  35. Nobile, F.; Tamellini, L.; Tempone, R. Comparison of Clenshaw-Curtis and Leja quasi-optimal sparse grids for the approximation of random PDEs. In Spectral and High Order Methods for Partial Differential Equations, ICOSAHOM 2014; Springer International Publishing: Cham, Switzerland, 2015; pp. 475–482.
  36. Schillings, C.; Schwab, C. Sparse, adaptive Smolyak quadratures for Bayesian inverse problems. Inverse Probl. 2013, 29, 065011.
  37. Loukrezis, D.; Römer, U.; De Gersem, H. Assessing the performance of Leja and Clenshaw-Curtis collocation for computational electromagnetics with random input data. Int. J. Uncertain. Quantif. 2019, 9, 33–57.
  38. Loukrezis, D. Adaptive Approximations for High-Dimensional Uncertainty Quantification in Stochastic Parametric Electromagnetic Field Simulations. Ph.D. Thesis, TU Darmstadt, Darmstadt, Germany, 2019.
  39. Feinberg, J.; Eck, V.G.; Langtangen, H.P. Multivariate polynomial chaos expansions with dependent variables. SIAM J. Sci. Comput. 2018, 40, A199–A223.
  40. Lebrun, R.; Dutfoy, A. A generalization of the Nataf transformation to distributions with elliptical copula. Probab. Eng. Mech. 2009, 24, 172–178.
  41. Leja, F. Sur certaines suites liées aux ensembles plans et leur application à la représentation conforme. Ann. Pol. Math. 1957, 4, 8–13.
  42. Taylor, R.; Totik, V. Lebesgue constants for Leja points. IMA J. Numer. Anal. 2010, 30, 462–486.
  43. De Marchi, S. On Leja sequences: Some results and applications. Appl. Math. Comput. 2004, 152, 621–647.
  44. Jantsch, P.; Webster, C.; Zhang, G. On the Lebesgue constant of weighted Leja points for Lagrange interpolation on unbounded domains. IMA J. Numer. Anal. 2019, 39, 1039–1057.
  45. Bungartz, H.; Griebel, M. Sparse grids. Acta Numer. 2004, 13, 147–269.
  46. Beck, J.; Tempone, R.; Nobile, F.; Tamellini, L. On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods. Math. Model. Methods Appl. Sci. 2012, 22, 1250023.
  47. Gerstner, T.; Griebel, M. Dimension-adaptive tensor-product quadrature. Computing 2003, 71, 65–87.
  48. Klimke, A.; Wohlmuth, B.I. Algorithm 847: Spinterp: Piecewise multilinear hierarchical sparse grid interpolation in MATLAB. ACM Trans. Math. Softw. 2005, 31, 561–579.
  49. Buzzard, G.T. Efficient basis change for sparse-grid interpolating polynomials with application to T-cell sensitivity analysis. Comput. Biol. J. 2013, 2013. Available online: https://www.hindawi.com/journals/cbj/2013/562767/ (accessed on 25 February 2020).
  50. Marelli, S.; Sudret, B. UQLab User Manual: Polynomial Chaos Expansions, 2015. Available online: https://www.researchgate.net/profile/Bruno_Sudret/publication/279532922_UQLab_user_manual_-_Polynomial_chaos_expansions/links/5595b6c308ae793d137b26a5/UQLab-user-manual-Polynomial-chaos-expansions.pdf (accessed on 25 February 2020).
  51. Surjanovic, S.; Bingham, D. Virtual Library of Simulation Experiments: Test Functions and Datasets. Available online: http://www.sfu.ca/~ssurjano (accessed on 1 March 2019).
  52. Kuschel, N.; Rackwitz, R. Two basic problems in reliability-based structural optimization. Math. Methods Oper. Res. 1997, 46, 309–333.
  53. Loukrezis, D.; Galetzka, A.; De Gersem, H. Robust adaptive least squares polynomial chaos expansions in high-frequency applications. Int. J. Numer. Model. El. 2020, e2725. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/jnm.2725 (accessed on 25 February 2020).
  54. Xu, J.; Koledintseva, M.Y.; Zhang, Y.; He, Y.; Matlin, B.; DuBroff, R.E.; Drewniak, J.L.; Zhang, J. Complex permittivity and permeability measurements and finite-difference time-domain simulation of ferrite materials. IEEE Trans. Electromagn. Compat. 2010, 52, 878–887.
Figure 1. The first 10 univariate Leja nodes with respect to different truncated normal and Gumbel probability distributions. The truncated normal distributions are denoted by TN(μ, σ², l, u), where μ and σ² are the mean value and the variance of the underlying normal distribution, while l and u are the lower and upper truncation limits. The Gumbel distributions are denoted by G(ℓ, β), where ℓ is the location parameter and β the scale parameter. The annotation above each node denotes its order of appearance in the Leja sequence, where “0” refers to the initial node: (a) TN(μ = 0, σ² = 1, l = 0, u = 3); (b) TN(μ = 0, σ² = 1, l = 2, u = 3); (c) G(ℓ = 3, β = 4); (d) G(ℓ = 0.5, β = 2).
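The greedy construction behind Figure 1 can be sketched in a few lines. The snippet below is a minimal illustration rather than the authors' implementation: it assumes the weighted Leja rule x_{n+1} = argmax_x sqrt(ρ(x)) ∏_k |x − x_k|, with ρ the input probability density, and it approximates the continuous maximization by a search over a dense candidate grid. The function name weighted_leja_nodes and all numerical settings are hypothetical.

```python
import numpy as np
from scipy import stats

def weighted_leja_nodes(pdf, support, num_nodes, num_candidates=100000):
    """Greedily select weighted Leja nodes on a dense candidate grid.

    Assumes the weighted Leja rule
        x_{n+1} = argmax_x sqrt(pdf(x)) * prod_k |x - x_k|,
    with the argmax approximated over a discretization of `support`.
    """
    lo, hi = support
    candidates = np.linspace(lo, hi, num_candidates)
    weight = np.sqrt(pdf(candidates))
    nodes = [candidates[np.argmax(weight)]]          # start at the maximizer of the weight
    for _ in range(num_nodes - 1):
        dists = np.abs(candidates[:, None] - np.array(nodes)[None, :])
        objective = weight * np.prod(dists, axis=1)  # weighted product of distances
        nodes.append(candidates[np.argmax(objective)])
    return np.array(nodes)

# Truncated normal TN(mu=0, sigma^2=1, l=0, u=3), cf. Figure 1a
# (scipy's truncnorm takes the standardized limits a=(l-mu)/sigma, b=(u-mu)/sigma)
tn = stats.truncnorm(a=0.0, b=3.0, loc=0.0, scale=1.0)
print(weighted_leja_nodes(tn.pdf, (0.0, 3.0), num_nodes=10))

# Gumbel distribution with location 3 and scale 4, cf. Figure 1c
# (the unbounded support is truncated at extreme quantiles for the grid search)
gum = stats.gumbel_r(loc=3.0, scale=4.0)
print(weighted_leja_nodes(gum.pdf, (gum.ppf(1e-8), gum.ppf(1.0 - 1e-8)), num_nodes=10))
```

In practice, the grid search can be replaced by a numerical optimizer, but it already reproduces the qualitative node placement shown in Figure 1: early nodes cluster where the density is large and later nodes fill the remaining support.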
Figure 2. Cost-error relation for the approximations of the borehole model. The approximations are constructed with a dimension-adaptive weighted Leja interpolation algorithm [37] and with a degree-adaptive LAR-gPC (least angle regression-generalized polynomial chaos) algorithm [9,50]. The size of the validation sample is Q = 10^5. The reference expected value is computed with quasi-MC (Monte Carlo) integration based on a Sobol sample of size 10^8: (a) RMS (root-mean-square) validation error; (b) expected value relative error.
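The two error measures reported in Figures 2-4 and 6 amount to a root-mean-square error over a validation sample and a relative error of the mean against a quasi-MC reference. The sketch below only states these definitions in code; the placeholders model and surrogate, the use of scipy's Sobol generator, and the (deliberately small) sample size are illustrative assumptions, not the authors' tooling.

```python
import numpy as np
from scipy.stats import qmc

def rms_validation_error(surrogate, model, x_val):
    """RMS error of the surrogate over a validation sample x_val of shape (Q, N)."""
    diff = surrogate(x_val) - model(x_val)
    return np.sqrt(np.mean(diff ** 2))

def quasi_mc_mean(model, marginals, num_points=2 ** 16, seed=0):
    """Quasi-MC estimate of E[model(X)] for independent inputs: Sobol points in
    [0, 1]^N are mapped to the input distributions via their inverse CDFs."""
    sobol = qmc.Sobol(d=len(marginals), scramble=True, seed=seed)
    u = sobol.random(num_points)
    x = np.column_stack([dist.ppf(u[:, i]) for i, dist in enumerate(marginals)])
    return float(np.mean(model(x)))

def expected_value_relative_error(surrogate_mean, reference_mean):
    """Relative error of the surrogate's mean against the quasi-MC reference."""
    return abs(surrogate_mean - reference_mean) / abs(reference_mean)
```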
Figure 3. Cost-error relation for the approximations of the steel column function. The approximations are constructed with a dimension-adaptive weighted Leja interpolation algorithm [37] and with a degree-adaptive LAR-gPC algorithm [9,50]. The size of the validation sample is Q = 10^5. The reference expected value is computed with quasi-MC integration based on a Sobol sample of size 10^8: (a) RMS validation error; (b) expected value relative error.
Figure 4. Cost-error relation for the approximations of the meromorphic function. The approximations are constructed with a dimension-adaptive weighted Leja interpolation algorithm [37] and with a degree-adaptive LAR-gPC algorithm [9,50]. The size of the validation sample is Q = 10^5. The reference expected value is computed with quasi-MC integration based on a Sobol sample of size 10^8: (a) RMS validation error; (b) expected value relative error.
Figure 5. View of the waveguide in the xz-plane. The input and output ports are shown in black, the PEC (perfect electric conductor) walls in dark gray, the vacuum-filled area in white, and the dielectric material in light gray.
Figure 6. Cost-error relation for the approximations of the dielectric inset waveguide model. The approximations are constructed with a dimension-adaptive weighted Leja interpolation algorithm [37] and with a degree-adaptive LAR-gPC algorithm [9,50]. The size of the validation sample is Q = 10^5. The reference expected value is computed with quasi-MC integration based on a Sobol sample of size 10^8: (a) RMS validation error; (b) expected value relative error.
Table 1. Parameter distributions of the steel column function.
Parameter | Distribution
F_s | TN(μ = 400, σ² = 35², l = 295, u = 505)
P_d | TN(μ = 500000, σ² = 50000², l = 350000, u = 650000)
P_1 | G(ℓ = 559495, β = 70173)
P_2 | G(ℓ = 559495, β = 70173)
B | TN(μ = 300, σ² = 3², l = 291, u = 309)
D | TN(μ = 20, σ² = 2², l = 14, u = 26)
H | TN(μ = 300, σ² = 5², l = 285, u = 315)
F_0 | TN(μ = 30, σ² = 10², l = 0, u = 60)
E | G(ℓ = 208110, β = 3275)
L | TN(μ = 7500, σ² = 7.5², l = 7470, u = 7530)
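As an illustration of how the Table 1 marginals can be instantiated in code, the snippet below encodes them with scipy.stats. It assumes independent parameters and assumes that the Gumbel location ℓ and scale β map directly onto scipy's loc and scale arguments; the dictionary name and the sampling call are purely illustrative.

```python
from scipy import stats

def tn(mu, sigma, l, u):
    """Truncated normal TN(mu, sigma^2, l, u); scipy expects standardized limits."""
    return stats.truncnorm(a=(l - mu) / sigma, b=(u - mu) / sigma, loc=mu, scale=sigma)

steel_column_inputs = {
    "F_s": tn(400.0, 35.0, 295.0, 505.0),
    "P_d": tn(500000.0, 50000.0, 350000.0, 650000.0),
    "P_1": stats.gumbel_r(loc=559495.0, scale=70173.0),
    "P_2": stats.gumbel_r(loc=559495.0, scale=70173.0),
    "B":   tn(300.0, 3.0, 291.0, 309.0),
    "D":   tn(20.0, 2.0, 14.0, 26.0),
    "H":   tn(300.0, 5.0, 285.0, 315.0),
    "F_0": tn(30.0, 10.0, 0.0, 60.0),
    "E":   stats.gumbel_r(loc=208110.0, scale=3275.0),
    "L":   tn(7500.0, 7.5, 7470.0, 7530.0),
}

# Draw a joint (independent) sample, e.g., for validation purposes
sample = {name: dist.rvs(size=1000, random_state=0)
          for name, dist in steel_column_inputs.items()}
```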
