Article

The Multiple-Update-Infill Sampling Method Using Minimum Energy Design for Sequential Surrogate Modeling

by Yongmoon Hwang, Sang-Lyul Cha, Sehoon Kim, Seung-Seop Jin and Hyung-Jo Jung

1 Department of Civil & Environmental Engineering, KAIST, DaeJeon 34141, Korea
2 Department of Civil & Environmental Engineering, Sejong University, Seoul 05006, Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2018, 8(4), 481; https://doi.org/10.3390/app8040481
Submission received: 30 January 2018 / Revised: 14 March 2018 / Accepted: 16 March 2018 / Published: 22 March 2018
(This article belongs to the Section Mechanical Engineering)

Abstract

Computer experiments are widely used to evaluate the performance and reliability of engineering systems at the lowest possible time and cost. A high-fidelity model is often required to ensure predictive accuracy, but it becomes computationally prohibitive when many analyses are needed (for example, inverse analysis or uncertainty analysis). In this context, a surrogate model, which is a fast approximation of the high-fidelity model, can play a valuable role in addressing these computational issues. One efficient approach to surrogate modeling is the sequential sampling (SS) method, which sequentially adds samples to refine the surrogate model. This paper proposes a multiple-update-infill sampling method using a minimum energy design, which was recently developed for global optimization to find multiple optima, to improve the global quality of the surrogate model. The proposed method was compared with other multiple-update-infill sampling methods in terms of convergence, accuracy, sampling efficiency, and computational cost.

1. Introduction

The traditional development of an engineering system requires repeated experiments to evaluate variability in material properties and external conditions (for example, loading and excitation). Such development is therefore time-consuming and expensive under limited resources, and some experiments are infeasible due to technical limitations (for example, extreme loadings such as earthquakes and collisions). In this regard, computer experiments, which require far less time and cost, are widely used to evaluate the performance and reliability of engineering systems [1]. The underlying idea of a computer experiment is to simulate the performance and reliability of the engineering system using computational models instead of time-consuming physical experiments. To ensure predictive accuracy, a high-fidelity computational model is commonly required; such models are typically time-consuming and computationally intensive. In addition, some applications (that is, uncertainty or inverse analysis) require many computational analyses. As a result, applications using high-fidelity models become computationally expensive and infeasible under limited resources.
A surrogate model can play a valuable role in addressing these computational issues [2,3,4,5]. The surrogate model is constructed by fitting the input–output relations of a high-fidelity model, thus providing a fast approximation of that model. Once the surrogate model is accurately constructed, it replaces the high-fidelity model to produce outputs at given inputs. Constructing an accurate surrogate model requires adequate input–output samples (training samples) that properly capture the characteristics of the response surface. Conventionally, training samples are generated by space-filling designs, including Latin hypercube designs (LHDs) [6], orthogonal LHDs [7], uniform designs [8], and generalized LHDs [9], and the surrogate model is constructed at once using all training samples (the one-stage method).
Another way of constructing a surrogate model is the sequential sampling (SS) method. If the number of model analyses is not limited in advance and a minimal (unknown) number of training samples is desired, the SS method is preferable to space-filling designs (that is, to the one-stage method). The SS method typically involves two stages. Firstly, the surrogate model is constructed using initial training samples from a space-filling design. Secondly, additional samples (infill samples) are sequentially selected and added to the current training samples to update the surrogate model [10,11,12]. For the SS method, infill criteria have been developed to update the surrogate model by learning information about the response surface. The SS method can involve single-update-infill sampling [5,12,13,14] or multiple-update-infill sampling [10,15]. Single-update-infill sampling selects only a single infill sample at each infill stage, whereas multiple-update-infill sampling selects multiple infill samples to accelerate the improvement of the global surrogate model.
This paper proposes a multiple-update-infill sampling method using a minimum energy design (MED) to improve the global quality of the surrogate model. The MED was recently developed for global optimization to find multiple optima [12]. The feasibility of the proposed method was evaluated using a mathematical test function (the Friedman model), a toy test function (the borehole model), and an engineering application (the finite element model of a 2D beam frame structure). The proposed method was compared to multiple-update-infill sampling methods using the mean squared error (MSE) [14] and expected improvement (EI) [16]. Based on the comparison, we drew conclusions in terms of convergence, accuracy, sampling efficiency, and computational cost for global surrogate modeling. The multiple-update-infill sampling method using an MED appears promising and efficient, requiring a minimal number of final training samples for general cases, except for response surfaces with noisy components or strong non-stationarity.
The remainder of this paper is organized as follows. Section 2 provides a brief introduction to the kriging model and space-filling design. Section 3 presents the concept of the SS method, followed by descriptions of three infill criteria and their multiple-update-infill sampling methods. In Section 4, three case studies (the Friedman model, the borehole model, and the finite element model of the frame structure) are considered to evaluate the performance of the multiple-update-infill sampling methods. Finally, Section 5 provides conclusions from the case studies. Hereafter, boldface letters denote vectors or matrices.

2. The Surrogate Model for Computer Experiments

The surrogate model is a fast approximation of the computational model; it is also known as the response surface, the meta-model, or the emulator. The procedure of surrogate modeling is shown in Figure 1. Firstly, input variables and target outputs are defined. Then, training samples (input variables) are generated by a design of experiments (for example, a space-filling design). Based on the training samples, computational model analyses are performed to obtain the target outputs. A mathematical model is then fitted to the input–output relationships (response surface) to construct the surrogate model. Surrogate models can be categorized into regression-based models (for example, polynomial functions) and interpolation-based models (for example, radial basis functions and kriging). In this study, the response surfaces of interest are confined to those from deterministic computational models. Stated differently, the response surfaces of interest do not contain noisy components (that is, small oscillations with high-frequency content), discontinuities, or strong non-stationarity. In view of these response surfaces and the deterministic computational model, the kriging model was chosen to represent the interpolation-based models.

2.1. The Kriging Model

The kriging model was originally introduced in geostatistics to estimate the most likely distribution of gold based on a given set of sampled sites [17]. It is also known as the Gaussian process model. The kriging model is an interpolation method in which the unknown predictions ŷ(x_new) are modeled as a realization of a Gaussian process with a prior covariance conditional on the training samples. The kriging model can be expressed as
$$\hat{y}(\mathbf{x}) = f(\mathbf{x}) + Z(\mathbf{x}), \quad Z(\mathbf{x}) \sim \mathcal{GP}\left(0,\, C(\mathbf{x}, \mathbf{x}')\right) \tag{1}$$
where x ∈ ℝ^k is a sample with k input variables; f(x) can be a constant or polynomial function approximating the global trend over the entire parameter space; and Z(x) is a stochastic component representing local spatial deviation, assumed to be a stationary Gaussian process with zero mean and covariance C(·,·) given by
$$C\left(Z(\mathbf{x}_i), Z(\mathbf{x}_j)\right) = \sigma^2 \psi(\mathbf{x}_i, \mathbf{x}_j) \tag{2}$$
where σ² is the process variance; the subscripts i and j indicate the i-th and j-th training samples, respectively; and ψ(x_i, x_j) is a spatial correlation function controlling the smoothness of the kriging model. A typical choice of ψ(x_i, x_j) is the k-dimensional Gaussian correlation function:
$$\psi(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\sum_{p=1}^{k} \theta_p \left| x_i^{(p)} - x_j^{(p)} \right|^2 \right) \tag{3}$$
where |x_i^(p) − x_j^(p)| is the Euclidean distance between the two samples in the p-th input, and θ = [θ_1, …, θ_k] is an unknown vector of correlation parameters scaling the correlation length in each input. The correlation length also reflects the relative significance of each input [2].
Different types of the kriging model can be formulated as follows: (1) the ordinary kriging model assumes a constant mean only (f(x) = μ) over the neighborhood of x, and (2) the universal kriging model assumes a general polynomial trend model (f(x) = Σ_{i=0}^{k} β_i x_i). Since the ordinary kriging model is used in this study, it is briefly described below; hereafter, the kriging model denotes the ordinary kriging model for simplicity.
Suppose that we have n observed inputs X n = [ x 1 , , x n ] T and the corresponding outputs Y n =   [ y ( x 1 ) , , y ( x n ) ] T . To construct an ordinary kriging model using X n and Y n , the unknown parameters of the kriging model ( μ , σ 2 and θ ) are estimated by maximum likelihood estimation. The log-likelihood function (Equation (4)) is expressed with these unknown parameters:
$$\ln\left(L(\mu, \sigma^2, \boldsymbol{\theta} \mid \mathbf{X}_n, \mathbf{Y}_n)\right) \propto -\frac{n}{2} \ln(\sigma^2) - \frac{1}{2} \ln|\boldsymbol{\Psi}| - \frac{(\mathbf{Y}_n - \mathbf{1}\mu)^T \boldsymbol{\Psi}^{-1} (\mathbf{Y}_n - \mathbf{1}\mu)}{2\sigma^2} \tag{4}$$
where Ψ is the n × n correlation matrix with elements ψ(x_i, x_j), and 1 is an n-by-1 unit vector. By taking the derivatives of Equation (4) with respect to μ and σ² and setting them to zero, the maximum likelihood estimators are derived as
$$\hat{\mu} = \left(\mathbf{1}^T \boldsymbol{\Psi}^{-1} \mathbf{1}\right)^{-1} \mathbf{1}^T \boldsymbol{\Psi}^{-1} \mathbf{Y}_n \tag{5}$$
$$\hat{\sigma}^2 = \frac{(\mathbf{Y}_n - \mathbf{1}\hat{\mu})^T \boldsymbol{\Psi}^{-1} (\mathbf{Y}_n - \mathbf{1}\hat{\mu})}{n} \tag{6}$$
Equations (5) and (6) show that the only remaining unknowns are the correlation parameters θ in Ψ. By substituting Equations (5) and (6) into Equation (4), the concentrated log-likelihood function is obtained as
$$\ln\left(L(\boldsymbol{\theta} \mid \mathbf{X}_n, \mathbf{Y}_n)\right) \propto -\frac{n}{2} \ln(\hat{\sigma}^2) - \frac{1}{2} \ln|\boldsymbol{\Psi}| \tag{7}$$
The optimal correlation parameters ( θ ^ ) are estimated by maximizing Equation (7) as
$$\hat{\boldsymbol{\theta}} = \arg\max_{\boldsymbol{\theta}} \ln\left(L(\boldsymbol{\theta} \mid \mathbf{X}_n, \mathbf{Y}_n)\right) \tag{8}$$
Once the optimal correlation parameters ( θ ^ ) are estimated, the unknown predictions ( y ^ ( x new ) ) can be estimated by a weighted linear combination of the training samples ( X n and Y n ) as
$$\hat{y}(\mathbf{x}_{new}) = \hat{\mu} + \hat{\boldsymbol{\psi}}^T \boldsymbol{\Psi}^{-1} (\mathbf{Y}_n - \mathbf{1}\hat{\mu}) \tag{9}$$
where ψ̂ = [ψ(x_new, x_1), …, ψ(x_new, x_n)]^T is the correlation vector between the unknown sample x_new and X_n. Equation (9) is the mean of the posterior Gaussian distribution; the prediction variance is presented in Section 3.1. For more information about the kriging model, please refer to Jones [11] and Forrester et al. [2].
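To make the above estimation procedure concrete, the following is a minimal numpy sketch of ordinary kriging following Equations (3)-(9) (and the prediction variance of Equation (11) in Section 3.1). The coarse grid search over θ and all function names are illustrative assumptions, not the authors' implementation; the paper does not specify how Equation (8) is optimized.

```python
import numpy as np

def corr_matrix(X1, X2, theta):
    # Gaussian correlation of Equation (3): exp(-sum_p theta_p |x^(p) - x'^(p)|^2)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2             # (n1, n2, k)
    return np.exp(-np.tensordot(d2, theta, axes=([2], [0])))

def fit_ordinary_kriging(X, y, theta_grid):
    """Maximize the concentrated log-likelihood of Equation (7) over a
    candidate grid of correlation parameter vectors (grid search is an
    assumption made here for simplicity)."""
    n = len(y)
    one = np.ones(n)
    best = None
    for theta in theta_grid:
        Psi = corr_matrix(X, X, theta) + 1e-10 * np.eye(n)  # small nugget for stability
        L = np.linalg.cholesky(Psi)
        solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))
        mu = (one @ solve(y)) / (one @ solve(one))          # Equation (5)
        r = y - mu * one
        sigma2 = (r @ solve(r)) / n                         # Equation (6)
        ll = -0.5 * n * np.log(sigma2) - np.log(np.diag(L)).sum()  # Equation (7)
        if best is None or ll > best[0]:
            best = (ll, theta, mu, sigma2, L)
    _, theta, mu, sigma2, L = best
    return {"X": X, "y": y, "theta": theta, "mu": mu, "sigma2": sigma2, "L": L}

def predict(model, Xnew):
    """Kriging mean (Equation (9)) and prediction variance (Equation (11))."""
    solve = lambda b: np.linalg.solve(model["L"].T, np.linalg.solve(model["L"], b))
    psi = corr_matrix(Xnew, model["X"], model["theta"])     # (m, n)
    mean = model["mu"] + psi @ solve(model["y"] - model["mu"])
    var = model["sigma2"] * (1.0 - np.einsum("ij,ji->i", psi, solve(psi.T)))
    return mean, np.maximum(var, 0.0)
```

A simple isotropic grid such as `theta_grid = [np.full(k, t) for t in np.logspace(-2, 2, 25)]` would suffice for illustration; the Cholesky-based solves avoid forming Ψ⁻¹ explicitly, so each likelihood evaluation costs O(n³) for n training samples.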

2.2. Space-Filling Design

Conventional surrogate modeling uses all training samples at once to fit a response surface. The training samples are generated by a design of experiments (DOE). Space-filling designs are the most widely used DOE in computer experiments; they spread the samples uniformly over the input space by placing them as far apart from each other as possible (that is, uniformity). Space-filling designs include Latin hypercube designs (LHDs) [6], maximin LHDs [18], orthogonal LHDs [7], uniform designs [8], generalized LHDs [9], and maximum projection designs [19], each with its own criterion for uniformity. Among these, the maximin LHD (Mm LHD) seems to be the most commonly used space-filling design in practice thanks to its simplicity and availability in software packages [19]. The Mm LHD searches for an LHD that maximizes the minimum distance among the training samples, which can be obtained by minimizing the following criterion (the Morris–Mitchell criterion):
$$\min_{D} \left\{ \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{1}{d_k(\mathbf{x}_i, \mathbf{x}_j)} \right\}^{1/k} \tag{10}$$

where $d_k(\mathbf{x}_i, \mathbf{x}_j) = \left( \sum_{p=1}^{k} \left| x_i^{(p)} - x_j^{(p)} \right|^k \right)^{1/k}$.
It is worth noting that space-filling designs were not originally intended for sequential surrogate modeling; they have been widely employed with a fixed number of training samples (due to a limited number of model evaluations). In the sequential sampling method, space-filling designs are typically used to generate the initial training samples, from which an initial surrogate model is constructed and then used to select infill samples.
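As a sketch of how Equation (10) can be approximated in practice, the routine below keeps the Latin hypercube with the largest minimum pairwise distance among many random candidates; the pure random search (rather than the exchange algorithms used in software packages) and the number of trials are assumptions for illustration.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    # Stratify each dimension of [0, 1]^k into n bins and shuffle independently
    X = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        X[:, j] = rng.permutation(X[:, j])
    return X

def maximin_lhd(n, k, n_trials=200, seed=0):
    """Random-search approximation of the Mm LHD: among many random LHDs,
    keep the one maximizing the minimum pairwise Euclidean distance."""
    rng = np.random.default_rng(seed)
    best_X, best_dmin = None, -np.inf
    for _ in range(n_trials):
        X = latin_hypercube(n, k, rng)
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
        dmin = d[np.triu_indices(n, 1)].min()
        if dmin > best_dmin:
            best_X, best_dmin = X, dmin
    return best_X
```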

3. The Sequential Sampling Method

The concept of the sequential sampling (SS) method is illustrated in Figure 2, which shows that the surrogate model converges to the true function as infill samples are added to the current training samples. The core technique in the SS method is the infill criterion, which searches for infill samples and refines the surrogate model for a given purpose (for example, optimization or the global quality of the surrogate model). Numerous studies have evaluated and compared infill criteria for sequential surrogate sampling [11,15,20,21,22]. These infill criteria include the mean squared error (MSE) [14], expected improvement (EI) [16], the probability of improvement (PoI) [23], and the statistical lower bound (SLB) [24]. The PoI and SLB criteria are not suitable for global surrogate modeling, since they were developed to improve the prediction of the best value (pure exploitation). The MSE criterion adds infill samples in regions with high uncertainty in the predictions of the surrogate model (pure exploration), whereas the EI criterion adds infill samples for both exploitation and exploration of the response surface (balanced exploitation and exploration). Both criteria can be used in multiple-update-infill sampling methods to accelerate surrogate modeling [10,15]. In this study, a minimum energy design (MED) is investigated to evaluate its feasibility for the SS method. The MED was recently developed to find multiple alternative optima rather than to accelerate the search for the global optimum [12]. This section describes three infill criteria for the multiple-update-infill sampling method to improve the global quality of the surrogate model, in the following order: (1) the mean squared error (MSE), (2) expected improvement (EI), and (3) the minimum energy design (MED).

3.1. Infill Criterion I: Mean Squared Error

The mean squared error (MSE) criterion is the prediction variance ŝ²(x) of the kriging model, derived from the curvature of the augmented log-likelihood function as [2,11,13,14]
$$\hat{s}^2(\mathbf{x}) = \hat{\sigma}^2 \left(1 - \hat{\boldsymbol{\psi}}^T \boldsymbol{\Psi}^{-1} \hat{\boldsymbol{\psi}}\right) \tag{11}$$
As shown in Figure 3, the prediction at an unknown sample can be represented as a Gaussian process with the mean ŷ(x_new) of Equation (9) and the prediction variance ŝ²(x) of Equation (11). The MSE can be used as an infill criterion to search for the sample with the largest prediction uncertainty (larger ŝ²(x)). In Figure 3, two unknown samples (x = 3 and x = 4) are shown with their prediction distributions (purple lines); based on Equation (11), ŝ²(3) is much smaller than ŝ²(4). This indicates that larger MSE values are obtained far from the training samples; stated differently, the MSE can be used as a measure of the sparseness of the input space. The infill sample of the MSE, x_MSE^(n+1), is chosen by maximizing the MSE:
$$\mathbf{x}_{MSE}^{n+1} = \arg\max_{\mathbf{x}} \hat{s}^2(\mathbf{x}) \tag{12}$$
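A minimal sketch of Equation (12), reusing the `predict` function from the kriging sketch in Section 2.1; random candidate screening over the normalized input space stands in for the firefly algorithm used in the paper (Section 4.1), which is a simplifying assumption.

```python
import numpy as np

def mse_infill(model, n_cand=5000, seed=1):
    """Select the candidate with the largest prediction variance
    (Equations (11) and (12)) over the normalized input space [0, 1]^k."""
    rng = np.random.default_rng(seed)
    cand = rng.random((n_cand, model["X"].shape[1]))
    _, var = predict(model, cand)       # predict() from the kriging sketch
    return cand[np.argmax(var)]
```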

3.2. Infill Criterion II: Expected Improvement

The expected improvement (EI) is an infill criterion that computes how much improvement over the current kriging model can be expected if an infill sample is added. Let Y(x) be a random variable representing the uncertainty in the prediction at sample x, modeled by a Gaussian distribution with mean ŷ(x) and prediction variance ŝ²(x). If the current best value is the minimum value y_min observed thus far, the improvement on the minimum, I(x), can be defined as
$$I(\mathbf{x}) = y_{min} - Y(\mathbf{x}) \tag{13}$$
Since Y(x) is Gaussian, Equation (13) can be integrated to compute the expectation of I(x). The EI on the minimum, EI_min(x), is then defined as
$$EI_{min}(\mathbf{x}) = \left(y_{min} - \hat{y}(\mathbf{x})\right) \left[ \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{y_{min} - \hat{y}(\mathbf{x})}{\hat{s}(\mathbf{x}) \sqrt{2}} \right) \right] + \hat{s}(\mathbf{x}) \frac{1}{\sqrt{2\pi}} \exp\left[ -\frac{\left(y_{min} - \hat{y}(\mathbf{x})\right)^2}{2 \hat{s}^2(\mathbf{x})} \right] \tag{14}$$
where erf(·) denotes the error function for the cumulative Gaussian distribution of the kriging model at sample x. The first term in Equation (14) becomes large when the mean ŷ(x) is likely to be better than the current minimum (that is, exploitation). The second term plays the role of exploration, growing with the prediction variance ŝ²(x). In this context, EI_min(x) searches for an infill sample that improves both exploitation and exploration of the response surface. The infill sample of EI_min(x) is chosen as
$$\mathbf{x}_{EI_{min}}^{n+1} = \arg\max_{\mathbf{x}} EI_{min}(\mathbf{x}) \tag{15}$$
The EI on the maximum, EI_max(x), can be defined by replacing y_min − Y(x) with Y(x) − y_max:
$$EI_{max}(\mathbf{x}) = \left(\hat{y}(\mathbf{x}) - y_{max}\right) \left[ \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{\hat{y}(\mathbf{x}) - y_{max}}{\hat{s}(\mathbf{x}) \sqrt{2}} \right) \right] + \hat{s}(\mathbf{x}) \frac{1}{\sqrt{2\pi}} \exp\left[ -\frac{\left(\hat{y}(\mathbf{x}) - y_{max}\right)^2}{2 \hat{s}^2(\mathbf{x})} \right] \tag{16}$$
The infill sample of the E I m a x ( x ) is chosen as
$$\mathbf{x}_{EI_{max}}^{n+1} = \arg\max_{\mathbf{x}} EI_{max}(\mathbf{x}) \tag{17}$$
Figure 4 shows EI_min(x) and EI_max(x) for a kriging model constructed from three training samples. EI_min(x) becomes large near x = 5, since this region has the greatest prediction variance ŝ²(x) and is near the best observed value y(6) thus far; in other words, the region near x = 5 maximizes both terms in Equation (14) (Figure 4a). A similar observation can be made for EI_max(x) (Figure 4b).
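The EI criteria of Equations (14)-(17) can be sketched as follows, again with random candidate screening standing in for the firefly optimizer and reusing the `predict` function from the kriging sketch; scipy's `erf` provides the cumulative Gaussian term.

```python
import numpy as np
from scipy.special import erf

def expected_improvement(mean, s, y_min=None, y_max=None):
    """EI on the minimum (Equation (14)) when y_min is given, or on the
    maximum (Equation (16)) when y_max is given."""
    diff = (y_min - mean) if y_min is not None else (mean - y_max)
    cdf = 0.5 + 0.5 * erf(diff / (s * np.sqrt(2.0)))   # probability of improvement
    pdf = np.exp(-diff ** 2 / (2.0 * s ** 2)) / np.sqrt(2.0 * np.pi)
    return diff * cdf + s * pdf                         # exploitation + exploration

def ei_infill(model, y_best, maximize=False, n_cand=5000, seed=2):
    """Equations (15) and (17): pick the candidate maximizing EI."""
    rng = np.random.default_rng(seed)
    cand = rng.random((n_cand, model["X"].shape[1]))
    mean, var = predict(model, cand)                    # kriging sketch
    s = np.sqrt(np.maximum(var, 1e-16))
    ei = (expected_improvement(mean, s, y_max=y_best) if maximize
          else expected_improvement(mean, s, y_min=y_best))
    return cand[np.argmax(ei)]
```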

3.3. Infill Criterion III: Minimum Energy Design

The minimum energy design (MED) was recently developed as a new space-filling design to explore unknown regions of particular interest [12]. The key idea of the MED originates from the laws of electrostatics: samples in the input space are considered as charged particles inside a box. If all particles carry the same positive charge, they repel each other and settle into positions that minimize the total potential energy (Figure 5). Let q(x_i) be the charge of the i-th particle. The potential energy between the i-th and j-th particles, E_ij, is proportional to the product of their charges and inversely proportional to the distance between them:
$$E_{ij} = \frac{q(\mathbf{x}_i)\, q(\mathbf{x}_j)}{d_2(\mathbf{x}_i, \mathbf{x}_j)} \tag{18}$$
where d_2(x_i, x_j) is the Euclidean distance between the i-th and j-th particles. The charge function q(x) is described later, since it is defined differently depending on the desired value of the output (that is, the maximum or minimum). For n charged particles, the total potential energy is defined by
$$E = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{q(\mathbf{x}_i)\, q(\mathbf{x}_j)}{d_2(\mathbf{x}_i, \mathbf{x}_j)} \tag{19}$$
In the MED, the box represents the input space, the positions taken by the charged particles are the training samples, and the charge of each particle reflects the value of the target output. Unlike conventional space-filling designs, the MED tries to place the training samples as far apart from each other as possible within the region of interest by reflecting their output values (that is, the numerator of Equation (19)).
Joseph et al. [12] proposed a sequential strategy for implementing an MED. Suppose that we have n observed inputs X^(n) = [x_1, …, x_n]^T in a k-dimensional input space and the corresponding outputs Y^(n) = [y(x_1), …, y(x_n)]^T. The kriging model of the n-th stage, KRG^(n)(x), can be constructed using the training samples X^(n) and Y^(n). Since the output values can be negative, an auxiliary function ĝ(x) is used instead of the output value ŷ(x) directly:
$$\hat{g}(\mathbf{x}) = \hat{y}(\mathbf{x}) - \hat{y}_{min}^{(n)} \tag{20}$$
where y ^ ( x ) is the mean value from K R G ( n ) ( x ) for the sample x , and y ^ m i n ( n ) is the estimated minimum value from K R G ( n ) ( x ) . Then, the ( n + 1 ) th infill sample ( x M E D n + 1 ) is obtained by
$$\mathbf{x}_{MED}^{n+1} = \arg\min_{\mathbf{x}} \left[ \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{q(\mathbf{x}_i)\, q(\mathbf{x}_j)}{d_2(\mathbf{x}_i, \mathbf{x}_j)} + \sum_{j=1}^{n} \frac{q(\mathbf{x})\, q(\mathbf{x}_j)}{d_2(\mathbf{x}, \mathbf{x}_j)} \right] \tag{21}$$
Since the first term of Equation (21) is constant regardless of the sample x , this term can be ignored. Thus, Equation (21) is reformulated as
$$\mathbf{x}_{MED}^{n+1} = \arg\min_{\mathbf{x}} \sum_{j=1}^{n} \frac{q(\mathbf{x})\, q(\mathbf{x}_j)}{d_2(\mathbf{x}, \mathbf{x}_j)} \tag{22}$$
The charge function q(x) takes one of two forms depending on the desired value of the output. When the smaller value of the output (the minimum) is of interest, the charge is proportional to ĝ(x):

$$q_{min}(\mathbf{x}) = \hat{g}(\mathbf{x})^{1/(2k)} \tag{23}$$

When the larger value of the output (the maximum) is of interest, the charge is inversely proportional to ĝ(x):

$$q_{max}(\mathbf{x}) = \frac{1}{\hat{g}(\mathbf{x})^{1/(2k)}} \tag{24}$$
The infill samples of the M E D m i n ( x ) and M E D m a x ( x ) are obtained by Equations (25) and (26), respectively.
$$\mathbf{x}_{MED_{min}}^{n+1} = \arg\min_{\mathbf{x}} \sum_{j=1}^{n} \frac{q_{min}(\mathbf{x})\, q_{min}(\mathbf{x}_j)}{d_2(\mathbf{x}, \mathbf{x}_j)} \tag{25}$$
$$\mathbf{x}_{MED_{max}}^{n+1} = \arg\min_{\mathbf{x}} \sum_{j=1}^{n} \frac{q_{max}(\mathbf{x})\, q_{max}(\mathbf{x}_j)}{d_2(\mathbf{x}, \mathbf{x}_j)} \tag{26}$$
Figure 6 shows MED_min(x) and MED_max(x) for the kriging model constructed from the three training samples. Both infill criteria concentrate samples in the regions near the global and local optima (exploitation) while keeping the samples as far away from each other as possible (exploration).
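The MED infill selection of Equations (20) and (22)-(26) can be sketched as below, built on the earlier `predict` sketch. The small ε added to ĝ (to avoid a zero charge at the estimated optimum) and the clipping of negative ĝ values are implementation assumptions; the paper does not state how these degenerate cases are handled.

```python
import numpy as np

def med_infill(model, y_min_hat, find_minimum=True, n_cand=5000, seed=4):
    """Equations (25)/(26): choose the candidate that adds the least
    potential energy against the n existing training samples."""
    rng = np.random.default_rng(seed)
    X, k = model["X"], model["X"].shape[1]
    cand = rng.random((n_cand, k))
    eps = 1e-8  # assumption: keep charges strictly positive
    g_cand = np.maximum(predict(model, cand)[0] - y_min_hat, 0.0) + eps  # Eq. (20)
    g_X = np.maximum(predict(model, X)[0] - y_min_hat, 0.0) + eps
    expo = 1.0 / (2 * k) if find_minimum else -1.0 / (2 * k)  # Eqs. (23)/(24)
    q_cand, q_X = g_cand ** expo, g_X ** expo
    d = np.sqrt(((cand[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    energy = (q_cand[:, None] * q_X[None, :] / np.maximum(d, 1e-12)).sum(axis=1)
    return cand[np.argmin(energy)]                             # Eq. (22)
```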

3.4. The Multiple-Update-Infill Sampling Method

The multiple-update-infill sampling method can be defined as a combination of infill criteria. For efficient global surrogate modeling, the SS method should contain two conflicting parts [10,25,26]: (1) global exploration and (2) local exploitation. Global exploration discovers interesting regions that have not been visited before, thereby reducing sampling bias due to incomplete exploration of the input space. Local exploitation, on the contrary, plays a key role in finding the regions of particular interest (maximum and minimum values or large prediction errors). Our previous study proposed a multiple-update-infill sampling method with three infill criteria (MSE + EI_min + EI_max) to combine global exploration and local exploitation for global surrogate modeling [10]. Its rationale is as follows: (1) the MSE selects infill samples from regions where the training samples are sparse, and (2) EI_min and EI_max explore and exploit the potential regions near the maximum and minimum to update the surrogate model.
The proposed method uses the MED to improve the capability for balanced exploration and exploitation (that is, a role similar to that of EI_min and EI_max). A potential problem with the EI criteria is that the infill samples are sometimes clustered, which wastes SS samples and can cause numerical instability. This is the motivation for considering the MED instead of the EI criteria in the proposed multiple-update-infill sampling method. The MED can select infill samples from the regions of particular interest (that is, the maximum and minimum) while maximizing the inter-sample distance among the training samples. This feature of the MED benefits global surrogate modeling by avoiding the ill-conditioning caused by infill samples clustering in some regions of the input space.
In this study, five multiple-update-infill sampling methods are considered, each adding multiple infill samples to ensure global exploration and local exploitation. As a representative of the single-update-infill sampling method, the MSE is also included for comparison. The six SS methods are as follows: (1) MSE, (2) EI_max + EI_min, (3) MED_max + MED_min, (4) MSE + EI_max + EI_min, (5) MSE + MED_max + MED_min, and (6) MSE + EI_max + EI_min + MED_max + MED_min ("All").
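Putting the pieces together, one infill stage of, for example, the MSE + MED_max + MED_min method could look like the sketch below, which builds on the earlier sketches; using the minimum prediction at the training points as ŷ_min^(n) is a stand-in assumption for the firefly search used in the paper, and `high_fidelity` denotes the expensive computational model.

```python
import numpy as np

def infill_stage(model, high_fidelity, X, y, theta_grid):
    """One stage of the MSE + MED_max + MED_min method: three infill
    samples per stage, evaluated in a single (parallelizable) batch."""
    y_min_hat = predict(model, X)[0].min()        # stand-in for the firefly search
    batch = np.vstack([
        mse_infill(model),                        # pure exploration
        med_infill(model, y_min_hat, find_minimum=True),
        med_infill(model, y_min_hat, find_minimum=False),
    ])
    y_new = np.array([high_fidelity(x) for x in batch])   # expensive model runs
    X, y = np.vstack([X, batch]), np.concatenate([y, y_new])
    return fit_ordinary_kriging(X, y, theta_grid), X, y
```

Because the three infill samples are known before any model run, the batch of high-fidelity evaluations can be executed in parallel, which is one of the computational benefits discussed in Section 4.5.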

4. Case Study on the Multiple-Update-Infill Sampling Methods

This section evaluates the sampling efficiency and performance of the SS methods based on the six combinations of infill criteria. Since the purpose of this study is to evaluate the feasibility of the proposed method with a minimal number of training samples, three case studies with stationary response surfaces are considered: (1) a mathematical test function with 5 inputs, (2) a toy test function with 8 inputs, and (3) an engineering application with 13 inputs. Firstly, this section presents a preliminary description of the case studies and the four performance measures. Then, the three case studies are described with the landscapes of their response surfaces (that is, contour plots). Lastly, the results of the three case studies are discussed, and conclusions are drawn in terms of convergence rate, accuracy, sampling efficiency, and computational cost to provide a guideline for less experienced users.

4.1. Preliminary Description of the Case Studies

Three case studies (that is, the Friedman model, the borehole model, and the 2D beam frame structure) were performed to evaluate the performance of the SS methods. In surrogate modeling, it is important to normalize the input space (for example, to [−1, 1]^p or [0, 1]^p). By doing so, domination by a certain input (that is, one with a larger magnitude) can be avoided; in other words, the input space is normalized so that all inputs have an equal effect on the output in all case studies.
All multiple-update-infill sampling methods were performed 10 times under identical conditions. At each repetition, different initial samples were generated, and these initial samples were used identically for all SS methods. To evaluate the performance of the SS methods, the same validation samples (in number and location) were used for each case study. The performance measures were calculated at each infill stage for all SS methods until the maximum number of training samples was reached. For searching the infill samples and finding the minimum of the current kriging model in the MED (ŷ_min^(n) in Equation (20)), the firefly algorithm was used as an effective global optimizer [27]. The procedure of the multiple-update-infill sampling methods is given in Table 1.
This study considers four performance measures: (1) the root mean squared error (RMSE), (2) the minimum distance (d_min), (3) the pairwise correlation (ρ²), and (4) the computational efficiency (T_norm). The RMSE aggregates the magnitudes of the residuals (that is, the squared differences) into a single measure of prediction accuracy; a smaller RMSE indicates higher prediction accuracy:
$$RMSE = \sqrt{\frac{(\mathbf{Y}_{val} - \hat{\mathbf{Y}}_{val})^T (\mathbf{Y}_{val} - \hat{\mathbf{Y}}_{val})}{n_{val}}} \tag{27}$$
where Y_val and Ŷ_val are the outputs of the validation samples and their predictions, respectively, and n_val is the number of validation samples. d_min is the minimum pairwise Euclidean distance between pairs of the final training samples X_i^(final), as given in Equation (28). A larger d_min indicates that the training samples are placed as far apart from each other as possible; in this context, a larger d_min indicates efficient sampling performance in terms of uniformity.
$$d_{min} = \min_{\mathbf{x}_i, \mathbf{x}_j \in \mathbf{X}_i^{(final)}} d_2(\mathbf{x}_i, \mathbf{x}_j) \tag{28}$$
ρ² evaluates the goodness of the space-filling with respect to pairwise correlations [28]. A higher ρ² indicates poorer projection capability onto the subspaces of the input variables; therefore, a lower ρ² represents superior uniformity in the input space. ρ² is calculated as
$$\rho^2 = \frac{\sum_{i=2}^{k} \sum_{j=1}^{i-1} \rho_{ij}^2}{k(k-1)/2} \tag{29}$$
where ρ_ij² is the squared linear correlation between columns i and j. The last performance measure is T_norm. Since the single-update-infill sampling method (that is, MSE) requires more infill stages than the others, it is used as the reference for computational time. T_norm is the computational time normalized by the average computational time of the MSE-based method:
$$T_{norm,i} = \frac{T_i}{\frac{1}{N} \sum_{k=1}^{N} T_k^{MSE}} \tag{30}$$
where T_i is the computational time of the i-th SS method, T_k^MSE denotes the computational time of the k-th repetition of the MSE-based SS method, and N = 10 is the number of repetitions.
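The four measures of Equations (27)-(30) translate directly into code; the following is a minimal sketch (the timing measure T_norm is simply a ratio of wall-clock times and needs no helper).

```python
import numpy as np

def rmse(y_val, y_pred):
    # Equation (27): root mean squared error over the validation samples
    return np.sqrt(np.mean((y_val - y_pred) ** 2))

def min_pairwise_distance(X):
    # Equation (28): minimum Euclidean distance among the final training samples
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    return d[np.triu_indices(len(X), 1)].min()

def pairwise_correlation(X):
    # Equation (29): average squared linear correlation over all column pairs
    k = X.shape[1]
    rho2 = np.corrcoef(X, rowvar=False) ** 2
    return rho2[np.triu_indices(k, 1)].sum() / (k * (k - 1) / 2)
```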

4.2. Case Study I: The Friedman Function

The Friedman function has been widely used to evaluate the performance of surrogate modeling (Friedman 1991). This test function has five input variables, as given in Equation (31):
$$f(\mathbf{x}) = 10 \sin(\pi x_1 x_2) + 20 (x_3 - 0.5)^2 + 10 x_4 + 5 x_5 \tag{31}$$
Table 2 shows a baseline value and the lower and upper bound (range) of the input variables in the Friedman function.
The baseline values and ranges were used to generate filled contour plots of the Friedman function. Each subplot in Figure 7 shows a contour of the Friedman function versus two of the five variables, with the remaining three variables fixed at their baseline values. Figure 7 shows that all input variables sufficiently affect the output, and there is a strong interaction effect due to the 10 sin(π x_1 x_2) component.
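For reference, Equation (31) in code, with all five inputs in [0, 1] per Table 2:

```python
import numpy as np

def friedman(x):
    # Equation (31); x is an array-like of five inputs in [0, 1]
    x1, x2, x3, x4, x5 = x
    return (10.0 * np.sin(np.pi * x1 * x2) + 20.0 * (x3 - 0.5) ** 2
            + 10.0 * x4 + 5.0 * x5)
```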

4.3. Case Study II: The Borehole Model

A borehole model simulates water flow through a borehole drilled from the ground surface through two aquifers. This model is commonly used for testing various methods in computer experiments [29,30,31]. The input variables and their ranges are tabulated in Table 3. The output of this model is defined as
$$f(\mathbf{x}) = \frac{2\pi x_3 (x_4 - x_6)}{\ln(x_2/x_1) \left( 1 + \dfrac{2 x_7 x_3}{\ln(x_2/x_1)\, x_1^2 x_8} + \dfrac{x_3}{x_5} \right)} \tag{32}$$

where the output is the water flow rate.
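A direct transcription of Equation (32), using the variable ordering and ranges of Table 3:

```python
import numpy as np

def borehole(x):
    # Equation (32); x = [x1, ..., x8] with the meanings and ranges of Table 3
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    log_ratio = np.log(x2 / x1)
    denom = log_ratio * (1.0 + 2.0 * x7 * x3 / (log_ratio * x1 ** 2 * x8) + x3 / x5)
    return 2.0 * np.pi * x3 * (x4 - x6) / denom
```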
Figure 8 shows the contour plots of the response surfaces of the borehole model. Some of the input variables (x_2, x_3, and x_5) appear to be non-influential for the output; as a result, some contour plots are flat (for example, x_2 versus x_3). The contour plot of x_1 versus x_8 exhibits a non-linear response surface, while the remaining contour plots are linear. The borehole model therefore forms a complex response surface in the eight-dimensional space.

4.4. Case Study III: The FE Model Based on a 3-Bay-5-Story Frame Structure

As an example of an engineering application, a three-bay, five-story frame structure is investigated. The frame structure was modeled using Euler–Bernoulli beam elements with a lumped mass system, and its finite element model was constructed in MATLAB [32]. The displacement of the top floor is considered as the target output. For the input variables, 13 model parameters and their lower/upper bounds were selected, as tabulated in Table 4. Figure 9 shows the frame structure with the input variables and the target output.
Figure 10 shows the contour plots of the response surfaces. The input variable P1 is the most influential, since it has larger values than the others and acts on the third to fifth stories of the frame. The response surfaces are smooth without any discontinuity; the finite element model therefore generates smooth response surfaces in a high-dimensional space.

4.5. Results and Discussion

All methods were performed with 30 initial samples for the case studies. To generate different initial samples, the Mm LHD was performed 10 times. For the validation samples, 20,000 samples were generated using a quasi-random low-discrepancy sequence (that is, a Sobol sequence). Based on the initial samples of each repetition, all SS methods were conducted until the number of final samples, tabulated in Table 5, was reached.
Based on the results of the case studies, the performance of all methods was evaluated in terms of convergence, accuracy, sampling efficiency, and computational cost. Firstly, the convergence rate was evaluated by the rate of decrease of the RMSE values; a rapidly decreasing RMSE indicates that the method accelerates global surrogate modeling. Secondly, the accuracy of the surrogate model was evaluated by the RMSE values at the final samples; a lower RMSE indicates superior global accuracy. Next, the pairwise correlation and minimum distance were used to measure the clustering of the infill samples for sampling efficiency. Lastly, the computational cost was compared in terms of normalized time.
Convergence: The results of the case studies are shown in Figure 11, Figure 12, and Figure 13, respectively. For all case studies, MED_max + MED_min and MSE + MED_max + MED_min showed the best convergence performance with small variability. In Case Study 2, MSE and EI_max + EI_min were the worst SS methods in terms of convergence (that is, slow convergence and larger variability). In Case Study 3, EI_max + EI_min showed the worst convergence performance (that is, divergence and larger variability). MSE + EI_max + EI_min and "All" provided convergence performances superior to those of MSE and EI_max + EI_min; however, their convergence performance was not consistent across the case studies.
Accuracy: Accuracy was evaluated based on the RMSE values of the final surrogate model; in Figure 11b, Figure 12b, and Figure 13b, the last RMSE value indicates the accuracy, with lower values indicating superior accuracy. MED_max + MED_min and MSE + MED_max + MED_min provided the lowest RMSE values for all case studies. In addition, their RMSE values were less dispersed than those of the other SS methods, as shown in Figure 11, Figure 12, and Figure 13. "All" also provided low RMSE values but with a larger dispersion than MED_max + MED_min and MSE + MED_max + MED_min. The other SS methods were not as good as the abovementioned methods. In Case Study 3, MSE showed excellent convergence but poor accuracy.
Sampling efficiency: The sampling efficiency is measured by the uniformity of the samples in the final surrogate model, as shown in Figure 14, Figure 15, and Figure 16. As expected, MSE had the smallest values of ρ² with large d_min values. Similar performance can be observed for MED_max + MED_min and MSE + MED_max + MED_min, indicating that the final samples are spread over the input space as far apart from each other as possible. On the contrary, the other SS methods have larger values of ρ² and smaller values of d_min, implying that some of the training samples are closely placed; in other words, some samples are clustered in certain regions of the input space. This may result in numerical instability in the construction of the surrogate model.
Computational efficiency: The computational cost of each SS method is shown in Figure 14, Figure 15, and Figure 16. As expected, the values of T_norm became smaller as more infill samples were added at each infill stage. The optimization scheme (the firefly algorithm) was implemented to search for the infill samples, which required numerous evaluations of the current surrogate model; however, the computational cost of the optimization scheme would be negligible in applications with computationally intensive models.
General discussion: MSE + MED_max + MED_min showed the best performance in terms of convergence, accuracy, sampling, and computational efficiency, while EI_max + EI_min showed the worst. Since EI_max and EI_min were originally developed to find the global optima, they tend to locate more infill samples in the regions near the optima; as the surrogate model becomes more accurate, these criteria place infill samples near the global optima for exploitation. On the contrary, MED_max and MED_min allocate infill samples far apart from the existing samples. Since multiple infill criteria save computational cost as discussed above, MSE + MED_max + MED_min can accelerate the learning of the unknown response surface while avoiding the numerical instability caused by samples that are too close to each other. It is worth noting that the MED does not require a specific surrogate model: many infill criteria (for example, the MSE and EI) apply only to the kriging model, whereas the multiple-update-infill sampling method using the MED criteria (that is, MED_max + MED_min) can be applied to any surrogate model (for example, the penalized spline regression model [33], radial basis functions [34], and multivariate adaptive regression splines (MARS) [22]).
Limitations: The proposed method has limitations for some applications. Since the kriging model in this study is based on a stationary correlation specification, it cannot suitably represent strongly non-stationary or noisy response surfaces (that is, those with high-frequency components due to approximation and truncation errors). For such response surfaces, the kriging model with a stationary correlation specification may suffer numerical instabilities and poor predictive performance because the stationarity assumption is violated. To handle such response surfaces, treed Gaussian processes [35] or a non-stationary covariance-based kriging method [36] are needed to ensure the predictive performance of the kriging model.

5. Conclusions

Many engineering applications require a large number of analyses of time-consuming and computationally intensive models (that is, high-fidelity models). Surrogate models are a solution to this computational burden. SS methods add infill samples to the current training samples using infill criteria and refine the current surrogate model in a sequential framework. One advantage of SS methods is their generative learning capability on the underlying, unknown response surface. Numerous infill criteria have been developed for SS methods, targeting exploitation, exploration, or balanced exploitation and exploration. Multiple-update-infill sampling simultaneously uses multiple infill criteria to accelerate global surrogate modeling.
This study conducted case studies on several multiple-update-infill sampling methods for the global quality of the surrogate model. Three infill criteria were selected: the MSE (pure exploration) and the EI and MED (balanced exploitation and exploration). Based on these infill criteria, six combinations were considered, and the performances of the resulting infill sampling methods were compared through three case studies of increasing input dimensionality.
Based on the results of the case studies, several observations can be made. Firstly, reducing the sparseness of the input space (that is, MSE) may not improve the global quality of the final surrogate model. In Case Study 2, MSE consistently provided higher RMSE values than the other SS methods, except for EI_max + EI_min. This indicates that it is more efficient to draw infill samples from the particular regions of interest (the maximum and minimum) than from pure exploration (MSE). Secondly, some combinations of the infill criteria may cause numerical instability because of samples that are too closely spaced. Lower values of d_min with higher values of ρ² were observed for the three combinations including EI_max and EI_min. These criteria can increase the risk of a singular matrix in the surrogate model construction, since they occasionally locate the infill samples near the optima to exploit the response surface. In this context, MSE, MED_max + MED_min, and MSE + MED_max + MED_min are preferred in terms of numerical stability and sampling uniformity. Thirdly, multiple infill criteria can save computational time: as expected, more infill samples at each stage further reduce the computational time. A further benefit of multiple infill criteria is the possibility of parallel computing for the high-fidelity model. Considering that the analysis of the high-fidelity model is the most intensive and expensive part of the SS methods, parallel implementation can significantly reduce their computational time.
Across the three case studies, the multiple-update-infill sampling methods using the MED (MED_max + MED_min and MSE + MED_max + MED_min) showed the best performance in terms of convergence, accuracy, sampling, and computational efficiency for the global quality of the surrogate model. Implementing both MED_max and MED_min simultaneously generates infill samples with larger and smaller output values at larger inter-sample distances, so that they can densely sample the regions of particular interest while avoiding the risk of numerical instability. MSE + MED_max + MED_min can save more computational time than MED_max + MED_min with a parallel implementation of the high-fidelity model. It is worth noting that the MSE and EI are available only for the kriging model, whereas the MED can be applied to any surrogate model. In this context, the MED is a good infill criterion for global surrogate modeling that can be automatically adapted to various response surfaces, ranging from smooth to complex ones.

Acknowledgments

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIP) (No. 2017R1A5A1014883) and by a grant (17CTAP-C132730-01) from the Technology Advancement Research Program (TARP) funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Author Contributions

S.-S.J. and H.-J.J. conceived and designed the research; Y.H. and S.K. performed the computational experiments and analyzed the results; S.-S.J. contributed materials/analysis tools; Y.H. and S.-L.C. wrote the paper under the supervision of S.-S.J. and H.-J.J.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jung, B.C.; Park, J.; Oh, H.; Kim, J.; Youn, B.D. A framework of model validation and virtual product qualification with limited experimental data based on statistical inference. Struct. Multidiscip. Optim. 2015, 51, 573–583. [Google Scholar] [CrossRef]
  2. Forrester, A.; Keane, A. Engineering Design via Surrogate Modelling: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  3. Queipo, N.V.; Haftka, R.T.; Shyy, W.; Goel, T.; Vaidyanathan, R.; Tucker, P.K. Surrogate-based analysis and optimization. Prog. Aerosp. Sci. 2005, 41, 1–28. [Google Scholar] [CrossRef]
  4. Ren, W.-X.; Chen, H.-B. Finite element model updating in structural dynamics by using the response surface method. Eng. Struct. 2010, 32, 2455–2465. [Google Scholar] [CrossRef]
  5. Yang, X.; Guo, X.; Ouyang, H.; Li, D. A kriging model based finite element model updating method for damage detection. Appl. Sci. 2017, 7, 1039. [Google Scholar] [CrossRef]
  6. McKay, M.D.; Beckman, R.J.; Conover, W.J. Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 1979, 21, 239–245. [Google Scholar]
7. Ye, K.Q. Orthogonal column Latin hypercubes and their application in computer experiments. J. Am. Stat. Assoc. 1998, 93, 1430–1439. [Google Scholar] [CrossRef]
  8. Fang, K.T.; Lin, D.K.J.; Winker, P.; Zhang, Y. Uniform design: Theory and application. Technometrics 2000, 42, 237–248. [Google Scholar] [CrossRef]
9. Dette, H.; Pepelyshev, A. Generalized Latin hypercube design for computer experiments. Technometrics 2010, 52, 421–429. [Google Scholar] [CrossRef]
  10. Jin, S.S.; Jung, H.J. Self-adaptive sampling for sequential surrogate modeling of time-consuming finite element analysis. Smart Struct. Syst. 2016, 17, 611–629. [Google Scholar] [CrossRef]
  11. Jones, D.R. A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 2001, 21, 345–383. [Google Scholar] [CrossRef]
  12. Joseph, V.R.; Dasgupta, T.; Tuo, R.; Wu, C.F.J. Sequential exploration of complex surfaces using minimum energy designs. Technometrics 2015, 57, 64–74. [Google Scholar] [CrossRef]
  13. Jin, S.S.; Jung, H.J. Sequential surrogate modeling for efficient finite element model updating. Comput. Struct. 2016, 168, 30–45. [Google Scholar] [CrossRef]
  14. Sacks, J.; Welch, W.J.; Mitchell, T.J.; Wynn, H.P. Design and analysis of computer experiments. Stat. Sci. 1989, 4, 409–423. [Google Scholar] [CrossRef]
  15. Parr, J.; Keane, A.; Forrester, A.I.; Holden, C. Infill sampling criteria for surrogate-based optimization with constraint handling. Eng. Optim. 2012, 44, 1147–1166. [Google Scholar] [CrossRef]
  16. Schonlau, M.; Welch, W.J.; Jones, D.R. Global versus local search in constrained optimization of computer models. Lect. Notes Monogr. Ser. 1998, 34, 11–25. [Google Scholar] [CrossRef]
17. Krige, D.G. A statistical approach to some basic mine valuation problems on the Witwatersrand. J. South. Afr. Inst. Min. Metall. 1951, 52, 119–139. [Google Scholar]
  18. Morris, M.D.; Mitchell, T.J. Exploratory designs for computational experiments. J. Stat. Plan. Inference 1995, 43, 381–402. [Google Scholar] [CrossRef]
  19. Joseph, V.R. Space-filling designs for computer experiments: A review. Qual. Eng. 2016, 28, 28–35. [Google Scholar] [CrossRef]
  20. Liu, H.; Cai, J.; Ong, Y.-S. An adaptive sampling approach for kriging metamodeling by maximizing expected prediction error. Comput. Chem. Eng. 2017, 106, 171–182. [Google Scholar] [CrossRef]
  21. Liu, J.; Han, Z.; Song, W. Comparison of infill sampling criteria in kriging-based aerodynamic optimization. In Proceedings of the 28th Congress of the International Council of the Aeronautical Sciences, Brisbane, Australia, 23–28 September 2012; pp. 23–28. [Google Scholar]
  22. Wang, C.; Duan, Q.; Gong, W.; Ye, A.; Di, Z.; Miao, C. An evaluation of adaptive surrogate modeling based optimization with two benchmark problems. Environ. Model. Softw. 2014, 60, 167–179. [Google Scholar] [CrossRef]
  23. Stuckman, B.E. A global search method for optimizing nonlinear systems. IEEE Trans. Syst. Man Cybern. 1988, 18, 965–977. [Google Scholar] [CrossRef]
  24. Cox, D.D.; John, S. A statistical method for global optimization. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Chicago, IL, USA, 18–21 October 1992; pp. 1241–1246. [Google Scholar]
  25. Deschrijver, D.; Crombecq, K.; Nguyen, H.M.; Dhaene, T. Adaptive sampling algorithm for macromodeling of parameterized s-parameter responses. IEEE Trans. Microw. Theory Tech. 2011, 59, 39–45. [Google Scholar] [CrossRef]
26. Liu, H.T.; Xu, S.L.; Ma, Y.; Chen, X.D.; Wang, X.F. An adaptive Bayesian sequential sampling approach for global metamodeling. J. Mech. Des. 2016, 138, 011404. [Google Scholar] [CrossRef]
  27. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspir. Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
28. Tang, B. Selecting Latin hypercubes using correlation criteria. Stat. Sin. 1998, 8, 965–977. [Google Scholar]
  29. Moon, H.; Dean, A.M.; Santner, T.J. Two-stage sensitivity-based group screening in computer experiments. Technometrics 2012, 54, 376–387. [Google Scholar] [CrossRef]
  30. Morris, M.D.; Mitchell, T.J.; Ylvisaker, D. Bayesian design and analysis of computer experiments: Use of derivatives in surface prediction. Technometrics 1993, 35, 243–255. [Google Scholar] [CrossRef]
  31. Zhou, Q.; Qian, P.Z.; Zhou, S. A simple approach to emulation for computer models with qualitative and quantitative factors. Technometrics 2011, 53, 266–273. [Google Scholar] [CrossRef]
32. Khennane, A. Introduction to Finite Element Analysis Using MATLAB® and Abaqus; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  33. Vu-Bac, N.; Lahmer, T.; Zhuang, X.; Nguyen-Thoi, T.; Rabczuk, T. A software framework for probabilistic sensitivity analysis for computationally expensive models. Adv. Eng. Softw. 2016, 100, 19–31. [Google Scholar] [CrossRef]
  34. Gutmann, H.M. A radial basis function method for global optimization. J. Glob. Optim. 2001, 19, 201–227. [Google Scholar] [CrossRef]
  35. Gramacy, R.B.; Lee, H.K.H. Adaptive design and analysis of supercomputer experiments. Technometrics 2012, 51, 130–145. [Google Scholar] [CrossRef]
  36. Xiong, Y.; Chen, W.; Apley, D.; Ding, X. A non-stationary covariance-based kriging method for metamodelling in engineering design. Int. J. Numer. Methods Eng. 2007, 71, 733–756. [Google Scholar] [CrossRef]
Figure 1. An illustration of surrogate modeling.
Figure 2. An illustration of the sequential sampling method.
Figure 3. The mean squared error (MSE).
Figure 4. (a) The expected improvement on the minimum (EI_min) and (b) the maximum (EI_max).
Figure 5. The repulsion principle for minimum potential energy in electrostatics.
Figure 6. (a) Minimum energy design on minimum (MED_min) and (b) maximum (MED_max).
Figure 7. The contour plots for the response surfaces of the Friedman function.
Figure 8. The contour plots for the response surfaces of the borehole model (water flow rate).
Figure 9. The three-bay, five-story frame structure.
Figure 10. The contour plots for the response surfaces of the five-story frame structure (axial displacement).
Figure 11. The convergence rates of root mean square errors (RMSE) in Case Study 1: (a) mean; (b) mean (zoom-in); (c) standard deviation; (d) standard deviation (zoom-in).
Figure 12. The convergence rates of RMSE in Case Study 2: (a) mean; (b) mean (zoom-in); (c) standard deviation; (d) standard deviation (zoom-in).
Figure 13. The convergence rates of RMSE in Case Study 3: (a) mean; (b) mean (zoom-in); (c) standard deviation; (d) standard deviation (zoom-in).
Figure 14. The performance evaluations in Case Study 1: (a) d_min versus RMSE; (b) RMSE versus ρ²; (c) d_min versus ρ²; (d) RMSE versus T_norm; (e) d_min versus T_norm.
Figure 15. The performance evaluations in Case Study 2: (a) d_min versus RMSE; (b) RMSE versus ρ²; (c) d_min versus ρ²; (d) RMSE versus T_norm; (e) d_min versus T_norm.
Figure 16. The performance evaluations in Case Study 3: (a) d_min versus RMSE; (b) RMSE versus ρ²; (c) d_min versus ρ²; (d) RMSE versus T_norm; (e) d_min versus T_norm.
Table 1. The flowchart of the multiple-update-infill sampling methods in the case studies.
Input: number of initial training samples n_0; validation samples X_val; number of final training samples n_final
1. Generate validation samples over input space
 (1) Generate X v a l using Mm LHD (Equation (10))
 (2) Compute outputs Y v a l using the computational model of X v a l
2. Perform sequential sampling method
 For i = 1 , , 10
 (1) Set current infill stage to n s = 1
 (2) Generate X i ( n s ) using Mm LHD (Equation (10)) with n 0
 (3) Compute outputs Y i ( n s ) using the computational model of X i ( n s )
 (4) Construct kriging model ( K R G i ( n s ) ) using X i ( n s ) and Y i ( n s )
 (5) Set # of training sample to n t r a i n = n 0
  Until ( n t r a i n = n f i n a l )
  (6) Predict outputs of X v a l ( Y ^ v a l ) using K R G i ( n s )
  (7) Evaluate and save performance measures using Y v a l and Y ^ v a l
  (8) Search the infill sample(s) ( X i n f i l l ) based on K R G i ( n s )
  (9) Compute outputs Y i n f i l l using computational model of X i n f i l l
  (10) Add X i n f i l l and Y i n f i l l to X i ( n s ) and Y i ( n s )
  (11) Update kriging model ( K R G i ( n s ) ) using X i ( n s ) and Y i ( n s )
  (12) Set n t r a i n = n t r a i n + N ( X i n f i l l ) and n s = n s + 1
  End
  (13) Save final results: K R G i ( f i n a l ) , X i ( f i n a l ) , and history of the performance measure
End
Note: N ( X i n f i l l ) denotes the number of the infill sample(s).
Table 2. The nomenclature of the Friedman function.

Input Variable | Baseline | Lower Bound | Upper Bound
x1 | 0.5 | 0 | 1
x2 | 0.5 | 0 | 1
x3 | 0.5 | 0 | 1
x4 | 0.5 | 0 | 1
x5 | 0.5 | 0 | 1
Table 3. The nomenclature of the borehole model [30].

Input Variable | Description | Baseline | Lower Bound | Upper Bound
x1 | Radius of borehole | 0.1 | 0.05 | 0.15
x2 | Radius of influence | 25,050 | 100 | 50,000
x3 | Transmissivity of upper aquifer | 89,335 | 63,070 | 115,600
x4 | Potentiometric head of upper aquifer | 1050 | 990 | 1110
x5 | Transmissivity of lower aquifer | 89.55 | 63.1 | 116
x6 | Potentiometric head of lower aquifer | 760 | 700 | 820
x7 | Length of borehole | 1400 | 1120 | 1680
x8 | Hydraulic conductivity of borehole | 8250 | 9855 | 12,045
Table 4. The five-story frame structure variables.

Input Variable | Description | Baseline | Lower Bound | Upper Bound
P1 | Load (kN) | 133.45 | 53.38 | 333.63
P2 | Load (kN) | 88.97 | 35.59 | 222.43
P3 | Load (kN) | 71.18 | 28.47 | 177.95
E4 | Young's Modulus (MPa) | 21,738 | 10,869 | 32,606
E5 | Young's Modulus (MPa) | 23,796 | 11,898 | 35,659
I6 | Moment of Inertia (m⁴) | 0.0081 | 0.0041 | 0.0122
I7 | Moment of Inertia (m⁴) | 0.0115 | 0.0058 | 0.0173
I8 | Moment of Inertia (m⁴) | 0.0232 | 0.0116 | 0.0348
I9 | Moment of Inertia (m⁴) | 0.0259 | 0.0130 | 0.0389
A10 | Sectional Area (m²) | 0.0312 | 0.0156 | 0.0468
A11 | Sectional Area (m²) | 0.3716 | 0.1858 | 0.5574
A12 | Sectional Area (m²) | 0.3725 | 0.1863 | 0.5588
A13 | Sectional Area (m²) | 0.4181 | 0.2091 | 0.6272
Table 5. The preliminary setups for all SS methods.

Case Study | # Initial Samples | # Final Samples | # Validation Samples
1. Friedman Function | 30 | 210 | 20,000
2. Borehole Model | 30 | 240 | 20,000
3. FE Model Based on 3-Bay-5-Story Frame Structure | 30 | 300 | 20,000
