Article

An Improved Blind Kriging Surrogate Model for Design Optimization Problems

1
Deep Learning Architectural Research Center, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
2
Faculty of Mechanical Technology, Industrial University of Ho Chi Minh City, Ho Chi Minh City 72308, Vietnam
3
School of Architecture, Yeungnam University, 280, Daehak-ro, Gyeongsan 38541, Korea
4
CIRTech Institute, HUTECH University, Ho Chi Minh City 72308, Vietnam
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2906; https://doi.org/10.3390/math10162906
Submission received: 15 June 2022 / Revised: 1 August 2022 / Accepted: 10 August 2022 / Published: 12 August 2022

Abstract

Surrogate modeling techniques are widely employed in solving constrained expensive black-box optimization problems. Among them, Kriging is one of the most popular surrogates, in which the trend function is treated as a constant mean. However, it encounters several challenges in capturing the overall trend with a relatively limited number of function evaluations, as well as in searching for feasible points when the feasible region is complex or discontinuous. To address these issues, this paper presents an improved blind Kriging surrogate (IBK) and a combined infill strategy to find the optimal solution. To enhance the prediction accuracy of the metamodels of the objective and constraints, the high-order effects of the regression function in blind Kriging are identified by a Bayesian variable selection technique. In addition, an infill strategy based on the probability of feasibility, penalization, and constrained expected improvement is developed for updating the blind Kriging metamodels of the objective and constraints. At each iteration, two infill sample points are allocated at positions that improve optimality and feasibility. The IBK metamodels are updated with the newly added infill sample points, which leads the proposed framework to converge rapidly to the optimal solution. The performance and applicability of the proposed model are tested on several numerical benchmark problems by comparison with other metamodel-based constrained optimization methods. The obtained results indicate that IBK is generally more efficient and outperforms the competitors under a limited number of function evaluations. Finally, IBK is successfully applied to structural design optimization. The optimization results show that IBK finds the best feasible design with fewer function evaluations than other studies, demonstrating the effectiveness and practicality of the proposed model for solving constrained expensive black-box engineering design optimization problems.

1. Introduction

The optimization process is critical in engineering design, which demands low computational cost, robustness, stability, and accuracy. However, most conventional optimization techniques face several challenges in solving black-box global optimization problems, such as unavailable closed-form expressions or gradient information for the objective function and constraints, and time-consuming evaluations. One way to circumvent these issues is to rely on a surrogate model (SM) [1,2]. Here, the SM is an invaluable tool used to approximate the expensive computational models, directly or indirectly, during the optimization process. Commonly used metamodels include response surface methodology [3,4], radial basis functions (RBF) [5,6], Kriging [7,8], support vector regression (SVR) [9], neural networks (NN) [10,11,12,13], inverse distance weighting (IDW) [14], and so on. To reach the optimal solution, a sampling technique and an infill strategy are employed to build and iteratively refine the metamodel, improving the solution found during the search. Therein, the sampling technique, also known as design of experiments, creates a set of points over the domain, and the initial metamodel is fitted to the observed points. Various sampling techniques for generating the sample points have been developed, including Monte Carlo, random, and Latin hypercube sampling (LHS). For more details, interested readers are referred to ref. [15].
In the past decade, variants of metamodel-based optimization methods have been successfully developed for solving optimization problems with expensive simulations. For instance, Jones et al. [16] proposed the efficient global optimization (EGO) method, in which Kriging and expected improvement (EI) are combined to find the solution. Huang et al. [17,18] extended the EGO algorithm by using an augmented EI function to determine the next sampling point. Additionally, a new infill strategy with adaptive radius and direction search was developed by Dong et al. [19] to locate all local optima. To reduce the dimension of the hyper-parameters, Zhao [20] incorporated the maximal information coefficient into Kriging. Gutmann [21] used a measure of bumpiness in the RBF model to find the global minimum. Regis and Shoemaker [22,23,24] proposed stochastic response surfaces to identify promising points. Shepard [14] introduced IDW, and Joseph [25] later added a linear regression function to improve its accuracy. A combination of IDW and RBF was suggested by Bemporad [26].
Constrained black-box optimization problems are common in many practical engineering designs and have therefore attracted remarkable attention from researchers in recent years. Several surrogate-based methods have been successfully applied to address this issue, such as ConstrLMSRBF [27], COBRA [28], RCGO [6], and CARS [29]. Among surrogate models, Kriging has attracted more attention than others owing to its ability to estimate the prediction error and to approximate highly nonlinear functions. Therein, its regression function can be regarded as the part that tries to capture the general trend, and thus the largest variations, of the data. Recently, Li et al. [30] proposed a new Kriging-based constrained global optimization algorithm in which the global optimal solution is obtained in two pivotal phases. An enhanced approach based on a modification of the probability of improvement algorithm was presented by Carpio et al. [31]. Additionally, Qian et al. [32] illustrated a new infill strategy in which the position of the new sampling point is the intersection between the confidence interval and the constraint boundary. A combination of Kriging and a mixture of experts was proposed by Bartoli et al. [33] to improve model accuracy. In addition, Forrester and Keane [34] presented a constrained expected improvement criterion to obtain new samples for updating the surrogate model. Similarly, Shi et al. [35] developed a probability of constrained improvement based on filter technology. However, several challenges remain when functions are multimodal and nonlinear, the sample size is limited, and a strong overall trend exists. Moreover, identifying the right regression function for a data set with interactions between variables is a difficult task. To address these issues, researchers have tried to improve the prediction accuracy of Kriging by adjusting the regression function, the stochastic process, or both. Joseph et al. [36] presented the blind Kriging model, in which the optimal basis functions are estimated by a Bayesian variable selection technique. Additionally, Kersaudy et al. [37] introduced a combination of polynomial chaos expansions and universal Kriging. Zhang et al. [38,39] proposed a regularization method for constructing the trend function and the penalized blind likelihood Kriging. Nevertheless, the above-mentioned models were established only for unconstrained optimization problems; to the best of our knowledge, blind Kriging has not yet been utilized for constrained optimization.
In this study, an improved blind Kriging surrogate is first presented to handle computationally expensive constrained optimization problems. In our work, the higher-order effects of the trend function are estimated by a Bayesian variable selection technique. Simultaneously, a new infill strategy is developed based on the probability of feasibility, penalization, and constrained expected improvement to refine the surrogate models and improve the solution found during the optimization process. Under this scheme, two sample points are selected at each iteration, corresponding to the exploration and exploitation stages. This helps the IBK metamodel learn complex or discontinuous feasible domains. The performance and applicability of the proposed approach are demonstrated through several benchmark optimization problems, and the outcomes are compared with those of other models to evaluate its efficiency and reliability. The obtained results show that our approach is able to find the best feasible design with fewer function evaluations.
The rest of the paper is organized as follows. Section 2 introduces the IBK model. Next, the combined infill strategy is presented in Section 3. Afterward, several numerical examples are investigated in Section 4 to demonstrate the efficiency of the proposed model. Finally, conclusions are outlined in Section 5.

2. Improved Blind Kriging

2.1. Basics

The primary objective of a metamodel is to approximate a function from a set of data points. A brief summary of Kriging and some basic formulas are given in this subsection.
Consider a function $y(x)$ defined on the domain $\Omega$. Let $\mathbf{X} = [x^{(1)}, x^{(2)}, \ldots, x^{(n)}]^T$ with $x \in \mathbb{R}^d$ denote the $n$ sample points and $\mathbf{y} = [y^{(1)}, y^{(2)}, \ldots, y^{(n)}]^T$ with $y \in \mathbb{R}$ the corresponding true response values. The universal Kriging (UK) model postulates a combination of a regression function and a stochastic process, as shown in Equation (1):
$Y(x) = f(x) + Z(x),$ (1)
where $Y(x)$ represents the black-box function, $f(x)$ denotes the regression (trend) function, and $Z(x)$ is a Gaussian process with zero mean and stationary covariance. For blind Kriging, the trend function takes the form of Equation (2):
$f(x) = \nu(x)^T \beta_m,$ (2)
in which $\nu(x) = [1, \nu_1(x), \ldots, \nu_m(x)]^T$ is the vector of known basis functions, $m$ is the number of basis functions, and $\beta_m = [\beta_0, \beta_1, \ldots, \beta_m]^T$ denotes the unknown coefficients [40]. All of them are identified through the feature selection method in Section 2.2.
Let $\Psi$ be the correlation matrix of the samples, $\psi(x)$ the correlation vector between a point $x$ and the sample points in the data set, and $F_m$ the model matrix of the sample points. Then the value predicted at any point $x$ by the blind Kriging model is given as
$\hat{y}(x) = \nu(x)^T \hat{\beta}_m + \psi(x)^T \Psi^{-1} \left(y - F_m \hat{\beta}_m\right),$ (3)
where the process variance $\hat{\sigma}_m^2$ and the generalized least-squares estimate $\hat{\beta}_m$ are calculated as follows:
$\hat{\beta}_m = \left(F_m^T \Psi^{-1} F_m\right)^{-1} F_m^T \Psi^{-1} y, \qquad \hat{\sigma}_m^2 = \frac{\left(y - F_m \hat{\beta}_m\right)^T \Psi^{-1} \left(y - F_m \hat{\beta}_m\right)}{n}.$ (4)
To determine the optimal hyper-parameters $\theta$, the differential evolution (DE) algorithm is employed in this paper to maximize the likelihood function. The search space is restricted to $(10^{-3}, 10^{2})$ [34,41]. It should be noted that $\theta$ is estimated twice: first at $m = 0$, and again after $m$ has been chosen. The interested reader is referred to [36,39,40] for more details on deriving these quantities.
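As a concrete illustration of Equations (3) and (4), the following Python sketch fits an ordinary Kriging model (constant trend, $m = 0$) with a Gaussian correlation $\psi(h) = \exp(-\theta h^2)$. The fixed $\theta$, the tiny nugget, and the simplified MSE (which drops the trend-estimation term of Equation (5)) are assumptions made for brevity, not the paper's implementation, which tunes $\theta$ by DE.

```python
import numpy as np

def fit_kriging(X, y, theta=10.0, nugget=1e-12):
    """Ordinary Kriging (constant trend, m = 0) with Gaussian correlation."""
    H = X[:, None] - X[None, :]
    Psi = np.exp(-theta * H**2) + nugget * np.eye(len(X))   # correlation matrix
    Psi_inv = np.linalg.inv(Psi)
    ones = np.ones(len(X))
    beta = (ones @ Psi_inv @ y) / (ones @ Psi_inv @ ones)   # Eq. (4) with F = 1
    sigma2 = (y - beta) @ Psi_inv @ (y - beta) / len(X)     # process variance
    return Psi_inv, beta, sigma2

def predict(x, X, y, Psi_inv, beta, sigma2, theta=10.0):
    """Eq. (3) predictor and a simplified MSE (trend-estimation term omitted)."""
    psi = np.exp(-theta * (x - X)**2)
    mu = beta + psi @ Psi_inv @ (y - beta)
    s2 = sigma2 * (1.0 - psi @ Psi_inv @ psi)
    return mu, max(s2, 0.0)
```

At a sampled location the prediction reproduces the observed value and the MSE is numerically zero, which is the interpolation property discussed around Figure 1.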
The mean square error (MSE) at points in the design domain is given by [42]
$\hat{s}^2(x) = \hat{\sigma}_m^2 \left[ 1 - \psi(x)^T \Psi^{-1} \psi(x) + \left(F_m^T \Psi^{-1} \psi(x) - \nu(x)\right)^T \left(F_m^T \Psi^{-1} F_m\right)^{-1} \left(F_m^T \Psi^{-1} \psi(x) - \nu(x)\right) \right].$ (5)
To provide insight, a single-variable test function, assumed to be a black-box objective function, is fitted from five initial samples, as shown in Figure 1. Here, the solid black line represents the true function, the blue dashed line shows the predicted value, the red dots denote the set of sample points, and the solid red line shows the root mean squared error. It is easily seen that the predicted value at the sample points equals the real value, and the MSE there is zero. Note that the MSE is used as a measure of the uncertainty of each prediction and to estimate the approximation accuracy of the BK model.

2.2. Variable Selection

As mentioned by Couckuyt [40], BK can capture most of the variance in the data set by data analysis methods. A collection of candidate functions is considered for selection using the cross-validation prediction error (CVPE). We now consider a trend function that combines the regression function with the set of candidate functions to fit the data in the following linear model:
$f(x) = \sum_{i=0}^{m} \beta_i \nu_i(x) + \sum_{i=0}^{t} \alpha_i u_i(x),$ (6)
where $t$ is the number of candidate functions; $u(x) = [1, u_1(x), \ldots, u_t(x)]$ denotes the set of candidate functions, and $\alpha$ is the vector of corresponding coefficients. Note that $\beta$ has already been determined independently of $\alpha$. Since the true function can be highly nonlinear, the number of candidate features may be larger than the number of sample points, so all the coefficients $\alpha_i$ cannot be determined directly. In this study, a prior distribution on all parameters of the linear model is utilized to overcome this issue. Additionally, the trend function is improved by considering high-order effects in the Bayesian variable selection, including linear, quadratic, cubic, quartic, and two-factor interaction effects. Consequently, the total number of candidate variables, including the mean term, is $t = 4d^2 + 2\sum_{i=1}^{d-1} i$. Let us consider equally spaced five-level factors with levels $l_1, l_2, l_3, l_4$, and $l_5$, defined as follows:
$l_1 = \min(X), \quad l_2 = \frac{\operatorname{mean}(X)}{2}, \quad l_3 = \operatorname{mean}(X), \quad l_4 = \frac{3\operatorname{mean}(X)}{2}, \quad l_5 = \max(X).$ (7)
Therefore, the model matrix using orthogonal polynomial coding, whose columns have a squared length of 5, is
$U_j = \begin{bmatrix} 1 & -\sqrt{2} & \sqrt{10/7} & -1/\sqrt{2} & 1/\sqrt{14} \\ 1 & -1/\sqrt{2} & -\sqrt{5/14} & \sqrt{2} & -\sqrt{8/7} \\ 1 & 0 & -\sqrt{10/7} & 0 & \sqrt{18/7} \\ 1 & 1/\sqrt{2} & -\sqrt{5/14} & -\sqrt{2} & -\sqrt{8/7} \\ 1 & \sqrt{2} & \sqrt{10/7} & 1/\sqrt{2} & 1/\sqrt{14} \end{bmatrix}.$ (8)
The corresponding correlation matrix is
$\Psi_j = \begin{bmatrix} 1 & \psi_j(l_2) & \psi_j(l_3) & \psi_j(l_4) & \psi_j(l_5) \\ \psi_j(l_2) & 1 & \psi_j(l_2) & \psi_j(l_3) & \psi_j(l_4) \\ \psi_j(l_3) & \psi_j(l_2) & 1 & \psi_j(l_2) & \psi_j(l_3) \\ \psi_j(l_4) & \psi_j(l_3) & \psi_j(l_2) & 1 & \psi_j(l_2) \\ \psi_j(l_5) & \psi_j(l_4) & \psi_j(l_3) & \psi_j(l_2) & 1 \end{bmatrix},$ (9)
and the variance-covariance matrix $R_j \in \mathbb{R}^{5 \times 5}$ is calculated as follows:
$R_j = U_j^{-1} \Psi_j \left(U_j^{-1}\right)^T.$ (10)
Note that only the diagonal elements of $R_j$ are used [43]. Hence, the $i$th element of the diagonal matrix $R \in \mathbb{R}^{(t+1) \times (t+1)}$ can be written as
$R_{i,i} = \prod_{j=1}^{d} r_{j,l}^{\,l_{ij}} \, r_{j,q}^{\,q_{ij}} \, r_{j,c}^{\,c_{ij}} \, r_{j,qr}^{\,qr_{ij}},$ (11)
in which $l_{ij} = 1$ if $\alpha_i$ includes the linear effect of factor $j$ and 0 otherwise. Similarly, $q_{ij}$, $c_{ij}$, and $qr_{ij}$ are assigned to the quadratic, cubic, and quartic effects, respectively.
$r_{j,l} = \frac{R_j(2,2)}{R_j(1,1)}, \quad r_{j,q} = \frac{R_j(3,3)}{R_j(1,1)}, \quad r_{j,c} = \frac{R_j(4,4)}{R_j(1,1)}, \quad r_{j,qr} = \frac{R_j(5,5)}{R_j(1,1)}, \qquad 0 \le l_{ij} + q_{ij} + c_{ij} + qr_{ij} \le 2.$ (12)
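The columns of the model matrix $U_j$ above are orthogonal polynomial contrasts. The short Python check below, with the five levels coded as $s \in \{-2, -1, 0, 1, 2\}$ (an illustrative coding consistent with samples normalized to [0, 1]), verifies that they are mutually orthogonal with squared column length 5:

```python
import numpy as np

# Five equally spaced levels coded as s in {-2, -1, 0, 1, 2}
s = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

const   = np.ones(5)
linear  = s / np.sqrt(2)
quad    = np.sqrt(5 / 14) * (s**2 - 2)
cubic   = (5 / (6 * np.sqrt(2))) * (s**3 - (17 / 5) * s)
quartic = (35 / (12 * np.sqrt(14))) * (s**4 - (31 / 7) * s**2 + 72 / 35)

# Model matrix: columns should satisfy U^T U = 5 I (orthogonal, squared length 5)
U = np.column_stack([const, linear, quad, cubic, quartic])
```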
The sample data are normalized to the interval [0, 1]; with the coded variable $s_j = 4(x_j - 1/2)$, the five levels correspond to $s_j \in \{-2, -1, 0, 1, 2\}$. The encoded samples for the linear, quadratic, cubic, and quartic effects are then expressed as follows:
$x_{j,l} = \frac{s_j}{\sqrt{2}}, \quad x_{j,q} = \sqrt{\frac{5}{14}}\left(s_j^2 - 2\right), \quad x_{j,c} = \frac{5}{6\sqrt{2}}\left(s_j^3 - \frac{17}{5}s_j\right), \quad x_{j,qr} = \frac{35}{12\sqrt{14}}\left(s_j^4 - \frac{31}{7}s_j^2 + \frac{72}{35}\right),$ (13)
where $x_j$ denotes the $j$th column of $X$. The two-factor interaction terms can be built from these basic effects. After $R$ has been constructed as shown above, the posterior mean of $\alpha$ is defined as
$\hat{\alpha} = \frac{\tau_m^2}{\sigma_m^2} R F_c^T \Psi^{-1} \left(y - F_m \hat{\beta}_m\right),$ (14)
$\operatorname{var}(\hat{\alpha}) = \tau_m^2 R - \frac{\tau_m^4}{\sigma_m^2} R F_c^T \Psi^{-1} F_c R,$ (15)
where $F_c$ is the model matrix of all candidate variables. As shown in refs. [36,40,43], the absolute value of $\hat{\alpha}$ is used instead of the standardized coefficient for variable selection; hence, the variable selected at each step corresponds to the largest value of $|\hat{\alpha}|$. It should be noted that the ratio $\tau_m/\sigma_m$ is a constant, set to 1 for simplicity of computation. Specifically, to find the best value of $m$, the blind Kriging model goes through five steps, described in detail as follows.
Step 1: The LHS technique is used to create the initial sample points $X$, and the corresponding response values are evaluated.
Step 2: Construct the ordinary Kriging surrogate model, and estimate the leave-one-out cross-validation prediction error $CVPE(m=0)$ to measure its accuracy.
Step 3: Determine the coefficients $\hat{\alpha}$ corresponding to each promising feature, and sort the $\hat{\alpha}_i$ from largest to smallest.
Step 4: While the prediction accuracy is not yet satisfactory:
Step 4.1: Add the candidate function corresponding to the next coefficient $\hat{\alpha}_i$ to the regression function.
Step 4.2: Construct the intermediate Kriging surrogate model with the new regression function.
Step 4.3: Estimate $CVPE(m)$ to measure the accuracy of the model,
$CVPE(m) = \sqrt{\frac{\sum_{i=1}^{n} cv_i^2}{n}}, \qquad cv_i = y\left(x^{(i)}\right) - \hat{y}_{-i}\left(x^{(i)}\right),$ (16)
where $\hat{y}_{-i}$ is the prediction of the model fitted without the $i$th sample.
Step 5: The set of features that minimizes $CVPE(m)$ is chosen to build the final BK model. Finally, new hyper-parameters are estimated for the new trend function [40] of the surrogate model.
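Step 4.3 can be sketched as follows; the least-squares line predictor is only a stand-in for the intermediate Kriging model, used to keep the example self-contained:

```python
import numpy as np

def cvpe(X, y, fit_predict):
    """Leave-one-out cross-validation prediction error, Eq. (16)."""
    cv = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        y_hat = fit_predict(X[mask], y[mask], X[i])  # model fitted without sample i
        cv.append(y[i] - y_hat)
    return np.sqrt(np.mean(np.square(cv)))

# Stand-in predictor: least-squares line fit (IBK would use the Kriging model here)
def line_fit_predict(X_train, y_train, x_new):
    a, b = np.polyfit(X_train, y_train, 1)
    return a * x_new + b
```

For exactly linear data, the line predictor reproduces every held-out point and the CVPE is numerically zero.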

3. Infill Strategy

The infill strategy is employed to iteratively refine the metamodel and guide the algorithm toward promising regions to improve the solution found. Its trade-off between exploration and exploitation decides the success of the implementation. Although several infill strategies are available, such as expected improvement (EI), expected violation, the probability of improvement (PI), and lower confidence bounding, these procedures may struggle to find feasible points when the feasible region is complex or disconnected [6,28]. In this section, a new combined infill strategy, which consists of the probability of feasibility, penalization, and a constrained expected improvement, is introduced to overcome these limitations.
The mathematical formulation of the constrained expensive black-box global optimization problem is
$\min_{x \in \mathbb{R}^d} f(x), \quad \text{s.t.} \quad g_j(x) \le 0, \; j = 1, 2, \ldots, m, \quad x_{lb} \le x \le x_{ub},$ (17)
where $f$ and $g_j$ denote the objective function and the inequality constraints, respectively. Normally, an equality constraint is converted into two inequality constraints using a very small relaxation factor. $x_{ub}$ and $x_{lb}$ are the upper and lower bounds of the design variables, respectively. It should be noted that the above problem belongs to the class of derivative-free optimization, so derivatives are not available. To solve Equation (17), the constrained EI is usually employed as the infill criterion, expressed as follows:
$EI_C(x) = EI_C^i(x) + EI_C^r(x),$ (18)
with
$EI_C^i(x) = \left(f(x^*) - \mu_n(x)\right) \Phi\left(\frac{f(x^*) - \mu_n(x)}{\sigma_n(x)}\right) PF(x), \qquad EI_C^r(x) = \sigma_n(x)\, \phi\left(\frac{f(x^*) - \mu_n(x)}{\sigma_n(x)}\right) PF(x),$
$PF(x) = \prod_{j=1}^{m} \Pr_j\left[g_j(x) \le 0\right], \qquad \Pr_j\left[g_j(x) \le 0\right] = \Phi\left(-\frac{\mu_{g_j}(x)}{\sigma_{g_j}(x)}\right),$
where $\mu_{g_j}(x)$ and $\sigma_{g_j}(x)$ are the posterior mean and standard deviation of the $j$th constraint, respectively.
As indicated by Haftka et al. [44], $EI_C^i(x)$ identifies exploitation sample points, where the posterior mean of the objective function is small with low uncertainty and the constraints are likely to be near the boundaries of feasible operation. In contrast, $EI_C^r(x)$ aims to identify exploration sample points, which tend to be biased toward regions that are more likely to satisfy the constraints and where the uncertainty of the objective function is high. However, the standard $EI_C$ often fails when there is no initial feasible point in the data set [44]. To circumvent this, $PF(x)$ is one of the most used methods to favor the feasible region. Note, however, that $PF(x)$ easily becomes zero when any one of the constraints violates the design specification [44]. Consequently, high-priority data points are missed or overlooked.
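A minimal sketch of evaluating the constrained EI of Equation (18) at a single point, using the standard normal CDF and PDF; the posterior values passed in are illustrative numbers, not outputs of a fitted surrogate:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def constrained_ei(f_best, mu, sigma, mu_g, sigma_g):
    """EI_C(x) = EI_C^i(x) + EI_C^r(x) of Eq. (18): EI times probability of feasibility."""
    pf = 1.0
    for m, s in zip(mu_g, sigma_g):
        pf *= norm_cdf(-m / s)                  # Pr[g_j(x) <= 0]
    z = (f_best - mu) / sigma
    ei_i = (f_best - mu) * norm_cdf(z) * pf     # exploitation term
    ei_r = sigma * norm_pdf(z) * pf             # exploration term
    return ei_i + ei_r
```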
To tackle these limitations, a combined computation strategy is introduced in this study to obtain an initial feasible point as well as to find the optimal solution. This scheme helps reduce the number of constraint violations and explore other feasible regions, as follows:
$PF_C(x) = \sigma_n(x)\, PF(x),$ (19)
$PI_C(x) = \sum_{j=1}^{m} \max\left(0, \mu_{g_j}(x)\right),$ (20)
$IS_C(x) = \prod_{j=1}^{m} \Pr_j\left[g_j(x) \le 0\right] \cdot D,$ (21)
where $PF_C(x)$ combines the covariance function and the probability of feasibility; $PI_C(x)$ is the penalization function corresponding to the constraint set; $IS_C(x)$ is the infill sampling criterion for disconnected feasible regions; and $D$ is the distance to the nearest feasible point, given by:
$D = \min_{x_{feas}} \frac{\left\| x - x_{feas} \right\|}{range},$ (22)
in which $range$ is the lag distance at which the variogram model reaches the sill.
According to the proposed strategy, if no feasible point has been found in the data, $PF_C(x)$ and $PI_C(x)$ are used instead to obtain the two infill sample points. In this case, provided that a feasible region exists, two feasible sample points will be identified. More specifically, the first point is located in a sparsely sampled area by maximizing $PF_C(x)$, since $\sigma_n(x)$ characterizes the sample density of the objective function in the design space. The other point corresponds to the minimum value of the total constraint violation $PI_C(x)$. When the models fit poorly and no feasible region is predicted, $PI_C(x)$ is utilized to improve the locations that violated the constraints. On the contrary, once an initial feasible point is found, Equations (18) and (21) are employed to determine the infill sample points, where $IS_C(x)$ aims to effectively explore other feasible regions. The algorithmic framework is provided in Algorithm 1.
Algorithm 1 The combined infill strategy for the expensive constrained optimization problem
Input: $d$: number of design variables; $m$: number of constraints; $N_{max}$: maximum number of expensive evaluations; $[x_{lb}, x_{ub}]$: design space
Output: $[x_{best}, f_{best}]$: the optimum solution
1: Generate $n_0$ sample points using LHS from the design space, evaluate the fitness values, and collect a set of initial observations $D_{n_0} = \{(x_i, y_i, g_i)\}_{i=1,\ldots,n_0}$
2: Determine $D^{fes}$ from $D_{n_0}$
3: Set $n = n_0$
4: while $n \le N_{max}$ do
5:    Build surrogate models for the objective and each of the constraint functions
6:    if $D_n^{fes} = \emptyset$ then
7:       Find $x_{n+1}$ by maximizing Equation (19)
8:       Find $x_{n+2}$ by minimizing Equation (20)
9:    else
10:      Find $x_{n+1}$ by maximizing Equation (18)
11:      Find $x_{n+2}$ by maximizing Equation (21)
12:   end if
13:   Evaluate the function values at $x_{n+1}$ and $x_{n+2}$, append $D_{n+2} = D_n \cup \{(x_{n+1,2}, y_{n+1,2}, G_{n+1,2})\}$, and update $D_{n+2}^{fes}$
14:   $n = n + 2$
15:   Update $[x_{best}, f_{best}]$
16: end while
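The branch in lines 6-12 of Algorithm 1 can be sketched as follows; here the criterion functions are simple stand-ins evaluated on a candidate grid, whereas the paper maximizes them with DE over the design space:

```python
import numpy as np

def select_infill(cand, has_feasible, ei_c, is_c, pf_c, pi_c):
    """One iteration of the combined infill selection (Algorithm 1, lines 6-12).

    cand is a 1-D array of candidate points; the criterion callables stand in
    for the surrogate-based criteria of Equations (18)-(21).
    """
    if not has_feasible:
        x1 = cand[np.argmax([pf_c(x) for x in cand])]  # Eq. (19): feasibility search
        x2 = cand[np.argmin([pi_c(x) for x in cand])]  # Eq. (20): reduce total violation
    else:
        x1 = cand[np.argmax([ei_c(x) for x in cand])]  # Eq. (18): constrained EI
        x2 = cand[np.argmax([is_c(x) for x in cand])]  # Eq. (21): explore other regions
    return x1, x2
```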

4. Numerical Examples

4.1. Study in High-Order Effects of the Trend Function

To evaluate the efficiency of the high-order effects in the trend function, a planar truss structure for maximum passive vibration isolation [45], shown in Figure 2, is introduced. Keane and Bright [46] obtained results for this problem through analysis and experiment. The structure consists of 42 members with the same material properties. The right-hand end nodes are fixed. A unit force excitation is applied at node 11 over the frequency range 100–200 Hz. The x- and y-coordinates of the seventh and eighteenth nodes are treated as design variables, and the other nodes are fixed as in the regular structure. The objective function is the stress at the left-hand end node. The interested reader is referred to [45,46] for more details. LHS is used to obtain data sets of different sizes for building the metamodels, and 100 validation runs were performed to validate the results. Here, the average Euclidean error (AEE) is used to measure the prediction error:
$AEE(\hat{y}, y) = \frac{1}{k} \sum_{i=1}^{k} \sqrt{\left(\hat{y}^{(i)} - y^{(i)}\right)^2}.$ (23)
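The AEE of Equation (23) reduces to the mean absolute deviation over the validation points; a minimal sketch:

```python
import numpy as np

def aee(y_hat, y):
    """Average Euclidean error, Eq. (23)."""
    y_hat, y = np.asarray(y_hat, float), np.asarray(y, float)
    return np.mean(np.sqrt((y_hat - y) ** 2))
```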
As shown in Figure 3, the BK model clearly improves on the Kriging model: its CVPE is smaller. This is explained by the fact that the linear and quadratic effects cause a significant rise in accuracy. On the other hand, the difference in AEE scores between the models is insignificant when the number of sample points is less than 30; if the sample size is small, the high-order effects of the regression function contribute little. However, the high-order BK model achieves a vast improvement over the other models as the data size increases. The regression functions with the high-order effects are given in Equations (24) and (25) for 100 and 200 sample points, respectively. Clearly, IBK can capture the overall trend and the largest variations of the data.
$1 + x_1 x_3 + x_1 + x_4 + x_1 x_4 + x_3 x_4 + x_2 + x_1 x_2^3 + x_1^4 x_2^4 + x_2 x_4 + x_1^3 x_2 + x_1^4 x_3 + x_1^2 x_3 + x_1^2 x_4.$ (24)
$1 + x_1 x_4 + x_1 + x_1 x_3 + x_1^4 x_3 + x_1 x_2 + x_3 x_4 + x_1^4 x_2^4 + x_2 + x_1^4 x_3^2 + x_2^4 x_4 + x_1 x_3^2 + x_2^4 x_4^2.$ (25)

4.2. Synthetic Test Problems

In this section, nine well-known benchmark problems, whose numbers of variables and constraints and best known solutions are given in Table 1, are investigated to evaluate the efficiency of the proposed model. More information about these problems can be found in Appendix A and in ref. [28]. Although these optimization problems are not expensive to evaluate, they are all treated as computationally expensive functions to allow meaningful comparisons of the performance of the alternative methods. The obtained results are compared with those of RCGO, FLT-AKM, COBRA-local, COBRA-global, and ConstrLMSRBF. The RCGO and FLT-AKM algorithms are implemented by the authors following Wu et al. [6] and Shi et al. [35], respectively. In the RCGO method, the distance coefficients in the first phase are $\Pi = [0.001, 0.005, 0.01, 0.05, 0.1]$, and $\Gamma = [0, 1, 2, 3, 4]$ are the exponent values in the second phase. To obtain a fair comparison between the different methods, all tests are performed in a Matlab environment on an Intel Core i5-8500 CPU 3.0 GHz desktop machine. Each metamodel is run ten times on each test problem to reduce the effect of random error. The search process of a model is terminated when either the maximum number of function evaluations is reached or the best result does not improve over 10 iterations. LHS created ten different initial data sets. The differential evolution (DE) algorithm is employed to find the hyper-parameters and the next points of the infill strategy.
The obtained results are summarized in Table 2, which provides the average number of function evaluations (NEF), and the best, median, and mean objective values of the solutions found by the six metamodels. Firstly, RCGO failed to find a feasible solution for G24 and SR in all trials, and for G5MOD in three trials. This is mainly due to the small feasible region, all initial points being infeasible, or incoherent distance coefficients. Clearly, IBK outperformed the other algorithms on G24, G8, Two-bars, G4, Ibeam, and Hesse in terms of the best solution. In addition, IBK requires fewer evaluations than the other metamodels. Figure 4 provides a graphical illustration of the overall process of identifying the infill samples for the G24 problem. The feasible domain is discontinuous and includes one global and two local minima, corresponding to peaks of the feasible domain. All initial sample points are quite similar across the metamodels for a fair comparison. Clearly, the infill samples obtained by IBK are distributed near the peaks of the feasible region as well as the boundary of the constraint set. On the contrary, the infill samples obtained by FLT-AKM are divided into two groups, the first of which concentrates in the infeasible region far from the constraint boundary. This is easily explained by the fact that the constraints are polynomials of degree four; hence, IBK can capture the high-order effects more easily than the Kriging of FLT-AKM. Furthermore, our infill strategy reduces the number of constraint violations and explores other feasible domains. More specifically, our model requires only 32 function evaluations, whereas FLT-AKM requires 45 analyses to converge. This shows that the features of IBK and the combined infill strategy outperform other well-known existing algorithms.
For the Four-bars test, the results of all models are similar, with few differences between them. Interestingly, this problem requires few function evaluations; hence, IBK does not show much benefit with small sample sizes or a simple feasible domain. It can be observed that COBRA provides the best solutions to the G5MOD and SR problems. However, IBK performs better than FLT-AKM and RCGO in terms of the median, mean, and best values of the feasible optimum. Note that COBRA and RCGO require initial data that contain feasible points; however, finding a feasible point is difficult for highly nonlinear problems, which then leads to failure. Indeed, RCGO could not find a feasible point in any trial of the SR problem. Although the result achieved by IBK on SR is close to that of FLT-AKM, IBK shows stability, as its median and mean are close to the best value.

4.3. Structural Design Optimization

Finally, the proposed model is applied to the shape optimization of truss structures. This benchmark problem has previously been analyzed by Shao [48] and Suprayitno [49] using a clustering-based surrogate model and Kriging, respectively. The 21-bar planar truss shown in Figure 5 is investigated for design optimization. The coordinates of all free nodes are treated as continuous design variables. The design aim is to maximize the truss performance, defined as the ratio of the maximum load ($F_{\max}$) that the truss can carry to the self-weight of the truss ($W$). The structure is subjected to stress and stability-ratio constraints. The stress limits of all members are the same in tension and compression and are set to 200 MPa. The cross-sectional areas of all bars are 25 cm². The number of initial sample points is set to 14 [48]. Here, the critical buckling stress is $S_{cr,i} = \pi^2 E / \left(L_{e,i}/\rho\right)^2$, where $E$ and $\rho$ are the Young's modulus of the truss material and the radius of gyration of the truss bar, respectively. The problem can be formulated as follows:
$\text{Maximize } \frac{F_{\max}}{W} = f(x_1, x_2, x_3, y_3, x_4, y_4, y_6), \quad \text{subject to } \sigma_t, |\sigma_c| \le \sigma_a, \quad \frac{\sigma_{c,i}}{S_{cr,i}} \le 1, \quad 0.8 \le x_1, x_3, y_3, y_4, y_6 \le 3 \text{ m}, \quad 3.3 \le x_2, x_4 \le 5.5 \text{ m}.$ (26)
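The critical buckling stress $S_{cr} = \pi^2 E / (L_e/\rho)^2$ can be evaluated with a small helper; any numerical inputs used with it below are illustrative steel-like assumptions, not the paper's truss data:

```python
import math

def critical_buckling_stress(E, L_e, rho):
    """Euler critical buckling stress S_cr = pi^2 * E / (L_e / rho)^2.

    E: Young's modulus [Pa], L_e: effective member length [m],
    rho: radius of gyration of the cross-section [m].
    """
    return math.pi**2 * E / (L_e / rho) ** 2
```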
The maximum number of function evaluations is set to 100 for all models. The obtained optimal results are listed in Table 3 and Figure 6. It can easily be seen that IBK found the best feasible solution, with an objective value of 25.8541, while the maximum tensile stress and maximum compressive stress are 94.21 and −115.701 MPa, respectively. In addition, IBK significantly decreased the number of simulations. This demonstrates that IBK can effectively solve expensive black-box structural design optimization problems.

5. Conclusions

In this study, an improved blind Kriging surrogate combined with a new infill strategy is introduced to solve constrained expensive black-box optimization problems. In this framework, the high-order effects of the trend function are identified from the training data using the Bayesian variable selection method. Simultaneously, the novel infill strategy based on the probability of feasibility, penalization, and constrained expected improvement is constructed to reduce the number of constraint violations, explore other feasible domains, and handle discontinuous design domains. The efficiency of the proposed approach is demonstrated through several synthetic test problems and the shape optimization of a truss structure. The obtained results indicate that the optimum solution obtained by this work is in good agreement in ten of the eleven tests, and the proposed model saves function evaluations in almost all problems in comparison with other metamodels. It is also shown that the IBK predictor is simpler to interpret and more robust than an ordinary Kriging predictor. Hence, it promises to be robust and effective at resolving expensive optimization problems with complex design domains.

Author Contributions

H.T.M.: conceptualization, formal analysis, investigation, methodology, software, writing—original draft, visualization, writing—review and editing. J.L. (Jaewook Lee): data curation, validation. J.K.: data curation, validation. H.N.-X.: review, validation. J.L. (Jaehong Lee): conceptualization, methodology, supervision, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a grant (NRF-2020R1A4A2002855) from the NRF (National Research Foundation of Korea) funded by the MEST (Ministry of Education, Science and Technology) of the Korean government.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

G5MOD [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = 3x_1 + 10^{-6}x_1^3 + 2x_2 + \left(2\times 10^{-6}/3\right)x_2^3\\
\text{s.t.}\quad & g_1(\mathbf{x}) = x_3 - x_4 - 0.55 \le 0\\
& g_2(\mathbf{x}) = x_4 - x_3 - 0.55 \le 0\\
& g_3(\mathbf{x}) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 \le 0\\
& g_4(\mathbf{x}) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 \le 0\\
& g_5(\mathbf{x}) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 \le 0\\
& 0 \le x_1, x_2 \le 1200; \quad -0.55 \le x_3, x_4 \le 0.55
\end{aligned}
$$
Hesse [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = -25(x_1-2)^2 - (x_2-2)^2 - (x_3-1)^2 - (x_4-4)^2 - (x_5-1)^2 - (x_6-4)^2\\
\text{s.t.}\quad & g_1(\mathbf{x}) = (2 - x_1 - x_2)/2 \le 0\\
& g_2(\mathbf{x}) = (x_1 + x_2 - 6)/6 \le 0\\
& g_3(\mathbf{x}) = (-2 - x_1 + x_2)/2 \le 0\\
& g_4(\mathbf{x}) = (x_1 - 3x_2 - 2)/2 \le 0\\
& g_5(\mathbf{x}) = \left(4 - (x_3-3)^2 - x_4\right)/4 \le 0\\
& g_6(\mathbf{x}) = \left(4 - (x_5-3)^2 - x_6\right)/4 \le 0\\
& 0 \le x_1 \le 5; \quad 0 \le x_2 \le 4; \quad 1 \le x_3, x_5 \le 5; \quad 0 \le x_4 \le 6; \quad 0 \le x_6 \le 10
\end{aligned}
$$
G4 [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141\\
\text{s.t.}\quad & g_1(\mathbf{x}) = -u \le 0; \quad g_2(\mathbf{x}) = u - 92 \le 0\\
& g_3(\mathbf{x}) = -v + 90 \le 0; \quad g_4(\mathbf{x}) = v - 110 \le 0\\
& g_5(\mathbf{x}) = -w + 20 \le 0; \quad g_6(\mathbf{x}) = w - 25 \le 0\\
& u = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5\\
& v = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2\\
& w = 9.300961 + 0.0047026x_3x_5 + 0.0012547x_1x_3 + 0.0019085x_3x_4\\
& 78 \le x_1 \le 102; \quad 33 \le x_2 \le 45; \quad 27 \le x_i \le 45 \ \text{for}\ i = 3, 4, 5
\end{aligned}
$$
G24 [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = -x_1 - x_2\\
\text{s.t.}\quad & g_1(\mathbf{x}) = -2x_1^4 + 8x_1^3 - 8x_1^2 + x_2 - 2 \le 0\\
& g_2(\mathbf{x}) = -4x_1^4 + 32x_1^3 - 88x_1^2 + 96x_1 + x_2 - 36 \le 0\\
& 0 \le x_1 \le 3; \quad 0 \le x_2 \le 4
\end{aligned}
$$
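For reference, G24 is simple enough to evaluate in a few lines. The helper below is an illustrative sketch (not part of the paper's code) that returns the objective and constraint values, with feasibility defined as all g_i ≤ 0.

```python
def g24(x):
    """G24 test problem: returns (objective, [g1, g2]); feasible iff all g_i <= 0."""
    x1, x2 = x
    f = -x1 - x2
    g1 = -2.0 * x1**4 + 8.0 * x1**3 - 8.0 * x1**2 + x2 - 2.0
    g2 = -4.0 * x1**4 + 32.0 * x1**3 - 88.0 * x1**2 + 96.0 * x1 + x2 - 36.0
    return f, [g1, g2]
```

At the best-known solution, roughly x = (2.3295, 3.1785), both constraints are active and the objective matches the value −5.508 reported in Table 1.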
Ibeam [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = \frac{5000}{\dfrac{1}{12}x_3(x_1 - 2x_4)^3 + \dfrac{1}{6}x_2x_4^3 + 2x_2x_4\left(\dfrac{x_1 - x_4}{2}\right)^2}\\
\text{s.t.}\quad & g_1(\mathbf{x}) = 2x_2x_4 + x_3(x_1 - 2x_4) \le 300\\
& g_2(\mathbf{x}) = \frac{180000\,x_1}{x_3(x_1 - 2x_4)^3 + 2x_2x_4\left[4x_4^2 + 3x_1(x_1 - 2x_4)\right]} + \frac{15000\,x_2}{(x_1 - 2x_4)x_3^3 + 2x_2^3x_4} \le 6\\
& 10 \le x_1 \le 80; \quad 10 \le x_2 \le 50; \quad 0.9 \le x_3, x_4 \le 5
\end{aligned}
$$
SR [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = 0.7854x_1x_2^2A - 1.508x_1B + 7.477C + 0.7854D\\
\text{s.t.}\quad & g_1(\mathbf{x}) = (27 - x_1x_2^2x_3)/27 \le 0\\
& g_2(\mathbf{x}) = (397.5 - x_1x_2^2x_3^2)/397.5 \le 0\\
& g_3(\mathbf{x}) = \left(1.93 - x_2x_6^4x_3/x_4^3\right)/1.93 \le 0\\
& g_4(\mathbf{x}) = \left(1.93 - x_2x_7^4x_3/x_5^3\right)/1.93 \le 0\\
& g_5(\mathbf{x}) = \left(A_1/B_1 - 1100\right)/1100 \le 0\\
& g_6(\mathbf{x}) = \left(A_2/B_2 - 850\right)/850 \le 0\\
& g_7(\mathbf{x}) = (x_2x_3 - 40)/40 \le 0\\
& g_8(\mathbf{x}) = (5 - x_1/x_2)/5 \le 0\\
& g_9(\mathbf{x}) = (x_1/x_2 - 12)/12 \le 0\\
& g_{10}(\mathbf{x}) = (1.9 + 1.5x_6 - x_4)/1.9 \le 0\\
& g_{11}(\mathbf{x}) = (1.9 + 1.1x_7 - x_5)/1.9 \le 0
\end{aligned}
$$
where
$$
\begin{aligned}
& A = 3.3333x_3^2 + 14.9334x_3 - 43.0934; \quad B = x_6^2 + x_7^2; \quad C = x_6^3 + x_7^3; \quad D = x_4x_6^2 + x_5x_7^2\\
& A_1 = \left[\left(745x_4/(x_2x_3)\right)^2 + 16.91\times 10^6\right]^{0.5}; \quad A_2 = \left[\left(745x_5/(x_2x_3)\right)^2 + 157.5\times 10^6\right]^{0.5}\\
& B_1 = 0.1x_6^3; \quad B_2 = 0.1x_7^3\\
& 2.6 \le x_1 \le 3.6; \quad 0.7 \le x_2 \le 0.8; \quad 17 \le x_3 \le 28\\
& 7.3 \le x_4, x_5 \le 8.3; \quad 2.9 \le x_6 \le 3.9; \quad 5.0 \le x_7 \le 5.5
\end{aligned}
$$
G8 [28]:
$$
\begin{aligned}
\min\quad & f(\mathbf{x}) = -\frac{\sin^3(2\pi x_1)\sin(2\pi x_2)}{x_1^3(x_1 + x_2)}\\
\text{s.t.}\quad & g_1(\mathbf{x}) = x_1^2 - x_2 + 1 \le 0\\
& g_2(\mathbf{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0\\
& 0 \le x_1, x_2 \le 10
\end{aligned}
$$
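G8 can likewise be evaluated directly; the helper below is an illustrative sketch (not from the paper) returning the objective and the two constraint values, with feasibility defined as g_i ≤ 0.

```python
import math

def g8(x):
    """G8 test problem: returns (objective, [g1, g2]); feasible iff all g_i <= 0."""
    x1, x2 = x
    f = -math.sin(2.0 * math.pi * x1)**3 * math.sin(2.0 * math.pi * x2) \
        / (x1**3 * (x1 + x2))
    g1 = x1**2 - x2 + 1.0
    g2 = 1.0 - x1 + (x2 - 4.0)**2
    return f, [g1, g2]
```

Near the best-known solution, roughly x = (1.2280, 4.2454), both constraints are satisfied and the objective is close to the value −0.0958 reported in Table 1.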
Four-bar truss [50]:
The objective here is to minimize the total structural volume subject to the stress constraints on the members as in Figure A1.
$$
\begin{aligned}
\min\quad & f = L\left(2a_1 + \sqrt{2}\,a_2 + \sqrt{2}\,a_3 + a_4\right)\\
\text{s.t.}\quad & \frac{FL}{E}\left(\frac{2}{a_1} + \frac{2\sqrt{2}}{a_2} - \frac{2\sqrt{2}}{a_3} + \frac{2}{a_4}\right) \le 0.04\\
& 1 \le a_1, a_4 \le 3; \quad \sqrt{2} \le a_2, a_3 \le 3\\
& L = 200\ \text{cm}; \quad F = 10\ \text{kN}; \quad E = 200{,}000\ \text{kN/cm}^2
\end{aligned}
$$
Figure A1. Four-bar truss.
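A minimal evaluator for the four-bar truss problem is sketched below. Two assumptions are ours: the objective carries the member-length factor L (which is consistent with the best-known volume of 1400 reported in Table 1), and the single constraint is the normalized displacement bound of 0.04; the function name and default parameters are illustrative, not from the paper.

```python
import math

def four_bar_truss(a, L=200.0, F=10.0, E=200000.0):
    """Four-bar truss: returns (volume, g); the design is feasible iff g <= 0.
    Assumes volume = L*(2*a1 + sqrt(2)*a2 + sqrt(2)*a3 + a4) and a
    displacement constraint limited to 0.04 (our reconstruction)."""
    a1, a2, a3, a4 = a
    r2 = math.sqrt(2.0)
    volume = L * (2.0 * a1 + r2 * a2 + r2 * a3 + a4)
    g = (F * L / E) * (2.0 / a1 + 2.0 * r2 / a2 - 2.0 * r2 / a3 + 2.0 / a4) - 0.04
    return volume, g
```

At a = (1, √2, √2, 1) the displacement constraint is active and the volume equals 1400, matching the best value all methods reach in Table 2.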
Two-bar truss [50]:
The objective here is to minimize the total structural volume under stress constraints on the members, with the cross-sectional areas and the geometry as design variables, as in Figure A2.
$$
\begin{aligned}
\min\quad & f = a_1\sqrt{b^2 + L^2} + a_2\sqrt{L^2 + (2L - b)^2}\\
\text{s.t.}\quad & \frac{\left[W_2(2L - b) + W_1 L\right]\sqrt{b^2 + L^2}}{2L^2 a_1} \le S_{\max}\\
& \frac{\left(W_2 b - W_1 L\right)\sqrt{L^2 + (2L - b)^2}}{2L^2 a_2} \le S_{\max}\\
& 5.16 \le a_1, a_2 \le 19.4\ \text{cm}^2; \quad 9.144 \le b \le 27.432\ \text{m}\\
& W_1 = 445\ \text{kN}; \quad W_2 = 4450\ \text{kN}; \quad L = 18.288\ \text{m}; \quad S_{\max} = 3.79\times 10^{6}\ \text{kN/m}^2
\end{aligned}
$$
Figure A2. Two-bar truss.

References

  1. Amine Bouhlel, M.; Bartoli, N.; Regis, R.G.; Otsmane, A.; Morlier, J. Efficient global optimization for high-dimensional constrained problems by using the Kriging models combined with the partial least squares method. Eng. Optim. 2018, 50, 2038–2053. [Google Scholar] [CrossRef]
  2. Gu, J.; Zhang, H.; Zhong, X. Hybrid meta-model-based global optimum pursuing method for expensive problems. Struct. Multidiscip. Optim. 2019, 61, 543–554. [Google Scholar] [CrossRef]
  3. Myers, R.H.; Montgomery, D.C.; Anderson-Cook, C.M. Response Surface Methodology: Process and Product Optimization Using Designed Experiments; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  4. Box, G.E.; Draper, N.R. Empirical Model-Building and Response Surfaces; Wiley: New York, NY, USA, 1987; Volume 424. [Google Scholar]
  5. Powell, M.J.D. The theory of radial basis function approximation in 1990. In Advances in Numerical Analysis II: Wavelets, Subdivision Algorithms, and Radial Basis Functions; Light, W.A., Ed.; Oxford University Press: Oxford, UK, 1992; pp. 105–210. [Google Scholar]
  6. Wu, Y.; Yin, Q.; Jie, H.; Wang, B.; Zhao, J. A RBF-based constrained global optimization algorithm for problems with computationally expensive objective and constraints. Struct. Multidiscip. Optim. 2018, 58, 1633–1655. [Google Scholar] [CrossRef]
  7. Sacks, J.; Welch, W.J.; Mitchell, T.J.; Wynn, H.P. Design and analysis of computer experiments. Stat. Sci. 1989, 4, 409–423. [Google Scholar] [CrossRef]
  8. Sakata, S.; Ashida, F.; Zako, M. Structural optimization using Kriging approximation. Comput. Methods Appl. Mech. Eng. 2003, 192, 923–939. [Google Scholar] [CrossRef]
  9. Clarke, S.M.; Griebsch, J.H.; Simpson, T.W. Analysis of support vector regression for approximation of complex engineering analyses. J. Mech. Des. 2005, 127, 1077–1087. [Google Scholar] [CrossRef]
  10. Berke, L.; Hajela, P. Applications of artificial neural nets in structural mechanics. In Shape and Layout Optimization of Structural Systems and Optimality Criteria Methods; Springer: Berlin/Heidelberg, Germany, 1992; pp. 331–348. [Google Scholar]
  11. Mai, H.T.; Kang, J.; Lee, J. A machine learning-based surrogate model for optimization of truss structures with geometrically nonlinear behavior. Finite Elem. Anal. Des. 2021, 196, 103572. [Google Scholar] [CrossRef]
  12. Zhang, W.; Li, X.; Ma, H.; Luo, Z.; Li, X. Federated learning for machinery fault diagnosis with dynamic validation and self-supervision. Knowl.-Based Syst. 2021, 213, 106679. [Google Scholar] [CrossRef]
  13. Zhang, W.; Li, X.; Ma, H.; Luo, Z.; Li, X. Universal domain adaptation in fault diagnostics with hybrid weighted deep adversarial learning. IEEE Trans. Ind. Inform. 2021, 17, 7957–7967. [Google Scholar] [CrossRef]
  14. Shepard, D. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 1968 23rd ACM National Conference, New York, NY, USA, 27–29 August 1968; pp. 517–524. [Google Scholar]
  15. Shields, M.D.; Zhang, J. The generalization of Latin hypercube sampling. Reliab. Eng. Syst. Saf. 2016, 148, 96–108. [Google Scholar] [CrossRef]
  16. Jones, D.R.; Schonlau, M.; Welch, W.J. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 1998, 13, 455–492. [Google Scholar] [CrossRef]
  17. Huang, D.; Allen, T.T.; Notz, W.I.; Zeng, N. Global optimization of stochastic black-box systems via sequential kriging meta-models. J. Glob. Optim. 2006, 34, 441–466. [Google Scholar] [CrossRef]
  18. Huang, D.; Allen, T.T.; Notz, W.I.; Miller, R.A. Sequential kriging optimization using multiple-fidelity evaluations. Struct. Multidiscip. Optim. 2006, 32, 369–382. [Google Scholar] [CrossRef]
  19. Dong, H.; Song, B.; Wang, P.; Huang, S. A kind of balance between exploitation and exploration on kriging for global optimization of expensive functions. J. Mech. Sci. Technol. 2015, 29, 2121–2133. [Google Scholar] [CrossRef]
  20. Zhao, L.; Wang, P.; Song, B.; Wang, X.; Dong, H. An efficient kriging modeling method for high-dimensional design problems based on maximal information coefficient. Struct. Multidiscip. Optim. 2019, 61, 39–57. [Google Scholar] [CrossRef]
  21. Gutmann, H.M. A radial basis function method for global optimization. J. Glob. Optim. 2001, 19, 201–227. [Google Scholar] [CrossRef]
  22. Regis, R.G.; Shoemaker, C.A. Improved strategies for radial basis function methods for global optimization. J. Glob. Optim. 2007, 37, 113–135. [Google Scholar] [CrossRef]
  23. Regis, R.G.; Shoemaker, C.A. A stochastic radial basis function method for the global optimization of expensive functions. INFORMS J. Comput. 2007, 19, 497–509. [Google Scholar] [CrossRef]
  24. Regis, R.G.; Shoemaker, C.A. Parallel stochastic global optimization using radial basis functions. INFORMS J. Comput. 2009, 21, 411–426. [Google Scholar] [CrossRef]
  25. Joseph, V.R.; Kang, L. Regression-based inverse distance weighting with applications to computer experiments. Technometrics 2011, 53, 254–265. [Google Scholar] [CrossRef]
  26. Bemporad, A. Global optimization via inverse distance weighting. arXiv 2019, arXiv:1906.06498. [Google Scholar]
  27. Regis, R.G. Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions. Comput. Oper. Res. 2011, 38, 837–853. [Google Scholar] [CrossRef]
  28. Regis, R.G. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Eng. Optim. 2014, 46, 218–243. [Google Scholar] [CrossRef]
  29. Nuñez, L.; Regis, R.G.; Varela, K. Accelerated random search for constrained global optimization assisted by radial basis function surrogates. J. Comput. Appl. Math. 2018, 340, 276–295. [Google Scholar] [CrossRef]
  30. Li, Y.; Wu, Y.; Zhao, J.; Chen, L. A Kriging-based constrained global optimization algorithm for expensive black-box functions with infeasible initial points. J. Glob. Optim. 2017, 67, 343–366. [Google Scholar] [CrossRef]
  31. Carpio, R.R.; Giordano, R.C.; Secchi, A.R. Enhanced surrogate assisted framework for constrained global optimization of expensive black-box functions. Comput. Chem. Eng. 2018, 118, 91–102. [Google Scholar] [CrossRef]
  32. Qian, J.; Yi, J.; Cheng, Y.; Liu, J.; Zhou, Q. A sequential constraints updating approach for Kriging surrogate model-assisted engineering optimization design problem. Eng. Comput. 2019, 36, 993–1009. [Google Scholar] [CrossRef]
  33. Bartoli, N.; Lefebvre, T.; Dubreuil, S.; Olivanti, R.; Priem, R.; Bons, N.; Martins, J.R.; Morlier, J. Adaptive modeling strategy for constrained global optimization with application to aerodynamic wing design. Aerosp. Sci. Technol. 2019, 90, 85–102. [Google Scholar] [CrossRef]
  34. Forrester, A.; Sobester, A.; Keane, A. Engineering Design via Surrogate Modelling: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  35. Shi, R.; Liu, L.; Long, T.; Wu, Y.; Tang, Y. Filter-based adaptive Kriging method for black-box optimization problems with expensive objective and constraints. Comput. Methods Appl. Mech. Eng. 2019, 347, 782–805. [Google Scholar] [CrossRef]
  36. Joseph, V.R.; Hung, Y.; Sudjianto, A. Blind kriging: A new method for developing metamodels. J. Mech. Des. 2008, 130, 031102. [Google Scholar] [CrossRef]
  37. Kersaudy, P.; Sudret, B.; Varsier, N.; Picon, O.; Wiart, J. A new surrogate modeling technique combining Kriging and polynomial chaos expansions–Application to uncertainty analysis in computational dosimetry. J. Comput. Phys. 2015, 286, 103–117. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Yao, W.; Ye, S.; Chen, X. A regularization method for constructing trend function in Kriging model. Struct. Multidiscip. Optim. 2019, 59, 1221–1239. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Yao, W.; Chen, X.; Ye, S. A penalized blind likelihood Kriging method for surrogate modeling. Struct. Multidiscip. Optim. 2019, 61, 457–474. [Google Scholar] [CrossRef]
  40. Couckuyt, I.; Forrester, A.; Gorissen, D.; De Turck, F.; Dhaene, T. Blind Kriging: Implementation and performance analysis. Adv. Eng. Softw. 2012, 49, 1–13. [Google Scholar] [CrossRef]
  41. Forrester, A.I.; Keane, A.J. Recent advances in surrogate-based optimization. Prog. Aerosp. Sci. 2009, 45, 50–79. [Google Scholar] [CrossRef]
  42. Palar, P.S.; Shimoyama, K. On efficient global optimization via universal Kriging surrogate models. Struct. Multidiscip. Optim. 2018, 57, 2377–2397. [Google Scholar] [CrossRef]
  43. Joseph, V.R.; Delaney, J.D. Functionally induced priors for the analysis of experiments. Technometrics 2007, 49, 1–11. [Google Scholar] [CrossRef]
  44. Haftka, R.T.; Villanueva, D.; Chaudhuri, A. Parallel surrogate-assisted global optimization with expensive functions—A survey. Struct. Multidiscip. Optim. 2016, 54, 3–13. [Google Scholar] [CrossRef]
  45. Forrester, A.; Jones, D. Global optimization of deceptive functions with sparse sampling. In Proceedings of the 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, BC, Canada, 10–12 September 2008; p. 5996. [Google Scholar]
  46. Keane, A.; Bright, A. Passive vibration control via unusual geometries: Experiments on model aerospace structures. J. Sound Vib. 1996, 190, 713–719. [Google Scholar] [CrossRef]
  47. Couckuyt, I.; De Turck, F.; Dhaene, T.; Gorissen, D. Automatic surrogate model type selection during the optimization of expensive black-box problems. In Proceedings of the 2011 Winter Simulation Conference (WSC), Phoenix, AZ, USA, 11–14 December 2011; IEEE: New York, NY, USA, 2011; pp. 4269–4279. [Google Scholar]
  48. Suprayitno; Yu, J. C. Evolutionary reliable regional Kriging surrogate for expensive optimization. Eng. Optim. 2019, 51, 247–264. [Google Scholar] [CrossRef]
  49. Shao, T.; Krishnamurty, S. A clustering-based surrogate model updating approach to simulation-based engineering design. J. Mech. Des. 2008, 130, 041101. [Google Scholar] [CrossRef]
  50. Andrei, N. Nonlinear Optimization Applications Using the GAMS Technology; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
Figure 1. Mean square error of the IBK model.
Figure 2. Planar truss structure. Adapted from [47].
Figure 3. The relationship between mesh elements and thickness values. (a) Cross-validation prediction error of metamodels with different sample sizes. (b) Validation error of metamodels with different sample sizes.
Figure 4. Optimization process based on the infill strategy using IBK and FLT-AKM.
Figure 5. A 21-bar planar truss structure.
Figure 6. Comparison of optimal shapes of the 21-bar planar truss obtained by IBK.
Table 1. Constrained optimization benchmark problems.
| Problems  | Best Known Value | Dimensionality | No. Constraints |
|-----------|------------------|----------------|-----------------|
| G24       | −5.508           | 2              | 2               |
| G8        | −0.0958          | 2              | 2               |
| Two_bars  | 0.0309           | 3              | 2               |
| Four_bars | 1400             | 4              | 1               |
| Ibeam     | 0.0131           | 4              | 2               |
| G5MOD     | 5126.5           | 4              | 5               |
| G4        | −30,665.539      | 5              | 6               |
| Hesse     | −310             | 6              | 6               |
| SR        | 2994.42          | 7              | 11              |
Table 2. Comparison of the obtained results for the test problems.
| Method | Metric | G24 | G8 | Four bars | Two bars | Ibeam | G5MOD | G4 | Hesse | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| IBK | Best | −5.4669 | −0.095 | 1400 | 0.031 | 0.013 | 5126.5 | −30,665.5 | −310 | 3042.2 |
| IBK | Median | −5.4075 | −0.091 | 1400 | 0.031 | 0.014 | 5135.2 | −30,665.5 | −310 | 3051.3 |
| IBK | Mean | −5.3832 | −0.089 | 1400 | 0.031 | 0.014 | 5167.1 | −30,665.5 | −310 | 3053.3 |
| IBK | NEF | 33.2 | 46.8 | 17.4 | 36.2 | 60 | 74.0 | 24.2 | 38.6 | 68.4 |
| FLT-AKM | Best | −5.46 | −0.095 | 1400 | 0.031 | 0.013 | 5739.9 | −30,611.7 | −306.553 | 3024.4 |
| FLT-AKM | Median | −5.4173 | −0.084 | 1400 | 0.031 | 0.015 | 6157.9 | −30,416.8 | −297.34 | 3061 |
| FLT-AKM | Mean | −5.415 | −0.085 | 1400 | 0.031 | 0.018 | 6187.8 | −30,411.2 | −297.093 | 3058.5 |
| FLT-AKM | NEF | 34 | 54.8 | 17.5 | 37.8 | 74.8 | 59.2 | 35.2 | 58.6 | 59.6 |
| RCGO | Best | – | −0.096 | 1400 | 0.031 | 0.013 | 5323.5 | −30,628.8 | −306.428 | – |
| RCGO | Median | – | −0.096 | 1400 | 0.031 | 0.014 | 6099.8 | −30,389.5 | −294.812 | – |
| RCGO | Mean | N/A(10) | −0.073 | 1400 | 0.033 | 0.016 | 5992.9 | −30,395.3 | −296.239 | N/A(10) |
| RCGO | NEF | – | 51.7 | 17.2 | 40.4 | 67.8 | 65.4(3) | 35 | 73.2 | – |
| COBRA-Local [28] | Best | – | −0.1 | – | – | – | 5126.5 | −30,665.5 | −309.94 | 2994.4 |
| COBRA-Local [28] | Median | – | −0.1 | – | – | – | 5126.51 | −30,665.2 | −297.87 | 2994.7 |
| COBRA-Local [28] | Mean | – | −0.09 | – | – | – | 5126.51 | −30,665.1 | −296.25 | 2994.7 |
| COBRA-Local [28] | NEF | – | 50 | – | – | – | 50 | 50 | 50 | 50 |
| COBRA-Global [28] | Best | – | −0.1 | – | – | – | 5126.5 | −30,665.4 | −309.97 | 2994.7 |
| COBRA-Global [28] | Median | – | −0.1 | – | – | – | 5126.51 | −30,664.9 | −297.87 | 2994.7 |
| COBRA-Global [28] | Mean | – | −0.09 | – | – | – | 5126.62 | −30,664.9 | −296.25 | 2994.7 |
| COBRA-Global [28] | NEF | – | 50 | – | – | – | 50 | 50 | 50 | 50 |
Table 3. Comparison of the obtained results for the 21-bars planar truss.
| Design Variables | IBK | FLT-AKM | RCGO | CMLS [48] | EORKS [49] |
|---|---|---|---|---|---|
| x1 | 3 | 3 | 2.98 | 2.37 | 3 |
| x2 | 4.31 | 4.251 | 4.19 | 4.174 | 4.209 |
| x3 | 1.907 | 1.91 | 1.91 | 1.877 | 1.902 |
| y3 | 2.267 | 2.275 | 2.278 | 1.721 | 2.285 |
| x4 | 4.218 | 4.218 | 4.237 | 4.024 | 4.222 |
| y4 | 1.706 | 1.708 | 1.705 | 1.438 | 1.709 |
| y6 | 2.723 | 2.702 | 2.624 | 2.186 | 2.644 |
| Best | 25.85 | 25.46 | 24.72 | 22.79 | 25.66 |
| Median | 25.27 | 25.21 | 23.46 | – | – |
| Mean | 25.36 | 25.33 | 23.75 | – | – |
| NFE | 55.6 | 65.7 | 72.3 | 95 | 87 |

Mai, H.T.; Lee, J.; Kang, J.; Nguyen-Xuan, H.; Lee, J. An Improved Blind Kriging Surrogate Model for Design Optimization Problems. Mathematics 2022, 10, 2906. https://doi.org/10.3390/math10162906

