Article

On Selecting Composite Functions Based on Polynomials for Responses Describing Extreme Magnitudes of Structures

by Bartłomiej Pokusiński and Marcin Kamiński *

Department of Structural Mechanics, Faculty of Civil Engineering, Architecture and Environmental Engineering, Łódź University of Technology, 90-924 Łódź, Poland

* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 10179; https://doi.org/10.3390/app112110179
Submission received: 30 August 2021 / Revised: 26 October 2021 / Accepted: 27 October 2021 / Published: 30 October 2021
(This article belongs to the Special Issue Probabilistic Methods in Design of Engineering Structures)

Featured Application

This work may be used for selection of a structural response in, for example, reliability analysis.

Abstract
The main aim of this work was to investigate the numerical error in determining limit state functions that describe the extreme magnitudes of steel structures with respect to random variables. The study was assisted by the global version of the response function method (RFM). Various approximations of trial points generated from several hundred selected reference composite functions based on polynomials were analyzed. The final goal was to find a criterion—computed between the approximation and the input data—for selecting the response function that leads to relative a posteriori errors of less than 1%. Unlike the classical curve fitting problem, the accuracy of the final values of the probabilistic moments was verified here, as these moments can be used in further reliability calculations. The use of the criterion and the associated way of selecting the response function is demonstrated on the example of steel diagrid grillages and showed high accuracy in comparison with extended FEM tests.

1. Introduction

Diagrid structural systems have emerged as some of the most efficient, adaptable and innovative approaches to building structures of this century [1,2]. Due to their esthetic potential and increasing architectural popularity, it therefore seems essential to examine their realizations and to compare the results with those obtained for traditional orthogonal structures. On the other hand, a variety of natural sources of uncertainty and human activity-driven factors necessitate a series of analyses and tests to determine the impact of random parameters on the reliability of such structures.
A solution of a structural problem including randomness can be accompanied by a verification of reliability indices calculated, for example, according to Cornell [3]—or using more sophisticated indicators [4,5,6,7,8]—by means of a simple limit state function g defined as the difference between a structural response fa(b) and its given threshold fmax (Figure 1). One may then apply polynomial chaos [9], Monte-Carlo simulation [10], approximated principal deformation modes [11] and/or the probability transformation method [12], for instance, to compute the basic probabilistic moments of the limit state functions. Independently of the method chosen, the determination of the structural response—with respect to some input parameter subjected to uncertain fluctuations—is required. Since the exact dependence is usually unavailable, only discrete results burdened with some numerical error can be obtained, and obtaining them takes additional computation time; in practical cases, this makes the determination of structural responses one of the crucial issues in the reliability assessment of structures and the main subject of this work.
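As a simple numerical illustration of this setting (not part of the original study), the following Python sketch computes the Cornell index from the first two probabilistic moments of the limit state function g(b) = fmax − fa(b); the response function and threshold used here are hypothetical.

```python
import numpy as np

def cornell_index(g_samples):
    """Cornell reliability index [3]: expectation of the limit state function over its standard deviation."""
    return np.mean(g_samples) / np.std(g_samples, ddof=1)

# hypothetical threshold and response function of a single random parameter b
f_max = 25.0                                               # e.g. admissible deflection in mm
f_a = lambda b: 18.0 / b                                   # hypothetical response function fa(b)
b = np.random.default_rng(1).normal(1.0, 0.10, 100_000)    # parameter sampled around its expectation
g = f_max - f_a(b)                                         # limit state function g = fmax - fa(b)
print(cornell_index(g))
```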
A solution to the above difficulties is the most accurate possible approximation of the real dependence with an analytical metamodel [13,14,15,16] based on a properly selected set of numerical experiments [17,18]. This approach is widely used in many technical fields [19,20]. Many different established metamodeling techniques may be applied: sparse polynomial chaos expansions [21,22], artificial neural networks [23], multivariate adaptive regression splines [24], radial basis function approximations [25], support vector regression [26] and Kriging models [27], for instance. The choice is a compromise between accuracy and computational efficiency.
To increase the former while maintaining the latter, the determination of structural responses was assisted here by the global version of the response function method (RFM) [28,29,30], which is quite similar to the response surface methods (RSM) frequently applied in reliability analysis [31,32,33,34,35,36]. To complete this task, such a function has to be determined from multiple solutions of the given boundary or initial value problem in the vicinity of the expectation of the random parameter. The discrete values, traditionally obtained from a series of finite element method experiments [37,38,39,40,41], are in the first phase of this study generated instead from several hundred selected polynomial-based functions. These functions have predefined analytical forms and serve as reference responses against which various approximations of the same structural behavior—and the numerical error of such modeling—can be examined. Moreover, generating the discrete data in this way yields them in a very short time, with known accuracy and controlled mathematical complexity.
Unlike the classical problem of curve fitting, the influence of the approximation choice is verified here on the final values of the probabilistic moments, as these can be used in further reliability calculations (Figure 1). Attempts were also made to find a dependence—as marked as possible—of the above error on some criterion value computed between the input discrete data and the proposed approximation and based on curve fitting error measures. The final goal was to find a limit value of such a criterion above which satisfactory correctness is achieved, so that it can be used for selecting a structural response. This performance metric of the approximation should also be as independent as possible of the adopted weighting scheme and, at the same time, counteract the phenomenon of overfitting.
This problem was partially examined in [28,42], but the criteria proposed there were based on results obtained with a small number of approximating curves—30 and 68 for each reference function, respectively—and only three dependences as reference response functions (additionally simplified to a linear function). Therefore, the results of the search for a response function selection criterion based on 480 reference functions and 480 approximations of each of them are presented here. In [42], a generalized stochastic perturbation method was used, whereas now the problem is discussed in the context of the direct symbolic integration approach, which should theoretically be more accurate provided that the probabilistic moments can be recovered by integration. The integration was carried out in the computer algebra system MAPLE [43], where the coefficients of the responses were also computed from several solutions of the original problem obtained for the random variables varying about their expectations.
Polynomial responses are commonly applied [32,36,44,45,46,47,48,49], and this idea follows some other applications in computational practice [50,51]. However, such approximations do not always provide satisfactory correctness [28,52]. Moreover, their application to describe extreme magnitudes of structures, for example, seems unfortunate because polynomial limits at infinity are always infinite, whereas at one of the infinities such responses should instead tend to 0, since their values are only positive. Therefore, composite functions based on polynomials computed using the least squares method (LSM) were analyzed in this article. They are more flexible—due to the wider range of available functions—and, as compositions of elementary functions are analytic, they can be further expanded into a Taylor series, for instance in the perturbation technique. In turn, the different polynomial bases should help in capturing the nonlinearity between the structural responses and the basic random variables.
In the last part of this work, the use of the criterion and the associated way of selecting the response function is presented on the example of eight steel diagrid grillages. The performance functions concerning the basic eigenfrequency, the extreme reduced stresses, the maximum of the global vertical displacements as well as the local deflections were considered, with the multipliers of (1) Young's modulus e, (2) wall thickness t and (3) length of the manufactured elements l as the truncated Gaussian random variables. Additionally, the correctness of the response function selection was verified using extended FEM tests.

2. Theoretical Background

2.1. Direct Symbolic Integration Approach

In the case of some real function f(b) of the stationary input random variable b having probability density function p(b), classical definitions of the basic probabilistic moments and coefficients were introduced [53,54,55,56], for example:
  • the expectation:
$E[f(b)] = \int_{-\infty}^{+\infty} f(b)\, p(b)\, \mathrm{d}b$  (1)
  • the variance:
$\mathrm{Var}[f(b)] = \int_{-\infty}^{+\infty} \left(f(b) - E[f(b)]\right)^{2} p(b)\, \mathrm{d}b$  (2)
  • the mth central probabilistic moment (for m > 2):
$\mu_{m}[f(b)] = \int_{-\infty}^{+\infty} \left(f(b) - E[f(b)]\right)^{m} p(b)\, \mathrm{d}b$  (3)
The Gaussian probability density function is considered further, so that:
$p(b) = \frac{1}{\sigma(b)\sqrt{2\pi}} \exp\!\left(-\frac{\left(b - E[b]\right)^{2}}{2\sigma^{2}(b)}\right)$  (4)
where σ(b) ≡ σ stands for the standard deviation of b substituted further by a product of the expected value E[b] = b0 and the input coefficient of variation α(b) ≡ α.
The integral definitions proposed in Equations (1)–(3) are very rarely used with infinite limits. Random variables are usually analyzed as truncated having some lower and upper bounds of the probability density function driven by the physical meaning of the specific parameter or just the experimental works. Therefore, the integration process is limited here using the well-known three-sigma rule recognized as very efficient in various computational experiments [29,57]. Once the response function represents a reference exact dependence f(b), we obtain a solution of the probabilistic analytical method (AM), but when it is only an approximation fa(b)—of the semi-analytical method (SAM) [30,58].
Due to a large number of symbolic calculations in Section 3, in order to reduce their computation time, the semi-analytical method was approximated by determining the integrals using a numerical quadrature technique—the midpoint Riemann sum method [59]. The number of its subintervals was selected experimentally (see Section 3.4).
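A minimal Python sketch of this computation is shown below. It assumes a hypothetical response function fa(b), renormalizes the Gaussian density over the three-sigma interval (an assumption of this sketch rather than a detail stated in the text), and uses the midpoint Riemann sum with the 5000 subintervals adopted in Section 3.4.

```python
import numpy as np
from math import erf

def truncated_gauss_pdf(b, b0, sigma):
    """Gaussian density of Equation (4), renormalized over the three-sigma interval (sketch assumption)."""
    pdf = np.exp(-(b - b0) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return pdf / erf(3.0 / np.sqrt(2.0))   # mass of the full Gaussian inside +/- 3 sigma (~0.9973)

def probabilistic_moments(f_a, b0, alpha, n_sub=5000):
    """m1 (expectation) and the 2nd-4th central moments of f_a(b), Equations (1)-(3),
    integrated over [b0 - 3s, b0 + 3s] with the midpoint Riemann sum (n_sub subintervals)."""
    sigma = alpha * b0
    edges = np.linspace(b0 - 3.0 * sigma, b0 + 3.0 * sigma, n_sub + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    db = edges[1] - edges[0]
    weights = truncated_gauss_pdf(mid, b0, sigma) * db
    fb = f_a(mid)
    m1 = float(np.sum(fb * weights))
    m2, m3, m4 = (float(np.sum((fb - m1) ** m * weights)) for m in (2, 3, 4))
    return m1, m2, m3, m4

# hypothetical response function with expectation b0 = 1.0 and coefficient of variation alpha = 0.10
print(probabilistic_moments(lambda b: 10.0 / b, 1.0, 0.10))
```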
Probabilistic characteristics calculated on the basis of the formulas provided in this subsection can only be discrete. Their values were determined each time for the input coefficients of variation αi ∈ {0.0125, 0.025, …, αmax} with an increment of Δα = 1/80, where for αmax = 0.30, there were 24 calculation points.
In this work, the results of the probabilistic analytical method were taken as exact values. The approximate solutions of the semi-analytical method were compared to them in order to test the criteria for selecting the response function.

2.2. Response Function Method

The response function method consists in approximating the real dependence of the structural response f(b) with an analytical metamodel—the response function fa(b). To complete this task, it is necessary to determine such a function using multiple solutions of the investigated boundary value problem around the mean value of the random parameter, for bi (i = 1, …, N) belonging to the interval [b0 − Δb, b0 + Δb]. As a result, we obtain a set of discrete values fo(bi) burdened with the error of the numerical procedure, e.g., the finite element method.
There are several ways to choose the Δb/b0 ratio [57,60] as well as this interval’s discretization both in terms of uniformity and the number of trial points [29,61]. Here, uniform interval subdivision with n = 11 trial points and the ratio Δb/b0 = 0.05 was applied (Figure 2) as the most frequently used. This choice was confirmed using the numerical experiments included in [28,58].
Each unknown response function was approximated here using composite functions based on polynomials (Table 1) computed using the classic least squares method [62] as well as its weighted version (WLSM), which modulates the importance of the computational analysis results [63]. In the latter method, the value computed for the expectation of the random variable was recognized as crucial; its weight (10) was assumed to equal the sum of the remaining ten weights of the equivalent results (each equal to 1) [57,64,65]—the Dirac-type distribution of the weights. However, in connection with the conclusions contained in [28], the uniform weight scheme was adopted as the basic one, and the weighted one is only auxiliary.
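The following Python sketch illustrates this setup with a hypothetical discrete response series: it generates the 11 uniformly spaced trial points with Δb/b0 = 0.05 and fits a base polynomial by the LSM and by the WLSM with the Dirac-type weights. Note that numpy.polyfit applies its weights to the residuals, so the square roots of the intended weights are passed.

```python
import numpy as np

def trial_points(b0, ratio=0.05, n=11):
    """Uniform subdivision of [b0 - db, b0 + db] with db/b0 = 0.05 and n = 11 trial points."""
    db = ratio * b0
    return np.linspace(b0 - db, b0 + db, n)

def fit_response(b_i, f_i, degree, dirac_weights=False):
    """Least squares polynomial fit; optionally the Dirac-type weighting (10 at E[b], 1 elsewhere)."""
    w = np.ones_like(b_i)
    if dirac_weights:
        w[len(b_i) // 2] = 10.0
    # np.polyfit multiplies the residuals by w, so sqrt-weights reproduce the WLSM objective
    return np.polyfit(b_i, f_i, degree, w=np.sqrt(w))

b_i = trial_points(1.0)
f_i = 10.0 / b_i                      # hypothetical discrete responses, e.g. from a FEM series
coeffs_lsm = fit_response(b_i, f_i, degree=2)
coeffs_wlsm = fit_response(b_i, f_i, degree=2, dirac_weights=True)
```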
When considering many approximations fa(b), the selection of the final response function in the initial stage of RFM development was made arbitrarily [30,66], whereas now it is the subject of additional optimization [57,65,67,68] related to curve fitting error measures (see Section 2.3): correlation maximization, minimization of the root-mean-square error (RMSE) or of the residual variance. In addition to, or interchangeably with, the criteria calculated between the discrete data and the approximation, analogous magnitudes calculated between the LSM and the WLSM solutions are used [28,42]—a change of the weights in the calculations does not affect how the structure behaves in reality. If the selected criteria indicate different approximations, it is assumed that the one with the smaller number of coefficients is chosen [64,67], which reduces the risk of the Runge effect [69]—a deterioration of the quality of polynomial interpolation, especially visible at the ends of the interval, despite increasing the degree—and is consistent with the principle of using the simplest possible theory, known as Occam's razor. The latter is widely used, among others, to choose between statistical models with different numbers of parameters: in the AIC [70], BIC [71], SABIC [72] or RMSEA criteria [73].

2.3. Curve Fitting Error Measures

The first curve fitting error measure is the linear correlation coefficient. Let fo(bi) and fa(bi) denote the obtained discrete data and the values of the considered approximations, respectively, for the given bi (i = 1, …, N). Then, the following formula applies:
$\rho(f_o, f_a) = \dfrac{\sum_{i=1}^{N} \left(f_o(b_i) - \overline{f_o}\right)\left(f_a(b_i) - \overline{f_a}\right)}{\sqrt{\sum_{i=1}^{N} \left(f_o(b_i) - \overline{f_o}\right)^{2}}\ \sqrt{\sum_{i=1}^{N} \left(f_a(b_i) - \overline{f_a}\right)^{2}}}$  (5)
where the general formula for the sample mean has the following form:
$\overline{f_o} = \dfrac{1}{N} \sum_{i=1}^{N} f_o(b_i)$  (6)
The closer the value of the coefficient ρ is to 1, the more the considered data series are correlated and the approximation courses are closer to the trial points. Due to the above and the narrow range of values (−1 ≤ ρ ≤ 1), greater visual differences are obtained when considering the value −log10 (1 − |ρ|).
Another measure of approximation fitting is the root-mean-square error:
$RMSE = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} r_i^{2}} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left(f_o(b_i) - f_a(b_i)\right)^{2}}$  (7)
where the magnitude ri is called the residuum, error or remainder. The next measure, fitting variance, is also based on this value:
$\mathrm{Var}(r_i) = \dfrac{1}{N-1} \sum_{i=1}^{N} \left(r_i - \overline{r}\right)^{2}$  (8)
The coefficient of determination was also considered with the given formula:
$R^{2} = \dfrac{\sum_{i=1}^{N} \left(f_a(b_i) - \overline{f_o}\right)^{2}}{\sum_{i=1}^{N} \left(f_o(b_i) - \overline{f_o}\right)^{2}}$  (9)
while due to its range of values (0 ≤ R2 ≤ 1), greater visual differences were obtained considering the quantity −log10 | R2 − 1|.
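For completeness, a short Python sketch computing the error measures of Equations (5)–(9) for a pair of discrete data and approximation value series is given below (the input arrays are assumed to be of equal length).

```python
import numpy as np

def fit_error_measures(f_o, f_a):
    """Curve fitting error measures of Equations (5)-(9) for discrete data f_o and approximation f_a."""
    f_o, f_a = np.asarray(f_o, float), np.asarray(f_a, float)
    r = f_o - f_a                                     # residuals
    rho = np.corrcoef(f_o, f_a)[0, 1]                 # linear correlation coefficient, Equation (5)
    rmse = np.sqrt(np.mean(r ** 2))                   # root-mean-square error, Equation (7)
    var_r = np.var(r, ddof=1)                         # fitting variance, Equation (8)
    r2 = np.sum((f_a - f_o.mean()) ** 2) / np.sum((f_o - f_o.mean()) ** 2)   # Equation (9)
    return {"rho": rho,
            "-log10(1-|rho|)": -np.log10(max(1.0 - abs(rho), 1e-16)),
            "RMSE": rmse, "Var(r)": var_r, "R2": r2}
```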

3. Criteria of Response Function Selection

3.1. Assumptions and Selection of Reference Functions for Numerical Examples

It was decided to consider here both the reference dependences and approximations having the following polynomial form:
$Y = w(X) = \sum_{i=0}^{n} D_i X^{i}$  (10)
where it was assumed that the quantities used above are functions of single variables, e.g., in order to use the LSM method:
$X = \varphi(x, y) \equiv \varphi(x)$  (11)
and
$Y = \psi(x, y) \equiv \psi(y)$  (12)
This leads to the following dependence:
$y = \psi^{-1}\!\left(w\left(\varphi(x)\right)\right)$  (13)
where the function w is hereinafter called the base polynomial and the argument is a random variable x with the truncated Gaussian probability density distribution, which is characterized by the input coefficient of variation α(x)∈[0.0, 0.30] to explore a fairly wide three-sigma range (0.1E[x], 1.9E[x]).
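As an illustration of Equations (10)–(13), the Python sketch below fits one such composite dependence by transforming the discrete data with φ and ψ and applying the ordinary LSM to the base polynomial. The pair φ(x) = ln x, ψ(y) = ln y is chosen here only as an example; the full set of 48 groups is defined in Table 1.

```python
import numpy as np

def fit_composite(x_i, y_i, phi, psi, psi_inv, degree):
    """Fit y = psi^{-1}(w(phi(x))) by ordinary LSM on the transformed data X = phi(x), Y = psi(y)."""
    X, Y = phi(np.asarray(x_i, float)), psi(np.asarray(y_i, float))
    coeffs = np.polyfit(X, Y, degree)        # base polynomial w(X), Equation (10)
    w = np.poly1d(coeffs)
    return lambda x: psi_inv(w(phi(x)))      # composed response function, Equation (13)

# illustrative group: phi(x) = ln x, psi(y) = ln y, i.e. y = exp(w(ln x))
x_i = np.linspace(0.95, 1.05, 11)
y_i = 10.0 * x_i ** -1.3                     # hypothetical discrete responses
f_a = fit_composite(x_i, y_i, np.log, np.log, np.exp, degree=2)
```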
Using selected elementary functions—as mathematically simple as possible and typical of empirical formulas—in Equations (11) and (12) led to 48 groups of dependences based on polynomials. Their list is included in Table 1.
Ultimately, they are to constitute performance functions describing extreme magnitudes of structures, e.g., deflections, displacements or stresses, with respect to a random parameter. This means that they only have to take positive values, which is the first limitation when it comes to choosing the reference base polynomials for numerical examples:
$\forall\, x \in [x_{\min}, x_{\max}]:\quad y > 0$  (14)
where
$x_{\min} = \left(1 - 3\alpha_{\max}(x)\right) E[x]$  (15)
and
$x_{\max} = \left(1 + 3\alpha_{\max}(x)\right) E[x]$  (16)
according to the three-sigma rule.
It should be emphasized that the sought reference base polynomials are assumed to be used for each dependence from Table 1, therefore, taking into account all forms of the functions, Inequality (14) is reduced to the following:
$\forall\, x \in [x_{\min}, x_{\max}]:\quad w\!\left(\varphi(x)\right) > 1$  (17)
which, due to the properties of the considered functions φ(x), is additionally simplified in the case of x > 0:
$\forall\, X \in [X_{\min}, X_{\max}]:\quad w(X) > 1$  (18)
where
$X_{\min} = \min\!\left(x_{\min},\ 1/x_{\max},\ \sqrt{x_{\min}},\ 1/\sqrt{x_{\max}},\ \ln x_{\min},\ \exp x_{\min}\right)$  (19)
and
$X_{\max} = \max\!\left(x_{\max},\ 1/x_{\min},\ \sqrt{x_{\max}},\ 1/\sqrt{x_{\min}},\ \ln x_{\max},\ \exp x_{\max}\right)$  (20)
Finding exemplary coefficients for which Inequality (18) is satisfied was facilitated by narrowing down the interval (Xmin, Xmax) in which the base polynomial is considered. Its smallest width occurs at E[x] ≈ 1.14, while in further calculations E[x] = x0 = 1.0 was assumed. With such a value, the variable x is considered in the interval (0.1, 1.9), which satisfies the condition x > 0 required for the simplification to Inequality (18).
In order to also be able to easily assess the accuracy of the approximation visually, all coefficient values of the sought base polynomials were assumed to have the form |Di| = 10^k, where k is an integer and i ∈ {0, …, n}. Moreover, their values had to be such that, after substitution into all the groups of functions, the generated discrete data could still be processed by the computer algebra system without exceeding its numerical limits, which was not easy to achieve, especially for the dependence y30.
It was decided to analyze the base reference polynomials up to the 6th degree inclusive in order to obtain an appropriate level of their “complexity” and, at the same time, leave the possibility of approximation using functions based on base polynomials of much higher orders—up to the 10th degree inclusive. Additionally, polynomials in which the influence of the components derived from the higher powers of the variable would clearly diminish—polynomials only nominally of a given degree—were avoided. It was assumed that the terms should be of similar magnitude, which can be schematically written as follows:
$\left|D_{j+1}\, \varphi(x)^{j+1}\right| \approx \left|D_{j}\, \varphi(x)^{j}\right|$  (21)
where j ∈ {1, …, n − 1}. A satisfactory approximation of this condition is obtained—for E[x] = 1.0—when |Dj+1| ≈ |Dj|, which, together with the previous assumptions, leads to the following:
$\left|D_{1}\right| = \cdots = \left|D_{n}\right| = 10^{k}$  (22)
The value of the coefficient D0 was obtained by considering the function φ(x) = ln(x) at the point x = E[x] ∈ [xmin, xmax]. After substituting these data into Inequality (17), it was observed that D0 > 1; therefore, given the earlier assumption, the smallest possible coefficient was adopted: D0 = 10.
Various values of the coefficient k occurring in Equation (22) and all the possible sign configurations of the D1, …, Dn coefficients were analyzed. For the numerical examples, four polynomials were finally selected for each degree—only two for the first degree, because more were not possible—for which Inequality (18) was satisfied and the maximum value of the function y = ψ−1(w(X)) in the interval (Xmin, Xmax) was the smallest. For the functions w whose degree does not exceed 4, it was also possible—for half of the dependences—to select polynomials for the two smallest distinct values of the coefficient k. The list of the selected reference base polynomials is given in Table 2.

3.2. Step One—Accuracy of the Coefficients

In the first part of the experiment, the influence of the discrete data accuracy on the values of the coefficients D*i of the approximation polynomials w* of order POa was analyzed. For this purpose, cases were considered in which the discrete data were generated on the basis of a given group of functions and a reference base polynomial w and then approximated by a function of the same type. By rounding the values of the trial points to a certain number of significant digits, different a priori errors were assumed, which made it possible to analyze their impact on the a posteriori error, taken as the maximum of the relative error values (more precisely, of their absolute values—this detail is omitted in the remainder of this work to shorten the descriptions). It was calculated for the individual coefficients of the approximation polynomial in relation to the reference polynomial. This part of the experiment is shown schematically in Figure 3.
All the groups of functions from Table 1 as well as all the base reference polynomials from Table 2 were analyzed. The results for the polynomial dependences (the first group of functions) are shown in Figure 4a,b (dashed lines), while for all the groups of functions—in Figure 4b (solid lines). In order to increase the readability of the diagrams, only the maximum values of the relative errors from all the obtained ones for a given polynomial degree are presented, denoted as maxΔrel,c:
$\max\Delta_{\mathrm{rel},c} = \max\limits_{\substack{poly = 1, 2, \ldots \\ y_i = y_1, \ldots}} \Delta_{\mathrm{rel},c}$  (23)
where
$\Delta_{\mathrm{rel},c} = \max\limits_{j = 0, \ldots, PO_b} \left| \dfrac{D_j - D_j^{*}}{D_j} \right|$  (24)
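A compact Python sketch of this first step is given below: it generates discrete data from a reference base polynomial, rounds them to a given number of significant digits and evaluates Δrel,c of Equation (24); the reference coefficients are hypothetical.

```python
import numpy as np

def round_sig(values, digits):
    """Round each value to the given number of significant digits (the assumed a priori error)."""
    values = np.asarray(values, float)
    exp = np.floor(np.log10(np.abs(values)))
    return np.round(values / 10.0 ** exp, digits - 1) * 10.0 ** exp

def coeff_relative_error(ref_coeffs, x_i, digits, degree):
    """Delta_rel,c of Equation (24): maximum relative error of the refitted coefficients."""
    y_exact = np.polyval(ref_coeffs, x_i)       # discrete data generated from the reference polynomial
    y_rounded = round_sig(y_exact, digits)      # rounding introduces the a priori error
    fitted = np.polyfit(x_i, y_rounded, degree)
    return np.max(np.abs((np.asarray(ref_coeffs) - fitted) / np.asarray(ref_coeffs)))

x_i = np.linspace(0.95, 1.05, 11)
ref = [10.0, 10.0, 10.0]                        # hypothetical 2nd-degree reference polynomial, |D_i| = 10
print(coeff_relative_error(ref, x_i, digits=7, degree=2))
```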
In the case of the polynomial dependences (dashed lines in Figure 4), the relative error value was initially 1.0 regardless of the degree of the polynomial under consideration. This is due to the fact that the generated discrete data, when rounded to a very small number of significant digits, had identical values. The approximating function was then constant, the values of the coefficients at the nonzero powers of the variable were equal to 0, and so their relative error with respect to the corresponding nonzero coefficients of the reference polynomials was 100%. At a slightly larger number of significant digits of rounding, the individual generated discrete data started to differ from one another, but only in the last place of their decimal notation. An attempt to approximate such values—whose diagram is clearly a polygonal chain—with smooth polynomials caused maxΔrel,c to start exceeding 100%, except for the linear functions. Only when a sufficiently high accuracy of the trial point values was achieved—which guaranteed larger differences between them—did the error start to decrease.
When considering all the other groups of functions (solid lines in Figure 4b), a greater precision of discrete data was required to obtain the same a posteriori error than for the polynomial dependences (dashed lines). This is equivalent to the fact that with the established accuracy of discrete data, in the case of the polynomial dependences, a smaller relative error of the coefficients maxΔrel,c was obtained than for the other considered functions (see the example in Table 3). This is due to the fact that in the case of the first group of functions, it was possible to directly calculate the coefficients in Equation (10), while in the case of the others, an additional modification of the arguments or the set of values was required in accordance with Equations (11) and (12), correspondingly.
Regardless of the choice of the functions group, and with the exception of the initial perturbations, for reference polynomials of higher degrees a clearly greater precision of the discrete data was required to obtain a specific error level of the coefficients. It is worth noting that the rather moderate critical a posteriori error value of 5% was reached approximately for the second-order base polynomial and seven significant digits of discrete data rounding (Table 3, Figure 4). The standard accuracy of FEM results is much lower, so if they are used to obtain the structural response function, even for base polynomials more complex than linear ones, there is a risk of exceeding the above critical value of the relative error of the coefficients maxΔrel,c. It is also worth emphasizing that thus far it had been assumed that the discrete data were described by a specific group of functions based on a polynomial w with a known degree, while in most practical cases the form of the response function is unknown. Therefore, a response function with satisfactory accuracy of the coefficients can be obtained on the basis of FEM results only in a very limited number of simple cases in which the state function dependence is known to be linear—or can, in some cases, be reduced to a linear one. This situation is a serious problem that could prevent further effective application of the method in practice. However, when the RFM is used for reliability analysis, the accuracy connected with the probabilistic moments is more important than that of the function coefficients, because it is the moments that are used in further calculations (Figure 1).

3.3. Step Two—Accuracy of the Probabilistic Moments

The formula for the Cornell reliability index involves only the expected value and the variance. However, describing a distribution with these two parameters alone is not sufficient, because their values may be similar for extremely different data, as in the case of the Anscombe quartet [74] (Figure 5). Taking into account the additional values of the third and fourth central moments—and the skewness and kurtosis based on them—made it possible to indicate numerical differences between such data (Table 4). It should also be noted that reliability can ultimately be described by indicators whose formulas take into account more than the expected value and the variance [4,5,6,7,8]. This further confirms that the agreement of values should also extend to some additional parameters describing the distribution.
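The point is easy to reproduce numerically. In the Python sketch below, the two samples are illustrative (they are not the Anscombe data of Figure 5 and Table 4) and are constructed to share the same mean and variance while differing clearly in skewness.

```python
import numpy as np

def central_moments(y):
    """First raw moment, 2nd-4th central moments, and the skewness and kurtosis based on them."""
    y = np.asarray(y, float)
    m1 = y.mean()
    m2, m3, m4 = (np.mean((y - m1) ** k) for k in (2, 3, 4))
    return {"m1": m1, "m2": m2, "m3": m3, "m4": m4,
            "skewness": m3 / m2 ** 1.5, "kurtosis": m4 / m2 ** 2}

# two illustrative samples with identical mean (6.0) and variance (2.0) but different shapes
a = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
b = np.array([5.0, 5.0, 5.0, 6.0 + (3.0 - np.sqrt(5.0)) / 2.0, 6.0 + (3.0 + np.sqrt(5.0)) / 2.0])
print(central_moments(a)["skewness"], central_moments(b)["skewness"])
```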
In the further part of the experiment, four probabilistic moments were considered: the first raw moment—denoted m1 and equivalent to the expected value—is given in Equation (1), while the central moments from the second upwards—denoted m2 to m4—are described in Equation (3). Rounding of the discrete data to seven significant digits was adopted because it is sufficient for the groups of functions based on the linear base polynomials (maxΔrel,c below 1‰), not sufficient for the second-order base polynomials (maxΔrel,c even above 100%, Table 5) and definitely insufficient for the fifth-order base polynomials (maxΔrel,c even above 5 × 10⁸). Therefore, in the further part of the experiment, only the results obtained on the basis of base polynomials with the abovementioned degrees were considered, in order to verify whether—regardless of the achievable accuracy of the coefficient values, which also results from the number of significant digits—it is possible to obtain satisfactory accuracy of the probabilistic moments measured by their relative error Δrel,m. The following expression was employed, here on the example of the expected value:
$\Delta_{\mathrm{rel},m_1} = \left| \dfrac{E\!\left[y_i(w)\right] - E\!\left[y_j(w^{*})\right]}{E\!\left[y_i(w)\right]} \right|$  (25)
The initially obtained results confirmed that, for different approximations, the shapes of the diagrams of the probabilistic moments' relative error Δrel,m may also differ (Figure 6). The higher error value was not always obtained for the higher moment, and its course was not always monotonically increasing in the considered range. Therefore, it was decided to use the maximum of all the values in the interval (0.0, αmax*) as the measure of accuracy of a given approximation:
$\max\Delta_{\mathrm{rel},m} = \max\limits_{\substack{m = m_1, \ldots, m_4 \\ \alpha_i \in \{0.0125,\ 0.025,\ \ldots,\ \alpha^{*}_{\max}\}}} \Delta_{\mathrm{rel},m}$  (26)
The results obtained in the validation process confirmed the suitability of this measure (Figure 7). At the same time, they indicated a more general character than assumed: for some approximations, despite a significant relative error of the coefficients, the error concerning the probabilistic moments was clearly smaller. One of the greatest advantages of the proposed accuracy measure maxΔrel,m is the possibility of comparing any approximation to a given reference function even when they have different mathematical forms, which was not possible with the previous accuracy criterion Δrel,c.
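A Python sketch of the measure defined in Equations (25) and (26) is given below; it takes any routine returning the four moments (for example, the probabilistic_moments sketch from Section 2.1) and maximizes the relative error over the moments and over the grid of input coefficients of variation.

```python
import numpy as np

def max_rel_moment_error(f_ref, f_approx, moments, b0=1.0, alpha_max=0.30, d_alpha=1.0 / 80.0):
    """max Delta_rel,m of Equation (26): largest relative error of the moments m1-m4
    over the grid of input coefficients of variation {0.0125, 0.025, ..., alpha_max}."""
    alphas = np.arange(d_alpha, alpha_max + 1e-12, d_alpha)   # 24 points for alpha_max = 0.30
    worst = 0.0
    for alpha in alphas:
        ref = moments(f_ref, b0, alpha)         # reference (analytical-method-like) moments
        app = moments(f_approx, b0, alpha)      # approximation (semi-analytical) moments
        worst = max(worst, max(abs((r - a) / r) for r, a in zip(ref, app)))
    return worst

# usage with the probabilistic_moments sketch from Section 2.1 and a hypothetical pair of functions:
# err = max_rel_moment_error(lambda b: 10.0 / b, lambda b: 10.7 - 0.7 * b, probabilistic_moments)
```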

3.4. Step Three—No Information about the Form of the Function the Discrete Values Come From

Having the new approximation accuracy criterion finally made it possible to analyze cases where no assumptions are made about the function approximating the discrete data—both the function groups and the degree of the base polynomial. However, knowing the form of the function on the basis of which the trial points had been generated, it was possible to investigate the a posteriori error for individual approximations.
In this part of the experiment, 480 reference functions were analyzed—all the 48 groups of functions obtained on the basis of all the 10 reference polynomials—together with 480 approximations of each of them—the 48 groups of functions with base polynomials from the first to the tenth degree. Initially, this gave over 230,000 approximations; this number then dropped to fewer than 175,000 because the domains of some approximations did not coincide with the examined interval of x. This part of the experiment is shown schematically in Figure 8.
The first stage of the above scheme consists of determining, for each approximation, the relative error values for the four moments and the 24 discrete values of the input coefficient of variation (Figure 6). In total, this gives almost 17 million values of probabilistic characteristics to calculate, which would be very time-consuming in the case of symbolic integration. Therefore, when determining the values for the approximations, it was decided to perform approximate calculations using the midpoint Riemann sum method [59]. The diagrams in this part of the experiment were partially analyzed visually, and the most important part of the results was related to the 1% error, so the analysis focused on the range from 10⁻⁴ to 10⁰—two orders of magnitude on either side of this level. In order to determine how large a relative error of the integrals determined for the approximation was acceptable, a line diagram with a vertical logarithmic scale and different spreads Δappr was considered; it simulated the relative error of a certain probabilistic moment of an exemplary approximation with respect to the reference value (Δrel,m). Satisfactory accuracy was obtained with the value of Δappr = 10⁻⁵ (Figure 9). Due to the nature of the diagrams, the upper limit for considering the error Δrel,m could be increased to 10¹ without loss of accuracy. In order to be able to present all the results in the diagrams later in this section, the upper limit was used to represent values with a relative error greater than or equal to 10¹, and the lower limit—those with a relative error less than or equal to 10⁻⁴.
The initial tests were carried out by analyzing Δrel,a—the relative error of determination of the SAM integrals using the approximate method in relation to the analytical method:
$\Delta_{\mathrm{rel},a} = \left| \dfrac{\mu_{m,\mathrm{AM}} - \mu_{m,\mathrm{approxSAM}}}{\mu_{m,\mathrm{AM}}} \right|$  (27)
The tests were performed with a changing number of subintervals for different configurations of discrete data, approximating functions and values of the coefficient of variation α. Decreasing dependences were obtained each time, while in the most unfavorable case, obtaining the value of Δrel,a = 10⁻⁵ required dividing the integration interval into 5000 equally wide subintervals (Figure 10). This number was used when determining all the probabilistic moments of the approximations with the approximate method, i.e., in the first stage of the calculation scheme presented in Figure 8.
After the calculation of the curve fitting error measures for each approximation, according to Equations (5)–(9), the third stage was started. After a long search, the most marked dependence of the maximum value of the a posteriori error of probabilistic moments on the criterion value was noted for a modified root-mean-square error (Figure 11a) provided in the following formula:
$RMSE_{\mathrm{mod}} = -\log_{10}\!\left(RMSE^{3} \cdot RMSE_{w}^{1}\right) \left(1 - \dfrac{\log PO_{a}}{\log 49}\right)$  (28)
It is a weighted product in which RMSE is calculated between the LSM approximation and the input discrete data, while RMSEw is calculated between the LSM approximation and its weighted version (weights 1,1,1,1,1,10,1,1,1,1,1). This makes the result independent of the adopted weighting scheme and thus makes the solution more plausible; such behavior is expected because the weights adopted in the calculations have no influence on the actual behavior of the structure. The logarithmic scale was also used in order to increase the readability of the graph. Moreover, it was necessary to use a so-called penalty term, i.e., an expression that prefers approximations with the lowest possible degree POa of the base polynomial. This factor counteracts the phenomenon of overfitting, in which an approximation has too many parameters in relation to the sample size and passes practically through all the points while having a complicated shape between them. The values of the constant coefficients in Equation (28) were determined by trial and error, analyzing the results obtained for αmax*(x) = 0.30, because for this value the criteria for the maximum relative error of the coefficients and of the moments had earlier turned out to be the closest (Figure 7).
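A small Python sketch of this criterion follows. It implements Equation (28) as reconstructed above (the base-10 logarithm and the sign convention are inferred from the limit value of 36 and the degree-eight/nine behavior discussed below) and reproduces that behavior at the 10⁻²⁰ floor.

```python
import numpy as np

def rmse_mod(rmse, rmse_w, degree):
    """Sketch of Equation (28): weighted log-product of the two RMSE components with a degree penalty."""
    rmse = max(rmse, 1e-20)        # floor corresponding to the 20-digit floating-point setting in MAPLE
    rmse_w = max(rmse_w, 1e-20)
    penalty = 1.0 - np.log(degree) / np.log(49.0)        # prefers low-degree base polynomials
    return -(3.0 * np.log10(rmse) + np.log10(rmse_w)) * penalty

# with both components at the 1e-20 floor, only degrees up to 8 can exceed the limit value of 36
print(rmse_mod(1e-20, 1e-20, 8) > 36.0, rmse_mod(1e-20, 1e-20, 9) > 36.0)
```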
The RMSEmod criterion made it possible to obtain satisfactory accuracy of the probabilistic moments maxΔrel,m for discrete data generated by functions based on polynomials of various degrees POb, regardless of the achievable accuracy of the coefficient values Δrel,c (Table 5) resulting from the number of significant digits to which the trial points were rounded. The validity of the criterion was also confirmed by graphs showing the indicated approximations against the reference functions and the generated discrete data (Figure 12).
The rounded RMSEmod value above which—for αmax*(x) = 0.30—all the obtained a posteriori errors were below 1% is 36 (Figure 11a). It is worth noting that this number was almost independent of the width of the input coefficient of variation interval in which the accuracy of the approximation was tested (equivalent to αmax*); visually, it decreased only imperceptibly. With the decrease in this width, the relative error value maxΔrel,m decreased (Figure 11a–f). It should also be noted that the least squares method itself is based on minimizing a criterion similar to the RMSE. In the authors' opinion, the abovementioned observations confirm that the proposed criterion is correct.
Given the form of the penalty term, the limit value of the adopted criterion could be exceeded only by approximations based on base polynomials of at most the eighth degree. In the case of the ninth degree, even for the minimum values of the RMSE and RMSEw components (but not less than 10⁻²⁰, since the MAPLE system performed the floating-point calculations in the experiment with 20 digits), the final value of the RMSEmod coefficient was still lower than the limit value of 36. This limitation seems rational considering the number of trial points (n = 11) and the desire to avoid the overfitting phenomenon.
It is worth mentioning that none of the considered criteria determining the quality of the approximation fit—the linear correlation coefficient, the root-mean-square error, the fitting variance and the coefficient of determination—used either individually or in pairs led to a marked dependence of the maximum value of the a posteriori error on the criterion value. Adopting the approximation for which their extreme value is achieved also led, in most cases, to results of unsatisfactory accuracy—points with the ordinate above 10⁻² in Figure 13.
In the abovementioned experiment, a total of 174,469 approximations were analyzed, and 17,660 of them (10.1%) had error values of less than 1%. Only 1003 of them (7.91% of the previous group) had an RMSEmod value exceeding the established limit. This was due to the desire to ensure—apart from the required accuracy—that the result was independent of the adopted weighting scheme and to counteract the phenomenon of overfitting. An approximation meeting the criterion limit value was not found for all the sets of trial points in the experiment; often, none of the 480 approximations for a given series of discrete data met the error condition—points with an ordinate greater than 10⁻² in Figure 14a. This could be caused by (1) an insufficient number of analyzed functions, (2) an insufficient number of trial points or (3) too narrow an interval of their examination.
The first reason is supported by the results obtained when only polynomial approximation was considered: the best results (Figure 14b) were then significantly worse than without this restriction (Figure 14a), and in most cases the assumed threshold of 1% for the relative error maxΔrel,m was exceeded (Figure 14b). The second and third reasons are supported by the results contained in [28], where more accurate results were obtained when a larger number of discrete data over a wider range of the input variable was considered.
Bearing in mind the application of the established criterion in practice, it should therefore be emphasized that the adopted approximation base in Table 1 may turn out to be insufficient. In such a case, it is recommended to extend it until a satisfactory dependence is found. Increasing the number of trial points or the width of their consideration interval may then require a modification of the criterion, which necessitates additional experiments. Alternatively, it is also possible to select approximations from among those maximizing the value of RMSEmod, considering them over the full range of variability of the given random parameter and rejecting solutions that differ from the others. This approach is presented in the next section.

4. Observations through a Numerical Example of Steel Diagrid Grillages

4.1. Finite Element Analysis

In this experiment, eight grillages designed in accordance with the applicable standards for steel structures [75,76,77] were analyzed. They were computational models of a supporting steel structure for a rectangular glass floor (9.00 × 5.20 m). It belongs to the category of use C1 [76], including areas with tables where people may congregate, e.g., cafés or restaurants. It was additionally assumed that due to the visual (architect or investor) and design requirements (easier connections, as well as the internal forces transfer), the dimension of the cross-section in the normal direction must be constant within each model.
The following were assumed as random variables with the truncated Gaussian probability density distribution: the multiplier of (1) Young’s modulus e, (2) wall thickness t and (3) length of the manufactured elements l. They represent the variability related to the material, cross-sections and geometry, correspondingly. The value of their coefficients of variation was considered in the interval (0.0, 0.25). All the expected values were 1.0 since e, t and l are multipliers of deterministic quantities. The whole numerical example concerning steel diagrid grillages is shown schematically in Figure 15.
The FEM analysis was carried out with the use of the civil engineering system ROBOT. It was chosen because this system enables both obtaining discrete solutions of a given structural problem and shaping and dimensioning steel structures, so that the analysis is fully supported by one program.
Two types of grillages were considered: orthogonal (model OB and OS) and diagrid. The latter were additionally grouped into having a mesh of right triangles (models RB and RS) and equilateral triangles arranged in the transverse (models TB and TS) as well as longitudinal directions (models LB and LS) to represent different possible architectural concepts of floor triangularization. In each of these groups, there were versions with a smaller and bigger mesh size (Figure 16), with uniform division, so that the nodes of the real structure are as repeatable as possible.
All of the supports were defined as rigid with fixed movement and free rotation in each direction. Table 6 shows the properties of the models in detail. There, it can be seen how the reduction of the mesh size directly influences the increase in the number of nodes, supports, structural members and panels. However, a thinner layer of laminated glass glazing is required. In this case, it consists of two hardened load-bearing panels and an 8 mm thick anti-slip top layer—also hardened. The former were dimensioned each time according to the appropriate design standards [78,79,80].
All the loads from Eurocodes in the persistent design situation were considered for the design working life of 50 years—a common structure according to [75]. The first (marked with G) is connected with the weight of the glass covering and the supporting structure itself. The second (marked with Q) represents the imposed live loads of 3.0 kN/m2 [76]. The surface loads were implemented. They were distributed using the trapezoidal and triangular method.
The 3D frame two-noded finite elements with six degrees of freedom in each node and rigid connection were used in each model. Based on the incremental formulation of the FEM equations [39,81,82], the following approximation of the displacement increment vector Δue (ξ) was considered in the finite element:
$\Delta u^{e}(\xi) = N(\xi)\, \Delta q^{e}$  (29)
where N represents the shape functions matrix, Δqe denotes the increments of the displacements vector and ξ is the dimensionless coordinate of the section location.
The global deformation was computed in the serviceability limit state (SLS) combination (G + Q), and the stresses—in the ultimate limit state (ULS) combination (1.15 × 0.85G + 1.5Q) obtained in accordance with the formulas contained in [75]. The second-order global P-Delta analysis was used, taking into account the influence of deformation on the statics of the system—geometric nonlinearity—to predict the structural behavior more realistically [77]. The incremental method with a modified Newton–Raphson algorithm was applied to solve the nonlinear problem. Therefore, the following series of FEM matrix equilibrium equations [83] was solved for each test necessary for the response function method recovery:
$K(q_{n})\, \Delta q_{n+1}^{i} = \Delta Q_{n+1}^{i}$  (30)
where K denotes the stiffness matrix of the system, changing under the influence of deformation and resulting classically from an aggregation process over all the finite elements, q is the generalized displacements vector, Δ denotes the increment of a given quantity, Q is the external loads vector, n denotes the increment number and the superscript i indicates the iteration number. The stiffness matrix K was updated only after each load increment—not after each iteration—and corrected with the algorithm of the BFGS method. The following parameters were set to complete these experiments (a schematic sketch of this incremental loop is given after the list):
  • number of load increments: 5;
  • the maximum number of iterations per increment: 40;
  • number of reductions for the increment length: 3;
  • coefficient of the increment length reduction: 0.5;
  • the maximum number of BFGS corrections: 10;
  • tolerance factor of the relative norm for the residual forces: 0.0001;
  • tolerance factor for the relative norm of displacements: 0.0001.
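The following Python sketch illustrates, in a generic and much simplified form, the incremental modified Newton–Raphson loop described above (tangent stiffness assembled once per increment, residual forces checked against a relative tolerance). It is only a schematic illustration with a hypothetical one-degree-of-freedom example, not the ROBOT implementation; the increment-length reductions and BFGS corrections are omitted.

```python
import numpy as np

def solve_incremental(assemble_K, residual, q0, Q_total,
                      n_increments=5, max_iter=40, tol=1e-4):
    """Schematic modified Newton-Raphson: K is assembled once per load increment,
    and K(q_n) dq = dQ (Equation (30)) is iterated until the residual force norm converges."""
    q = np.array(q0, float)
    for n in range(1, n_increments + 1):
        dQ = Q_total / n_increments                 # equal load increments
        K = assemble_K(q)                           # tangent stiffness updated per increment only
        for _ in range(max_iter):
            dq = np.linalg.solve(K, dQ)             # K(q_n) * dq = dQ
            q += dq
            dQ = residual(q, n / n_increments)      # out-of-balance forces at the current load level
            if np.linalg.norm(dQ) <= tol * np.linalg.norm(Q_total):
                break
    return q

# 1-DOF example: nonlinear spring with internal force k*q + c*q**3 under the load P
k, c, P = 100.0, 50.0, np.array([20.0])
q = solve_incremental(lambda q: np.array([[k + 3 * c * q[0] ** 2]]),
                      lambda q, lam: lam * P - np.array([k * q[0] + c * q[0] ** 3]),
                      q0=[0.0], Q_total=P)
```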
The finite element method-based modal analysis was performed, too. Higher modes were also computed, but only the first mode was verified. The subspace iteration algorithm was used for solving the eigenvalue problem. The maximum number of iterations was set as 40 with a tolerance factor of 0.0001.
The yield strength was assumed as 235 MPa, and the structural members were made of rectangular hollow sections (Figure 16). This type of cross-section was chosen because of its insensitivity to lateral–torsional buckling and because its top walls create a flat surface to support the panels; hence, it is usually used in structures with glass covers [84]. This choice also made it possible to avoid considering warping in the FEM analysis as an additional (here, seventh) degree of freedom in the node, because its influence is negligible for the selected cross-sections.
In each model, the sizing process was similar and deterministic, carried out entirely according to Eurocode 3-1-1 [77]. Initially, an identical RHS section of a given dimension in the normal direction was adopted for all the bars. Next, a division into groups of bars was made, taking into account the distribution of internal forces in the ULS combinations and of deformations in the SLS combinations. For each group, a cross-section with the given dimension in the normal direction was then calculated. The process of creating the groups and sizing their cross-sections was iterative. Its constraints were to use no more than three profiles available in Poland per grillage model (for economic reasons) and to distribute them in the structure so that all the limit state conditions were utilized to no more than 90%. The above procedure was carried out for various constant dimensions in the normal direction—e.g., 250, 260, 300, 350, 400 mm—finally choosing the one for which the lowest mass was obtained. The final profiles are shown in Figure 16, and the masses—calculated on the basis of the lengths of the model bars, not of the real structural elements—in Table 6. The minimum steel weight was found for the grillages with the orthogonal mesh (models OB and OS), and the maximum—for the triangular mesh in the transverse direction (models TB and TS).
In all the models, the most demanding deterministic limit criterion was the SLS condition connected with the global deformation of structures. For the OB, OS, RB, RS, TB, TS, LB and LS models, it was met at 86.9%, 87.3%, 83.0%, 85.3%, 88.4%, 87.2%, 81.5%, and 86.9%, correspondingly. The maximum efficiency ratio of the structural members in the ULS was simultaneously recognizably lower: 49.1%, 43.2%, 57.4%, 43.2%, 41.8%, 49.2%, 48.1%, and 40.0%.
The final purpose of the finite element method experiments was to obtain discrete solutions of the given structural problems for all the grillages. Therefore, the resulting values of the fundamental frequency, the extreme reduced stresses (according to the Huber–Mises–Hencky hypothesis), the maximum values of the global vertical displacement and local deflection were determined.

4.2. Using the RMSEmod Criterion

For all the FEM data series, each of the 480 previously mentioned approximations was applied; however, those whose domains did not coincide with the examined interval of the random variables were rejected. In four out of eleven cases, the average of the RMSEmod criterion values over all the models exceeded the limit value of 36 (Table 7). In the remaining cases, several approximations with the maximum RMSEmod values were compared visually. In the range of the FEM tests, the difference was imperceptible (Figure 17a). However, when considering them over the full range of the assumed variability of the random parameter, the diagrams of some of them differed significantly from the common course shared by the majority (Figure 17b). Such cases occurred even for the approximation with the maximum mean of the RMSEmod values among those obtained for a given series of discrete data (Table 7). Therefore, all such approximations were omitted when selecting the response function in favor of the subsequent ones, sorted in descending order according to the criterion value. Thus, the diagram of the dependence selected in this way—in Table 7, the corresponding RMSEmod value is marked with an asterisk—coincided with those obtained for many unselected approximations having significantly different forms of formulas, which, in the authors' opinion, indicates the correctness of the solution.
The forms of some of the final response function formulas additionally confirmed the correctness of the response function selection according to the RMSEmod criterion. When considering the variability of Young's modulus, inverse proportionality was obtained for the deflections and a dependence on the square root of e for the basic eigenfrequency ω. This is justified by the well-known dependences of linear elastic analysis:
  • for elastic displacements (with the assumption of small deformations):
$u_{\mathrm{elastic}} = \dfrac{C}{e}$  (31)
  • and for the first eigenfrequency:
$\omega = C \sqrt{e}$  (32)
where C is a constant depending on the parameters of the structure and the considered performance function.
When the random parameter is t, both the state function ug and ω are based on a fourth-order polynomial, which occurs in the analytical description of the moment of inertia Jrect of rectangular hollow sections as a function of their wall thickness t0:
$J_{\mathrm{rect}} = \dfrac{(b_w + 2t_0)(h_w + 2t_0)^{3} - b_w h_w^{3}}{12} = \dfrac{4}{3} t_0^{4} + \left(2 h_w + \dfrac{2}{3} b_w\right) t_0^{3} + \left(b_w h_w + h_w^{2}\right) t_0^{2} + \left(\dfrac{1}{2} b_w h_w^{2} + \dfrac{1}{6} h_w^{3}\right) t_0$  (33)
where bw is the internal dimension of the shorter side and hw is the internal dimension of the longer side. It should be mentioned, however, that the example uses the assumption of constant internal dimensions and a simplification that ignores the corner fillets.
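The expansion in Equation (33) can be verified symbolically; the short sketch below (using the sympy library, assumed to be available) checks that the two forms agree under the constant-internal-dimension assumption.

```python
import sympy as sp

b_w, h_w, t0 = sp.symbols('b_w h_w t0', positive=True)

# defining formula: inertia of the outer rectangle minus the inner one
J_def = ((b_w + 2 * t0) * (h_w + 2 * t0) ** 3 - b_w * h_w ** 3) / 12

# expanded polynomial form in the wall thickness t0, as in Equation (33)
J_poly = (sp.Rational(4, 3) * t0 ** 4
          + (2 * h_w + sp.Rational(2, 3) * b_w) * t0 ** 3
          + (b_w * h_w + h_w ** 2) * t0 ** 2
          + (sp.Rational(1, 2) * b_w * h_w ** 2 + sp.Rational(1, 6) * h_w ** 3) * t0)

print(sp.simplify(J_def - J_poly) == 0)   # True: the two forms of Equation (33) agree
```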
The correctness of the response function selection was also confirmed by the course of their diagrams (Figure 18), which is compliant with the engineering intuition in this matter. With the increase in wall thickness multiplier t, the deflections and reduced stresses nonlinearly decrease, and the fundamental frequency goes up slightly. However, when the length of the elements increases, an even faster gain is observed for the state functions ug, ul and σred, but a decrease for ω. As far as the quantitative assessment is concerned, it should be emphasized that the abovementioned functions assume elasticity in the full range of variable uncertainties, so that the theoretical values of the reduced stresses sometimes exceed the adopted yield strength fy = 235 MPa (Figure 18d). This limitation should be taken into account as a limiting condition in the further reliability assessment, so the values in the part of the σred diagram above fy will not matter.
The correctness of the response function selection was confirmed by the extended FEM tests, too (Figure 18). They were performed in the set {0.25E[b], 0.5E[b], 0.75E[b], 1.25E[b], 1.5E[b], 1.75E[b]}. For the LS model, the maximum value of the relative error—with the FEM tests as exact values—was only 1.33‰, even though the initial tests range was 15 times narrower.

5. Concluding Remarks

A method of creating groups of composite response functions describing the extreme magnitudes of structures was applied. They are based on polynomials whose coefficients can be obtained using the LSM. The type of function and the number of significant digits to which the trial points were rounded had a clear impact on the a posteriori error of the approximation coefficients, which started to decrease only once a sufficiently high accuracy of the input discrete data was achieved. At the same time, a smaller relative error of the coefficients was obtained for the polynomial dependences than for the composite functions, and maintaining it for base polynomials of higher degrees required a greater number of significant digits of the discrete values. Finally, it was noticed that even when the form of the response function is known, with the standard accuracy of FEM results there is a risk of exceeding a 5% relative error of the coefficient values for dependences based on a polynomial more complex than a linear one.
An a posteriori correctness measure of the approximation was proposed: the maximum of the relative errors of four probabilistic moments obtained for the coefficients of variation in the interval (0.0, αmax*). Its most marked dependence on a criterion value—computed between the approximation and the input data—was noted for the modified root-mean-square error. This criterion makes the result independent of the adopted weighting scheme and at the same time counteracts the phenomenon of overfitting. Its rounded value above which satisfactory correctness was achieved equals 36. If this condition was not met, satisfactory results were obtained by selecting approximations from among those maximizing the value of RMSEmod, considering them over the full range of variability of the random parameter and rejecting solutions that differed from the others. This was confirmed by the forms of the simplified formulas, the extended FEM tests and the shapes of the diagrams of the response functions thus selected for the grillages.
The analyzed responses had a meaningful influence on the probabilistic moments. Therefore, if the form of the response function is unavailable, it is suggested to search for it according to the proposed criterion and the associated selection procedure. Adopting an approximation on the basis of the previously common criteria—e.g., fitting correlation maximization or fitting variance minimization—did not lead to results of satisfactory accuracy.
This study may be further extended to analysis of some random variables not exhibiting Gaussian probability density functions when precise determination of higher-order statistics is of paramount importance in computational mechanics [85].

Author Contributions

Conceptualization, B.P. and M.K.; methodology, B.P.; software, B.P.; validation, B.P.; formal analysis, B.P.; investigation, B.P.; resources, B.P.; data curation, B.P.; writing—original draft preparation, B.P.; writing—review and editing, M.K.; visualization, B.P.; supervision, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Meyer Boake, T. Understanding Steel Design; Birkhäuser: Basel, Switzerland, 2013.
2. Moon, K. Design and Construction of Steel Diagrid Structures. In Proceedings of the 11th Nordic Steel Construction Conference-NSCC 2009, Malmö, Sweden, 2–4 September 2009; Swedish Institute of Steel Construction SBI: Stockholm, Sweden, 2009; pp. 398–405.
3. Cornell, C.A. A probability-based structural code. Am. Concr. Inst. J. 1969, 66, 974–985.
4. Tichý, M. First-order third-moment reliability method. Struct. Saf. 1994, 16, 189–200.
5. Ono, T.; Idota, H. Development of High Order Moment Standardization Method into structural design and its efficiency. J. Struct. Constr. Eng. 1986, 365, 40–47. (In Japanese)
6. Zhao, Y.-G.; Lu, Z.-H. Fourth-Moment Standardization for Structural Reliability Assessment. J. Struct. Eng. 2007, 133, 916–924.
7. Zhang, L.W. An improved fourth-order moment reliability method for strongly skewed distributions. Struct. Multidiscip. Optim. 2020, 62, 1213–1225.
8. Lu, Z.-H.; Hu, D.-Z.; Zhao, Y.-G. Second-Order Fourth-Moment Method for Structural Reliability. J. Eng. Mech. 2017, 143, 06016010.
9. Ghanem, R.G.; Spanos, P.D. Stochastic Finite Elements: A Spectral Approach; Springer: Berlin/Heidelberg, Germany, 1991.
10. Hurtado, J.E.; Barbat, A.H. Monte Carlo techniques in computational stochastic mechanics. Arch. Comput. Methods Eng. 1998, 5, 3–30.
11. Settineri, D.; Falsone, G. An APDM-based method for the analysis of systems with uncertainties. Comput. Methods Appl. Mech. Eng. 2014, 278, 828–852.
12. Falsone, G.; Laudani, R. Matching the principal deformation mode method with the probability transformation method for the analysis of uncertain systems. Int. J. Numer. Methods Eng. 2019, 118, 395–410.
13. Jin, R.; Chen, W.; Simpson, T.W. Comparative studies of metamodelling techniques under multiple modelling criteria. Struct. Multidiscip. Optim. 2001, 23, 1–13.
14. Simpson, T.W.; Booker, A.J.; Ghosh, D.; Giunta, A.A.; Koch, P.N.; Yang, R.J. Approximation methods in multidisciplinary analysis and optimization: A panel discussion. Struct. Multidiscip. Optim. 2004, 27, 302–313.
15. Simpson, T.W.; Peplinski, J.D.; Koch, P.N.; Allen, J.K. Metamodels for computer-based engineering design: Survey and recommendations. Eng. Comput. 2001, 17, 129–150.
16. Santner, T.J.; Williams, B.J.; Notz, W.I. The Design and Analysis of Computer Experiments; Springer: New York, NY, USA, 2003.
17. Draper, N.R.; Smith, H. Applied Regression Analysis; Wiley: New York, NY, USA, 1998; ISBN 9781118625590.
18. Myers, R.H.; Montgomery, D.C. Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 2nd ed.; Wiley: New York, NY, USA, 2002.
19. Shukla, S.K. Metamodeling: What is it good for? IEEE Des. Test Comput. 2009, 26, 96.
20. Fang, K.-T.; Li, R.; Sudjianto, A. Design and Modeling for Computer Experiments; Chapman and Hall/CRC: New York, NY, USA, 2005.
21. Blatman, G.; Sudret, B. Efficient computation of global sensitivity indices using sparse polynomial chaos expansions. Reliab. Eng. Syst. Saf. 2010, 95, 1216–1229.
22. Blatman, G.; Sudret, B. Adaptive sparse polynomial chaos expansion based on least angle regression. J. Comput. Phys. 2011, 230, 2345–2367.
23. Smith, M. Neural Networks for Statistical Modeling; Van Nostrand Reinhold: New York, NY, USA, 1993.
24. Friedman, J.H. Multivariate Adaptive Regression Splines. Ann. Stat. 1991, 19, 1–67.
25. Chen, S.; Chng, E.S.; Alkadhimi, K. Regularized orthogonal least squares algorithm for constructing radial basis function networks. Int. J. Control 1996, 64, 829–837.
26. Dai, H.; Zhang, B.; Wang, W. A multiwavelet support vector regression method for efficient reliability assessment. Reliab. Eng. Syst. Saf. 2015, 136, 132–139.
27. Sacks, J.; Welch, W.J.; Mitchell, T.J.; Wynn, H.P. Design and analysis of computer experiments. Stat. Sci. 1989, 4, 409–423.
28. Pokusiński, B.M.; Kamiński, M.M. Various response functions in lattice domes reliability via analytical integration and finite element method. Int. J. Appl. Mech. Eng. 2018, 23, 445–469.
29. Kamiński, M.M.; Świta, P. Generalized stochastic finite element method in elastic stability problems. Comput. Struct. 2011, 89, 1241–1252.
30. Kamiński, M.M. On semi-analytical probabilistic finite element method for homogenization of the periodic fiber-reinforced composites. Int. J. Numer. Methods Eng. 2011, 86, 1144–1162.
31. Rajashekhar, M.R.; Ellingwood, B.R. A new look at the response surface approach for reliability analysis. Struct. Saf. 1993, 12, 205–220.
32. Guan, X.L.; Melchers, R.E. Effect of response surface parameter variation on structural reliability estimates. Struct. Saf. 2002, 23, 429–444.
33. Gavin, H.P.; Yau, S.C. High-order limit state functions in the response surface method for structural reliability analysis. Struct. Saf. 2008, 30, 162–179.
34. Allaix, D.L.; Carbone, V.I. An improvement of the response surface method. Struct. Saf. 2011, 33, 165–172.
35. Bucher, C. Metamodels of optimal quality for stochastic structural optimization. Probabilistic Eng. Mech. 2018, 54, 131–137.
36. Bucher, C.G.; Bourgund, U. A fast and efficient response surface approach for structural reliability problems. Struct. Saf. 1990, 7, 57–66.
37. Babuška, I.; Tempone, R.; Zouraris, G.E. Solving elliptic boundary value problems with uncertain coefficients by the finite element method: The stochastic formulation. Comput. Methods Appl. Mech. Eng. 2005, 194, 1251–1294.
38. Keese, A.; Matthies, H.G. Hierarchical parallelisation for the solution of stochastic finite element equations. Comput. Struct. 2005, 83, 1033–1047.
39. Kleiber, M.; Hien, T. The Stochastic Finite Element Method; Wiley: New York, NY, USA, 1992.
40. Liu, W.K.; Belytschko, T.; Mani, A. Random field finite elements. Int. J. Numer. Methods Eng. 1986, 23, 1831–1845.
41. Stefanou, G.; Savvas, D.; Papadrakakis, M. Stochastic finite element analysis of composite structures based on mesoscale random fields of material properties. Comput. Methods Appl. Mech. Eng. 2017, 326, 319–337.
42. Pokusiński, B.M.; Kamiński, M.M. On influence of the response functions on the diagrid and orthogonal grillages reliability by the stochastic iterative perturbation-based finite element method. AIP Conf. Proc. 2018, 1922, 150011.
43. Char, B.W.; Geddes, K.O.; Gonnet, G.H.; Leong, B.L.; Monagan, M.B.; Watt, S.M. First Leaves: A Tutorial Introduction to Maple V; Springer: Berlin/Heidelberg, Germany, 1992.
44. Forsberg, J.; Nilsson, L. On polynomial response surfaces and Kriging for use in structural optimization of crashworthiness. Struct. Multidiscip. Optim. 2005, 29, 232–243.
45. Xia, B.; Lü, H.; Yu, D.; Jiang, C. Reliability-based design optimization of structural systems under hybrid probabilistic and interval model. Comput. Struct. 2015, 160, 126–134.
46. Bucher, C.; Most, T. A comparison of approximate response functions in structural reliability analysis. Probabilistic Eng. Mech. 2008, 23, 154–163.
47. Xia, B.; Yu, D.; Liu, J. Transformed perturbation stochastic finite element method for static response analysis of stochastic structures. Finite Elem. Anal. Des. 2014, 79, 9–21.
48. Faravelli, L. Response-Surface Approach for Reliability Analysis. J. Eng. Mech. 1989, 115, 2763–2781.
49. Xia, B.; Yu, D. Change-of-variable interval stochastic perturbation method for hybrid uncertain structural-acoustic systems with random and interval variables. J. Fluids Struct. 2014, 50, 461–478.
50. Huang, B.; Li, Q.S.; Tuan, A.Y.; Zhu, H. Recursive approach for random response analysis using non-orthogonal polynomial expansion. Comput. Mech. 2009, 44, 309–320.
51. Rahman, S. A polynomial dimensional decomposition for stochastic computing. Int. J. Numer. Methods Eng. 2008, 76, 2091–2116.
52. Kianifar, M.R.; Campean, F. Performance evaluation of metamodelling methods for engineering problems: Towards a practitioner guide. Struct. Multidiscip. Optim. 2020, 61, 159–186.
53. Bendat, J.S.; Piersol, A.G. Random Data: Analysis and Measurement Procedures; Wiley: New York, NY, USA, 1971; ISBN 9781118032428.
54. Feller, W. An Introduction to Probability Theory and Its Applications; Wiley: New York, NY, USA, 1965.
55. Vanmarcke, E. Random Fields: Analysis and Synthesis; MIT Press: Cambridge, MA, USA, 1983; ISBN 9812563539.
56. Kottegoda, N.T.; Rosso, R. Applied Statistics for Civil and Environmental Engineers; Blackwell: Chichester, UK, 2008; ISBN 9781405179171.
57. Sokołowski, D.; Kamiński, M.M. Homogenization of carbon/polymer composites with anisotropic distribution of particles and stochastic interface defects. Acta Mech. 2018, 229, 3727–3765.
58. Kamiński, M.M. The Stochastic Perturbation Method for Computational Mechanics; Wiley: Chichester, UK, 2013; ISBN 9780470770825.
59. Hughes-Hallett, D.; Gleason, A.M.; McCallum, W.G. Calculus: Single and Multivariable, 7th ed.; Wiley: Hoboken, NJ, USA, 2019.
60. Kamiński, M.; Solecka, M. Optimization of the truss-type structures using the generalized perturbation-based Stochastic Finite Element Method. Finite Elem. Anal. Des. 2013, 63, 69–79.
61. Kamiński, M.M.; Szafran, J. The least squares stochastic finite element method in structural stability analysis of steel skeletal structures. Int. J. Appl. Mech. Eng. 2015, 20, 299–318.
62. Björck, Å. Numerical Methods for Least Squares Problems; SIAM: Philadelphia, PA, USA, 1996.
63. Wolberg, J. Data Analysis Using the Method of Least Squares: Extracting the Most Information from Experiments; Springer: Berlin, Germany, 2005; ISBN 3540256741.
64. Kamiński, M.M. On the dual iterative stochastic perturbation-based finite element method in solid mechanics with Gaussian uncertainties. Int. J. Numer. Methods Eng. 2015, 104, 1038–1060.
65. Kamiński, M.M.; Sokołowski, D. Dual probabilistic homogenization of the rubber-based composite with random carbon black particle reinforcement. Compos. Struct. 2016, 140, 783–797.
66. Kamiński, M.M.; Pokusiński, B.M. Reliability of some axisymmetric shell structure by the response function method and the generalized stochastic perturbation technique. In Proceedings of the Advances in Mechanics: Theoretical, Computational and Interdisciplinary Issues—3rd Polish Congress of Mechanics, PCM 2015 and 21st International Conference on Computer Methods in Mechanics, CMM 2015, Gdansk, Poland, 8–11 September 2015; pp. 279–282.
67. Kamiński, M.M.; Strąkowski, M. On sequentially coupled thermo-elastic stochastic finite element analysis of the steel skeletal towers exposed to fire. Eur. J. Mech. A/Solids 2017, 62, 80–93.
68. Rabenda, M.; Kamiński, M.M. Dual Probabilistic Analysis of the Transient Heat Transfer by the Stochastic Finite Element Method with Optimized Polynomial Basis. J. Civ. Eng. Environ. Archit. 2017, 64, 211–225.
69. Runge, C. Über Empirische Funktionen und die Interpolation Zwischen Äquidistanten Ordinaten. Z. Math. Phys. 1901, 46, 224–243.
70. Akaike, H. A New Look at the Statistical Model Identification. IEEE Trans. Automat. Contr. 1974, 19, 716–723.
71. Schwarz, G. Estimating the Dimension of a Model. Ann. Stat. 1978, 6, 461–464.
72. Sclove, S.L. Application of model-selection criteria to some problems in multivariate analysis. Psychometrika 1987, 52, 333–343.
73. Steiger, J.H.; Lind, J.C. Statistically based tests for the number of common factors. In Proceedings of the Annual Meeting of the Psychometric Society: Structural Equation Modeling, Iowa City, IA, USA, 28 May 1980.
74. Anscombe, F.J. Graphs in statistical analysis. Am. Stat. 1973, 27, 17–21.
75. European Committee for Standardization. EN 1990: Eurocode—Basis of Structural Design; European Committee for Standardization: Brussels, Belgium, 2002.
76. European Committee for Standardization. EN 1991-1-1: Eurocode 1: Actions on Structures—Part 1-1: General Actions—Densities, Self-Weight, Imposed Loads for Buildings; European Committee for Standardization: Brussels, Belgium, 2002.
77. European Committee for Standardization. EN 1993-1-1: Eurocode 3: Design of Steel Structures—Part 1-1: General Rules and Rules for Buildings; European Committee for Standardization: Brussels, Belgium, 2006.
78. European Committee for Standardization. EN 13474-2: Glass in Building—Design of Glass Panes—Part 2: Design for Uniformly Distributed Loads; European Committee for Standardization: Brussels, Belgium, 2000.
79. European Committee for Standardization. EN 13474-3: Glass in Building—Determination of the Strength of Glass Panes—Part 3: General Method of Calculation and Determination of Strength of Glass by Testing; European Committee for Standardization: Brussels, Belgium, 2008.
80. European Committee for Standardization. EN 16612: Glass in Building—Determination of the Load Resistance of Glass Panels by Calculation and Testing; European Committee for Standardization: Brussels, Belgium, 2013.
81. Oden, J.T. Finite Elements of Nonlinear Continua; McGraw-Hill: New York, NY, USA, 1972; ISBN 978-0-486-44973-9.
82. Owen, D.R.J.; Hinton, E. Finite Elements in Plasticity—Theory and Practice; Pineridge Press: Swansea, UK, 1980.
83. Zienkiewicz, O.C.; Taylor, R.L.; Fox, D.D. The Finite Element Method for Solid and Structural Mechanics, 7th ed.; Elsevier: Amsterdam, The Netherlands, 2014; ISBN 9781856176347.
84. Schittich, C.; Staib, G.; Balkow, D.; Schuler, M.; Sobek, W. Glass Construction Manual, 2nd ed.; Birkhäuser: Munich, Germany, 2007.
85. Falsone, G. An extension of the Kazakov relationship for non-Gaussian random variables and its use in the non-linear stochastic dynamics. Probabilistic Eng. Mech. 2005, 20, 45–56.
Figure 1. Exemplary structure reliability comparison scheme.
Figure 2. Graphical representation of the response function method for n = 11 and Δb/b0 = 0.05 (blue points—discrete data, red curve—approximation).
Figure 3. Scheme of the first part of the experiment.
Figure 4. The maximum relative error of the base polynomial coefficients maxΔrel,c for various POb and the number of significant digits of rounding D1: (a) considering only the polynomial dependences, (b) considering the polynomials and all the other groups of functions.
Figure 5. Graphical representation of the datasets from the Anscombe quartet: (a) Set I, (b) Set II, (c) Set III, (d) Set IV.
Figure 6. The relative error of the probabilistic moments Δrel,m for the input coefficient of variation on the example of two approximations: (a) approximation with the function y4 based on the polynomial of POa = 1 of the discrete data generated by the function y4 with POb = 1 and poly = 1, (b) approximation with the function y11 based on the polynomial of POa = 2 of the discrete data generated by the function y11 with POb = 2 and poly = 1.
Figure 7. Comparison of the approximation accuracy criteria for all the groups of functions, its various base POb and the number of significant digits of rounding D1: (a) the maximum relative error of the base polynomial coefficients Δrel,c, (b) the maximum relative error of the probabilistic moments maxΔrel,m for αmax*(x) = 0.30.
Figure 8. Scheme of the third part of the experiment.
Figure 9. The relative error of a certain probabilistic moment of a theoretical approximation depending on the spread Δappr and the input coefficient of variation: (a) Δappr = 10^−4, (b) Δappr = 10^−5.
Figure 10. The relative error in determining the integrals for an approximation using the approximate method depending on the number of subintervals (the most unfavorable case of the received ones).
Figure 11. The maximum relative error of the probabilistic moments maxΔrel,m for different degrees of the base reference POb as a function of the RMSEmod criterion: (a) αmax*(x) = 0.30, (b) αmax*(x) = 0.25, (c) αmax*(x) = 0.20, (d) αmax*(x) = 0.15, (e) αmax*(x) = 0.10, (f) αmax*(x) = 0.05.
Figure 12. Diagrams of approximations (maximizing RMSEmod) in relation to the reference functions and the discrete data generated: (a) A results, (b) B results, (c) C results.
Figure 13. The maximum relative error of the probabilistic moments of the approximations with the extreme value of the correlation coefficient for poly = 1, αmax*(x) = 0.30, different degrees of the base reference POb and the groups of functions used to generate discrete data.
Figure 14. The smallest of the obtained values maxΔrel,m for poly = 1, αmax*(x) = 0.30, different degrees of the base reference POb and the groups of functions used to generate discrete data: (a) considering all the approximations, (b) considering only the polynomial approximation.
Figure 15. Scheme of the numerical example concerning steel diagrid grillages.
Figure 16. Grillage structures under consideration: (a) model OB, (b) model OS, (c) model RB, (d) model RS, (e) model TB, (f) model TS, (g) model LB, (h) model LS.
Figure 17. Comparison of several approximations with the maximum mean of the RMSEmod values on the example of extreme global deflection data of the OB model with random wall thickness: (a) FEM tests range, (b) three-sigma range.
Figure 18. Diagrams of the finally adopted response functions and the extended FEM tests for the LS model: (a) the maximum of the global vertical displacements, (b) the maximum of the local deflections, (c) the basic eigenfrequency, (d) the extreme reduced stresses.
Table 1. Considered groups of functions.

| Y \ X | x | 1/x | √x | 1/√x | ln(x) | exp(x) |
| y | y1 = w(x) | y2 = w(1/x) | y3 = w(√x) | y4 = w(1/√x) | y5 = w(ln x) | y6 = w(exp x) |
| 1/y | y7 = 1/w(x) | y8 = 1/w(1/x) | y9 = 1/w(√x) | y10 = 1/w(1/√x) | y11 = 1/w(ln x) | y12 = 1/w(exp x) |
| y² | y13 = √w(x) | y14 = √w(1/x) | y15 = √w(√x) | y16 = √w(1/√x) | y17 = √w(ln x) | y18 = √w(exp x) |
| 1/y² | y19 = 1/√w(x) | y20 = 1/√w(1/x) | y21 = 1/√w(√x) | y22 = 1/√w(1/√x) | y23 = 1/√w(ln x) | y24 = 1/√w(exp x) |
| ln(y) | y25 = exp w(x) ¹ | y26 = exp w(1/x) | y27 = exp w(√x) | y28 = exp w(1/√x) | y29 = exp w(ln x) ² | y30 = exp w(exp x) |
| exp(y) | y31 = ln w(x) | y32 = ln w(1/x) | y33 = ln w(√x) | y34 = ln w(1/√x) | y35 = ln w(ln x) | y36 = ln w(exp x) |
| sinh(y) | y37 = arsinh w(x) | y38 = arsinh w(1/x) | y39 = arsinh w(√x) | y40 = arsinh w(1/√x) | y41 = arsinh w(ln x) | y42 = arsinh w(exp x) |
| arsinh(y) | y43 = sinh w(x) | y44 = sinh w(1/x) | y45 = sinh w(√x) | y46 = sinh w(1/√x) | y47 = sinh w(ln x) | y48 = sinh w(exp x) |

¹ Special cases: exponential functions when the base polynomial is of the first order as well as Gaussian functions when the base polynomial is of the second order. ² A special case: power functions when the base polynomial is of the first order.
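Each group in Table 1 is a composite of the base polynomial w with a transformation of the argument (columns) and of the response (rows), so a candidate can be fitted by ordinary least squares after transforming both axes. The sketch below illustrates this linearized fitting and a criterion-driven selection loop; it is only one convenient implementation, not necessarily the weighting scheme used in the paper, the exact definition of RMSEmod is not reproduced here (the criterion is passed in as a callable), and all names are illustrative.

```python
import numpy as np

# Argument transformations of Table 1 (columns).
X_TRANSFORMS = {"x": lambda x: x, "1/x": lambda x: 1.0 / x,
                "sqrt(x)": np.sqrt, "1/sqrt(x)": lambda x: 1.0 / np.sqrt(x),
                "ln(x)": np.log, "exp(x)": np.exp}
# Response transformations (a subset of the rows): forward transform applied to the
# data y before fitting, inverse applied to w(X) to recover y.
Y_TRANSFORMS = {"y": (lambda y: y, lambda w: w),
                "1/y": (lambda y: 1.0 / y, lambda w: 1.0 / w),
                "y^2": (lambda y: y ** 2, np.sqrt),
                "ln(y)": (np.log, np.exp),
                "arsinh(y)": (np.arcsinh, np.sinh)}

def fit_candidate(x, y, x_key, y_key, order):
    """Least-squares fit of one composite candidate: T(y) ≈ w(X(x)) with w a
    polynomial of the given order. Returns a callable response function."""
    X = X_TRANSFORMS[x_key](x)
    t_fwd, t_inv = Y_TRANSFORMS[y_key]
    coeffs = np.polyfit(X, t_fwd(y), order)  # coefficients of the base polynomial w
    return lambda xx: t_inv(np.polyval(coeffs, X_TRANSFORMS[x_key](xx)))

def select_response(x, y, orders, criterion):
    """Pick the candidate maximizing the supplied criterion (e.g., an RMSEmod-type
    score between approximation and input data); returns (function, score, label)."""
    best = (None, -np.inf, None)
    for x_key, fx in X_TRANSFORMS.items():
        for y_key, (t_fwd, _) in Y_TRANSFORMS.items():
            if not (np.all(np.isfinite(fx(x))) and np.all(np.isfinite(t_fwd(y)))):
                continue  # transformation not admissible for these data
            for order in orders:
                f = fit_candidate(x, y, x_key, y_key, order)
                score = criterion(y, f(x))
                if np.isfinite(score) and score > best[1]:
                    best = (f, score, f"{y_key} = w({x_key}), order {order}")
    return best
```

In line with the conclusions above, a candidate whose RMSEmod-type score exceeds roughly 36 could be accepted directly, while lower scores call for the fallback of comparing the top-scoring candidates over the full range of variability of the random parameter and rejecting the outlying ones.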
Table 2. Data on the selected reference base polynomials.

| Polynomial Order (POb) | Polynomial Number (poly) | Polynomial Form |
| 1 | 1 | 10 + x |
| 1 | 2 | 10 10 1 x |
| 2 | 1 | 10 + 10 1 x + x 2 |
| 2 | 2 | 10 + 10 1 x + x 2 |
| 2 | 3 | 10 + 10 2 x x 2 |
| 2 | 4 | 10 + 10 2 x x 2 |
| 3 | 1 | 10 + 10 2 x x 2 + x 3 |
| 3 | 2 | 10 + 10 2 x x 2 + x 3 |
| 3 | 3 | 10 + 10 3 x x 2 x 3 |
| 3 | 4 | 10 + 10 3 x x 2 x 3 |
| 4 | 1 | 10 + 10 3 x + x 2 + x 3 x 4 |
| 4 | 2 | 10 + 10 3 x x 2 x 3 + x 4 |
| 4 | 3 | 10 + 10 4 x x 2 + x 3 x 4 |
| 4 | 4 | 10 + 10 4 x x 2 x 3 x 4 |
| 5 | 1 | 10 + 10 4 x + x 2 + x 3 + x 4 x 5 |
| 5 | 2 | 10 + 10 4 x x 2 x 3 x 4 + x 5 |
| 5 | 3 | 10 + 10 4 x + x 2 + x 3 x 4 + x 5 |
| 5 | 4 | 10 + 10 4 x x 2 x 3 + x 4 + x 5 |
| 6 | 1 | 10 + 10 4 x x 2 x 3 x 4 x 5 + x 6 |
| 6 | 2 | 10 + 10 4 x + x 2 x 3 x 4 x 5 + x 6 |
| 6 | 3 | 10 + 10 4 x x 2 + x 3 x 4 x 5 + x 6 |
| 6 | 4 | 10 + 10 4 x + x 2 x 3 + x 4 x 5 + x 6 |
Table 3. The results of the first part of the experiment assuming D1 = 7 and POb = POa = 2 on the example of dependences for which the greatest relative errors of the coefficients were obtained.

| Base Polynomial Type | Polynomial Form | Δrel,c |
| reference | 1.000000 × 10^1 + 1.000000 × 10^2 x − 1.000000 × 10^2 x^2 | |
| approximation for y1 | 1.000024 × 10^1 + 0.954347 × 10^2 x − 0.977856 × 10^2 x^2 | 4.57 |
| approximation for y22 | 1.000579 × 10^1 − 0.153796 × 10^2 x − 0.425345 × 10^2 x^2 | 115.38 |
Table 4. Statistical parameters for the four datasets from the Anscombe quartet (the most important differences are underlined).

| | Set I | Set II | Set III | Set IV |
| Expected value of ξ | 9.0 (identical) |
| Variance of ξ | 11.0 (identical) |
| Expected value of η | 7.50 (with the accuracy of two decimal places) |
| Variance of η | 4.125 ± 0.003 |
| Linear regression equation | η = 0.500ξ + 3.00 (with an accuracy of three and two decimal places, respectively) |
| Linear correlation coefficient | 0.816 (three decimal places match) |
| Coefficient of determination | 0.666 (three decimal places match) |
| Third central moment of η | −0.406 | −8.207 | 11.553 | 9.384 |
| Skewness of η | −0.048 | −0.979 | 1.380 | 1.121 |
| Fourth central moment of η | 30.7 | 42.3 | 72.1 | 61.7 |
| Kurtosis of η | −1.199 | −0.514 | 1.240 | 0.629 |
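The quartet is instructive here because the quantities usually reported by curve-fitting routines (regression line, correlation, coefficient of determination) coincide across the four sets, while the higher central moments, which matter for the probabilistic description, differ strongly. A short sketch for reproducing the entries of Table 4 from one (ξ, η) dataset is given below; the estimator conventions (sample variance, excess kurtosis) are assumptions of the sketch, not quotations from the paper, and the four datasets themselves can be taken from Anscombe [74].

```python
import numpy as np

def dataset_statistics(xi, eta):
    """Statistics of the kind listed in Table 4 for one (xi, eta) dataset."""
    xi, eta = np.asarray(xi, float), np.asarray(eta, float)
    slope, intercept = np.polyfit(xi, eta, 1)   # linear regression eta = slope*xi + intercept
    r = np.corrcoef(xi, eta)[0, 1]              # linear correlation coefficient
    c = eta - eta.mean()
    mu2, mu3, mu4 = (float(np.mean(c ** k)) for k in (2, 3, 4))  # central moments of eta
    return {
        "E[eta]": eta.mean(),
        "Var[eta]": eta.var(ddof=1),
        "regression (slope, intercept)": (slope, intercept),
        "correlation r": r,
        "R^2": r ** 2,
        "3rd central moment of eta": mu3,
        "skewness of eta": mu3 / mu2 ** 1.5,
        "4th central moment of eta": mu4,
        "kurtosis of eta (excess)": mu4 / mu2 ** 2 - 3.0,
    }
```

Comparing the outputs for the four sets shows why a selection criterion based solely on the fitting correlation cannot distinguish approximations that lead to very different higher probabilistic moments.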
Table 5. Exemplary results obtained using the adopted criterion (the most important differences are underlined).

| | A Results | B Results | C Results |
| Reference base POb | 1 | 2 | 5 |
| Approximation base POa | 1 | 2 | 5 |
| Reference base poly | 1 | 3 | 1 |
| Data-generating function | y7 | y9 | y7 |
| Approximating function | y7 | y9 | y7 |
| RMSEmod | 35.83 | 42.70 | 41.45 |
| maxΔrel,m | ≤10^−4 | 7.404 × 10^−4 | ≤10^−4 |
| Δrel,c | 3.210 × 10^−7 | 1.002 | 4.000 |
Table 6. Properties of the models.

| Model | Nodal Points Number | Supports Number | Bars Number | Panels Number | Basic Panel Size (m) | Glass Pane Thicknesses (mm) | Steel Weight (kg) |
| OB | 24 | 16 | 38 | 15 | 3.00 × 3.12 (rectangle sides) | 2 × 19 + 8 | 7325 |
| OS | 40 | 22 | 67 | 28 | 2.25 × 2.23 (rectangle sides) | 2 × 12 + 8 | 8557 |
| RB | 24 | 16 | 53 | 30 | 3.00 × 3.12 (legs) | 2 × 10 + 8 | 9983 |
| RS | 40 | 22 | 95 | 56 | 2.25 × 2.23 (legs) | 2 × 8 + 8 | 11,388 |
| TB | 32 | 20 | 73 | 42 | 3.00 (equilateral triangle side) | 2 × 10 + 8 | 10,090 |
| TS | 50 | 26 | 121 | 72 | 2.25 (equilateral triangle side) | 2 × 8 + 8 | 11,789 |
| LB | 24 | 16 | 53 | 30 | 3.46 (equilateral triangle side) | 2 × 10 + 8 | 9582 |
| LS | 38 | 22 | 89 | 52 | 2.60 (equilateral triangle side) | 2 × 8 + 8 | 10,841 |
Table 7. Data on the response function selection for grillages according to the RMSEmod criterion (the final selection is underlined or asterisked).

| Random Variable | Performance Function | Mean RMSEmod | Approximation Base POa | Approximation Group Formula | Response Function Group Number |
| e | ug | 38.74 | 1 | y2 = w(1/x) | 1 |
| e | ul | 37.13 | 1 | y7 = 1/w(x) | 2 |
| e | ω | 39.10 | 1 | y13 = √w(x) | 3 |
| t | ug | 26.22 | 10 | y41 = arsinh w(ln x) | |
| t | ug | 25.62 * | 4 | y14 = √w(1/x) | 4 |
| t | ug | 25.51 | 4 | y7 = 1/w(x) | |
| t | ug | 25.40 | 4 | y16 = √w(1/√x) | |
| t | ul | 28.02 * | 2 | y14 = √w(1/x) | 5 |
| t | ω | 25.66 * | 4 | y11 = 1/w(ln x) | 6 |
| t | σred | 20.16 * | 2 | y14 = √w(1/x) | 7 |
| l | ug | 22.52 * | 3 | y29 = exp w(ln x) | 8 |
| l | ul | 25.58 * | 3 | y29 = exp w(ln x) | 9 |
| l | ω | 37.42 | 1 | y29 = exp w(ln x) | 10 |
| l | σred | 23.11 * | 3 | y1 = w(x) | 11 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
