Article

Interpretable Machine Learning Algorithms to Predict the Axial Capacity of FRP-Reinforced Concrete Columns

1
Department of Civil Engineering, Turkish-German University, Istanbul 34820, Turkey
2
Department of Civil, Geological and Mining Engineering, Polytechnique Montréal, Montreal, QC H3C 3A7, Canada
3
Department of Civil Engineering, Istanbul University—Cerrahpasa, Istanbul 34320, Turkey
4
Department of Civil and Environmental Engineering, Temple University, Philadelphia, PA 19122, USA
5
Department of Smart City & Energy, Gachon University, Seongnam 13120, Korea
*
Authors to whom correspondence should be addressed.
Materials 2022, 15(8), 2742; https://doi.org/10.3390/ma15082742
Submission received: 19 March 2022 / Revised: 4 April 2022 / Accepted: 6 April 2022 / Published: 8 April 2022

Abstract
Fiber-reinforced polymer (FRP) rebars are increasingly being used as an alternative to steel rebars in reinforced concrete (RC) members due to their excellent corrosion resistance capability and enhanced mechanical properties. Extensive research works have been performed in the last two decades to develop predictive models, codes, and guidelines to estimate the axial load-carrying capacity of FRP-RC columns. This study utilizes the power of artificial intelligence and develops an alternative approach to predict the axial capacity of FRP-RC columns more accurately using data-driven machine learning (ML) algorithms. A database of 117 tests of axially loaded FRP-RC columns is collected from the literature. The geometric and material properties, column shape and slenderness ratio, reinforcement details, and FRP types are used as the input variables, while the load-carrying capacity is used as the output response to develop the ML models. Furthermore, the input-output relationship of the ML model is explained through feature importance analysis and the SHapely Additive exPlanations (SHAP) approach. Eight ML models, namely, Kernel Ridge Regression, Lasso Regression, Support Vector Machine, Gradient Boosting Machine, Adaptive Boosting, Random Forest, Categorical Gradient Boosting, and Extreme Gradient Boosting, are used in this study for capacity prediction, and their relative performances are compared to identify the best-performing ML model. Finally, predictive equations are proposed using the harmony search optimization and the model interpretations obtained through the SHAP algorithm.

1. Introduction

Fiber-reinforced polymers (FRP) have gained popularity in construction as a viable replacement for steel rebars in reinforced concrete (RC) members due to their corrosion resistance, higher strength, light weight, and ease of fabrication [1]. Extensive research has been carried out in the last two decades to investigate the performance of the different types of FRP reinforcement [2,3,4,5,6,7,8,9,10]. Some of the studies in this area were experimental in nature [11,12,13,14,15,16,17,18,19], whereas others aimed to develop more accurate, novel methods of predicting structural performance using various techniques of artificial intelligence and optimization [20,21,22,23,24,25]. Murad et al. [23] used gene expression programming on FRP-RC beams to predict flexural strength. The beam cross-sectional dimensions, concrete compressive strength, and the area, elasticity modulus, and ultimate tensile strength of the FRP reinforcement were selected as the variables affecting the flexural strength. An experimental database consisting of 116 samples was used to develop the predictive model. Protchenko et al. [24] used an experimental database composed of 102 samples to further improve the existing predictive models in the ACI 440.1R-15 code pertaining to the flexural capacity of FRP-RC beams. While a relatively large amount of research in the literature deals with the experimental study of FRP-RC columns, the literature that deals with the application of machine learning techniques to the predictive modeling of these structures is somewhat limited. Raza et al. [22] developed a large database of 279 specimens of GFRP-RC columns to develop an improved, ANN-based predictive model to estimate the axial capacity. Bakouregui et al. [25] applied the eXtreme Gradient Boosting (XGBoost) algorithm for the prediction of the axial load-carrying capacity of FRP-reinforced concrete columns. An experimental database consisting of 283 samples was chosen to train the model.
The model output was interpreted using the SHAP algorithm to determine the effects of the various input parameters on the model output. In addition to the load eccentricity, the gross cross-sectional area, concrete compressive strength, column slenderness ratio, and the spacing between the transverse reinforcements were found to have the greatest impact on the axial load-carrying capacity of FRP-RC columns.
The current study focuses on the concentric loading case. In addition to the XGBoost algorithm, seven other ML algorithms are applied to a portion of the database compiled by Bakouregui et al. [25], which consists of samples with concentric loading. One of the objectives of the current study is to perform a systematic comparison between different ensemble ML models to predict the compressive capacity of FRP-RC columns, which is missing in the existing literature. The relative computational efficiency of the different ML algorithms is presented as well. To fulfill the objective, we select 117 FRP-RC column test specimens from an existing database [25]. The existing database consists of 283 FRP-RC column test specimens that include both short and slender column specimens subjected to concentric and eccentric loading. The other objective of the current study is to develop open-source, Python-based ML models to estimate the axial load-carrying capacity of FRP-RC columns to further help researchers utilize the developed models to improve the framework once additional experimental data are available.
The input variables of the database used in this paper are the slenderness ratio, gross cross-sectional area, the type of cross-section (circular or rectangular), the type of concrete (light-weight or normal-weight concrete), compressive strength of concrete, type of composite material used in the longitudinal reinforcement (GFRP or CFRP), longitudinal reinforcement ratio, elasticity modulus of the longitudinal reinforcements, the ultimate strength of the longitudinal reinforcements, type of the transverse reinforcement material (GFRP, CFRP, or steel), the configuration of transverse reinforcement (i.e., spirals or ties), and the spacing of the transverse reinforcement. The output variable is the axial load-carrying capacity of the corresponding specimen. The database is split into a training and test set in a 90% to 10% ratio, and the performances of the different machine learning models are compared using four different accuracy metrics. Afterward, the SHAP algorithm is utilized to determine the most significant input variables and their interdependencies. Three different closed-form equation formats are proposed based on the outcome of the SHAP algorithm. All three of these equations depend on a certain number of parameters that can be optimized to minimize the difference between the equation output and the actual axial load-carrying capacity measurements. A well-established metaheuristic optimization technique called harmony search optimization is applied in this optimization process. Then, the performances of these equations are compared using different accuracy metrics. The current study is unique in its attempt to obtain closed-form equations for the prediction of the axial load-carrying capacity of FRP-RC columns based on ensemble machine learning algorithms, the SHAP procedure, and metaheuristic optimization. Relatively high accuracies were observed from the obtained equations.
The following sections further elaborate on the process of obtaining predictive models for these structures based on experimental data.

2. Machine Learning and Optimization Methodologies

Eight data-driven machine learning models were applied to the database consisting of 117 samples. Each of these samples contained seven continuous-valued input variables, five discrete-valued input variables, and one continuous-valued output variable, the axial load-carrying capacity. The statistical distribution of these variables is shown in Figure 1.
Figure 1 shows the correlation plot for the seven non-categorical, continuous-valued input variables together with the output variable Pexp. Ag, fc’, EFRP, fuL, spacingH, and Pexp stand for the gross cross-sectional area, concrete compressive strength at 28 days, elasticity modulus of the longitudinal reinforcements, the ultimate strength of the longitudinal reinforcements, spacing between transverse reinforcements, and the experimental axial load-carrying capacity, respectively. A list of all the variable names and abbreviations used in this paper can be found in Appendix A. In Figure 1, for each variable on a diagonal tile, the scale of this variable is shown either on the left vertical or the right vertical axis. In addition to the vertical axes, for each variable, the same scaling is shown on one of the horizontal axes as well. The lower triangle in this plot contains bivariate scatter plots with regression lines, while the diagonal contains histograms showing the distributions of all variables. The upper triangle contains the Pearson correlation coefficients (rxy) which can be computed using Equation (1). In Equation (1), x and y are two sequences of variable values between which the Pearson correlation is being computed. A Pearson correlation value close to 1 indicates a high correlation between the two variables. The significance of the correlation is indicated with stars in the upper triangle where the number of stars is proportional to the level of significance. According to Figure 1, the gross cross-sectional area Ag has the greatest correlation with Pexp followed by the transverse reinforcement spacing and the concrete compressive strength. The remaining continuous-valued input variables are negatively correlated to Pexp.
$$ r_{xy} = \frac{n\sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sqrt{\left[ n\sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2 \right]\left[ n\sum_{i=1}^{n} y_i^2 - \left( \sum_{i=1}^{n} y_i \right)^2 \right]}} \tag{1} $$
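Equation (1) can be checked directly in code. The following minimal numpy sketch implements the formula term by term; the sample arrays are hypothetical illustrations, not values from the paper's database.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, written out as in Equation (1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    den = np.sqrt((n * np.sum(x**2) - np.sum(x)**2) *
                  (n * np.sum(y**2) - np.sum(y)**2))
    return num / den

# Hypothetical (Ag, Pexp) pairs: area strongly drives capacity
Ag   = np.array([45000., 70700., 125600., 180000., 282600.])  # mm^2
Pexp = np.array([1200.,  2100.,  3800.,   5300.,   8600.])    # kN
r = pearson_r(Ag, Pexp)
```

The result agrees with `np.corrcoef`, which computes the same quantity in its normalized covariance form.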
In what follows, the machine learning models applied in this study are briefly presented.

2.1. Machine Learning Models

All models presented in this section were trained and tested using both the continuous-valued and discrete-valued input variables. The discrete-valued input variables, namely, the type of cross-section, type of concrete, type of material used in the longitudinal and transverse reinforcements, and the configuration of the transverse reinforcements, were one-hot encoded before being used in the training and testing of the machine learning models. The machine learning models applied in this study are briefly presented as follows. For further details of the algorithms presented in this section, the reader is referred to references [26,27,28,29,30,31,32,33,34,35,36,37].
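One-hot encoding replaces each categorical label with a set of binary indicator columns, one per category. A minimal sketch of this preprocessing step (with a hypothetical cross-section column, not the paper's actual encoding code):

```python
def one_hot(values, categories):
    """Map a list of categorical labels to binary indicator columns."""
    return [[1 if v == c else 0 for c in categories] for v in values]

# Hypothetical cross-section types from three column specimens
section_types = ["circular", "rectangular", "circular"]
encoded = one_hot(section_types, ["circular", "rectangular"])
# Each specimen now contributes two binary inputs instead of one label
```

Applying the same transformation to the five discrete-valued variables, together with the seven continuous ones, yields the 17 model inputs referred to in the following subsections.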

2.1.1. Kernel Ridge Regression (KRR)

Let (Pn, xn) ∈ ℝ × ℝ¹⁷, where xn is the n-th training sample containing the information of the 17 one-hot encoded and continuous input variables. Let x be any sample outside of the training set. The kernel ridge prediction P̂ for this new sample is calculated using Equation (2), where κ is a kernel function, and N is the total number of samples in the training set [26].
$$ \hat{P} = \sum_{n=1}^{N} \hat{\theta}_n \, \kappa(\mathbf{x}, \mathbf{x}_n) \tag{2} $$
The coefficients θ̂n of the kernel ridge regressor are computed using Equation (3), where C is the regularization constant, κ is the kernel matrix, I is the identity matrix, θ̂ = (θ̂1, …, θ̂N)ᵀ, and P = (P1, …, PN)ᵀ.
$$ (\boldsymbol{\kappa} + C\mathbf{I})\,\hat{\boldsymbol{\theta}} = \mathbf{P} \tag{3} $$
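Equations (2) and (3) translate into a few lines of linear algebra. The sketch below uses a Gaussian (RBF) kernel; the kernel choice, the toy data, and the hyperparameter values are illustrative assumptions, not details reported in the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, P, C=1e-3, gamma=10.0):
    """Solve (kappa + C I) theta = P for the dual coefficients, Equation (3)."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + C * np.eye(len(X)), P)

def krr_predict(X_new, X_train, theta, gamma=10.0):
    """Equation (2): weighted sum of kernel evaluations against training samples."""
    return rbf_kernel(X_new, X_train, gamma) @ theta

# Toy one-dimensional data (hypothetical, scaled to [0, 1])
X = np.array([[0.1], [0.4], [0.7], [1.0]])
P = np.array([1.0, 2.0, 2.5, 4.0])
theta = krr_fit(X, P)
P_hat = krr_predict(X, X, theta)
```

With a small regularization constant C, the fitted model nearly interpolates the training targets, while larger C smooths the fit.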

2.1.2. Support Vector Regression (SVR)

The Support Vector Regression technique aims to reduce the computational complexity by limiting the training operations to a subset of the training set. Equation (4) shows the predictive model for a vector x from outside the training set [27].
$$ \hat{P} = \sum_{n=1}^{N} \left( a_n - \hat{a}_n \right) \kappa(\mathbf{x}, \mathbf{x}_n) + b \tag{4} $$
In Equation (4), an and ân are the Lagrange multipliers that satisfy the conditions 0 ≤ an ≤ C and 0 ≤ ân ≤ C. The regularization parameter C works toward reducing the influence of noise in the training set, as smaller values of C correspond to greater regularization. In Equation (4), only the training samples for which an − ân ≠ 0 have an impact on the model output; these are called the support vectors.

2.1.3. Lasso Regression

The Lasso Regression technique is based on the linear predictive model shown in Equation (5) [28].
$$ y(\mathbf{x}) = \beta_0 + \sum_{n=1}^{17} \beta_n x_n \tag{5} $$
The model coefficients β n in Equation (5) are estimated through the minimization of the objective function shown in Equation (6). The second term in Equation (6), calculating the L1 norm of the coefficient vector, leads to some of the model coefficients associated with the less significant input variables being set to zero. This yields a predictive model that depends on a smaller number of variables which can also be called a sparse model [28].
$$ \sum_{i=1}^{N} \left( P_i - \beta_0 - \sum_{j=1}^{17} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{17} \left| \beta_j \right| \tag{6} $$
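The sparsity mechanism of the L1 penalty in Equation (6) is easiest to see through the soft-thresholding (proximal) operator that underlies coordinate-descent Lasso solvers. A minimal sketch (illustrative coefficient values, not fitted model coefficients):

```python
def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: shrinks z toward zero and
    sets it exactly to zero when |z| <= lam -- the source of sparsity."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Coefficients of weak predictors are zeroed out; strong ones are only shrunk
coeffs = [0.05, -0.8, 0.3, -0.02]
sparse = [soft_threshold(b, 0.1) for b in coeffs]
```

This is why Lasso yields a model that depends on fewer input variables: coefficients whose contribution cannot outweigh the penalty λ are eliminated entirely rather than merely reduced.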

2.1.4. Gradient Boosting Machine (GBM)

GBM is based on the idea of consecutively fitting models to data to increase the model accuracy, such that the new base learners h(x, θ) have maximum correlation with the negative gradient gt(x) of the loss function ψ(y, f) [29]. In every iteration of the GBM algorithm, the best gradient descent step ρt is calculated as in Equation (7), and the estimator function is updated as in Equation (8) [29,30].
$$ \rho_t = \arg\min_{\rho} \sum_{i=1}^{N} \psi\!\left( y_i,\ \hat{f}_{t-1}(\mathbf{x}_i) + \rho\, h(\mathbf{x}_i, \theta_t) \right) \tag{7} $$
$$ \hat{f}_t := \hat{f}_{t-1} + \rho_t\, h(\mathbf{x}, \theta_t) \tag{8} $$
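The update in Equation (8) can be illustrated with a toy boosting loop that uses depth-1 regression trees (stumps) as base learners and squared loss, for which the negative gradient is simply the residual. This is a didactic sketch, not the implementation used in the study, and it fixes the step size to a constant learning rate rather than solving Equation (7) exactly.

```python
import numpy as np

def fit_stump(x, residual):
    """Base learner: the depth-1 regression tree (stump) that best fits
    the current residuals (the negative gradient for squared loss)."""
    best = None
    for s in np.unique(x):
        left, right = residual[x <= s], residual[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= s, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    return best[1:]

def gbm_fit(x, y, n_rounds=20, lr=0.5):
    """Equation (8): start from the mean prediction and add shrunken stumps."""
    f = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        s, lval, rval = fit_stump(x, y - f)        # fit the residuals
        f += lr * np.where(x <= s, lval, rval)     # additive update
        stumps.append((s, lval, rval))
    return f, stumps

# Hypothetical one-dimensional data with a step-like trend
x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([1., 1.2, 1.1, 3.0, 3.2, 2.9])
f, _ = gbm_fit(x, y)
```

After a few rounds the accumulated stumps drive the training residuals close to zero, which is the "consecutively fitting models" idea in miniature.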

2.1.5. Adaptive Boosting (AdaBoost)

AdaBoost is an ensemble learning technique that iteratively runs weak regressors ht(x) and merges them to produce a strong regressor. The procedure for obtaining the final strong regressor hf(x) is given in Equations (9) and (10), where εt is the loss associated with ht(x) [31].
$$ h_f(\mathbf{x}) = \inf\left\{ y \in Y : \sum_{t:\, h_t(\mathbf{x}) \le y} \log(1/\beta_t) \ \ge\ \tfrac{1}{2} \sum_{t} \log(1/\beta_t) \right\} \tag{9} $$
$$ \beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t} \tag{10} $$

2.1.6. Random Forest (RF)

Random Forest is another ensemble learning algorithm that merges the output of decision tree models trained on subsets of the training set for a more accurate prediction. In addition to the random subsampling of the training set for the training of the decision trees, a randomly selected subset of the input variables is utilized in the node splitting phase while generating a tree. The format of the Random Forest model is given in Equation (11), where m̂j represents one of the M decision trees [32,33].
$$ \hat{m}(\mathbf{x}) = \frac{1}{M} \sum_{j=1}^{M} \hat{m}_j(\mathbf{x}) \tag{11} $$

2.1.7. Extreme Gradient Boosting (XGBoost)

XGBoost is a tree boosting algorithm that differentiates itself through high scalability and computational speed. The algorithm is capable of scaling to billions of samples in distributed settings. The output of the XGBoost model can be summarized as in Equation (12), where fk denotes a regression tree, and K is the total number of regression trees [25,34]. The optimum values of the leaf weights wj* that minimize the loss can be computed as in Equation (13), where l denotes the loss function, and Ij is the set that contains the indices of the data samples associated with the j-th leaf [35].
$$ \hat{y}_i = \sum_{k=1}^{K} f_k(\mathbf{x}_i) \tag{12} $$
$$ w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad g_i = \frac{\partial\, l\big(y_i, \hat{y}_i^{(t-1)}\big)}{\partial\, \hat{y}_i^{(t-1)}}, \qquad h_i = \frac{\partial^2 l\big(y_i, \hat{y}_i^{(t-1)}\big)}{\partial \big(\hat{y}_i^{(t-1)}\big)^2} \tag{13} $$
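For squared loss l = (y − ŷ)²/2, the gradients in Equation (13) reduce to gi = ŷi − yi and hi = 1, so the optimal leaf weight becomes the (regularized) mean residual of the samples falling in that leaf. A minimal sketch with hypothetical leaf data:

```python
import numpy as np

def leaf_weight(y, y_hat, lam=1.0):
    """Equation (13) specialized to squared loss: g_i = yhat_i - y_i, h_i = 1."""
    g = y_hat - y           # first-order gradients
    h = np.ones_like(y)     # second-order gradients
    return -g.sum() / (h.sum() + lam)

y     = np.array([3.0, 3.4, 2.6])   # targets of the samples in one leaf
y_hat = np.array([2.0, 2.0, 2.0])   # current ensemble prediction for the leaf
w = leaf_weight(y, y_hat, lam=0.0)  # with lam = 0 this is the mean residual
```

The regularization term λ in the denominator shrinks leaf weights toward zero, which is one of the mechanisms that controls overfitting in XGBoost.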

2.1.8. Categorical Gradient Boosting (CatBoost)

CatBoost is another gradient boosting algorithm that performs particularly well in the presence of categorical data. In addition to the better processing of categorical features, the implementation of ordered boosting contributes to better accuracy in this algorithm. CatBoost addresses a particular issue called prediction shift, which is present in other gradient boosting algorithms such as XGBoost [36,37]. CatBoost overcomes the issue of prediction shift through the implementation of ordered boosting with ordered target statistics, the details of which can be found in [37].

2.2. Harmony Search Optimization

Metaheuristic algorithms have been applied to a large number of engineering problems in recent years. These algorithms are shown to effectively address problems characterized by high nonlinearity or non-differentiability [38]. One of the most widely used and well-established algorithms in this category is the harmony search algorithm. The algorithm was invented by Geem et al. [39] and has been applied to a wide range of engineering problems such as the optimum design of retaining walls [40,41], laminated composite plates [42,43], highway bridge plate girders [44], concrete-filled steel tubular columns [45], reinforced concrete cylindrical walls [46,47,48], and tuned liquid dampers [49].
The algorithm starts with the generation of an arbitrarily sized population consisting of optimum-solution candidate vectors, also called harmony vectors (HV). The number of harmony vectors in this population is called the harmony memory size (HMS). Each harmony vector has a size equal to the number of parameters being optimized. After the generation of the initial population, all harmony vectors go through the harmony search iterations. These iterations are defined by Equations (14)–(17).
$$ k = \operatorname{int}(\mathrm{rand} \cdot \mathrm{HMS}), \quad \mathrm{rand} \in (0, 1) \tag{14} $$
$$ x_{i,\mathrm{new}} = x_{i,\min} + \mathrm{rand} \cdot \left( x_{i,\max} - x_{i,\min} \right), \quad \text{if } \mathrm{HMCR} > \mathrm{rand} \tag{15} $$
$$ x_{i,\mathrm{new}} = x_{i,k} + \mathrm{rand} \cdot \mathrm{PAR} \cdot \left( x_{i,\max} - x_{i,\min} \right), \quad \text{if } \mathrm{HMCR} \le \mathrm{rand} \tag{16} $$
$$ \mathrm{HMCR} = 0.5\left( 1 - \frac{i}{i_{\max}} \right), \quad \mathrm{PAR} = 0.05\left( 1 - \frac{i}{i_{\max}} \right) \tag{17} $$
After each harmony search iteration, the newly generated HVs are compared to the existing ones in terms of their performance and, in cases where they demonstrate better performance, they replace the old HVs. After each iteration, the best-performing HV is identified. The process is repeated until the performances converge to an optimum level. For further details of the harmony search algorithm, the reader is referred to reference [50].
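The iteration scheme described above can be sketched as a short numpy routine. This is a minimal illustration of Equations (14)–(17) on a toy least-squares objective, not the optimization code used in the paper; the memory size, iteration count, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(objective, bounds, hms=20, iters=2000):
    """Minimal harmony search sketch: adaptive HMCR/PAR (Equation (17)),
    random memory selection (Equation (14)), new-harmony generation
    (Equations (15)-(16)), and replacement of the worst harmony."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = lo + rng.random((hms, dim)) * (hi - lo)        # initial harmony memory
    scores = np.array([objective(v) for v in hm])
    for i in range(iters):
        hmcr = 0.5 * (1 - i / iters)
        par = 0.05 * (1 - i / iters)
        k = int(rng.random() * hms)                     # Equation (14)
        new = np.empty(dim)
        for d in range(dim):
            if hmcr > rng.random():                     # Equation (15): explore
                new[d] = lo[d] + rng.random() * (hi[d] - lo[d])
            else:                                       # Equation (16): adjust
                new[d] = hm[k, d] + rng.random() * par * (hi[d] - lo[d])
        new = np.clip(new, lo, hi)
        s = objective(new)
        worst = scores.argmax()
        if s < scores[worst]:                           # replace worst HV
            hm[worst], scores[worst] = new, s
    return hm[scores.argmin()], scores.min()

# Toy objective: recover (0.3, -0.5) by minimizing a squared distance
best, err = harmony_search(lambda v: (v[0] - 0.3)**2 + (v[1] + 0.5)**2,
                           np.array([[-1.0, 1.0], [-1.0, 1.0]]))
```

As the iterations progress, the decreasing HMCR shifts the search from broad random exploration toward small pitch adjustments around stored harmonies, which mirrors the convergence behavior described above.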

3. Results and Discussions

This section presents the performances of eight different machine learning algorithms in predicting the axial load-carrying capacity of FRP-RC columns based on the experimental database of 117 samples collected from the literature [25]. The performances of these algorithms were measured using four different metrics of accuracy, the details of which have been given in Appendix B. The machine learning algorithms were ranked according to their accuracy in predicting the axial load-carrying capacity and their computational speed. The impacts of different input variables on the machine learning model output were investigated using the SHAP methodology. After the determination of the most significant input variables and their dependencies, three different equation formats were proposed for the prediction of the axial load-carrying capacity. A harmony search algorithm was utilized for the development of these equations. Finally, the performances of the proposed equations are presented.

3.1. Machine Learning Model Performances

The machine learning models were trained after splitting the database of experimental samples into a training set and a test set in a 90% to 10% ratio. The accuracies and computational speed of these models are presented for training and test sets separately.
A list of algorithms and their accuracies with respect to four different metrics of accuracy can be found in Table 1. In Table 1, for each algorithm, the time that elapses from the beginning of the model fitting process to the end of the predictions for the test set is listed. According to Table 1, the Gradient Boosting Machine (GBM) algorithm emerges as the most efficient algorithm considering both prediction accuracy and time elapsed, since this model delivered predictions second in accuracy only to XGBoost while requiring 58% of the computation time. Among the other predictive models, Random Forest (RF) and Lasso Regression can be counted as models that delivered a greater than 95% R2 score on the test set in less than 3 s. The CatBoost algorithm was observed to take a significantly longer time with a relatively low R2 score on the test set compared to the top-performing models. The model performances in terms of accuracy metrics and speed of computation are visualized in Figure 2, where the horizontal axes represent the metrics of accuracy and speed. In Figure 3, the experimental axial load-carrying capacity values are plotted against the model predictions. The straight black lines in these plots represent a perfect match between the predicted and actual values, whereas the dotted red lines represent a ±10% deviation from a perfect match.
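The four accuracy metrics referenced here and detailed in Appendix B can be written compactly. The sketch below assumes the usual definitions of R2, RMSE, MAE, and MAPE; the sample arrays are hypothetical capacities, not values from Table 1.

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def rmse(y, y_hat):
    """Root mean square error, in the units of y."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    """Mean absolute error, in the units of y."""
    return np.mean(np.abs(y - y_hat))

def mape(y, y_hat):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((y - y_hat) / y))

# Hypothetical measured vs. predicted capacities, kN
y     = np.array([1000., 2000., 4000.])
y_hat = np.array([1100., 1900., 4200.])
```

Reporting all four metrics together is useful because R2 rewards explained variance, RMSE penalizes large errors, and MAE/MAPE characterize typical absolute and relative deviations.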

3.2. Application of the SHAP Algorithm

The SHAP algorithm was applied to the XGBoost model, which demonstrated the best overall performance. The SHAP summary plot is an information-intense representation of how the various design variables affect the axial load-carrying capacity. The SHAP summary plot in Figure 4 shows the SHAP values of the different variables together with a color bar that indicates whether a variable has a high or low value. Figure 4 shows the SHAP values calculated for each variable at each sample point, where every dot corresponds to a sample. Around certain SHAP values, the dots have greater concentration. For the nine most significant variables, the SHAP values were computed separately. For the remaining variables, the SHAP values were summed, and they are represented in Figure 4 as a single variable. For any sample represented in Figure 4, the color of the corresponding dot has a shade of red if a variable has a high value at this sample, and the dot has a shade of blue if the variable has a low value. Positive and negative SHAP values indicate increasing and decreasing effects on the model output, respectively. According to Figure 4, the gross cross-sectional area of the column, the concrete compressive strength, the horizontal reinforcement spacing, and the slenderness ratio (λ) have the greatest impact on the model output. Furthermore, increasing the values of Ag and fc’ increases the predicted capacity, whereas increasing the slenderness ratio decreases it.
The feature dependence plots in Figure 5 contain information that supplements the SHAP summary plot. In Figure 5, the feature dependence plots of the four most significant design variables can be seen. For each one of these variables, the variation of another variable that is most dependent on it is presented using color. For each dot in Figure 5, a shade of red indicates a high value of the most dependent variable, whereas a shade of blue indicates a low value of the most dependent variable. The main information contained in Figure 5 is the relationship between the values of a variable and the associated SHAP values. Figure 5 shows that increasing the values of Ag and fc’ also leads to an increase in the corresponding SHAP values, which indicates an increasing effect on the model output. On the other hand, the slenderness ratio and the model output are inversely related. Furthermore, increasing the horizontal reinforcement spacing does not have a significant effect on the model output. The developed Python-based ML models to estimate the axial capacity of FRP-RC columns can be found at https://github.com/ccakiroglu/FRPRCColumn (accessed on 4 April 2022).

3.3. Development of the Predictive Equations

Based on the SHAP summary plot in Figure 4 and the feature dependence plots in Figure 5, three different equation formats were proposed. The first one of these formats, given in Equation (18), consists of the linear combination of three product terms. The first one of these terms includes, besides the most important input variable Ag, the variables fc′, which is the most dependent variable on Ag, and fuL, which is the most dependent variable on fc′. The second term consists of the multiplication of the slenderness ratio λ and fc′, which is the most dependent variable on λ. Finally, the third term includes the effects of the spacing between the horizontal reinforcements and the longitudinal reinforcement ratio (ρ), which is the most dependent variable on it.
$$ P = a_0 + a_1 A_g^{a_2} {f_c'}^{a_3} f_{uL}^{a_4} + a_5 \lambda^{a_6} {f_c'}^{a_7} + a_8\, \mathrm{spacing}_H^{a_9}\, \rho^{a_{10}} \tag{18} $$
An initial population of 20 candidate solution vectors was created. In this population, each solution candidate consists of a randomly generated, unique combination of the coefficients a0 to a10, where all these coefficients take values in (−1, +1). Afterward, these coefficients went through harmony search iterations. In order to train the coefficients of Equation (18), the entire database of experimental results was split into a training set and a test set in a 90% to 10% ratio. For each member of the population, the difference between the actual experimental axial load-carrying capacities (Pexp) and the P values computed by Equation (18) was calculated and stored in a vector that has the length of the entire training set. As the metric that shows the accuracy of the proposed equation, the Euclidean norm of this vector was calculated. Figure 6 shows the decrease in this vector norm throughout the optimization steps.
In Figure 6, the performances of the best- and worst-performing solution candidates are shown with blue and red colors, respectively. Figure 6b shows that, after 2500 iterations, a convergence to the minimum total error norm could be observed. The development of the coefficients a0 to a10 in the optimization process is shown in Figure 7.
In Figure 7, for each coefficient, the values these coefficients took in the best- and worst-performing members of the population are plotted in blue and red colors, respectively. Using the limit values of the coefficients in Figure 7, Equation (18) can be rewritten as in Equation (19).
$$ P = 46.75 + 0.00207\, A_g^{0.901} {f_c'}^{1.085} f_{uL}^{0.02185} + 0.0493\, \lambda^{0.676} {f_c'}^{1.862} + 16.84\, \mathrm{spacing}_H^{0.122}\, \rho^{3.84} \tag{19} $$
Using Equation (19), the P values were computed for the test set and compared to the actual measured values. This comparison can be seen in Figure 8, where the dotted lines indicate the ±10% deviations from an exact match.
The accuracy metrics in Figure 8 show that Equation (19) outperformed five out of the eight algorithms in Table 1. The machine learning algorithms that performed better than Equation (19) were Random Forest, XGBoost, and GBM. Although good performance could be obtained from Equation (19), a second equation format for practical use was proposed, as in Equation (20).
$$ P = a_0 + a_1 A_g^{a_2} {f_c'}^{a_3} + a_4 A_g^{a_5}\, \mathrm{spacing}_H^{a_6} + a_7 A_g^{a_8} \lambda^{a_9} \tag{20} $$
In Equation (20), the number of variables being considered in the prediction and the total number of coefficients were reduced. The variables having the highest impact on the model output according to the SHAP summary plot were selected. Equation (20) consists of three product terms after a0. In each one of these terms, Ag is multiplied with one of the three most significant variables after itself, according to the SHAP summary plot. The harmony search procedure was repeated for Equation (20).
Figure 9a,b show the development of the error norm during the first 500 and 2500 harmony search iterations, respectively. Using the Euclidean norm of the difference vector between the actual measurements and predictions of Equation (20), a minimum error of 2639 kN could be achieved. This error norm was slightly greater than the 2336 kN that could be achieved using the Equation (18) format. Figure 10 shows the development of the coefficients a0 to a9 in Equation (20) during the harmony search process.
In both Figure 7 and Figure 10, it can be observed that the coefficients of the worst population member demonstrated much wider fluctuations than the best member. However, after a certain number of iterations, both the best and worst member coefficients converged nearly to the same values. Using the limit values of the best member coefficients, Equation (21) was obtained.
$$ P = 131.77 + 0.002608\, A_g^{0.93058} {f_c'}^{0.955} + 38.76\, A_g^{-93.69}\, \mathrm{spacing}_H^{-15.07} + 7.58\, A_g^{-19.54}\, \lambda^{-146.67} \tag{21} $$
The negative powers in the second and third products of Equation (21) make these terms close to zero so that Equation (21) can be used in a simplified format, as in Equation (22).
$$ P = 131.77 + 0.002608\, A_g^{0.93058} {f_c'}^{0.955} \tag{22} $$
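Equation (22) is simple enough to use directly. The sketch below wraps it in a function; the units (Ag in mm², fc′ in MPa, P in kN) are an assumption consistent with the magnitudes of the coefficients, since the excerpt does not state them explicitly, and the example input values are hypothetical.

```python
def axial_capacity_simplified(Ag_mm2, fc_MPa):
    """Equation (22): predicted axial load-carrying capacity (assumed kN)
    from the gross cross-sectional area (assumed mm^2) and the concrete
    compressive strength (assumed MPa). Valid only inside the parameter
    ranges of the underlying experimental database."""
    return 131.77 + 0.002608 * Ag_mm2**0.93058 * fc_MPa**0.955

# Hypothetical column: Ag = 125,000 mm^2, fc' = 35 MPa
P = axial_capacity_simplified(125000.0, 35.0)
```

Because both exponents are close to 1, the prediction grows almost linearly in both Ag and fc′, which matches the SHAP dependence trends described in Section 3.2.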
Figure 11 shows the comparison of the experimental axial load-carrying capacities to the P values predicted by Equation (22). According to the accuracy values in Figure 11, Equation (22) was able to outperform the KRR, SVR, AdaBoost, and CatBoost algorithms. A comparison of Figure 8 and Figure 11 shows that Equation (22) was able to achieve comparable performance to Equation (19) in a much simpler format. In order to eliminate the bias terms in Equations (19) and (22), a third equation format that consists of a single product term was proposed in Equation (23).
$$ P = a_0\, A_g^{a_1} {f_c'}^{a_2}\, \mathrm{spacing}_H^{a_3}\, \lambda^{a_4}\, \rho^{a_5}\, f_{uL}^{a_6} \tag{23} $$
Figure 12 shows the development of the error norm for the best- and worst-performing harmony vectors for the first 500 and 2500 iteration steps.
Figure 12 shows that a minimum error norm of 2286 kN could be achieved after 2500 iterations, which is less than the error norms achieved through Equations (18) and (20). After inserting the limit values of the coefficients a0 to a6 into Equation (23), we obtain Equation (24).
$$ P = 0.00123\, A_g^{0.9946} {f_c'}^{0.9266}\, \mathrm{spacing}_H^{0.01124}\, \lambda^{0.1474}\, \rho^{0.08769}\, f_{uL}^{0.0589} \tag{24} $$
The performance of Equation (24) in predicting the axial load-carrying capacity is presented in Figure 13, where the measured P values are plotted against the P values predicted by Equation (24). Finally, a list of all three equations proposed in this paper can be found in Table 2 with the corresponding R2 values. A comparison with the R2 values in Table 1 shows that Equation (24) outperformed all of the predictive models except for the XGBoost model. It should be noted that the proposed equations are data-driven, and their performance depends on the characteristics of the data used to develop the ML models. The equations are only applicable within the range of input parameter values covered by the database.

4. Summary and Conclusions

This work aimed to address an important gap in the research literature dealing with FRP-RC columns under concentric axial loading by applying various machine learning techniques to identify the most significant design variables affecting the axial load-carrying capacity. Particularly, ensemble learning techniques, such as Gradient Boosting Machine, Random Forest, and XGBoost, were observed to predict the axial load-carrying capacity with high accuracy. The coefficient of determination, root mean square error, mean absolute error, and mean absolute percentage error were used to quantify the accuracy of the predictions made by eight different machine learning models. Based on the predictive model with the best performance, the SHAP algorithm was utilized to identify those variables that have the greatest impact on the structural response. The gross cross-sectional area was found to have the greatest impact on the model output, followed by the concrete compressive strength, spacing of transverse reinforcement, and the slenderness ratio. Once the design variables were ranked according to their impact on the model output, in the next part of the paper, three different equation formats were proposed for the prediction of the axial load-carrying capacity as a function of the most significant design variables. These equations were optimized using the harmony search algorithm, and the equations were observed to have good accuracy. The first one of these equations predicts the axial load-carrying capacity as a function of the gross cross-sectional area, concrete compressive strength, the ultimate strength of the longitudinal reinforcements, slenderness ratio, spacing between transverse reinforcements, and the longitudinal reinforcement ratio. On the other hand, the second equation depends only on the two most significant variables, namely, the gross cross-sectional area and the compressive strength of concrete. 
Finally, the third equation consists of a single product term involving the gross cross-sectional area, the concrete compressive strength, the spacing between transverse reinforcements, the slenderness ratio, the longitudinal reinforcement ratio, and the ultimate strength of the longitudinal reinforcement. With an R2 score of 0.978, the third equation performed better than the first two and outperformed all of the predictive machine learning models except for the XGBoost model.
The availability of closed-form equations for the accurate prediction of structural response is beneficial in engineering practice. With an R2 score of 0.95 and a simple format, the second prediction equation developed in this study is convenient for practical application; where higher accuracy is needed, the third equation can be used instead.
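As a minimal sketch of this practical use, the two-variable equation from Table 2 can be coded directly. The coefficients come from Table 2; the units and the example inputs below are assumptions for illustration only, and the equation holds only within the ranges of the 117-test database.

```python
def axial_capacity_eq2(A_g, f_c):
    """Two-variable prediction equation from Table 2 (R^2 = 0.95).

    Assumed units: A_g in mm^2, f_c in MPa, predicted capacity P in kN.
    Valid only inside the min-max ranges of the experimental database.
    """
    return 131.77 + 0.002608 * A_g ** 0.93058 * f_c ** 0.955

# Hypothetical design check: capacity of a column with an assumed
# gross area of 90,000 mm^2 and 40 MPa concrete
P = axial_capacity_eq2(A_g=90_000, f_c=40.0)
```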
However, it should be noted that the developed equations are based on an experimental database of 117 samples, and further studies with larger databases are warranted. The results predicted by the developed equations are valid only within the range of the database used; because the proposed equations are data-driven rather than mechanics-based, they can produce unreliable capacity predictions if applied to unrepresentative data. In addition to experimental studies, the database could be further enhanced with the help of well-calibrated finite-element models. Besides increasing the size of the training database, future research in this area could focus on predicting the axial load-carrying capacity under eccentric axial loading.

Author Contributions

Methodology, G.B. and C.C.; formal analysis (coding), G.B., K.I. and C.C.; writing—original draft preparation, G.B. and C.C.; writing—review and editing, K.I., S.K. and Z.W.G.; visualization, C.C.; supervision, G.B. and Z.W.G.; funding acquisition, Z.W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2020R1A2C1A01011131).

Data Availability Statement

Data sharing is not applicable for this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Nomenclature

Ag: Gross cross-sectional area
fc: Concrete compressive strength
fuL: Ultimate strength of the longitudinal reinforcements
EFRP: Modulus of elasticity of the longitudinal reinforcements
λ: Slenderness ratio
ρ: Longitudinal reinforcement ratio
spacingH: Spacing between transverse reinforcements
Pexp: Experimental axial load-carrying capacity
FRP: Fiber-reinforced polymer
GFRP: Glass fiber-reinforced polymer
CFRP: Carbon fiber-reinforced polymer
KRR: Kernel ridge regression
SVR: Support vector regression
GBM: Gradient boosting machine
AdaBoost: Adaptive boosting
XGBoost: Extreme gradient boosting
CatBoost: Categorical gradient boosting
SHAP: SHapley Additive exPlanations
MAE: Mean absolute error
MAPE: Mean absolute percentage error
RMSE: Root mean square error
R2: Coefficient of determination
HMS: Harmony memory size
HV: Harmony vector
HMCR: Harmony memory consideration rate
PAR: Pitch adjustment rate
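For readers unfamiliar with the harmony search terms above (HMS, HMCR, PAR), the following is a minimal, self-contained sketch of the algorithm of Geem et al. (2001) applied to a toy curve-fitting problem. The parameter values and the toy data are illustrative assumptions, not the settings used in this paper.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000, seed=0):
    """Minimal harmony search sketch. bounds is a list of (lo, hi)
    per decision variable; returns (best_vector, best_cost).
    hms = harmony memory size, hmcr = harmony memory consideration
    rate, par = pitch adjustment rate, bw = pitch bandwidth."""
    rng = random.Random(seed)
    # Initialize the harmony memory with random harmony vectors
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [objective(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:               # memory consideration
                x = hm[rng.randrange(hms)][j]
                if rng.random() < par:            # pitch adjustment
                    x += bw * (hi - lo) * rng.uniform(-1.0, 1.0)
            else:                                 # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        c = objective(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                      # replace worst harmony
            hm[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return hm[best], costs[best]

# Toy use: recover (a, b) of P = a * x**b from noise-free data points
data = [(x, 2.5 * x ** 0.8) for x in (10, 20, 40, 80)]
err = lambda v: sum((v[0] * x ** v[1] - p) ** 2 for x, p in data)
best, cost = harmony_search(err, [(0.1, 10.0), (0.1, 2.0)])
```

The paper's predictive equations were obtained in an analogous way, with the harmony vector holding the coefficients and exponents and the objective being the total prediction error over the database.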

Appendix B

Definitions of the accuracy metrics used in this paper:
R^2 = 1 - \frac{\sum_{i=1}^{N} \left( P_i - \hat{P}_i \right)^2}{\sum_{i=1}^{N} \left( P_i - P_{\mathrm{mean}} \right)^2}

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( P_i - \hat{P}_i \right)^2}

\mathrm{MAPE} = \frac{100}{N} \sum_{i=1}^{N} \frac{\left| P_i - \hat{P}_i \right|}{P_i}

\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| P_i - \hat{P}_i \right|
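A minimal NumPy sketch of these four metrics (this is not the authors' code, and the sample arrays are hypothetical):

```python
import numpy as np

def accuracy_metrics(p_true, p_pred):
    """Compute R^2, RMSE, MAE, and MAPE as defined in Appendix B."""
    p_true = np.asarray(p_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    resid = p_true - p_pred
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((p_true - p_true.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    mape = 100.0 * np.mean(np.abs(resid) / p_true)  # assumes p_true != 0
    return r2, rmse, mae, mape

# Tiny illustrative check with hypothetical capacities
r2, rmse, mae, mape = accuracy_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```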

Figure 1. Correlation matrix of the dataset.
Figure 2. Comparison of predictive models with respect to accuracy and speed; (a) R2; (b) RMSE; (c) MAE; (d) MAPE; (e) Time [s].
Figure 3. Comparison of experimental and predicted P values; (a) XGBoost; (b) GBM; (c) Random Forest; (d) Lasso; (e) AdaBoost; (f) CatBoost.
Figure 4. SHAP values of the design variables for the XGBoost model.
Figure 5. Feature dependence plots for the XGBoost model; (a) Ag and fc; (b) fc and fuL; (c) slenderness ratio and fc; (d) spacingH and ρ.
Figure 6. Decrease in the total error throughout the harmony search iterations (Equation (18)); (a) first 500 HS iterations; (b) first 2500 HS iterations.
Figure 7. Development of the coefficients in Equation (18); (a–k) coefficients a0 through a10.
Figure 8. Comparison of experimental and predicted (Equation (19)) P values.
Figure 9. Decrease in the total error throughout the harmony search iterations (Equation (20)); (a) first 500 HS iterations; (b) first 2500 HS iterations.
Figure 10. Development of the coefficients in Equation (20); (a–j) coefficients a0 through a9.
Figure 11. Comparison of experimental and predicted (Equation (22)) P values.
Figure 12. Decrease in the total error throughout the harmony search iterations (Equation (23)); (a) first 500 HS iterations; (b) first 2500 HS iterations.
Figure 13. Comparison of experimental and predicted (Equation (24)) P values.
Table 1. Performance comparison of different machine learning algorithms.

| Model    | Dataset | R2    | RMSE  | MAE   | MAPE  | Time Elapsed |
|----------|---------|-------|-------|-------|-------|--------------|
| KRR      | Train   | 0.903 | 744   | 492   | 0.243 | 2.40 s       |
| KRR      | Test    | 0.841 | 768   | 548   | 0.299 |              |
| SVR      | Train   | 0.522 | 1646  | 604   | 0.182 | 2.67 s       |
| SVR      | Test    | 0.520 | 1334  | 652   | 0.304 |              |
| Lasso    | Train   | 0.971 | 399   | 236   | 0.096 | 2.36 s       |
| Lasso    | Test    | 0.955 | 244   | 194   | 0.101 |              |
| GBM      | Train   | 0.999 | 57.9  | 44.1  | 0.022 | 2.54 s       |
| GBM      | Test    | 0.975 | 182.9 | 146   | 0.063 |              |
| AdaBoost | Train   | 0.973 | 384   | 284   | 0.119 | 2.64 s       |
| AdaBoost | Test    | 0.910 | 345   | 283   | 0.117 |              |
| RF       | Train   | 0.993 | 193   | 80.7  | 0.027 | 2.95 s       |
| RF       | Test    | 0.969 | 204   | 142.4 | 0.066 |              |
| XGBoost  | Train   | 0.999 | 22.3  | 7.5   | 0.003 | 4.41 s       |
| XGBoost  | Test    | 0.982 | 153.8 | 112.7 | 0.054 |              |
| CatBoost | Train   | 0.999 | 35.7  | 28.9  | 0.013 | 29.92 s      |
| CatBoost | Test    | 0.931 | 301   | 197.4 | 0.119 |              |
Table 2. Proposed equations.

| Equation | R2 |
|---|---|
| P = 46.75 + 0.00207·Ag^0.901·fc^1.085·fuL^0.02185 + 0.0493·λ^0.676·fc^1.862 + 16.84·spacingH^0.122·ρ^3.84 | 0.958 |
| P = 131.77 + 0.002608·Ag^0.93058·fc^0.955 | 0.950 |
| P = 0.00123·Ag^0.9946·fc^0.9266·spacingH^0.01124·λ^0.1474·ρ^0.08769·fuL^0.0589 | 0.978 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Cakiroglu, C.; Islam, K.; Bekdaş, G.; Kim, S.; Geem, Z.W. Interpretable Machine Learning Algorithms to Predict the Axial Capacity of FRP-Reinforced Concrete Columns. Materials 2022, 15, 2742. https://doi.org/10.3390/ma15082742

