Article

Predicting the Young’s Modulus of Rock Material Based on Petrographic and Rock Index Tests Using Boosting and Bagging Intelligence Techniques

by Long Tsang 1, Biao He 2,*, Ahmad Safuan A Rashid 3, Abduladheem Turki Jalil 4 and Mohanad Muayad Sabri Sabri 5,*
1 Geofirst Pty Ltd., 2/7 Luso Drive, Unanderra, NSW 2526, Australia
2 Department of Civil Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
3 Faculty of Civil Engineering, Universiti Teknologi Malaysia, Johor Bahru 81310, Johor, Malaysia
4 Medical Laboratories Techniques Department, Al-Mustaqbal University College, Babylon, Hilla 51001, Iraq
5 Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(20), 10258; https://doi.org/10.3390/app122010258
Submission received: 8 September 2022 / Revised: 6 October 2022 / Accepted: 8 October 2022 / Published: 12 October 2022
(This article belongs to the Collection Heuristic Algorithms in Engineering and Applied Sciences)

Abstract:
Rock deformation is considered one of the essential rock properties used in designing and constructing rock-based structures, such as tunnels and slopes. This study applied two well-established ensemble techniques, boosting and bagging, to the artificial neural network and decision tree methods for predicting the Young’s modulus of rock material. These techniques were applied to a dataset comprising 45 data samples from a mountain range in Malaysia. The final input variables of these models, including p-wave velocity, interlocking coarse-grained crystals of quartz, dry density, and Mica, were selected through a likelihood ratio test. In total, six models were developed: standard artificial neural networks, boosted artificial neural networks, bagged artificial neural networks, classification and regression trees, extreme gradient boosting trees (as a boosted decision tree), and random forest (as a bagged decision tree). The performance of these models was evaluated using the correlation coefficient (R), mean absolute error (MAE), and lift chart. The findings of this study showed that, firstly, extreme gradient boosting trees outperformed all models developed in this study and, secondly, the boosting models outperformed the bagging models.

1. Introduction

Generally, supplying a dependable evaluation of rock mass behavior (i.e., deformation and strength) is of primary significance in designing and analyzing rock engineering applications, including underground excavations, slopes, and foundations. To analyze the deformation characteristics of rock, two major inputs should be considered: the elastic constants of the rock mass (E, ν). Generally, the rock mass deformability parameters, including Young’s modulus, can be measured directly through field tests as the in situ modulus (denoted Erm) or indirectly through laboratory examinations as the intact modulus (denoted E) [1,2]. Furthermore, because a true deformation analysis must account for site conditions, it is important to assess the rock deformation parameters under both field and laboratory conditions. Therefore, the intact material modulus obtained from laboratory tests must be related to the rock mass modulus using a suitable classification scheme.
The most broadly applied schemes for determining Young’s modulus include the plate loading and flat jack tests for field determinations [3], along with the uniaxial compressive strength (UCS) test in the laboratory. Nevertheless, the results of these experiments may be ambiguous owing to the discontinuities and anisotropy of rock masses subjected to various field stresses [1,4]. Moreover, direct measurements of Young’s modulus are extremely slow and demand expensive facilities, especially for field experiments [1,5].
Various polynomial regression models using both field and laboratory data have been proposed to overcome these restrictions. These regression models relate field data to rock mass classification schemes, including the rock mass rating [6], tunnelling quality index [7], and geological strength index [8], to estimate the deformation modulus, Erm, of an isotropic rock mass. Most of these correlations fit the field data quite reliably, although the modulus-based equations and the exponential relationships recommended by [9,10] perform poorly in predicting rock mass deformation moduli.
Moreover, several regression equations have been developed using data from laboratory analyses. These regressions were developed by relating a specified spectrum of data from simple index tests, including the Schmidt hammer [11,12], ultrasonic velocity [2,13], point load strength [14,15], and porosity [1,16,17], to Young’s modulus, E. While some previous studies stated that traditional statistical methods lack generalizability [17,18], the present study endeavors to examine this controversy rigorously.
Based on the latest research, the number of claimed successful applications of intelligent methodologies in science and engineering has noticeably increased over roughly the last 20 years [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42]. In this regard, artificial intelligence and machine learning methods, such as artificial neural networks (ANNs), have been used in the initial design of rock-based structures [5,43,44,45]. To elaborate, in 2018, [46] presented the results of a fuzzy inference system (FIS) model for forecasting the UCS and E of greywacke samples. The author used 54 fuzzy rules in the suggested fuzzy model to map the inputs (rock index properties) to the output variables, E and UCS. The author then benchmarked the fuzzy model against multiple regression and observed that its predictions agreed better with the laboratory test results than those of the statistical model. In another investigation, Kahraman, et al. [47] provided an ANN model to forecast the E and UCS values of the Misis fault breccia. Having compared their forecast outcomes with the regression models, they observed a high degree of precision in their ANN model. In another comparative study to predict the Young’s modulus of gypsum, Yılmaz and Yuksek [15] developed both an ANN and multiple regression, and they concluded that the ANN model could predict the rock modulus more accurately. To demonstrate the successful application of intelligent methodologies in predicting the UCS of rocks, Mohamad, et al. [48] and Momeni, et al. [49] carried out studies using a hybrid particle swarm optimization (PSO) and ANN model. Table 1 shows some newly proposed models for forecasting the rock modulus, E, using soft computing methods.
Bejarbaneh, Bejarbaneh, Fahimifar, Armaghani and Abd Majid [2] also developed multiple regression (MR), ANN, and FIS models to predict E using data from a dam located in Malaysia. Their findings showed that the ANN model outperformed the FIS and MR models in terms of accuracy. It should be mentioned that successful applications of intelligent methodologies have been reported by many researchers in solving civil and mining engineering problems [50,51,52,53].
While supervised machine learning (ML) techniques, particularly ANN, have been widely applied to predicting E, to the authors’ best knowledge, few studies have investigated the application of ensembled ANNs for predicting the elasticity modulus. In addition, no study has applied DTs and their ensembled variants to E prediction. Bagging and boosting are two well-established ensemble approaches for solving both regression and classification problems, yet few studies have employed these techniques for predicting E. Thus, this study applies the ensemble variants of ANN and DTs and investigates their performance in predicting E.

2. Methods

The dataset under study consists of 45 samples of eight variables, i.e., dry density (DD), p-wave velocity (Vp), interlocking coarse-grained crystals of quartz (Qtz), plagioclase (Plg), alkali feldspar (Kpr), chlorite (Chl), Mica, and Young’s modulus (E). Note that DD, Vp, and E were obtained using the relevant laboratory testing procedures [52], whereas the mineral properties of the samples (i.e., Qtz, Plg, Kpr, Chl, and Mica) were obtained using petrographic analyses. The goal is to investigate the most appropriate ensemble approach for predicting E, the dependent variable that should be expressed in terms of the other variables.
Two ensemble approaches were applied to ANN and DTs for E prediction. These approaches included bagging and boosting. The performance of these models was assessed utilizing linear correlation (R), mean absolute error (MAE), and lift. A flow diagram of this research is indicated in Figure 1.

2.1. Data Gathering and Case Study

The authors of this present study gathered data from the surface of the Pahang–Selangor raw water transfer (PSRWT) tunnel in Malaysia. This project was constructed to transfer raw water from Pahang to meet the water demand of Selangor and Kuala Lumpur. It is worth noting that the tunnel passes under the main mountain chain separating the Selangor and Pahang states. This mountain chain has an elevation varying from 100 to 1400 m and forms the backbone of Peninsular Malaysia. Granite is the major rock variety in this tunnel, with a typical rock strength of 150–200 MPa. The excavators unearthed three tunnel sections using three tunnel boring machines (TBMs). The lengths of these sections were 11.7 km, 11.7 km, and 11.3 km.
For the laboratory tests, the research team prepared 45 rock samples. They also inspected the samples for any factor that might cause unwanted changes in quality or unanticipated failures; thus, the samples were checked for cracks or small-scale discontinuities. Dry density (DD) and Vp were among the rock physical tests. In addition, in order to determine E, the research team conducted uniaxial compression tests. The rock samples’ E values were calculated from the axial strain outcomes and the stress measured by the relevant transducers. ISRM [59] standards were followed during sample preparation and testing.
The data of this research were also used by Armaghani, Mohamad, Momeni and Narayanasamy [54]. The research team also conducted petrographic analyses on the granite samples, employing a polarizing petrological microscope. In order to determine the proportion of various minerals in the samples, small parts of the samples were prepared. The specimens present a nonporphyritic and holocrystalline mineral composition and chiefly comprise alkali feldspar (Kpr), interlocking coarse-grained crystals of quartz (Qtz), biotite (Bi), and plagioclase (Plg). Note that the mica content comprises three minerals: sericite, muscovite, and biotite.

2.2. Data Analysis

In this research, multiple statistical and simulation procedures were applied to examine the outcomes of the laboratory tests. The following sections describe how the techniques discussed above were used to estimate the E of the granite samples. Finally, the E values obtained from the laboratory experiments were compared with the forecasted values.

2.3. Regression Analysis

This research produced various empirical equations employing simple and multiple regression methods. While simple regression takes the general form Y = A1X1 + M, multiple regression takes the generic form Y = A1X1 + A2X2 + … + AnXn + M, where {A1, A2, …, An} are the regression coefficients and M is the intercept, i.e., the value of Y when every input parameter equals zero. Figure 2 displays the simple equations, plots, and linear correlations (R). In addition, two equations were developed using multiple regression techniques (Table 2). Four common approaches, namely “enter”, “stepwise”, “backwards”, and “forwards”, were applied to develop these equations. The simple regression equations showed that Vp was the factor most correlated with E, followed by Qtz. In addition, three of the multiple regression methods, namely stepwise, backwards, and forwards, yielded identical equations for estimating E. The performance of these equations is not encouraging: both the simple and multiple regression models yielded low accuracy. Moreover, relying on only one predictor, as in simple regression, is insufficient for analyzing and predicting E. Yilmaz and Yuksek [14] stated that the primary technical shortcoming of regression techniques is that they merely ascertain relations and do not learn the underlying causal structure.
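The regression forms above can be sketched with ordinary least squares. The following is a minimal illustration on synthetic data; the variable ranges and coefficients are invented for demonstration and are not taken from the study’s dataset:

```python
import numpy as np

# Toy multiple regression of the form E = A1*Vp + A2*Qtz + M, fitted by
# ordinary least squares. All numbers here are synthetic, for illustration.
rng = np.random.default_rng(0)
n = 45
Vp = rng.uniform(4.0, 7.0, n)     # hypothetical p-wave velocities
Qtz = rng.uniform(20.0, 40.0, n)  # hypothetical quartz contents (%)
E = 8.0 * Vp + 0.5 * Qtz + 3.0 + rng.normal(0.0, 0.5, n)

X = np.column_stack([Vp, Qtz, np.ones(n)])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
A1, A2, M = coef                              # recovered coefficients
```

With enough samples relative to the noise level, the fitted coefficients land close to the generating values, which is exactly the association-only behavior the passage criticizes: the fit says nothing about causal structure.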
Machine learning methods can overcome the abovementioned concerns owing to their assumption-free nature. Various ML techniques can address E-related problems. While some studies have considered well-known ML techniques, such as ANN and DT, for E prediction, no study has employed the boosted and bagged variants of these methods. Therefore, this research applies these two advanced techniques to ANN and DT to predict the E values of the Main Range granite in Malaysia.

2.4. ANN and DT Models

2.4.1. ANN

Various ML techniques are available for solving regression problems; two of the most commonly used are ANN and DT. An ANN is a modeling procedure inspired by the human nervous system that learns from representative data instances describing a physical phenomenon or a decision process. One distinctive characteristic of ANNs is that they can ascertain practical associations between input and target variables and extract exact knowledge and complicated information from representative datasets. Associative relationships between input and target variables are ascertained without presumptions about the analytical description of the phenomena. This technique presents distinct benefits over regression-based techniques, including its capability to handle noisy data. NNs comprise a layer of input nodes and a layer of output nodes joined by one or more layers of hidden nodes.
An ANN ensemble is a learning model where several NNs are trained for a similar assignment [60]. It was shown that the generalization capability of the NN model was remarkably enhanced by ensembling various NNs [61]. Typically, an NN ensemble is built in a couple of steps: (1) training several elements of NNs and (2) consolidating the element forecasts.
There are two common approaches for training component NNs: boosting and bagging. The process of boosting and bagging is shown in Figure 3. Although both bagging and boosting involve N learners, they are fundamentally distinct. Unlike the bagging technique, which combines predictions of the same type, the boosting method incorporates predictions of various kinds. In bagging, each model is constructed independently of the others, whereas, in boosting, each model is influenced by the outcomes of the earlier constructed models. In the bagging technique, every model is given equal weight, while, in the boosting technique, new models are weighted according to their performance. In boosting, new training subsets emphasize observations that the previous model predicted incorrectly, whereas bagging uses randomly generated bootstrap subsets of the training data.
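The contrast between the two schemes can be sketched with simple linear learners standing in for the component NNs. This is a toy illustration on synthetic data, not the study’s models: the key point is that the bagged learners are trained independently on resamples, while the boosted learners are built sequentially on residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 3.0 * x + rng.normal(0.0, 0.1, x.size)   # synthetic training data

def fit_line(xs, ys):
    # weak learner standing in for a component NN: a least-squares line
    a, b = np.polyfit(xs, ys, 1)
    return lambda t: a * t + b

# Bagging: every learner is trained independently on a bootstrap resample,
# and the ensemble prediction is the equally weighted mean.
bag = []
for _ in range(10):
    idx = rng.integers(0, x.size, x.size)    # sample with replacement
    bag.append(fit_line(x[idx], y[idx]))
bag_pred = np.mean([f(x) for f in bag], axis=0)

# Boosting: learners are built sequentially, each fitting the residual
# left by the ensemble so far, so later learners depend on earlier ones.
boost_pred = np.zeros_like(y)
for _ in range(10):
    f = fit_line(x, y - boost_pred)          # fit what is still unexplained
    boost_pred += 0.5 * f(x)                 # shrunken additive update
```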

2.4.2. DTs

The boosting technique was introduced by Schapire [62] and later improved by Freund [63] and Freund and Schapire [64]. This technique produces a set of component NNs whose training datasets are determined by the performance of the previous ones: training examples that are incorrectly forecasted by previous networks play a more important role in the training of succeeding networks. Bagging was introduced by Breiman [65] following the bootstrap sampling procedure [66]. This technique produces various training datasets from the initial training dataset and then trains a component NN on each training set.
DT is a kind of supervised learning algorithm suitable for classification and regression problems. With a tree-like structure, it builds from the training data top down by choosing the most suitable decision node to split first; subsequent nodes are then produced using entropy and information gain. For every chance node, weights are estimated from conditional (joint) probabilities. Various DTs have been successfully implemented in the engineering field, including the classification and regression tree (CART), chi-square automatic interaction detector (CHAID), quick, unbiased and efficient statistical tree (QUEST), and C5. However, these DTs are considered weak learners because they are vulnerable to changes in the data and prone to overfitting.
To remedy the issues with DTs stated above, ensemble approaches can be employed. As with ANNs, the ensemble approach integrates various DTs to achieve a better predictive performance than a single DT. Random forest (RF) [67] and the extreme gradient boosting tree (XGBT) [68] are well-known bagging and boosting techniques, respectively, which use a CART model for creating a stronger learner.
As discussed earlier, RF is a bagging algorithm that reduces variance. DTs are unstable with regard to tiny fluctuations in the data; RF offers a stronger model by reducing this variability through the bagging technique. Bagging is an ensemble method in which several predictors are built and consolidated by applying an averaging scheme, such as the simple average, majority vote, or weighted average.
Gradient boosting (GB) is an effective boosting technique. GB applies a gradient descent algorithm that can optimize any differentiable loss function. An ensemble of DTs is created one by one, and the individual DTs are summed sequentially; each subsequent DT attempts to reduce the loss (the deviation between real and forecasted values). The extreme gradient boosting tree (XGBT), which uses explicit evaluations to determine the most suitable tree model, is an effective extension of GB. In its tree creation process, XGBT uses the similarity score (SS) and gain to discover the fittest node divisions; the node division with the greatest gain is then chosen as the most suitable split for the tree.
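A minimal sketch of the gradient boosting idea, using hand-rolled depth-1 regression stumps as the weak learners and a squared-error split criterion. The data are synthetic; a real XGBT additionally uses the similarity score, regularization, and pruning described above.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, 200)
y = np.where(X > 0.5, 2.0, -1.0) + rng.normal(0.0, 0.1, 200)  # synthetic step

def fit_stump(x, r):
    # weak learner: depth-1 regression tree, split chosen by squared error
    best = None
    for s in np.linspace(0.05, 0.95, 19):
        left, right = r[x <= s], r[x > s]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda t: np.where(t <= s, lv, rv)

# Sequential additive model: each stump fits the current residuals, and its
# shrunken prediction is added to the ensemble (eta-style learning rate 0.3).
pred = np.full_like(y, y.mean())
for _ in range(30):
    stump = fit_stump(X, y - pred)
    pred += 0.3 * stump(X)
```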

3. Models’ Development and Results

3.1. Data Preparation

For developing the ML models, it was required to normalize data, since the data were unbalanced. Thus, the input variables were transformed using min/max transformation techniques wherein the minimum value was zero and the maximum was one. Details of inputs’ distribution before and after the transformation are displayed in Table 3.
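As a sketch, the min/max transformation maps each variable’s minimum to 0 and its maximum to 1 (the values below are invented for illustration):

```python
import numpy as np

# Min/max transformation: each input variable is rescaled so that its
# minimum maps to 0 and its maximum to 1.
def min_max(col):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo)

vp = np.array([3.2, 4.8, 6.1, 5.0, 7.3])  # hypothetical raw values
scaled = min_max(vp)
```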

3.2. Input Selection

Before applying the boosting and bagging techniques to the data, input selection was conducted. As previously mentioned, this study adopted a likelihood ratio test for selecting the most relevant inputs for predicting E. The candidate inputs included DD, Vp, Qtz, Mica, Chl, Plg, and Kpr. The test selected Vp, Qtz, DD, and Mica, in that order, as the most relevant inputs for E prediction. Figure 4 provides a series of 3D density diagrams of the selected variables and shows the frequency distribution of pairs of each input variable and the target variable (E). The inputs selected in this step were used to develop the standard, boosting, and bagging models.
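The study’s likelihood ratio test was run in a statistics package; its logic can be sketched for a Gaussian linear model, where dropping one input and comparing residual sums of squares yields an LR statistic of n·log(RSS_reduced / RSS_full). All data and variable names below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 45
# Synthetic candidate inputs; only the first two actually drive the target.
cand = {name: rng.normal(0.0, 1.0, n) for name in ["Vp", "Qtz", "DD", "Mica", "Chl"]}
E = 2.0 * cand["Vp"] + 1.0 * cand["Qtz"] + rng.normal(0.0, 0.3, n)

def rss(Xcols, y):
    # residual sum of squares of an OLS fit with intercept
    X = np.column_stack(Xcols + [np.ones(y.size)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# LR-style screening: for Gaussian errors, the statistic for dropping one
# input from the full model is n * log(RSS_reduced / RSS_full); larger
# values indicate a more relevant input.
full = rss(list(cand.values()), E)
lr = {name: n * np.log(rss([v for k, v in cand.items() if k != name], E) / full)
      for name in cand}
ranked = sorted(lr, key=lr.get, reverse=True)
```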

3.3. Bagging and Boosting Models

This study developed six models: standard ANN, standard DT (CART), bagged ANN, boosted ANN, bagged DT (random forest), and boosted DT (extreme gradient boosting tree). These models were developed using the inputs transformed and selected in the previous steps. A 10-fold cross-validation technique was employed to avoid overfitting. In k-fold cross-validation, smaller collections are created by dividing the training dataset into k folds. A model is then trained on k−1 folds, and the unused fold is employed to validate the resulting model. Finally, the mean of the values calculated over the k rounds is reported as the k-fold cross-validation performance. Figure 5 shows the process of the 10-fold cross-validation used in this study.
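A minimal sketch of the k-fold procedure on a synthetic 45-sample dataset, with a simple linear fit standing in for the actual models:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, 45)
y = 2.0 * X + rng.normal(0.0, 0.05, 45)   # synthetic target

k = 10
idx = rng.permutation(45)
folds = np.array_split(idx, k)            # 45 samples -> folds of size 4-5

fold_mae = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    a, b = np.polyfit(X[train], y[train], 1)   # train on k-1 folds
    pred = a * X[test] + b                     # validate on the held-out fold
    fold_mae.append(np.abs(pred - y[test]).mean())

cv_mae = float(np.mean(fold_mae))   # reported k-fold performance
```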
The team behind this research employed various hyperparameters controlling the learning process to develop these models and used a grid-search strategy to optimize these parameters. The process of the grid-search in this study is shown in Figure 6. As previously mentioned, the single DT developed in this study is CART. Its optimized hyperparameters included: levels under root = 5; highest replacements = 5; lowest variation in impurity = 0.0; lowest records in parent branch = 2%; and smallest records in child branch = 1%. Concerning the XGBT model, the optimized hyperparameters were: boost rounds = 10; maximum tree depth = 6; minimum child weight = 1.0; objective = reg:linear; subsample = 1.0; eta = 0.3; colsample by tree = 1.0; colsample by level = 1.0; and lambda = 1.0. For RF, the best hyperparameters comprised: number of trees to build = 10 and minimum leaf node size = 1. Concerning the standard ANN model, the hyperparameters were: NN model = MLP; hidden layer 1 = 1; hidden layer 2 = 0; and overfit prevention set = 30%. For the boosted and bagged models: number of component models = 10 and combining rule = mean.
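A grid search amounts to exhaustively scoring every combination of candidate hyperparameter values on a validation split and keeping the best-scoring one. A toy sketch, with a polynomial degree and a ridge penalty as hypothetical hyperparameters in place of the models’ actual ones:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, 80)
y = X ** 3 - X + rng.normal(0.0, 0.05, 80)   # synthetic target
tr, te = np.arange(60), np.arange(60, 80)    # train / validation split

# Grid search: score every combination of candidate values and keep the best.
best = None
for deg, lam in product([1, 2, 3, 4], [0.0, 1e-3, 1e-1]):
    V = np.vander(X, deg + 1)                        # polynomial features
    A = V[tr].T @ V[tr] + lam * np.eye(deg + 1)      # ridge normal equations
    w = np.linalg.solve(A, V[tr].T @ y[tr])
    mae = float(np.abs(V[te] @ w - y[te]).mean())
    if best is None or mae < best[0]:
        best = (mae, deg, lam)
best_mae, best_degree, best_lambda = best
```

The exhaustive loop is what makes grid search simple but expensive: its cost is the product of the grid sizes, which is why the study tunes only a handful of values per parameter.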
Once the data were prepared and models’ hyperparameters were tuned, the models were executed. Figure 7 shows a comparison between the actual and predicted values of “E” by the models developed in this study. The performance of the models was evaluated using two well-known criteria, including linear correlation (R) and mean absolute error (MAE). The calculation formulas of these criteria are shown by Equations (1) and (2).
R = \frac{\sum_{i=1}^{n} (e_i - \bar{e})(m_i - \bar{m})}{\sqrt{\sum_{i=1}^{n} (e_i - \bar{e})^2 \sum_{i=1}^{n} (m_i - \bar{m})^2}} (1)
MAE = \frac{\sum_{i=1}^{n} |e_i - m_i|}{n} (2)
where ei and mi denote the ith real and predicted values, respectively; ē and m̄ signify the averages of the real and predicted values, respectively; and n stands for the number of samples in the dataset.
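Equations (1) and (2) translate directly into code (the actual/predicted values below are hypothetical):

```python
import numpy as np

# R and MAE as defined in Equations (1) and (2).
def r_score(e, m):
    de, dm = e - e.mean(), m - m.mean()
    return float((de * dm).sum() / np.sqrt((de ** 2).sum() * (dm ** 2).sum()))

def mae(e, m):
    return float(np.abs(e - m).mean())

actual = np.array([10.0, 12.0, 15.0, 20.0])   # hypothetical measured E
pred = np.array([11.0, 12.5, 14.0, 19.0])     # hypothetical model output
```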
Table 4 and Table 5 show the performance of each model developed in this study. Concerning the standard models, the ANN possessed a better R and MAE than the CART model. Among the bagged models, RF outperformed its peer, the bagged ANN. For the boosted models, XGBT performed better than the boosted ANN. Overall, the boosted models outperformed the bagged and standard models, and the boosted DT, which, in fact, was an XGBT, outperformed all other models developed in this study.
In addition to R and MAE, a lift chart was also used for model evaluation. The lift chart belongs to the family of cumulative charts; it compares the fraction of hits in every quantile with the overall percentage of hits in the training data. Equation (3) shows its calculation formula. Hits refer to values higher than the midpoint of the values’ range.
Lift = \frac{H_q / R_q}{T_h / T_r} (3)
where Hq denotes hits in quantile; Rq refers to the records in quantile; Th signifies the total hits; and Tr is the total records.
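Equation (3) can be computed as follows (toy values for illustration):

```python
import numpy as np

# Lift per Equation (3): (hits in quantile / records in quantile) divided by
# (total hits / total records). A "hit" is a value above the midpoint of the
# value range.
def lift(values, quantile_mask):
    mid = (values.min() + values.max()) / 2.0
    hits = values > mid
    hq, rq = hits[quantile_mask].sum(), quantile_mask.sum()
    th, tr = hits.sum(), values.size
    return (hq / rq) / (th / tr)

v = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])   # hypothetical predictions
top = np.argsort(v)[::-1][:3]                   # top quantile: 3 highest
mask = np.zeros(v.size, dtype=bool)
mask[top] = True
```

Here the top quantile contains all three hits (values above the midpoint 5.5), while only half of all records are hits, so the lift is 2.0: the model concentrates hits twice as densely as chance.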
Figure 8 shows the lift chart of the models developed in this investigation. In this chart, higher lines indicate better models, especially at the left edge of the graph. As can be seen, the boosted DT (XGBT) line was higher than the others and also more stable.
The superior performance of the XGBT model compared to the CART and RF models was expected because of the various capabilities of XGBT that help it produce a reliable and accurate predictive model. XGBT uses the similarity score to prune trees directly, in line with the actual modeling goals. In addition, XGBT is a reliable choice for unbalanced datasets, whereas RF is not trustworthy in these circumstances. One of the most important differences between XGBT and RF is that XGBT gives added significance to the functional space when decreasing the cost of a model, while RF relies more on the hyperparameters to optimize the model. XGBT also outperformed all forms of ANN, which can add knowledge and experience to the field of civil engineering, particularly mining.
In comparison with previous studies, the accuracy of the XGBT model (R = 0.994) exceeded that of Armaghani et al. [54] (R = 0.992), who applied an ANFIS model to the same database. They used DD, ultrasonic velocity, quartz content, and plagioclase to develop their predictive model, selecting these inputs using simple and multiple regression models. In addition, the accuracy achieved in this study was higher than in the other studies reported in Table 1.

4. Conclusions

This study applied two well-established ensemble techniques, boosting and bagging, to the ANN and DT methods for predicting the Young’s modulus. These techniques were applied to a dataset comprising 45 data samples from a mountain range in Malaysia. The final input variables of these models, including Vp, Qtz, DD, and Mica, were selected through a likelihood ratio test. In total, six models were developed: standard ANN, boosted ANN, bagged ANN, CART, XGBT, and RF. The performance of these models was evaluated using R, MAE, and lift charts. The findings of this study showed that, firstly, XGBT outperformed all models developed in this study and, secondly, the boosting models outperformed the bagging models.
One constraint of this analysis was its small sample size; other studies can apply these predictive techniques to E prediction using larger samples to obtain higher accuracy. Nevertheless, the predictive techniques described above can be implemented for forecasting E depending on the circumstances. Regarding performance, it was further remarked that applying the bagging and boosting techniques constitutes a functional approach to reducing uncertainties when planning rock engineering projects. Moreover, these methods can be employed to predict many other mining problems, including unconfined compressive strength. XGBT is an effective, reliable, and functional algorithm: it includes both tree-learning algorithms and linear model solvers, and its speed comes from its ability to perform parallel computations.

Author Contributions

Conceptualization, L.T., B.H. and A.S.A.R.; methodology, L.T., B.H. and M.M.S.S.; formal analysis, L.T. and B.H.; writing—original draft preparation, L.T., B.H., A.T.J., A.S.A.R. and M.M.S.S.; writing—review and editing, L.T., B.H., A.T.J., A.S.A.R. and M.M.S.S.; supervision, A.S.A.R. and M.M.S.S.; funding acquisition, M.M.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

The research is partially funded by the Ministry of Science and Higher Education of the Russian Federation under the strategic academic leadership program ‘Priority 2030’ (Agreement 075-15-2021-1333 dated 30 September 2021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Armaghani, D.J.; Mohamad, E.T.; Momeni, E.; Monjezi, M.; Narayanasamy, M.S. Prediction of the strength and elasticity modulus of granite through an expert artificial neural network. Arab. J. Geosci. 2016, 9, 48. [Google Scholar] [CrossRef]
  2. Bejarbaneh, B.Y.; Bejarbaneh, E.Y.; Fahimifar, A.; Armaghani, D.J.; Abd Majid, M.Z. Intelligent modelling of sandstone deformation behaviour using fuzzy logic and neural network systems. Bull. Eng. Geol. Environ. 2018, 77, 345–361. [Google Scholar] [CrossRef]
  3. Hoek, E.; Diederichs, M.S. Empirical estimation of rock mass modulus. Int. J. Rock Mech. Min. Sci. 2006, 43, 203–215. [Google Scholar] [CrossRef]
  4. Bejarbaneh, B.Y.; Armaghani, D.J.; Amin, M.F.M. Strength characterisation of shale using Mohr–Coulomb and Hoek–Brown criteria. Measurement 2015, 63, 269–281. [Google Scholar] [CrossRef] [Green Version]
  5. Mishra, D.; Basu, A. Estimation of uniaxial compressive strength of rock materials by index tests using regression analysis and fuzzy inference system. Eng. Geol. 2013, 160, 54–68. [Google Scholar] [CrossRef]
  6. Bieniawski, Z. Engineering classification of jointed rock masses. Civ. Eng. Siviele Ing. 1973, 1973, 335–343. [Google Scholar]
  7. Barton, N.; Lien, R.; Lunde, J. Engineering classification of rock masses for the design of tunnel support. Rock Mech. 1974, 6, 189–236. [Google Scholar] [CrossRef]
  8. Hoek, E.; Brown, E.T. Practical estimates of rock mass strength. Int. J. Rock Mech. Min. Sci. 1997, 34, 1165–1186. [Google Scholar] [CrossRef]
  9. Mitri, H.; Edrissi, R.; Henning, J. Finite-element modeling of cable-bolted stopes in hard-rock underground mines. Trans.-Soc. Min. Metall. Explor. Inc. 1995, 298, 1897–1902. [Google Scholar]
  10. Sonmez, H.; Gokceoglu, C.; Ulusay, R. Indirect determination of the modulus of deformation of rock masses based on the GSI system. Int. J. Rock Mech. Min. Sci. 2004, 41, 849–857. [Google Scholar] [CrossRef]
11. Yılmaz, I.; Sendır, H. Correlation of Schmidt hardness with unconfined compressive strength and Young’s modulus in gypsum from Sivas (Turkey). Eng. Geol. 2002, 66, 211–219.
12. Dinçer, I.; Acar, A.; Çobanoğlu, I.; Uras, Y. Correlation between Schmidt hardness, uniaxial compressive strength and Young’s modulus for andesites, basalts and tuffs. Bull. Eng. Geol. Environ. 2004, 63, 141–148.
13. Yaşar, E.; Erdoğan, Y. Estimation of rock physicomechanical properties using hardness methods. Eng. Geol. 2004, 71, 281–288.
14. Yilmaz, I.; Yuksek, G. Prediction of the strength and elasticity modulus of gypsum using multiple regression, ANN, and ANFIS models. Int. J. Rock Mech. Min. Sci. 2009, 46, 803–810.
15. Yılmaz, I.; Yuksek, A. An example of artificial neural network (ANN) application for indirect estimation of rock parameters. Rock Mech. Rock Eng. 2008, 41, 781–795.
16. Lashkaripour, G.R. Predicting mechanical properties of mudrock from index parameters. Bull. Eng. Geol. Environ. 2002, 61, 73–77.
17. Beiki, M.; Majdi, A.; Givshad, A.D. Application of genetic programming to predict the uniaxial compressive strength and elastic modulus of carbonate rocks. Int. J. Rock Mech. Min. Sci. 2013, 63, 159–169.
18. Rezaei, M.; Majdi, A.; Monjezi, M. An intelligent approach to predict unconfined compressive strength of rock surrounding access tunnels in longwall coal mining. Neural Comput. Appl. 2014, 24, 233–241.
19. Zhu, W.; Rad, H.N.; Hasanipanah, M. A chaos recurrent ANFIS optimized by PSO to predict ground vibration generated in rock blasting. Appl. Soft Comput. 2021, 108, 107434.
20. Asteris, P.G.; Mamou, A.; Hajihassani, M.; Hasanipanah, M.; Koopialipoor, M.; Le, T.-T.; Kardani, N.; Armaghani, D.J. Soft computing based closed form equations correlating L and N-type Schmidt hammer rebound numbers of rocks. Transp. Geotech. 2021, 29, 100588.
21. Fattahi, H.; Hasanipanah, M. Prediction of blast-induced ground vibration in a mine using relevance vector regression optimized by metaheuristic algorithms. Nat. Resour. Res. 2021, 30, 1849–1863.
22. Aghaabbasi, M.; Shekari, Z.A.; Shah, M.Z.; Olakunle, O.; Armaghani, D.J.; Moeinaddini, M. Predicting the use frequency of ride-sourcing by off-campus university students through random forest and Bayesian network techniques. Transp. Res. Part A Policy Pract. 2020, 136, 262–281.
23. Ke, B.; Khandelwal, M.; Asteris, P.G.; Skentou, A.D.; Mamou, A.; Armaghani, D.J. Rock-Burst Occurrence Prediction Based on Optimized Naïve Bayes Models. IEEE Access 2021, 9, 91347–91360.
24. He, Z.; Armaghani, D.J.; Masoumnezhad, M.; Khandelwal, M.; Zhou, J.; Murlidhar, B.R. A Combination of Expert-Based System and Advanced Decision-Tree Algorithms to Predict Air-Overpressure Resulting from Quarry Blasting. Nat. Resour. Res. 2021, 30, 1889–1903.
25. Bayat, P.; Monjezi, M.; Mehrdanesh, A.; Khandelwal, M. Blasting pattern optimization using gene expression programming and grasshopper optimization algorithm to minimise blast-induced ground vibrations. Eng. Comput. 2021, 38, 3341–3350.
26. Le, T.-T.; Asteris, P.G.; Lemonis, M.E. Prediction of axial load capacity of rectangular concrete-filled steel tube columns using machine learning techniques. Eng. Comput. 2021, 1–34.
27. Harandizadeh, H.; Armaghani, D.J.; Asteris, P.G.; Gandomi, A.H. TBM performance prediction developing a hybrid ANFIS-PNN predictive model optimized by imperialism competitive algorithm. Neural Comput. Appl. 2021, 33, 16149–16179.
28. Gavriilaki, E.; Asteris, P.G.; Touloumenidou, T.; Koravou, E.-E.; Koutra, M.; Papayanni, P.G.; Karali, V.; Papalexandri, A.; Varelas, C.; Chatzopoulou, F. Genetic justification of severe COVID-19 using a rigorous algorithm. Clin. Immunol. 2021, 226, 108726.
29. Ding, W.; Nguyen, M.D.; Mohammed, A.S.; Armaghani, D.J.; Hasanipanah, M.; Van Bui, L.; Pham, B.T. A new development of ANFIS-Based Henry gas solubility optimization technique for prediction of soil shear strength. Transp. Geotech. 2021, 29, 100579.
30. Zhou, J.; Huang, S.; Qiu, Y. Optimization of random forest through the use of MVO, GWO and MFO in evaluating the stability of underground entry-type excavations. Tunn. Undergr. Space Technol. 2022, 124, 104494.
31. Zhou, J.; Shen, X.; Qiu, Y.; Shi, X.; Khandelwal, M. Cross-correlation stacking-based microseismic source location using three metaheuristic optimization algorithms. Tunn. Undergr. Space Technol. 2022, 126, 104570.
32. Shan, F.; He, X.; Armaghani, D.J.; Zhang, P.; Sheng, D. Success and challenges in predicting TBM penetration rate using recurrent neural networks. Tunn. Undergr. Space Technol. 2022, 130, 104728.
33. Chen, L.; Asteris, P.G.; Tsoukalas, M.Z.; Armaghani, D.J.; Ulrikh, D.V.; Yari, M. Forecast of Airblast Vibrations Induced by Blasting Using Support Vector Regression Optimized by the Grasshopper Optimization (SVR-GO) Technique. Appl. Sci. 2022, 12, 9805.
34. Moosavi, S.M.H.; Ma, Z.; Armaghani, D.J.; Aghaabbasi, M.; Ganggayah, M.D.; Wah, Y.C.; Ulrikh, D.V. Understanding and Predicting the Usage of Shared Electric Scooter Services on University Campuses. Appl. Sci. 2022, 12, 9392.
35. Koopialipoor, M.; Asteris, P.G.; Mohammed, A.S.; Alexakis, D.E.; Mamou, A.; Armaghani, D.J. Introducing stacking machine learning approaches for the prediction of rock deformation. Transp. Geotech. 2022, 34, 100756.
36. Asteris, P.G.; Rizal, F.I.M.; Koopialipoor, M.; Roussis, P.C.; Ferentinou, M.; Armaghani, D.J.; Gordan, B. Slope stability classification under seismic conditions using several tree-based intelligent techniques. Appl. Sci. 2022, 12, 1753.
37. Momeni, E.; Yarivand, A.; Dowlatshahi, M.B.; Armaghani, D.J. An efficient optimal neural network based on gravitational search algorithm in predicting the deformation of geogrid-reinforced soil structures. Transp. Geotech. 2021, 26, 15.
38. Yang, H.; Li, Z.; Jie, T.; Zhang, Z. Effects of joints on the cutting behavior of disc cutter running on the jointed rock mass. Tunn. Undergr. Space Technol. 2018, 81, 112–120.
39. Liu, B.; Yang, H.; Karekal, S. Effect of water content on argillization of mudstone during the tunnelling process. Rock Mech. Rock Eng. 2020, 53, 799–813.
40. Yang, H.; Xing, S.; Wang, Q.; Li, Z. Model test on the entrainment phenomenon and energy conversion mechanism of flow-like landslides. Eng. Geol. 2018, 239, 119–125.
41. Yang, H.; Zeng, Y.; Lan, Y.; Zhou, X. Analysis of the excavation damaged zone around a tunnel accounting for geostress and unloading. Int. J. Rock Mech. Min. Sci. 2014, 69, 59–66.
42. Yang, H.; Wang, Z.; Song, K. A new hybrid grey wolf optimizer-feature weighted-multiple kernel-support vector regression technique to predict TBM performance. Eng. Comput. 2020, 38, 2469–2485.
43. Feng, X.-T.; Hudson, J. The ways ahead for rock engineering design methodologies. Int. J. Rock Mech. Min. Sci. 2004, 41, 255–273.
44. Hudson, J.; Feng, X. Updated flowcharts for rock mechanics modelling and rock engineering design. Int. J. Rock Mech. Min. Sci. 2007, 44, 174–195.
45. Mishra, D.; Srigyan, M.; Basu, A.; Rokade, P. Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests. Int. J. Rock Mech. Min. Sci. 2015, 100, 418–424.
46. Winn, K. A Fuzzy Model to Predict the Unconfined Compressive Strength of Singapore’s Sedimentary Rocks in Comparison With Multi-Regression Analysis. In Proceedings of the ISRM International Symposium-10th Asian Rock Mechanics Symposium, Singapore, 29 October–3 November 2018.
47. Kahraman, S.; Gunaydin, O.; Alber, M.; Fener, M. Evaluating the strength and deformability properties of Misis fault breccia using artificial neural networks. Expert Syst. Appl. 2009, 36, 6874–6878.
48. Mohamad, E.T.; Armaghani, D.J.; Momeni, E.; Abad, S.V.A.N.K. Prediction of the unconfined compressive strength of soft rocks: A PSO-based ANN approach. Bull. Eng. Geol. Environ. 2015, 74, 745–757.
49. Momeni, E.; Armaghani, D.J.; Hajihassani, M.; Amin, M.F.M. Prediction of uniaxial compressive strength of rock samples using hybrid particle swarm optimization-based artificial neural networks. Measurement 2015, 60, 50–63.
50. Mansouri, I.; Shariati, M.; Safa, M.; Ibrahim, Z.; Tahir, M.; Petkovic, D. Analysis of influential factors for predicting the shear strength of a V-shaped angle shear connector in composite beams using an adaptive neuro-fuzzy technique (Retraction of Vol 30, Pg 1247, 2019). J. Intell. Manuf. 2020, 30, 1247–1257.
51. Chahnasir, E.S.; Zandi, Y.; Shariati, M.; Dehghani, E.; Toghroli, A.; Mohamad, E.T.; Shariati, A.; Safa, M.; Wakil, K.; Khorami, M. Application of support vector machine with firefly algorithm for investigation of the factors affecting the shear strength of angle shear connectors. Smart Struct. Syst. 2018, 22, 413–424.
52. Kechagias, J.; Tsiolikas, A.; Asteris, P.; Vaxevanidis, N. Optimizing ANN performance using DOE: Application on turning of a titanium alloy. In Proceedings of the MATEC Web of Conferences, Chisinau, Moldova, 31 May–2 June 2018; p. 01017.
53. Asteris, P.G.; Ashrafian, A.; Rezaie-Balf, M. Prediction of the compressive strength of self-compacting concrete using surrogate models. Comput. Concr. 2019, 24, 137–150.
54. Armaghani, D.J.; Mohamad, E.T.; Momeni, E.; Narayanasamy, M.S. An adaptive neuro-fuzzy inference system for predicting unconfined compressive strength and Young’s modulus: A study on Main Range granite. Bull. Eng. Geol. Environ. 2015, 74, 1301–1319.
55. Dehghan, S.; Sattari, G.; Chelgani, S.C.; Aliabadi, M. Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using regression and artificial neural networks. Min. Sci. Technol. (China) 2010, 20, 41–46.
56. Gokceoglu, C.; Zorlu, K. A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock. Eng. Appl. Artif. Intell. 2004, 17, 61–72.
57. Majdi, A.; Beiki, M. Evolving neural network using a genetic algorithm for predicting the deformation modulus of rock masses. Int. J. Rock Mech. Min. Sci. 2010, 47, 246–253.
58. Singh, R.; Kainthola, A.; Singh, T. Estimation of elastic constant of rocks using an ANFIS approach. Appl. Soft Comput. 2012, 12, 40–45.
59. ISRM. The complete ISRM suggested methods for rock characterization, testing and monitoring: 1974–2006. In Suggested Methods Prepared by the Commission on Testing Methods, International Society for Rock Mechanics; Ulusay, R., Hudson, J.A., Eds.; ISRM Turkish National Group: Ankara, Turkey, 2007.
60. Sollich, P.; Krogh, A. Learning with ensembles: How over-fitting can be useful. In Proceedings of the 1995 Conference, Denver, CO, USA, 27 November–2 December 1995; p. 190.
61. Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001.
62. Schapire, R.E. The strength of weak learnability. Mach. Learn. 1990, 5, 197–227.
63. Freund, Y. Boosting a weak learning algorithm by majority. Inf. Comput. 1995, 121, 256–285.
64. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139.
65. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
66. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; CRC Press: Boca Raton, FL, USA, 1994.
67. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
68. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
Figure 1. Flow diagram of this research for predicting E.
Figure 2. Simple regression models for predicting E.
Figure 3. Bagging and boosting process.
Figure 4. 3D density diagrams of pairs of each input variable and E.
Figure 5. Tenfold cross-validation process.
Figure 6. Grid-search process in this study.
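The tenfold cross-validation and grid-search workflow shown in Figures 5 and 6 can be sketched as follows. The synthetic four-predictor dataset and the closed-form ridge-regression surrogate below are assumptions for illustration only; they stand in for the study’s 45-sample dataset and its ANN/DT learners.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset: four predictors (think Vp, Qtz,
# DD, Mica) and a noisy linear target (illustrative only).
X = rng.normal(size=(50, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(scale=0.3, size=50)

def ridge_fit(X, y, alpha):
    # Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def cv_mae(X, y, alpha, k=10):
    # k-fold cross-validation: each fold serves once as the validation set,
    # and the mean validation MAE scores this hyperparameter setting.
    idx = np.arange(len(y))
    maes = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], alpha)
        maes.append(np.abs(X[fold] @ w - y[fold]).mean())
    return float(np.mean(maes))

# Grid search: evaluate every candidate on the same folds and keep the
# setting with the lowest mean cross-validated error.
grid = [0.01, 0.1, 1.0, 10.0]
best_alpha = min(grid, key=lambda a: cv_mae(X, y, a))
print(best_alpha, round(cv_mae(X, y, best_alpha), 3))
```

The same loop applies unchanged to any learner with a fit/predict interface; only `ridge_fit` would be swapped for the actual model being tuned.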
Figure 7. Actual and predicted values of the models developed in this study.
Figure 8. Lift chart of the models developed in this study.
Table 1. Recent studies on the use of machine learning techniques to predict the E.

Study | Input Variables | Method | R²
Armaghani, Mohamad, Momeni, Monjezi and Narayanasamy [1] | Is(50), n, Rn, Vp | ICA-ANN | 0.71
Armaghani, et al. [54] | p, Vp, Qtz, Kpr, Plg, Chl, Mica | ANFIS | 0.99
Beiki, Majdi and Givshad [17] | p, n, Vp | GA | 0.67
Bejarbaneh, Bejarbaneh, Fahimifar, Armaghani and Abd Majid [2] | Rn, Vp, Is(50) | FIS and ANN | 0.79 and 0.82
Dehghan, et al. [55] | Vp, Is(50), Rn, n | ANN | 0.77
Gokceoglu and Zorlu [56] | Is(50), BPI, Vp, BTS | FIS | 0.79
Majdi and Beiki [57] | p, RQD, n, NJ, GSI | GA-ANN | 0.89
Singh, et al. [58] | p, Is(50), WA | ANFIS | 0.66
Yılmaz and Yuksek [15] | ne, Is(50), Rn, Id | ANN | 0.91
Yilmaz and Yuksek [14] | Vp, Is(50), Rn, WC | ANFIS | 0.95

BPI = block punch index; BTS = Brazilian tensile strength; Chl = chlorite; GSI = geological strength index; ICA = imperialism competitive algorithm; Id = slake durability index; Is(50) = point load strength; Kpr = alkali feldspar; n = porosity; ne = effective porosity; NJ = number of joints per meter; p = density; Plg = plagioclase; Qtz = quartz; Rn = Schmidt hammer rebound number; RQD = rock quality designation; Vp = ultrasonic (p-wave) velocity; WA = water absorption; WC = water content.
Table 2. Equations using various multiple regression techniques.

Method | Equation | R | Std. Error of the Estimate
Enter | E = 25.89 × DD + 0.005 × Vp + 1.50 × Qtz − 1.22 × Kpr − 0.10 × Plg − 2.18 × Chl − 1.29 × Mica − 36.16 | 0.62 | 6.620
Stepwise/Backwards/Forwards | E = 0.0009094 × Vp + 1.644 × Qtz − 43.93 | 0.553 | 26.018
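The two fitted equations in Table 2 can be evaluated directly for a given sample. The sketch below does so for one hypothetical sample; the input values are illustrative only, not data from the study, and units are assumed to match the original dataset (e.g., Vp in m/s).

```python
# Evaluate the multiple-regression equations of Table 2.

def e_enter(dd, vp, qtz, kpr, plg, chl, mica):
    """'Enter' regression: all seven predictors retained."""
    return (25.89 * dd + 0.005 * vp + 1.50 * qtz - 1.22 * kpr
            - 0.10 * plg - 2.18 * chl - 1.29 * mica - 36.16)

def e_stepwise(vp, qtz):
    """Stepwise/backwards/forwards regression: only Vp and Qtz retained."""
    return 0.0009094 * vp + 1.644 * qtz - 43.93

# Hypothetical sample (values are assumptions for illustration).
sample = dict(dd=2.6, vp=5000.0, qtz=35.0, kpr=10.0, plg=20.0, chl=2.0, mica=5.0)

print(round(e_enter(**sample), 2))
print(round(e_stepwise(sample["vp"], sample["qtz"]), 2))
```

The gap between the two predictions for the same sample illustrates why the authors moved beyond simple regression toward the ensemble models.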
Table 3. Details of inputs distribution before and after the transformation.

Input | Skewness (Before) | SD (Before) | Skewness (After) | SD (After)
Vp | 0.36 | 1137.62 | 0.36 | 0.19
Qtz | 0.21 | 5.68 | 0.21 | 0.17
DD | 2.75 | 0.13 | 2.75 | 0.18
Mica | 0.43 | 3.56 | 0.43 | 0.21
Chl | 0.85 | 1.65 | 0.85 | 0.33
Plg | 0.20 | 7.86 | 0.20 | 0.15
Kpr | −1.05 | 5.36 | −1.05 | 0.17
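Table 3 shows identical skewness before and after the transformation while the standard deviations shrink to a common scale, which is the signature of a linear rescaling: skewness is invariant under linear maps. A small demonstration, using hypothetical Vp readings (the values below are illustrative, not the study’s data):

```python
import numpy as np

def skewness(x):
    # Population skewness: E[(x - mean)^3] / sd^3.
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

# Hypothetical p-wave velocities (m/s), illustrative only.
vp = np.array([3200.0, 4100.0, 4500.0, 5000.0, 5300.0, 6800.0])

# Min-max scaling is linear, so it changes the SD but not the skewness.
vp_scaled = (vp - vp.min()) / (vp.max() - vp.min())

print(round(skewness(vp), 3), round(vp.std(), 2))
print(round(skewness(vp_scaled), 3), round(vp_scaled.std(), 2))
```

Whichever specific rescaling the authors used, this invariance explains why the "Skewness" columns in Table 3 match before and after.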
Table 4. Models’ performance.

Model | DT: R | DT: MAE | ANN: R | ANN: MAE
Standard model | 0.476 | 21.111 | 0.566 | 18.928
Bagged model | 0.91 | 9.726 | 0.849 | 12.551
Boosted model | 0.994 | 3.836 | 0.972 | 5.343
Table 5. Details of models’ error.

Model | Minimum Error | Maximum Error | Mean Error | Standard Deviation
Standard DT | −44.944 | 51.957 | 0.081 | 26.845
Bagged DT | −39.5 | 32.02 | −1.205 | 13.486
Boosted DT | −4.638 | 21.466 | 3.231 | 4.872
Standard ANN | −60.37 | 72.076 | −1.552 | 25.35
Bagged ANN | −46.864 | 42.601 | −3.742 | 16.251
Boosted ANN | −15.577 | 17.198 | 0.314 | 7.332
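The statistics reported in Tables 4 and 5 (R, MAE, and the minimum, maximum, mean, and standard deviation of the prediction errors) can all be computed from paired actual and predicted E values. The sketch below does so for hypothetical numbers; the `predicted − actual` sign convention is an assumption, as the paper does not state it.

```python
import numpy as np

def performance(actual, predicted):
    """Metrics of Tables 4 and 5 for one model's predictions."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - actual  # sign convention assumed
    return {
        "R": float(np.corrcoef(actual, predicted)[0, 1]),
        "MAE": float(np.abs(err).mean()),
        "min_error": float(err.min()),
        "max_error": float(err.max()),
        "mean_error": float(err.mean()),
        "std_error": float(err.std(ddof=1)),  # sample standard deviation
    }

# Hypothetical E values in GPa (illustrative only).
actual = [35.0, 48.0, 52.0, 60.0, 75.0]
predicted = [33.0, 50.0, 51.0, 63.0, 72.0]

stats = performance(actual, predicted)
print({k: round(v, 3) for k, v in stats.items()})
```

Applying this function to each of the six models’ test predictions would reproduce the corresponding rows of Tables 4 and 5.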

Share and Cite

MDPI and ACS Style

Tsang, L.; He, B.; Rashid, A.S.A.; Jalil, A.T.; Sabri, M.M.S. Predicting the Young’s Modulus of Rock Material Based on Petrographic and Rock Index Tests Using Boosting and Bagging Intelligence Techniques. Appl. Sci. 2022, 12, 10258. https://doi.org/10.3390/app122010258
