Article

Appraisal of Different Artificial Intelligence Techniques for the Prediction of Marble Strength

1 School of Art, Anhui University of Finance & Economics, Bengbu 233030, China
2 Department of Mining Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
3 Department of Mining Engineering, University of Engineering and Technology, Lahore 39161, Pakistan
4 Department of Sustainable Advanced Geomechanical Engineering, Military College of Engineering, National University of Sciences and Technology, Risalpur 23200, Pakistan
5 School of Civil, Environmental and Architectural Engineering, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
6 Department of Geology and Geophysics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
7 Department of Civil Engineering, University of Engineering and Technology, Peshawar 25000, Pakistan
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sustainability 2023, 15(11), 8835; https://doi.org/10.3390/su15118835
Submission received: 22 February 2023 / Revised: 17 May 2023 / Accepted: 26 May 2023 / Published: 30 May 2023
(This article belongs to the Special Issue Advances in Rock Mechanics and Geotechnical Engineering)

Abstract

Rock strength, specifically the uniaxial compressive strength (UCS), is a critical parameter widely used in the effective and sustainable design of tunnels and other engineering structures. This parameter is determined using direct and indirect methods. The direct methods involve acquiring an NX core sample and using sophisticated laboratory procedures to determine UCS. However, the direct methods are time-consuming, expensive, and can yield uncertain results if the core sample contains flaws or discontinuities. Therefore, most researchers prefer indirect methods for predicting rock strength. In this study, UCS was predicted using seven different artificial intelligence techniques: Artificial Neural Networks (ANNs), the XG Boost algorithm, Random Forest (RF), Support Vector Machine (SVM), Elastic Net (EN), Lasso, and Ridge models. The input variables used for rock strength prediction were moisture content (MC), P-waves, and rebound number (R). Four performance indicators were used to assess the efficacy of the models: coefficient of determination (R2), Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Error (MAE). The results show that the ANN model had the best performance indicators, with values of 0.9995, 0.2634, 0.0694, and 0.1642 for R2, RMSE, MSE, and MAE, respectively. However, the performance of the XG Boost algorithm model was also excellent and comparable to that of the ANN model. Therefore, these two models are proposed for predicting UCS effectively. The outcomes of this research provide a theoretical foundation for field professionals in predicting the strength parameters of rock for the effective and sustainable design of engineering structures.

1. Introduction

Uniaxial compressive strength (UCS) is an integral parameter widely used in the effective and sustainable design of tunnels and other engineering structures in civil and mining engineering [1,2,3,4,5,6,7,8,9,10,11,12]. UCS is determined using direct and indirect methods. The direct method is carried out on rock samples in the laboratory according to ISRM and ASTM standards [13,14,15,16,17,18] and involves (1) acquisition of a standard high-quality or NX core sample (54 mm diameter) of rock free of cracks, (2) preparation of the core sample so that both ends are flat within 0.02 mm and the upper and lower surfaces are parallel within 0.05 mm, and (3) loading the sample in a Universal Testing Machine (UTM) at a rate of 0.5–1.0 MPa/s. However, extracting high-quality core samples from a weak and jointed rock is challenging, and the arduous laboratory testing procedure may present hurdles during the determination of UCS [19]. Executing a UCS laboratory test safely is time-consuming and costly, and the results may be questionable if discontinuities are present in the core sample [20,21]. Therefore, most researchers prefer indirect methods for estimating UCS. Indirect methods include predictive models developed by different researchers based on mineralogical–petrographic analyses and the physical and index properties of rock [22,23,24,25]. Compared to the direct technique, indirect methods for estimating UCS are quicker, more convenient, and less costly [26]. For ease of understanding, indirect methods are divided into conventional predictive methods and soft computing methods [27]. The conventional predictive methods use statistical techniques, i.e., simple and multilinear regression modeling. These have been used successfully for predicting UCS [28], with multilinear regression correlating several inputs with the output simultaneously [29]. However, linear and multilinear regression can only predict mean values, and their accuracy deteriorates as data sets grow larger. As a result, these approaches are less suitable for nonlinear and multivariable engineering problems [30,31,32,33,34,35,36,37].
Soft computing methods include Artificial Neural Networks (ANNs), Adaptive Network-Based Fuzzy Inference Systems (ANFIS), Relevance Vector Machines (RVMs), and related techniques. These methods are now widely used in rock mechanics because of the ease and flexibility with which they can predict required values from a variety of inputs, and they are well suited to cases where conventional statistical techniques are less accurate [38,39,40,41,42]. For this reason, they are receiving growing attention in rock mechanics, particularly in situations where traditional statistical approaches are less convenient for prediction [43,44,45,46].
Various studies at the national and international levels have used different artificial intelligence techniques to predict the UCS of rock. Shahani and Zheng [47] predicted UCS from dry density, Brazilian tensile strength (BTS), and the point load index using ANNs and a multilinear regression model (MLRM); they found that the ANN models perform better (R2 = 0.99) than the MLRM. Manouchehrian et al. [48] used texture as an input variable in ANN and multivariate models for the prediction of UCS; the UCS values were predicted effectively, and the ANNs outperformed the multivariate statistics. Similarly, Torabi-Kaveh et al. [49] and Abdi et al. [50] used input variables such as porosity (η), P-wave velocity (PV), and density (ρ) to predict UCS and ES using ANNs and ANFIS. Dehghan et al. [1] used ANNs and MLR to predict UCS and the static Young's modulus (Es) from PV, the point load index, the Schmidt hammer rebound number, and η. With the advancement of artificial intelligence, some of the latest and most effective algorithms have been applied to UCS prediction. Zhang et al. [51] created a Random Forest (RF) model based on the beetle antennae search (BAS) method for estimating the UCS of lightweight self-compacting concrete (LWSSC) with high precision and efficiency. Matin et al. [52] used an RF model to select a few rock parameters and indices, such as porosity (η), water content (Wc), Is(50), P-wave velocity (PV), and rebound number (Rn), and built an effective UCS prediction model on these variables. Suthar [53] used the M5 model tree, RF, ANN, Support Vector Machine (SVM), and Gaussian processes (GPs) to forecast the UCS of pond ashes stabilized with lime and lime sludge. Wang et al. [54] developed an effective model for predicting UCS based on RF-selected variables. Ren et al. [55] applied k-nearest neighbors (KNN), naive Bayes, RF, ANN, and SVM machine-learning (ML) methods to precisely predict rock UCS from multiple input parameters. Ghasemi et al. [56] built a tree-based method for predicting the UCS and Young's modulus (E) of carbonate rocks and found that the applied method gives promising results. Saedi et al. [57] predicted the UCS and E of migmatite rocks using ANNs and multivariate regression (MVR) and found that ANN and ANFIS show better prediction performance. Shahani et al. [58] suggested an XGBoost model for predicting the UCS and E of intact sedimentary rock. To estimate the UCS and E of various rocks, Armaghani et al. [59] created a hybrid model based on an ANN and an imperialist competitive algorithm (ICA). This literature review provides insight into the use of different artificial intelligence techniques for predicting UCS. While ANNs have shown promising results, their prediction performance is still a matter of debate. Moreover, the literature highlights that input variables have rarely been screened using statistical methods to identify the most suitable inputs, which could enhance the prediction performance of both artificial intelligence and statistical techniques. In addition, the latest artificial intelligence techniques, such as the XG Boost algorithm, Random Forest (RF), Elastic Net (EN), Lasso, and Ridge, have not been adequately explored for effective prediction of UCS.
Accurate prediction of rock UCS is therefore vital for the safe and efficient stability analysis of engineered structures in a rock mass environment.

2. Materials and Methods

2.1. Design of Experimental Works

Representative samples of marble were collected from seven distinct areas, namely Afghanistan, Mardan, Chitral, Mohmand Agency (reddish brown), Buner, Chitral, and Mohmand (super white). The samples were grouped into seven categories represented by A, B, C, D, E, F, and G, respectively. Each group consisted of 10 samples that were tested for various geo-mechanical properties of marble. The tests included moisture content (MC (%)), bulk density (g/mL), dry density (g/mL), water absorption (%), P-wave velocity (km/s), S-wave velocity (km/s), slake durability index (Id2), rebound number (R), porosity (η), void ratio (e), and uniaxial compressive strength (UCS). To conduct the tests in line with the International Society for Rock Mechanics (ISRM) standards, cylindrical core samples 54 mm in diameter and 108 mm in length were prepared. The two ends of each core sample were meticulously ground with a grinder and sandpaper to ensure parallelism of the upper and lower surfaces within 0.05 mm and flatness of the surface within 0.02 mm. The samples were prepared and tested as presented in Figure 1 according to the ASTM standard [60,61].
A summary of the various tests mentioned in the above paragraph is presented below. These tests were conducted according to the ISRM standard [60,61].
The method for determining the characteristics of marble involved various techniques and apparatuses. To measure the bulk density of the rock, its weight was measured with a digital balance, and the volume it displaced was measured using a graduated/volumetric cylinder. The dry density was obtained by drying the specimens in an oven, and the dry weight was determined using a digital balance. The volume displaced by the rock was measured using a graduated cylinder.
To determine the moisture content and water absorption test of the marble, the wet and dry weights of the specimens were measured using an oven and a digital balance. The slake durability index was determined by subjecting the specimens to four wetting and drying cycles using a testing apparatus. The porosity and void ratio of the marble were determined using a volumetric cylinder and an oven to dry the samples.
An ultrasonic wave transducer apparatus was used to determine the primary and secondary wave velocities (P-wave and S-wave velocities) of the marble, while the Schmidt hammer test was used to determine the rebound number using the Schmidt rebound hammer or a concrete test hammer.
For the direct testing of the uniaxial compressive strength (UCS) of marble, core samples were prepared in cylindrical form with a length/diameter ratio of 2.5–3 according to the ISRM. The specimens were carefully ground and covered with a polyethylene plastic cover to protect them from moisture. The UCS was determined using an electrohydraulic servo universal testing machine of model C64.106* with a maximum load of 1000 kN. The machine was set to load at a constant displacement rate of 0.1 mm/min with a data collection rate of 10 readings/s.

2.2. Predictive Models

XG Boost Algorithm
XGBoost stands for Extreme Gradient Boosting; proposed by Tianqi Chen and Guestrin, it is an efficient gradient-boosted decision tree (GBDT) ML library that is portable and scalable [62]. XGBoost is an extension of boosting that formalizes the additive combination of weak models, using the gradient descent method over an objective function. XGBoost uses the loss function assessment as a starting point and matches the results of standard gradient boosting techniques quickly and efficiently [63].
$\mathrm{Obj}(\theta) = \frac{1}{n}\sum_{i=1}^{n} Z\left(y_i, \hat{y}_i\right) + \sum_{j=1}^{J} \Omega\left(f_j\right)$ (1)
In Equation (1), Z represents the training loss function, which is used to evaluate how well a model performs when trained on data. Ω is a regularization term intended to limit model complexity and prevent overfitting. fj denotes the prediction of the jth tree [63]. Figure 2 shows the XGBoost model structure [64].
The boosting methodology improves evaluation accuracy by creating multiple trees sequentially, each correcting the errors of its predecessors, and then combining them into a systematic predictive algorithm [64]. With its parallel tree boosting, it is a leading machine-learning library for tackling regression, classification, and ranking problems [65].
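A minimal sketch of such gradient-boosted regression with the open-source xgboost Python library is shown below; the synthetic data, input meanings, and hyperparameter values are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch: XGBoost regression on placeholder data standing in for the
# three inputs used in this study (moisture content, P-wave, rebound number).
# All data and hyperparameters here are illustrative assumptions.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((70, 3))                          # 70 samples, 3 input variables
y = 40 + 60 * X @ np.array([0.2, 0.5, 0.3])      # synthetic UCS-like target (MPa)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# n_estimators sets the number of boosted trees; reg_lambda corresponds to the
# L2 part of the regularization term Omega in Equation (1).
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1, reg_lambda=1.0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```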
Random Forest
The Random Forest regression model, introduced by Breiman in 2001, is one of the machine learning ensemble approaches [64]. It is a tree-based technique used for classification and regression analysis. RF trees are constructed using random subsets of the variables and bootstrap samples drawn with replacement from the original data set. When solving forecasting problems, it accommodates categorical as well as numerical variables [66]. The basic architecture of an RF is given in Figure 3.
Random Forest is a modern extension of bagging. The built-in cross-validation function of the Random Forest allows explanatory factors to be ranked from the most to the least associated with the outcome variable. As a result, feature extraction is more valuable when examining data from various sources [66]. Among widely accepted forms of AI computing, RF exhibits a distinctive relationship between model structure and predictive accuracy [64]. The Random Forest algorithm may be expressed as [67]:
$Y = \frac{1}{N}\sum_{i=1}^{N} F_i(X)$ (2)
In Equation (2), X represents the input parameter factor, Y represents the prediction result and N shows the number of regression trees formed. Figure 3 shows the basic structure of a Random Forest Regression (RFR) model [64].
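A minimal sketch of Equation (2) with scikit-learn's RandomForestRegressor follows; the data are synthetic placeholders, and the final check simply confirms that the forest prediction is the average of the N individual trees.

```python
# Minimal sketch: Random Forest regression; the forest prediction is the
# average of N regression trees, as in Equation (2). Data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((70, 3))              # placeholder inputs (MC, P-wave, R)
y = 40 + 60 * X.mean(axis=1)         # synthetic UCS-like target

rf = RandomForestRegressor(n_estimators=100, bootstrap=True, random_state=0)
rf.fit(X, y)

# Averaging the individual tree outputs reproduces rf.predict():
tree_preds = np.stack([tree.predict(X) for tree in rf.estimators_])
assert np.allclose(tree_preds.mean(axis=0), rf.predict(X))
```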
Support Vector Machine
A type of supervised learning known as Support Vector Machines (SVMs) was first proposed by Vapnik et al. in 1997 [68]. The fundamental concept of SVMs is that neurons are grouped in two layers, just like in ANNs. SVMs with a sigmoid kernel function are equivalent to a two-layer perceptron neural network. SVMs are alternative training methods for polynomial, radial basis function, and multilayer perceptron classifiers in which the network weights are determined by solving a quadratic programming problem with linear constraints [69].
Support Vector Machines are capable of solving classification and complicated nonlinear regression problems. When applied to regression problems, the basic goal is to construct an optimal surface that minimizes the error of all training samples relative to that surface [67].
Figure 4 illustrates the architecture of a Support Vector Machine (SVM). The signal vector input is present in the input layer. In the hidden layer, an inner-product kernel is formed between the input signal vector (x) and the support vectors (si). The output neuron receives the sum of the hidden layer neurons' linear outputs and includes a bias term [70].
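A minimal SVR sketch in scikit-learn is given below under assumed kernel and penalty settings; feature scaling is included because kernel machines are sensitive to input ranges.

```python
# Minimal sketch: support vector regression; the kernel computes an
# inner-product similarity between inputs and the learned support vectors.
# Kernel choice and the C/epsilon values are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((70, 3))              # placeholder inputs
y = 40 + 60 * X.mean(axis=1)         # synthetic UCS-like target

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
svr.fit(X, y)
print("number of support vectors:", svr[-1].support_vectors_.shape[0])
```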
Lasso Regression
Lasso regression was introduced in geophysics between 1986 and 1996 [71,72]. It carries out feature selection and applies a regularization penalty to raise prediction accuracy. Multicollinearity is avoided by selecting the most important predictor from a group of highly correlated independent variables and neglecting the others. An L1-norm penalty term is used to shrink the regression coefficients, some to exactly zero, ensuring that only the most significant explanatory variables are selected. A further property of the Lasso is that it can select at most n parameters when a data set of size n is fitted to a regression model with p parameters and p > n (p represents the number of predictor variables and n the number of observations) [72].
Ridge Regression
Ridge regression was developed to improve the predictability of a regression model. In Ridge regression, the L2-norm penalty term is used to shrink the regression coefficients toward small nonzero values to prevent overfitting, but it does not serve as a feature-selection mechanism [72]. Ridge regression is the best option when there are numerous predictors, all with non-zero coefficients drawn from a normal distribution. It performs particularly well when there are many predictors, each with little influence, and it avoids the poorly determined, high-variance coefficients that arise in linear regression models with numerous correlated variables [71].
Elastic Net
The Elastic Net is a variant of the Lasso that can withstand large levels of inter-predictor correlation. When predictors are highly correlated, the Lasso solution routes may become unstable (like SNPs in high linkage disequilibrium). The Elastic Net (ENET) was suggested for high-dimensional data processing to address this problem [71].
Elastic Net is a member of a group of regression algorithms that use L1-norm and L2-norm regularization penalty terms; a tuning parameter regulates the strength of these penalty terms [72]. Automatic variable selection is performed by the L1 component of the ENET, while grouped selection is encouraged by the L2 component, which also stabilizes the solution paths with respect to random sampling to enhance prediction. By creating a grouping effect during variable selection, the ENET can choose groups of correlated features even when the groups are unknown, because a group of strongly correlated variables tends to have coefficients of comparable magnitude. When p > n (p represents predictor variables and n represents the number of observations), the Elastic Net can choose more than n variables, in contrast to the Lasso. However, the Elastic Net lacks the oracle property [73].
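The penalty structures of the three linear models can be sketched side by side in scikit-learn; the alpha and l1_ratio values below are illustrative assumptions, not tuned settings.

```python
# Minimal sketch: Ridge (L2), Lasso (L1), and Elastic Net (mixed L1 + L2).
# alpha sets the overall penalty strength; l1_ratio mixes the two terms.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(2)
X = rng.random((70, 3))                       # placeholder inputs
y = 40 + 60 * X @ np.array([0.2, 0.5, 0.3])   # synthetic UCS-like target

for name, model in [("Ridge", Ridge(alpha=1.0)),
                    ("Lasso", Lasso(alpha=0.1)),
                    ("ElasticNet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    model.fit(X, y)
    # L1-penalized models may shrink some coefficients exactly to zero,
    # which is what performs the variable selection described above.
    print(name, model.coef_)
```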
Artificial Neural Networks (ANNs)
The Artificial Neural Network (ANN) is one of the most frequently used supervised machine learning (ML) techniques. ANN computational models have been used to solve a wide range of problems in various disciplines [72]. A model comprises many small processing units (neurons) that together handle complicated data processing and knowledge representation [74]. An ANN has three main components: an input layer, hidden layers, and an output layer [72]. Since it essentially maps the input to the output values, it has good interpolation capabilities, particularly when the input data are noisy. Neural networks can be used in place of auto-correlation, multivariable regression, linear regression, trigonometric analysis, and other statistical analysis approaches [74].
For any regression model in an ANN, a supervised learning method is required during training to provide the highest levels of accuracy and efficiency. In network training, the backpropagation (BP) algorithm uses a sequence of examples to establish the connections between nodes and to determine the parameterized function [75]. Many networks are trained using the BP method, which operates by iteratively evaluating the network output and adjusting the weights, and research studies have been conducted to carry out this training more effectively [76].
Equation (3) gives a mathematical expression for an ANN.
$\text{Basic network} = f(wx + \text{bias})$ (3)
where w and x indicate weights and input, respectively. The weight and input for n numbers are presented as
  • $w = [w_1, w_2, w_3, w_4, \ldots, w_n]$;
  • $x = [x_1, x_2, x_3, x_4, \ldots, x_n]$.
The ANNs used Equation (4) to predict the values.
$\mathrm{net} = \sum_{i=1}^{n} w_i x_i + b$ (4)
The tangent sigmoid function described in Equation (5) was used as the transfer function in this investigation.
$y = \tanh(\mathrm{net})$ (5)
Using Equation (6), the output of the network represented by “y” may be computed.
$\text{output of the network} = y = \tanh(\mathrm{net}) = \tanh\left(\sum_{i=1}^{n} w_i x_i + b\right)$ (6)
The network error is defined as the calculated values (VCalculated) minus the estimated values (VPredicted) of the network. By increasing or decreasing the neurons' weights, it is possible to reduce this error to some extent. Equation (7) expresses the network error in mathematical form.
$E_m = V_{\mathrm{Calculated}} - V_{\mathrm{Predicted}}$ (7)
Moreover, the total error in a network can be calculated using Equation (8).
$E_{\mathrm{Total}} = \frac{1}{2}\sum_{m} E_m^{2}$ (8)
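A small numeric walk-through of Equations (4)–(8) may help; the weights, inputs, and target below are made-up values for illustration only.

```python
# Numeric sketch of Equations (4)-(8) with made-up numbers.
import numpy as np

w = np.array([0.4, -0.2, 0.7])   # weights (illustrative)
x = np.array([0.5, 0.1, 0.3])    # scaled inputs (e.g., MC, P-wave, R)
b = 0.05                         # bias

net = np.dot(w, x) + b           # Equation (4): weighted sum plus bias
y = np.tanh(net)                 # Equations (5)-(6): tanh transfer function
v_calc = 0.35                    # hypothetical measured (scaled) value
E_m = v_calc - y                 # Equation (7): network error
E_total = 0.5 * E_m ** 2         # Equation (8), single-pattern case
print(net, y, E_m, E_total)
```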
Code Development for ANNs using MATLAB
Figure 5 shows an example of the self-generated ANN code used in this study for n networks using the same training and activation function within a single loop. An internal loop in this program can process data for as many networks as desired. The activation function in the code was static, although the structure of the data was likely to vary. Here, one algorithm run was used to process 100 networks; as a result, network1 contains one neuron, network2 contains two, and so forth. Although there are numerous ANN training approaches, Khan et al. [77,78] utilized the Levenberg–Marquardt (LM) backpropagation algorithm and found it to be better and more time-effective than other algorithms. As a result, LM was used in the current model for both the hidden and output layers. The fundamental ANN structure in this study consists of three inputs (moisture content, P-waves, and rebound number) and one output, i.e., UCS, as shown in Figure 6. The data were divided into three classes: training (70%), testing (15%), and validation (15%).
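The looped construction of network1 through network100 can be sketched as follows; scikit-learn's MLPRegressor is used only as a stand-in, since the Levenberg–Marquardt trainer used in the study is a MATLAB feature, and the single split below simplifies the 70/15/15 partition described above.

```python
# Sketch of the loop over network sizes (network1 = 1 neuron, network2 = 2, ...).
# MLPRegressor is a stand-in: scikit-learn does not provide the
# Levenberg-Marquardt backpropagation used in the study's MATLAB code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((70, 3))                       # MC, P-wave, rebound number
y = 40 + 60 * X @ np.array([0.2, 0.5, 0.3])   # synthetic UCS-like target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for n in range(1, 101):                       # 100 networks in one run
    net = MLPRegressor(hidden_layer_sizes=(n,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    scores[n] = net.score(X_test, y_test)     # R^2 for each network

best = max(scores, key=scores.get)
print("best hidden-layer size:", best, "R^2:", scores[best])
```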
The Artificial Neural Network designed to estimate the UCS in the present work is presented in Figure 6.
To determine the relationship between the model input variables and the corresponding outputs, ANNs learn from the samples of data presented to them and use those samples to adjust their weights. As a result, ANNs do not require prior knowledge of the nature of the relationship between the input and output variables, which is one advantage they have over most empirical and statistical methods. If the relationship between x and y is non-linear, regression analysis can only be applied successfully if the nature of the non-linearity is known beforehand; ANN models, by contrast, do not require this prior understanding of the type of non-linearity [21]. This advantage holds for most of the machine learning models considered here.

2.3. Data Analysis for Selecting the Most Appropriate Input Variables

In this study, various parameters were determined in the laboratory using direct methods, i.e., moisture content, bulk density, dry density, water absorption, slake durability, rebound number, P-wave, S-wave, porosity, void ratio, and UCS. A descriptive statistical analysis of these variables was carried out for a better understanding of their statistical behavior, and the results are presented in Table 1. Furthermore, these variables were analyzed using pairwise correlation with the output and correlation matrix analysis to choose the most appropriate input variables for predicting uniaxial compressive strength (UCS) using different artificial intelligence techniques.
The correlation matrix analysis of the inputs and output was also carried out to select the most effective input variables and eliminate multicollinearity in the prediction. A correlation matrix is a descriptive statistical tool that helps characterize the variance and covariance of the regression variables used in the prediction model, and it is often used in conjunction with other matrices in statistical analysis. The correlations describe the interactions between the regression variables used in the predictive analyses. Figure 7 and Figure 8 show how the correlation matrix describes the variance in each parameter; both positive and negative correlations appear among some of them. This allows the researcher to see how different factors affect the predicted model's final outcomes: the stronger the negative or positive correlation, the greater its significance for model efficiency. The criteria for selecting parameters are: (a) identify the parameters with a high (negative or positive) correlation with the output (UCS); (b) check the relationships of the input parameters with each other; and (c) where two parameters have a high correlation with the output and also with each other, select only one of them as an input. For example, Figure 7 and Figure 8 show that four input parameters have a high correlation with the output, i.e., moisture content, dry density, rebound number, and P-wave. In addition, rebound number and P-wave have a high correlation with each other, so of such a pair only one parameter would normally be retained for better model learning. Therefore, moisture content, P-wave, and rebound number were selected as the inputs and the others were discarded during the AI model development. These three parameters were used for the prediction of UCS with the seven different AI techniques.
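A sketch of this screening step with a pandas correlation matrix is shown below; the data and column names are placeholders standing in for the laboratory measurements.

```python
# Sketch of the input-screening step using a Pearson correlation matrix.
# Data and column names are placeholders for the laboratory measurements.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.random((70, 5)),
                  columns=["MC", "dry_density", "P_wave", "R", "UCS"])

corr = df.corr()                                   # full correlation matrix
print(corr["UCS"].drop("UCS").sort_values(key=abs, ascending=False))

# Selection rule sketched above: keep inputs strongly correlated with UCS,
# and where two such inputs are also strongly correlated with each other
# (e.g., rebound number and P-wave), keep only one to limit multicollinearity.
```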

2.4. Performance Indicator

Various performance indicators, namely the coefficient of determination (R2), Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Error (MAE), were used to evaluate the prediction performance of the Artificial Neural Network (ANN), XG Boost algorithm, Random Forest Regression (RFR), Elastic Net (EN), Lasso, Support Vector Machine (SVM), and Ridge models. The formulas given in Equations (9)–(12) were used [43,64]:
$R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ (9)
where:
  • RSS = residual sum of squares;
  • TSS = total sum of squares.
$\mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{\frac{1}{T}\sum_{n=1}^{T}\left(s_n - \hat{s}\right)^2}$ (10)

where:
T = total number of observations;
$s_n$ = actual value of the nth observation;
$\hat{s}$ = predicted value of s.

$\mathrm{MSE} = \frac{1}{T}\sum_{n=1}^{T}\left(s_n - \hat{s}\right)^2$ (11)

$\mathrm{MAE} = \frac{1}{T}\sum_{n=1}^{T}\left|s_n - \hat{s}\right|$ (12)

with T, $s_n$, and $\hat{s}$ as defined for Equation (10).
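These four indicators can be computed directly with scikit-learn's metric functions, as in the following sketch with made-up measured and predicted UCS values.

```python
# Sketch of Equations (9)-(12) with made-up UCS values (MPa).
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([62.0, 75.5, 58.3, 81.2])   # hypothetical measured UCS
y_pred = np.array([61.4, 76.1, 59.0, 80.6])   # hypothetical predicted UCS

r2 = r2_score(y_true, y_pred)                 # Equation (9): 1 - RSS/TSS
mse = mean_squared_error(y_true, y_pred)      # Equation (11)
rmse = np.sqrt(mse)                           # Equation (10): sqrt of MSE
mae = mean_absolute_error(y_true, y_pred)     # Equation (12)
print(f"R2={r2:.4f}, RMSE={rmse:.4f}, MSE={mse:.4f}, MAE={mae:.4f}")
```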

3. Analysis of Results

3.1. Model Hyperparameter Optimization

It is important to determine the optimal combination of hyperparameters in machine-learning models when attempting to improve the predictability of the model. Hyperparameters determine how well the model can learn. A tuning technique known as "grid search" was used in the current study, which searches exhaustively over all user-specified hyperparameter combinations for the optimum values. Additionally, to overcome the problem of overfitting, standard k-fold cross-validation was used as part of the process. To carry out k-fold cross-validation, the procedure outlined below is followed [79]:
  (a) The training data set is divided into k folds.
  (b) Of the k folds, (k−1) folds are used for training.
  (c) The remaining fold is used for validation.
  (d) The model with a specific set of hyperparameters is trained on the training data (k−1 folds) and validated on the remaining fold, and its performance is recorded for each fold.
  (e) The steps above are repeated until each of the k folds has been used for validation; this is why the process is known as "k-fold cross-validation".
  (f) After calculating the score for each model in step (d), the mean and standard deviation of the model performance are computed.
  (g) Steps (b) to (f) are repeated for different values of the hyperparameters.
  (h) The hyperparameters associated with the best mean and standard deviation of the model scores are then selected.
  (i) The model is trained on the entire training data set, and its performance is evaluated on the test data set.
In this study, grid search was combined with 10-fold cross-validation (k = 10) to optimize the hyperparameters of the algorithms, as shown in Figure 9. The optimum hyperparameters for all seven AI techniques are given in Table 2.
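A sketch of the grid search with 10-fold cross-validation follows, shown for one model; the parameter grid values are illustrative assumptions, not the study's actual search space (see Table 2 for the reported optima).

```python
# Sketch of grid search with 10-fold cross-validation (k = 10), shown for the
# Random Forest; the parameter grid values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(5)
X_train = rng.random((56, 3))                 # 80% training split (placeholder)
y_train = 40 + 60 * X_train.mean(axis=1)      # synthetic UCS-like target

param_grid = {"n_estimators": [50, 100, 200],
              "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=10, scoring="r2")
search.fit(X_train, y_train)                  # CV runs on training data only
print(search.best_params_)                    # combination with best mean score
```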

3.2. Prediction of UCS using Artificial Neural Networks

Figure 10 shows the regression plots for the training, validation, and testing phases of the ANN UCS model. A good regression is obtained between the predicted and measured UCS values during training, validation, and testing. Figure 11 shows a very good R2 value of 0.9995 between the predicted and measured UCS.
Ridge Regression
The models were implemented in Python using open-source libraries that provide utility functions for engineering, especially machine learning. Python's Scikit-learn is a free and open-source machine-learning library that includes Ridge regression, Elastic Net, Lasso regression, SVR, and RFR; the XG Boost algorithm is provided by the separate XGBoost package. For the Ridge regression, the model was executed on the training set (80%) and the testing set (20%). Figure 12 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9790.
Elastic Net
For the Elastic Net, the model was executed on the training set (80%) and the testing set (20%). Figure 13 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9755.
Lasso Regression
For the Lasso regression, the model was executed on the training set (80%) and the testing set (20%). Figure 14 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9755.
Support Vector Machine
For the Support Vector Machine, the model was executed on the training set (80%) and the testing set (20%). The SVR is similar to the ANN in that it comprises an input layer, a hidden layer, and an output layer. The SVR model estimates the average of the prediction values. Figure 15 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9573.
Random Forest
For the Random Forest Regression, the model was applied to the training set (80%) and testing set (20%), and the n_estimators and max_depth parameters of the RFR model were determined. The number of estimators equals the number of decision trees (DTs) produced by the Random Forest Regression (RFR) model, whose outputs are averaged to obtain the forecast. Figure 16 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9949.
XG Boost Algorithm
The XGBoost Python module was used to build this machine learning model. For the XG Boost, the model was applied to the training set (80%) and the testing set (20%). The XG Boost algorithm is a highly interpretable model: after creating the tree model, the predicted values were obtained directly. Figure 17 shows a graph of the predicted values against the actual values, with a coefficient of determination of R2 = 0.9990.
The results obtained using the above performance indicators for evaluating the efficacy of each predictive model are shown in Table 3.
To compare their performance, the training and testing accuracies of the seven different models are listed in Table 3. Among the various models, the Artificial Neural Network gave the most accurate prediction on the training and testing data sets (99%), while the Support Vector Machine model showed the lowest prediction performance on the testing and training data sets. The R2, MAE, MSE, and RMSE for the ANN model were 0.999, 0.1428, 0.0782, and 0.2796, respectively, on the training data set, and 0.9995, 0.1642, 0.0694, and 0.2634, respectively, on the testing data, which shows that the performance of the ANN model exceeds that of all the other predictive models. For the XG Boost Regressor, the performance indicator R2 was 0.9989, MAE was 0.5694, MSE was 0.0782, and RMSE was 0.2796 for the training data set, while for the testing data set, the R2 was 0.9990, MAE was 0.1145, MSE was 0.0694, and RMSE was 0.4162. For the Random Forest Regression, the performance indicator R2 was 0.9943, MAE was 0.7176, MSE was 1.3294, and RMSE was 1.1530 for the training data set, while for the testing data set, the R2 was 0.9949, MAE was 0.3555, MSE was 0.6584, and RMSE was 0.8114. For the Lasso, the performance indicators R2, MAE, MSE, and RMSE were 0.9887, 1.367, 3.0666, and 1.7512, respectively, for the training data set, while for the testing data set, the R2 was 0.9755, MAE was 1.2555, MSE was 3.5788, and RMSE was 1.8918. For the Ridge model, the R2, MAE, MSE, and RMSE were 0.9876, 1.3906, 3.0492, and 1.7462, respectively, for the training data set, while for the testing data, they were 0.979, 1.2149, 3.001, and 1.7347, respectively. For the Elastic Net model, the performance indicator R2 was 0.9887, MAE was 1.3751, MSE was 3.2071, and RMSE was 1.7908 for the training data set, while for the testing data set, the R2 was 0.9755, MAE was 1.241, MSE was 3.6308, and RMSE was 1.9055. Similarly, for the Support Vector Machine, the R2, MAE, MSE, and RMSE were 0.9826, 9.4444, 187.2607, and 13.68, respectively, for the training data set, and 0.9573, 6.5449, 111.4614, and 10.5575, respectively, for the testing data set. According to Table 3, the ANN model had values of 0.9995, 0.2634, 0.0694, and 0.1642 for R2, RMSE, MSE, and MAE, respectively. This highlights that the ANN model's performance was better than that of any other prediction model. Based on the testing performance indicators, the hierarchy of the predictive models in terms of their efficacy in predicting the UCS is ANN > XG Boost Regressor > Random Forest Regressor > Ridge > Lasso ≈ Elastic Net > SVM.

4. Discussion

This study conducted a data analysis to select the most appropriate input variables for predicting the uniaxial compressive strength (UCS) using different artificial intelligence techniques. Various laboratory parameters were determined using direct methods, including moisture content, bulk density, dry density, water absorption, slake durability, rebound number, P-wave, S-wave, porosity, void ratio, and UCS. A descriptive statistical analysis of these variables was carried out, including a p-value significance analysis, pairwise correlations with the output, and correlation matrix analysis to choose the most appropriate input variables. The statistical analysis showed that the rebound number, P-wave, and moisture content had positive coefficients with p-values of less than 0.05, which indicated a strong correlation with the UCS. Other input variables, such as dry density, bulk density, water absorption, and the slake durability index, showed a negative correlation with the UCS and were therefore not selected as input variables. The porosity and void ratio showed invalid p-values and were also not selected. Additionally, the correlation matrix analysis was carried out to select the most effective input variables and eliminate multicollinearity in the prediction. The results of the correlation matrix analysis indicated that moisture content, rebound number, and P-wave had a strong correlation with the UCS, as presented in Figure 7 and Figure 8. Therefore, these variables were selected as the input variables for the prediction of UCS.
The above analysis shows the performance of the various machine-learning models when predicting the UCS of rock samples, as presented in Section 3. The ANN model achieved an impressive coefficient of determination (R2) of 0.9995, indicating a strong correlation between predicted and measured UCS values. This suggests that the ANN model can be used to predict UCS values accurately. Among the other models, the Random Forest Regression (RFR) performed well with an R2 value of 0.9949, suggesting that RFR can also be used as an alternative method for predicting UCS values. The XG Boost algorithm also performed well, with an R2 value of 0.9990, similar to the ANN model. The Ridge regression, Elastic Net, and Lasso regression models also showed good performance, with R2 values ranging from 0.9755 to 0.9790; however, their performance was slightly lower than that of the ANN, XG Boost, and RFR models. Overall, the analysis suggests that the ANN model, followed by XG Boost and RFR, are the best models for predicting UCS values, while Ridge regression, Elastic Net, and Lasso regression are good alternatives. The SVM model may not be the best option for predicting UCS values. This study considers a small data set due to limited resources. In future studies, the authors will combine infrared radiation (IR) technology with AI to avoid determining as many field parameters as were used to train the models in this study; together, IR and AI will make the prediction more reliable and applicable. Moreover, in the future, the given 70-sample data set could be enlarged using the harmony search optimization algorithm [80].

5. Conclusions

The strength property of rock (uniaxial compressive strength) is a fundamental parameter widely used in the effective and sustainable design of tunnels and other engineering structures. In this research, the UCS was predicted using seven different artificial intelligence (AI) techniques, i.e., Artificial Neural Networks (ANNs), the XG Boost algorithm, Random Forest Regression (RFR), Support Vector Machine (SVM), Elastic Net (EN), Lasso, and Ridge models, with moisture content, P-waves, and rebound number as input parameters, in order to choose the best prediction model. The efficacy of the models was evaluated using four performance indicators, i.e., the coefficient of determination (R2), Root Mean Square Error (RMSE), Mean Square Error (MSE), and Mean Absolute Error (MAE). The results show that these performance indicators for the ANN were 0.9995, 0.2634, 0.0694, and 0.1642, respectively. The comparative analysis based on the performance indicators revealed that the ANN model has greater prediction efficacy than the other AI models, although the XG Boost Regressor model gives approximately similar performance. Furthermore, it was noticed that the SVM, RFR, Ridge, Lasso, and Elastic Net models give acceptable prediction performance, but they are less effective than the ANN and XG Boost Regressor models in predicting UCS. Therefore, the ANN and XG Boost Regressor are recommended as the most effective predictive models for the prediction of UCS. Since this research was conducted using a limited number of rock samples, it would be beneficial to extend the data set in order to refine the findings. Additionally, since this study focused on marble only, further fine-tuning of the models would be necessary before applying them to any other type of rock mass environment to ensure the best possible results. Further research needs to be carried out to explore the applications of the various AI techniques for the effective prediction of the UCS. The outcomes of this research will provide a theoretical foundation for field professionals in the prediction of the strength parameters of rock for an effective and sustainable design of engineering structures.

Author Contributions

Contributed to this research, designed experiments, and wrote this paper: M.S.J., R.e.Z., S.H., N.M.K. and Z.U.R.; conceived this research and were responsible for this research: S.H., K.C., M.Z.E. and S.R.; analyzed data, S.S.A., S.S. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Researchers Supporting Project number (RSP2023R496), King Saud University, Riyadh, Saudi Arabia. It was also supported by the Anhui Provincial Scientific Research Preparation Plan Project (2022AH050596).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dehghan, S.; Sattari, G.; Chelgani, S.C.; Aliabadi, M. Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using regression and artificial neural networks. Min. Sci. Technol. 2010, 20, 41–46. [Google Scholar]
  2. Bieniawski, Z.T. Estimating the strength of rock materials. J. S. Afr. Inst. Min. Metall. 1974, 74, 312–320. [Google Scholar] [CrossRef]
  3. Mahdiabadi, N.; Khanlari, G.J. Prediction of uniaxial compressive strength and modulus of elasticity in calcareous mudstones using neural networks, fuzzy systems, and regression analysis. Period. Polytech. Civ. Eng. 2019, 63, 104–114. [Google Scholar] [CrossRef]
  4. Khan, N.M.; Cao, K.; Emad, M.Z.; Hussain, S.; Rehman, H.; Shah, K.S.; Rehman, F.U.; Muhammad, A.J. Development of Predictive Models for Determination of the Extent of Damage in Granite Caused by Thermal Treatment and Cooling Conditions Using Artificial Intelligence. Mathematics 2022, 10, 2883. [Google Scholar] [CrossRef]
  5. Wu, H.; Ju, Y.; Han, X.; Ren, Z.; Sun, Y.; Zhang, Y.; Han, T. Size effects in the uniaxial compressive properties of 3D printed models of rocks: An experimental investigation. Int. J. Coal Sci. Technol. 2022, 9, 83. [Google Scholar] [CrossRef]
  6. Gao, H.; Wang, Q.; Jiang, B.; Zhang, P.; Jiang, Z.; Wang, Y. Relationship between rock uniaxial compressive strength and digital core drilling parameters and its forecast method. Int. J. Coal Sci. Technol. 2021, 8, 605–613. [Google Scholar] [CrossRef]
  7. Kim, B.-H.; Walton, G.; Larson, M.K.; Berry, S. Investigation of the anisotropic confinement-dependent brittleness of a Utah coal. Int. J. Coal Sci. Technol. 2021, 8, 274–290. [Google Scholar] [CrossRef]
  8. Li, Y.; Mitri, H.S. Determination of mining-induced stresses using diametral rock core deformations. Int. J. Coal Sci. Technol. 2022, 9, 80. [Google Scholar] [CrossRef]
  9. Li, Y.; Yang, R.; Fang, S.; Lin, H.; Lu, S.; Zhu, Y.; Wang, M. Failure analysis and control measures of deep roadway with composite roof: A case study. Int. J. Coal Sci. Technol. 2022, 9, 2. [Google Scholar] [CrossRef]
  10. Liu, B.; Zhao, Y.; Zhang, C.; Zhou, J.; Li, Y.; Sun, Z. Characteristic strength and acoustic emission properties of weakly cemented sandstone at different depths under uniaxial compression. Int. J. Coal Sci. Technol. 2021, 8, 1288–1301. [Google Scholar] [CrossRef]
  11. Liu, T.; Lin, B.; Fu, X.; Liu, A. Mechanical criterion for coal and gas outburst: A perspective from multiphysics coupling. Int. J. Coal Sci. Technol. 2021, 8, 1423–1435. [Google Scholar] [CrossRef]
  12. Ma, D.; Duan, H.; Zhang, J.; Bai, H. A state-of-the-art review on rock seepage mechanism of water inrush disaster in coal mines. Int. J. Coal Sci. Technol. 2022, 9, 50. [Google Scholar] [CrossRef]
  13. Ulusay, R.; Hudson, J.A. The Complete ISRM Suggested Methods for Rock Characterization, Testing and Monitoring, 1974–2006; International Society for Rock Mechanics (ISRM): Ankara, Turkey; Pergamon, Turkey; Oxford, UK, 2007. [Google Scholar]
  14. Standard Test Method for Unconfined Compressive Strength of Intact Rock Core Specimens; ASTM D2938. ASTM International: West Conshohocken, PA, USA, 1995.
  15. Ali, Z.; Karakus, M.; Nguyen, G.D.; Amrouch, K. Effect of loading rate and time delay on the tangent modulus method (TMM) in coal and coal measured rocks. Int. J. Coal Sci. Technol. 2022, 9, 81. [Google Scholar] [CrossRef]
  16. Bai, Q.; Zhang, C.; Paul Young, R. Using true-triaxial stress path to simulate excavation-induced rock damage: A case study. Int. J. Coal Sci. Technol. 2022, 9, 49. [Google Scholar] [CrossRef]
  17. Chen, Y.; Zuo, J.; Liu, D.; Li, Y.; Wang, Z. Experimental and numerical study of coal-rock bimaterial composite bodies under triaxial compression. Int. J. Coal Sci. Technol. 2021, 8, 908–924. [Google Scholar] [CrossRef]
  18. Chi, X.; Yang, K.; Wei, Z. Breaking and mining-induced stress evolution of overlying strata in the working face of a steeply dipping coal seam. Int. J. Coal Sci. Technol. 2021, 8, 614–625. [Google Scholar] [CrossRef]
  19. Cavaleri, L.; Barkhordari, M.S.; Repapis, C.C.; Armaghani, D.J.; Ulrikh, D.V.; Asteris, P.G. Convolution-based ensemble learning algorithms to estimate the bond strength of the corroded reinforced concrete. Constr. Build. Mater. 2022, 359, 129504. [Google Scholar] [CrossRef]
  20. Ceryan, N. Application of support vector machines and relevance vector machines in predicting uniaxial compressive strength of volcanic rocks. J. Afr. Earth Sci. 2014, 100, 634–644. [Google Scholar] [CrossRef]
  21. Asadi, A. Application of artificial neural networks in prediction of uniaxial compressive strength of rocks using well logs and drilling data. Procedia Eng. 2017, 191, 279–286. [Google Scholar] [CrossRef]
  22. Skentou, A.D.; Bardhan, A.; Mamou, A.; Lemonis, M.E.; Kumar, G.; Samui, P.; Armaghani, D.J.; Asteris, P.G. Closed-Form Equation for Estimating Unconfined Compressive Strength of Granite from Three Non-destructive Tests Using Soft Computing Models. Rock Mech. Rock Eng. 2022, 56, 487–514. [Google Scholar] [CrossRef]
  23. Zhang, L.; Ding, X.; Budhu, M. A rock expert system for the evaluation of rock properties. Int. J. Rock Mech. Min. Sci. 2012, 50, 124–132. [Google Scholar] [CrossRef]
  24. Singh, T.N.; Verma, A.K. Comparative analysis of intelligent algorithms to correlate strength and petrographic properties of some schistose rocks. Eng. Comput. 2012, 28, 1–12. [Google Scholar] [CrossRef]
  25. Gokceoglu, C.; Sonmez, H.; Zorlu, K. Estimating the uniaxial compressive strength of some clay-bearing rocks selected from Turkey by nonlinear multivariable regression and rule-based fuzzy models. Expert Syst. 2009, 26, 176–190. [Google Scholar] [CrossRef]
  26. Sarkar, K.; Vishal, V.; Singh, T. An empirical correlation of index geomechanical parameters with the compressional wave velocity. Geotech. Geol. Eng. 2012, 30, 469–479. [Google Scholar] [CrossRef]
  27. Shan, F.; He, X.; Armaghani, D.J.; Zhang, P.; Sheng, D. Success and challenges in predicting TBM penetration rate using recurrent neural networks. Tunn. Undergr. Space Technol. 2022, 130, 104728. [Google Scholar] [CrossRef]
  28. Verwaal, W.; Mulder, A. Estimating rock strength with the Equotip hardness tester. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 1993, 30, 659–662. [Google Scholar] [CrossRef]
  29. Yagiz, S.; Sezer, E.; Gokceoglu, C. Artificial neural networks and nonlinear regression techniques to assess the influence of slake durability cycles on the prediction of uniaxial compressive strength and modulus of elasticity for carbonate rocks. Int. J. Numer. Anal. Methods Géoméch. 2012, 36, 1636–1650. [Google Scholar] [CrossRef]
  30. Grima, M.A.; Babuška, R. Fuzzy model for the prediction of unconfined compressive strength of rock samples. Int. J. Rock Mech. Min. Sci. 1999, 36, 339–349. [Google Scholar] [CrossRef]
  31. Indraratna, B.; Armaghani, D.J.; Correia, A.G.; Hunt, H.; Ngo, T. Prediction of resilient modulus of ballast under cyclic loading using machine learning techniques. Transp. Geotech. 2023, 38, 100895. [Google Scholar] [CrossRef]
  32. Feng, F.; Chen, S.; Zhao, X.; Li, D.; Wang, X.; Cui, J. Effects of external dynamic disturbances and structural plane on rock fracturing around deep underground cavern. Int. J. Coal Sci. Technol. 2022, 9, 15. [Google Scholar] [CrossRef]
  33. Gao, R.; Kuang, T.; Zhang, Y.; Zhang, W.; Quan, C. Controlling mine pressure by subjecting high-level hard rock strata to ground fracturing. Int. J. Coal Sci. Technol. 2021, 8, 1336–1350. [Google Scholar] [CrossRef]
  34. Gorai, A.K.; Raval, S.; Patel, A.K.; Chatterjee, S.; Gautam, T. Design and development of a machine vision system using artificial neural network-based algorithm for automated coal characterization. Int. J. Coal Sci. Technol. 2021, 8, 737–755. [Google Scholar] [CrossRef]
  35. He, S.; Qin, M.; Qiu, L.; Song, D.; Zhang, X. Early warning of coal dynamic disaster by precursor of AE and EMR “quiet period”. Int. J. Coal Sci. Technol. 2022, 9, 46. [Google Scholar] [CrossRef]
  36. Jangara, H.; Ozturk, C.A. Longwall top coal caving design for thick coal seam in very poor strength surrounding strata. Int. J. Coal Sci. Technol. 2021, 8, 641–658. [Google Scholar] [CrossRef]
  37. Nikolenko, P.V.; Epshtein, S.A.; Shkuratnik, V.L.; Anufrenkova, P.S. Experimental study of coal fracture dynamics under the influence of cyclic freezing–thawing using shear elastic waves. Int. J. Coal Sci. Technol. 2021, 8, 562–574. [Google Scholar] [CrossRef]
  38. Demir Sahin, D.; Isik, E.; Isik, I.; Cullu, M. Artificial neural network modeling for the effect of fly ash fineness on compressive strength. Arab. J. Geosci. 2021, 14, 2705. [Google Scholar] [CrossRef]
  39. Chen, S.; Xiang, Z.; Eker, H. Curing Stress Influences the Mechanical Characteristics of Cemented Paste Backfill and Its Damage Constitutive Model. Buildings 2022, 12, 1607. [Google Scholar] [CrossRef]
  40. Köken, E. Assessment of Los Angeles Abrasion Value (LAAV) and Magnesium Sulphate Soundness (Mwl) of Rock Aggregates Using Gene Expression Programming and Artificial Neural Networks. Arch. Min. Sci. 2022, 67, 401–422. [Google Scholar]
  41. Şahin, D.D.; Kumaş, C.; Eker, H. Research of the Use of Mine Tailings in Agriculture. JoCREST 2022, 8, 71–84. [Google Scholar]
  42. Strzałkowski, P.; Köken, E. Assessment of Böhme Abrasion Value of Natural Stones through Artificial Neural Networks (ANN). Materials 2022, 15, 2533. [Google Scholar] [CrossRef]
  43. Hussain, S.; Muhammad Khan, N.; Emad, M.Z.; Naji, A.M.; Cao, K.; Gao, Q.; Ur Rehman, Z.; Raza, S.; Cui, R.; Salman, M. An Appropriate Model for the Prediction of Rock Mass Deformation Modulus among Various Artificial Intelligence Models. Sustainability 2022, 14, 15225. [Google Scholar] [CrossRef]
  44. Chen, L.; Asteris, P.G.; Tsoukalas, M.Z.; Armaghani, D.J.; Ulrikh, D.V.; Yari, M. Forecast of Airblast Vibrations Induced by Blasting Using Support Vector Regression Optimized by the Grasshopper Optimization (SVR-GO) Technique. Appl. Sci. 2022, 12, 9805. [Google Scholar] [CrossRef]
  45. Zhou, J.; Lin, H.; Jin, H.; Li, S.; Yan, Z.; Huang, S. Cooperative prediction method of gas emission from mining face based on feature selection and machine learning. Int. J. Coal Sci. Technol. 2022, 9, 51. [Google Scholar] [CrossRef]
  46. Huang, F.; Xiong, H.; Chen, S.; Lv, Z.; Huang, J.; Chang, Z.; Catani, F. Slope stability prediction based on a long short-term memory neural network: Comparisons with convolutional neural networks, support vector machines and random forest models. Int. J. Coal Sci. Technol. 2023, 10, 18. [Google Scholar] [CrossRef]
  47. Vagnon, F.; Colombero, C.; Colombo, F.; Comina, C.; Ferrero, A.M.; Mandrone, G.; Vinciguerra, S.C. Effects of thermal treatment on physical and mechanical properties of Valdieri Marble-NW Italy. Int. J. Rock Mech. Min. Sci. 2019, 116, 75–86. [Google Scholar] [CrossRef]
  48. Manouchehrian, A.; Sharifzadeh, M.; Moghadam, R.H. Application of artificial neural networks and multivariate statistics to estimate UCS using textural characteristics. Int. J. Min. Sci. Technol. 2012, 22, 229–236. [Google Scholar] [CrossRef]
  49. Torabi-Kaveh, M.; Naseri, F.; Saneie, S.; Sarshari, B. Application of artificial neural networks and multivariate statistics to predict UCS and E using physical properties of Asmari limestones. Arab. J. Geosci. 2015, 8, 2889–2897. [Google Scholar] [CrossRef]
  50. Abdi, Y.; Garavand, A.T.; Sahamieh, R.Z. Prediction of strength parameters of sedimentary rocks using artificial neural networks and regression analysis. Arab. J. Geosci. 2018, 11, 587. [Google Scholar] [CrossRef]
  51. Prabakar, J.; Dendorkar, N.; Morchhale, R. Influence of fly ash on strength behavior of typical soils. Constr. Build. Mater. 2004, 18, 263–267. [Google Scholar] [CrossRef]
  52. Matin, S.; Farahzadi, L.; Makaremi, S.; Chelgani, S.C.; Sattari, G. Variable selection and prediction of uniaxial compressive strength and modulus of elasticity by random forest. Appl. Soft Comput. 2018, 70, 980–987. [Google Scholar] [CrossRef]
  53. Suthar, M. Applying several machine learning approaches for prediction of unconfined compressive strength of stabilized pond ashes. Neural Comput. Appl. 2020, 32, 9019–9028. [Google Scholar] [CrossRef]
  54. Wang, M.; Wan, W.; Zhao, Y. Prediction of the uniaxial compressive strength of rocks from simple index tests using a random forest predictive model. Comptes Rendus Mécanique 2020, 348, 3–32. [Google Scholar] [CrossRef]
  55. Ren, Q.; Wang, G.; Li, M.; Han, S. Prediction of rock compressive strength using machine learning algorithms based on spectrum analysis of geological hammer. Geotech. Geol. Eng. 2019, 37, 475–489. [Google Scholar] [CrossRef]
  56. Ghasemi, E.; Kalhori, H.; Bagherpour, R.; Yagiz, S. Model tree approach for predicting uniaxial compressive strength and Young’s modulus of carbonate rocks. Bull. Eng. Geol. Environ. 2018, 77, 331–343. [Google Scholar] [CrossRef]
  57. Saedi, B.; Mohammadi, S.D. Prediction of uniaxial compressive strength and elastic modulus of migmatites by microstructural characteristics using artificial neural networks. Rock Mech. Rock Eng. 2021, 54, 5617–5637. [Google Scholar] [CrossRef]
  58. Shahani, N.M.; Zheng, X.; Liu, C.; Hassan, F.U.; Li, P. Developing an XGBoost regression model for predicting young’s modulus of intact sedimentary rocks for the stability of surface and subsurface structures. Front. Earth Sci. 2021, 9, 761990. [Google Scholar] [CrossRef]
  59. Armaghani, D.J.; Tonnizam Mohamad, E.; Momeni, E.; Monjezi, M.; Sundaram Narayanasamy, M.S. Prediction of the strength and elasticity modulus of granite through an expert artificial neural network. Arab. J. Geosci. 2016, 9, 48. [Google Scholar] [CrossRef]
  60. Fairhurst, C.; Hudson, J.A. Draft ISRM suggested method for the complete stress-strain curve for intact rock in uniaxial compression. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 1999, 36, 279–289. [Google Scholar]
  61. Małkowski, P.; Niedbalski, Z.; Balarabe, T. A statistical analysis of geomechanical data and its effect on rock mass numerical modeling: A case study. Int. J. Coal Sci. Technol. 2021, 8, 312–323. [Google Scholar] [CrossRef]
  62. Ramraj, S.; Uzir, N.; Sunil, R.; Banerjee, S. Experimenting XGBoost algorithm for prediction and classification of different datasets. Int. J. Control. Theory Appl. 2016, 9, 651–662. [Google Scholar]
  63. Chandrahas, N.S.; Choudhary, B.S.; Teja, M.V.; Venkataramayya, M.; Prasad, N.K. XG Boost Algorithm to Simultaneous Prediction of Rock Fragmentation and Induced Ground Vibration Using Unique Blast Data. Appl. Sci. 2022, 12, 5269. [Google Scholar] [CrossRef]
  64. Shahani, N.M.; Zheng, X.; Guo, X.; Wei, X. Machine learning-based intelligent prediction of elastic modulus of rocks at thar coalfield. Sustainability 2022, 14, 3689. [Google Scholar] [CrossRef]
  65. Choi, H.-y.; Cho, K.-H.; Jin, C.; Lee, J.; Kim, T.-H.; Jung, W.-S.; Moon, S.-K.; Ko, C.-N.; Cho, S.-Y.; Jeon, C.-Y. Exercise therapies for Parkinson’s disease: A systematic review and meta-analysis. Park. Dis. 2020, 2020, 2565320. [Google Scholar] [CrossRef] [PubMed]
  66. Ogunkunle, T.F.; Okoro, E.E.; Rotimi, O.J.; Igbinedion, P.; Olatunji, D.I. Artificial intelligence model for predicting geomechanical characteristics using easy-to-acquire offset logs without deploying logging tools. Petroleum 2022, 8, 192–203. [Google Scholar] [CrossRef]
  67. Yang, Z.; Wu, Y.; Zhou, Y.; Tang, H.; Fu, S. Assessment of machine learning models for the prediction of rate-dependent compressive strength of rocks. Minerals 2022, 12, 731. [Google Scholar] [CrossRef]
68. Gu, J.-C.; Lee, S.-C.; Suh, Y.-H. Determinants of behavioral intention to mobile banking. Expert Syst. Appl. 2009, 36, 11605–11616. [Google Scholar] [CrossRef]
  69. Qin, P.; Wang, T.; Luo, Y. A review on plant-based proteins from soybean: Health benefits and soy product development. J. Agric. Food Res. 2022, 7, 100265. [Google Scholar] [CrossRef]
  70. Frimpong, E.A.; Okyere, P.Y.; Asumadu, J. Prediction of transient stability status using Walsh-Hadamard transform and support vector machine. In Proceedings of the 2017 IEEE PES PowerAfrica, Accra, Ghana, 27–30 June 2017; pp. 301–306. [Google Scholar]
  71. Hassan, M.Y.; Arman, H. Comparison of six machine-learning methods for predicting the tensile strength (Brazilian) of evaporitic rocks. Appl. Sci. 2021, 11, 5207. [Google Scholar] [CrossRef]
  72. Ogutu, J.O.; Schulz-Streeck, T.; Piepho, H.-P. Genomic selection using regularized linear regression models: Ridge regression, lasso, elastic net and their extensions. BMC Proc. 2012, 6, S10. [Google Scholar] [CrossRef]
  73. Ozanne, M.; Dyar, M.; Carmosino, M.; Breves, E.; Clegg, S.; Wiens, R. Comparison of lasso and elastic net regression for major element analysis of rocks using laser-induced breakdown spectroscopy (LIBS). In Proceedings of the 43rd Annual Lunar and Planetary Science Conference, The Woodlands, TX, USA, 19–23 March 2012; p. 2391. [Google Scholar]
  74. Sarkar, K.; Tiwary, A.; Singh, T. Estimation of strength parameters of rock using artificial neural networks. Bull. Eng. Geol. Environ. 2010, 69, 599–606. [Google Scholar] [CrossRef]
  75. Tayarani, N.S.; Jamali, S.; Zadeh, M.M. Combination of artificial neural networks and numerical modeling for predicting deformation modulus of rock masses. Arch. Min. Sci. 2020, 65, 337–346. [Google Scholar]
  76. Lawal, A.I.; Kwon, S.J. Application of artificial intelligence to rock mechanics: An overview. J. Rock Mech. Geotech. Eng. 2021, 13, 248–266. [Google Scholar] [CrossRef]
  77. Ma, L.; Khan, N.M.; Cao, K.; Rehman, H.; Salman, S.; Rehman, F.U. Prediction of Sandstone Dilatancy Point in Different Water Contents Using Infrared Radiation Characteristic: Experimental and Machine Learning Approaches. Lithosphere 2022, 2021, 3243070. [Google Scholar] [CrossRef]
  78. Khan, N.M.; Ma, L.; Cao, K.; Hussain, S.; Liu, W.; Xu, Y. Infrared radiation characteristics based rock failure indicator index for acidic mudstone under uniaxial loading. Arab. J. Geosci. 2022, 15, 343. [Google Scholar] [CrossRef]
  79. Kutty, A.A.; Wakjira, T.G.; Kucukvar, M.; Abdella, G.M.; Onat, N.C. Urban resilience and livability performance of European smart cities: A novel machine learning approach. J. Clean. Prod. 2022, 378, 134203. [Google Scholar] [CrossRef]
  80. Bekdaş, G.; Cakiroglu, C.; Kim, S.; Geem, Z.W. Optimal dimensioning of retaining walls using explainable ensemble learning algorithms. Materials 2022, 15, 4993. [Google Scholar] [CrossRef]
Figure 1. (A) Afghanistan marble, (B) Mardan (spin kala) marble, (C) Chitral marble, (D) Mohmand marble, (E) Bunir Bumbpoha marble, (F) Chitral marble, and (G) Mohmand (super white) marble. (H) Core drilling, (I) core cutting, (J) triaxial testing machine, (K) Schmidt hammer apparatus, (L) slake durability apparatus, (M) sample dipped for 24 h, (N) desiccator used to cool samples, (O) volumetric cylinder used to determine volume, (P) weighing of samples, (Q) core samples, (R) oven for drying samples, (S) core failure after UCS testing, and (T) sampling using a Schmidt hammer.
Figure 2. Structure of an XGBoost model.
Figure 3. Fundamental structure of a random forest regression model.
Figure 4. General architecture of a support vector machine [70].
Figure 5. Flow chart representing the procedure for developing ANN code using MATLAB.
Figure 6. Artificial Neural Network framework for estimating UCS.
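The study's ANN itself was developed in MATLAB (Figure 5), but for readers who want a quick starting point, an analogous single-hidden-layer network can be sketched in Python with scikit-learn. This is a minimal sketch, not the authors' code: it assumes the 48-neuron hidden layer reported in Table 2 and the three inputs (MC, P-wave, R), and the data below are random placeholders rather than the study's measurements.

```python
# Minimal sketch (not the authors' MATLAB code): a single-hidden-layer ANN
# with 48 neurons, mirroring Table 2, fitted to placeholder data standing in
# for the three inputs (MC, P-wave velocity, rebound number R) and UCS.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.random((70, 3))               # placeholder [MC, P-wave, R] for 70 samples
y = 35.0 + 60.0 * X[:, 2]             # placeholder UCS values (MPa)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
scaler = StandardScaler().fit(X_train)

ann = MLPRegressor(hidden_layer_sizes=(48,), max_iter=5000, random_state=42)
ann.fit(scaler.transform(X_train), y_train)
print("Test R2:", ann.score(scaler.transform(X_test), y_test))
```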
Figure 7. Scatter plots of the input parameters against uniaxial compressive strength (UCS).
Figure 8. Correlation between input variables and UCS.
Figure 9. Hyperparameter tuning using grid search and 10-fold cross-validation.
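The tuning loop in Figure 9 can be illustrated with a short scikit-learn sketch. The grid below is an assumption chosen to bracket the XGBoost optima reported in Table 2 (learning_rate = 0.01, max_depth = 3, n_estimators = 100), not the authors' actual search space, and the data are placeholders.

```python
# Hypothetical grid-search sketch: 10-fold cross-validation as in Figure 9;
# the grid values are assumptions bracketing Table 2's reported optima.
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X, y = rng.random((70, 3)), 35.0 + 60.0 * rng.random(70)   # placeholder data

param_grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [3, 5, 7],
    "n_estimators": [100, 300, 500],
}
search = GridSearchCV(XGBRegressor(random_state=42), param_grid,
                      cv=10, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```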
Figure 10. Training, validation, and testing phases of the ANN, with the regression coefficient of determination for UCS.
Figure 11. Coefficient of determination between the measured and predicted UCS.
Figure 12. Ridge regression model with the coefficient of determination between the measured and predicted UCS.
Figure 13. Elastic Net model with the coefficient of determination between the measured and predicted UCS.
Figure 14. Lasso regression with the coefficient of determination between the measured and predicted UCS.
Figure 15. Support Vector Machine model with the coefficient of determination between the measured and predicted UCS.
Figure 16. Random Forest Regression with the coefficient of determination between the measured and predicted UCS.
Figure 17. XG Boost model with the coefficient of determination between the measured and predicted UCS.
Table 1. Descriptive statistics for the input and output variables.

| S.No | Input and Output | N total | Mean | Standard Deviation | Sum | Min | Median | Max |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Bulk density (g/mL) | 70 | 2.73 | 0.27 | 191.34 | 2.12 | 2.69 | 3.53 |
| 2 | Dry density (g/mL) | 70 | 2.67 | 0.24 | 187.16 | 2.12 | 2.65 | 3.61 |
| 3 | Moisture content, MC (%) | 70 | 0.36 | 0.19 | 25.46 | 0.00 | 0.35 | 0.99 |
| 4 | Water absorption (%) | 70 | 0.36 | 0.24 | 25.28 | 0.00 | 0.34 | 1.20 |
| 5 | Slake durability index (Id2) | 70 | 97.08 | 3.21 | 6795.85 | 83.24 | 98.25 | 99.11 |
| 6 | Rebound number (R) | 70 | 45.88 | 6.31 | 3211.57 | 34.70 | 44.82 | 64.14 |
| 7 | Porosity (η) | 70 | 0.36 | 0.24 | 25.28 | 0.00 | 0.34 | 1.20 |
| 8 | Void ratio (e) | 70 | 1.15 | 3.03 | 80.25 | 0.00 | 0.0034 | 0.012 |
| 9 | P-wave (km/s) | 70 | 4.74 | 0.20 | 331.52 | 4.43 | 4.70 | 5.49 |
| 10 | S-wave (km/s) | 70 | 3.02 | 0.01 | 211.14 | 2.98 | 3.02 | 3.03 |
| 11 | UCS (MPa) | 70 | 52.17 | 12.10 | 3651.59 | 34.89 | 49.51 | 93.76 |
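The entries in Table 1 are standard summary statistics; assuming the 70 laboratory measurements were held in a pandas DataFrame, they could be reproduced along the following lines. The four rows below are illustrative placeholders, not the study's data.

```python
# Sketch of Table 1's summary statistics with pandas; the values here are
# illustrative placeholders, not the study's 70 laboratory measurements.
import pandas as pd

df = pd.DataFrame({
    "MC (%)":        [0.35, 0.21, 0.48, 0.30],
    "P-wave (km/s)": [4.70, 4.55, 4.91, 4.66],
    "R":             [44.8, 39.2, 51.3, 46.0],
    "UCS (MPa)":     [49.5, 41.2, 63.8, 52.0],
})

stats = df.describe().T        # count, mean, std, min, 25%, 50% (median), 75%, max
stats["sum"] = df.sum()        # Table 1 also reports the column sums
print(stats[["count", "mean", "std", "sum", "min", "50%", "max"]])
```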
Table 2. Optimized hyperparameters for all models.

| Output | Model | Parameters |
| --- | --- | --- |
| UCS (MPa) | Artificial Neural Network | neurons = 48 |
| UCS (MPa) | XG Boost Regressor | learning_rate = 0.01, max_depth = 3, n_estimators = 100 |
| UCS (MPa) | Support Vector Machine | n_splits = 10, n_repeats = 5, random_state = 42, C = 1, SVR with kernel = 'rbf' |
| UCS (MPa) | Random Forest Regression | n_splits = 10, n_repeats = 5, random_state = 42, max_depth = 3 |
| UCS (MPa) | Lasso | alpha = 0.01, n_splits = 10, n_repeats = 5, random_state = 42 |
| UCS (MPa) | Elastic Net | alpha = 0.01, l1_ratio = 0.95, n_splits = 10, n_repeats = 5, random_state = 42 |
| UCS (MPa) | Ridge | alpha = 0.1, n_splits = 10, n_repeats = 5, random_state = 42 |
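A minimal sketch of how the Table 2 settings map onto scikit-learn and xgboost objects is given below; the n_splits/n_repeats/random_state entries correspond to a RepeatedKFold cross-validator. This is an assumed realization for illustration, not the authors' code, and the ANN is omitted because it was built in MATLAB.

```python
# Hypothetical instantiation of the Table 2 optima (excluding the MATLAB ANN).
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import RepeatedKFold
from sklearn.svm import SVR
from xgboost import XGBRegressor

# Shared resampling scheme implied by n_splits = 10, n_repeats = 5.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=42)

models = {
    "XG Boost":      XGBRegressor(learning_rate=0.01, max_depth=3, n_estimators=100),
    "SVM":           SVR(kernel="rbf", C=1),
    "Random Forest": RandomForestRegressor(max_depth=3, random_state=42),
    "Lasso":         Lasso(alpha=0.01),
    "Elastic Net":   ElasticNet(alpha=0.01, l1_ratio=0.95),
    "Ridge":         Ridge(alpha=0.1),
}
```

Each model could then be scored against data with, for example, sklearn.model_selection.cross_val_score(model, X, y, cv=cv).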
Table 3. Comparative analysis of the performance of different AI techniques.

| S.No | Model | Training R2 | Training MAE | Training MSE | Training RMSE | Testing R2 | Testing MAE | Testing MSE | Testing RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Artificial Neural Network | 0.9990 | 0.1428 | 0.0782 | 0.2796 | 0.9995 | 0.1642 | 0.0694 | 0.2634 |
| 2 | XG Boost Regressor | 0.9989 | 0.5694 | 0.8664 | 0.9308 | 0.9990 | 0.1145 | 0.1732 | 0.4162 |
| 3 | Support Vector Machine | 0.9987 | 0.3649 | 0.3022 | 0.5498 | 0.9983 | 0.2891 | 0.2595 | 0.5094 |
| 4 | Random Forest Regression | 0.9943 | 0.7176 | 1.3294 | 1.1530 | 0.9949 | 0.3555 | 0.6584 | 0.8114 |
| 5 | Lasso | 0.9887 | 1.3670 | 3.0666 | 1.7512 | 0.9755 | 1.8918 | 3.5788 | 1.2555 |
| 6 | Elastic Net | 0.9887 | 1.3751 | 3.2071 | 1.7908 | 0.9755 | 1.2410 | 3.6308 | 1.9055 |
| 7 | Ridge | 0.9876 | 1.3906 | 3.0492 | 1.7462 | 0.9790 | 1.2149 | 3.0010 | 1.7347 |
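The four performance indicators in Table 3 follow their standard definitions, with RMSE being the square root of MSE. A minimal sketch of computing them with scikit-learn, on placeholder measured/predicted UCS arrays rather than the study's results:

```python
# Sketch of the four Table 3 indicators; y_true and y_pred are placeholders.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([49.5, 41.2, 63.8, 52.0])   # measured UCS (MPa), placeholder
y_pred = np.array([49.2, 41.8, 63.1, 52.4])   # predicted UCS (MPa), placeholder

r2 = r2_score(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                            # RMSE = sqrt(MSE)
print(f"R2={r2:.4f}  MAE={mae:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}")
```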
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
