Article

Open-Pit Bench Blasting Fragmentation Prediction Based on Stacking Integrated Strategy

Yikun Sui, Zhiyong Zhou, Rui Zhao, Zheng Yang and Yang Zou
1 School of Resource and Safety Engineering, Central South University, Changsha 410083, China
2 Division of Mining and Geotechnical Engineering, Luleå University of Technology, 97187 Luleå, Sweden
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1254; https://doi.org/10.3390/app15031254
Submission received: 16 December 2024 / Revised: 14 January 2025 / Accepted: 24 January 2025 / Published: 26 January 2025

Abstract

The size distribution of rock fragments significantly influences subsequent operations in geotechnical and mining engineering projects, so accurate prediction of this distribution from the relevant blasting design parameters is essential. This study employs artificial intelligence methods to predict the fragmentation of open-pit bench blasting, using a dataset of 97 blast fragmentation samples. Random forest and XGBoost models were used as base learners, and a prediction model was developed with a stacking integration strategy to enhance predictive performance. Model performance was evaluated using the coefficient of determination (R2), the mean square error (MSE), the root mean square error (RMSE), and the mean absolute error (MAE). The stacking model achieved the highest prediction accuracy, with an R2 of 0.943; its MSE, RMSE, and MAE were 0.00269, 0.05187, and 0.03320 on the training set and 0.00197, 0.04435, and 0.03687 on the testing set, respectively. The model was further validated using five sets of actual blasting fragmentation data from a northeastern mining area and produced accurate predictions. These findings demonstrate that the stacking strategy effectively enhances the prediction performance of a single model and offers an innovative approach to predicting blast fragment size.

1. Introduction

Proper fragmentation has always been a goal of blasting in rock mining production. Many operations that follow blasting, including loading, transport, and grinding, are affected by the degree of fragmentation [1]. High-quality breakage avoids the secondary blasting of oversized rock pieces, reduces the energy consumed by crushers and grinders, increases digging capacity and loading productivity, and lowers production costs [2,3]. Rock breakage is a complex, dynamic process, and fragmentation can be affected by many variables. Numerous previous studies have shown that the parameters that determine fragmentation can be categorized as controllable or uncontrollable. Blasting design parameters and explosive-related parameters are controllable, whereas uncontrollable parameters include the physical and mechanical properties of the rock and its structure [4,5]. When establishing a fragmentation prediction model, both controllable and uncontrollable parameters should be considered comprehensively. However, because parameters are difficult to obtain in actual projects, ensuring the validity of the collected data is challenging. A feasible model that accounts for a variety of factors is therefore urgently needed to predict fragmentation size.
As early as the last century, attempts were made to establish empirical formulas for predicting blasting fragmentation by summarizing and analyzing data. Rosin and Rammler [6] proposed the Rosin–Rammler semi-empirical formula, which provides a rough judgment of fragment size through the relationship between a characteristic size and the cumulative passing percentage. The Kuznetsov formula combines factors such as explosive quality and rock strength for fragment size prediction [7]. Cunningham further developed the Kuz–Ram model based on the Rosin–Rammler and Kuznetsov formulations [8]. Because the Kuz–Ram model performs poorly in predicting fine fragments, many scholars have continued to modify and improve it. Ouchterlony et al. [9] developed the Kuznetsov–Cunningham–Ouchterlony (KCO) model based on the Kuz–Ram model, compensating for its shortcomings in predicting fine fragments and its limitation on the upper bound of fragment size.
During blasting, uncontrollable parameters affect rock fragmentation to a greater extent than controllable parameters [10], making it difficult to propose a single fitting equation for fragmentation that accounts for all blasting influences. With the extensive development and application of AI technology in many fields, it has gradually become an effective method for solving complex prediction problems [11,12,13,14,15,16,17,18]. Kulatilake et al. [19] used a Back Propagation Neural Network (BPNN) model to predict the average particle size of blast fragmentation, training it with four learning algorithms; its predictive ability was confirmed to be superior to the multiple regression models used by previous scholars. Monjezi et al. [20] constructed an artificial neural network (ANN) to predict the degree of rock fragmentation by comparing several methods, and their sensitivity analysis indicated that the most influential parameters in the fragmentation process were powder factor, burden, and bench height. Beyond neural networks, several other methods are available for prediction. For example, Philip et al. [21] established an XGBoost model to predict rock fragmentation and compared it with BPNN predictions, confirming that XGBoost has better prediction ability and faster prediction speed. Esmaeili et al. [22] constructed a Support Vector Machine (SVM) model and an Adaptive Network-based Fuzzy Inference System (ANFIS) based on Principal Component Analysis (PCA) and compared the prediction results with the Kuz–Ram method, further proving that AI methods are more accurate than earlier empirical formulas. Evidently, random forest, XGBoost, SVM, and neural network algorithms are common choices among researchers for prediction.
To ensure and improve prediction accuracy, many scholars optimize the prediction process using optimization algorithms. For example, Asl et al. [23] optimized the ANN prediction process using the firefly algorithm, achieving coefficients of determination (R2) of 0.94 and 0.93 for the prediction of rock fragmentation and flyrock, respectively. Jia et al. [24] established an extreme learning machine model using Grey Wolf Optimization (GWO) and compared the predicted average size of blast fragmentation before and after optimization, proving that the optimization algorithm can effectively improve the model’s prediction accuracy. Some scholars have gone further in improving optimization algorithms. Rong et al. [25] studied the parameters of a Convolutional Neural Network (CNN) and a Multilayer Perceptron (MLP) for predicting the mean fragment size and used the GWO algorithm to optimize the training process, further improving model performance. Rather than using a single algorithm to optimize the model, applying multiple optimization algorithms and comparing the prediction quality can make the results more convincing. Li et al. [26] used five optimization algorithms, including the Salp Swarm Algorithm, GWO, and Particle Swarm Optimization (PSO), to optimize the forecasting performance of a Support Vector Regression (SVR) model, combining these algorithms with three mathematical indicators to compare and evaluate the best prediction model.
Although optimization of basic algorithms has gradually matured, enhancing model accuracy and prediction ability, single prediction methods still have limitations because the factors affecting rock blasting fragmentation are relatively complex. To improve the generalization ability of single algorithms, some scholars have integrated algorithms to predict blasting effects. Barkhordari et al. [27] established various integrated models, including a simple averaging ensemble, a weighted averaging ensemble, a separate stacking model, and Bayesian–eXtreme Gradient Boosting, filling the gap in predicting blasting flyrock using integrated models. Khan et al. [28] likewise introduced the AdaBoost algorithm in their study on predicting compressive strength. However, AdaBoost and XGBoost are essentially ensembles of homogeneous algorithms and integrate the advantages of heterogeneous algorithms poorly. Koopialipoor et al. [29] combined the different characteristics of four models and achieved the highest prediction accuracy for rock deformation using a stacking-tree-RF-KNN-MLP structure. Kadingdi et al. [30] utilized random forest, Gaussian Process (GP), and Gradient Boosting Machine (GBM) models as base learners to establish a stacking ensemble model for predicting ground vibration, significantly improving predictive performance. Wu et al. [31] employed a stacking ensemble learning method that combined LightGBM, random forest, and XGBoost as base learners, with Ridge, Lasso, and Linear Regression as meta-learners, to predict rock compressive strength (RCS); the model demonstrated excellent prediction accuracy, achieving an R2 of 0.946. In all of these studies, the stacking integrated model gave better prediction results than single models; it also performs better at fusing heterogeneous single models to combine their respective strengths and at generalizing. However, the effectiveness of applying the stacking ensemble model to blast fragmentation prediction remains unknown, and this area requires further exploration.
This study aims to improve prediction accuracy by applying a stacking ensemble model to blast fragmentation prediction. The dataset comprises 97 samples collected from blasts conducted around the world by Hudaverdi et al. [4]. The performance of random forest, XGBoost, and SVM models was compared to select the base learners, and an integrated model was developed using the stacking integration strategy to predict the size of rock fragments produced by blasting. The predictive performance of the model was evaluated using four mathematical indicators: R2, MSE, RMSE, and MAE. Feature importance was analyzed using random forest and XGBoost, and the impact of the various input parameters on blast fragment size was compared. Finally, the predictive performance of the model was validated using actual engineering blasting data.

2. Ensemble Machine Learning (ML) Models

Unlike the traditional approach of developing a single learner, ensemble learning aims to construct a model by combining multiple weak learners, seeking an approach to create a strong learner that surpasses the performance of a single one, thereby improving the efficiency, stability, and resilience of the model. Common ensemble learning methods include bagging, boosting, and stacking.

2.1. Bagging and Boosting

The bagging algorithm, short for ‘bootstrap aggregating’, was proposed by Breiman in 1996 [32]. The random forest algorithm, also proposed by Breiman, is based on the bagging concept and combines multiple single decision trees. Bagging randomly samples multiple subsets from the original dataset with replacement, ensuring that each subset is the same size as the original dataset. Each subset is used to train a learner independently, and during prediction, bagging combines the predictions of all learners to derive the final result. Bagging can significantly reduce model variance and improve generalization performance, but it is sensitive to noisy data and prone to consistency issues.
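As a concrete illustration of this principle, the following minimal Python sketch builds a bagged ensemble of decision trees by hand; the synthetic data, tree count, and all parameter values are illustrative assumptions, not this study's settings.

```python
# Minimal sketch of bootstrap aggregating with scikit-learn decision trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(97, 7))   # 97 samples, 7 features (synthetic stand-in)
y = X @ rng.uniform(size=7) + rng.normal(scale=0.05, size=97)

trees = []
for i in range(50):
    # Each subset is drawn with replacement and has the same size as the original set
    X_boot, y_boot = resample(X, y, replace=True, n_samples=len(X), random_state=i)
    trees.append(DecisionTreeRegressor(random_state=i).fit(X_boot, y_boot))

# Final prediction: average the individual learners' predictions
y_pred = np.mean([t.predict(X) for t in trees], axis=0)
```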
In contrast, the boosting algorithm combines weak learners into a strong learner by chaining them together, enhancing accuracy and generalizability [33]. Boosting trains an initial base learner on the training set and constructs the next base learner based on its performance: instances predicted poorly in the previous round receive higher weights in the next round. Ultimately, a weighted combination of the outputs of the multiple learners yields the final result. Compared to bagging, boosting is more effective at reducing bias and improving training accuracy, thus addressing underfitting. However, both bagging and boosting are essentially ensembles of homogeneous learners; their schematic diagrams are shown in Figure 1.
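For comparison, here is a short, hedged sketch of the weight-based boosting scheme described above, using scikit-learn's AdaBoostRegressor as one concrete variant; X and y reuse the synthetic data from the bagging sketch, and the `estimator` keyword assumes scikit-learn >= 1.2.

```python
# Sequential boosting: each round re-weights poorly predicted samples
# before fitting the next weak learner.
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

booster = AdaBoostRegressor(
    estimator=DecisionTreeRegressor(max_depth=3),  # deliberately weak base learner
    n_estimators=50,
    random_state=0,
).fit(X, y)

# The final output is a weighted combination of all rounds' predictions
print(booster.predict(X[:3]))
```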

2.2. Stacking Strategy

The stacking strategy was first proposed by David H. Wolpert and further developed by Leo Breiman, who introduced ‘Stacked Regressions’ using generalized linear models [34]. Unlike bagging and boosting algorithms, the stacking algorithm combines multiple base learners using a nonlinear integration method to form a multilayered strong learner. These base learners can be different types of heterogeneous learners. For example, in a two-layer learner structure, the stacking algorithm first trains the base learners in the first layer. The results from the first layer are then used as a new training and testing set for the second layer (meta-learners) to train and predict. The basic principle of the stacking strategy is shown in Figure 2.
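A minimal sketch of this two-layer structure with scikit-learn's StackingRegressor follows; the particular base learners and meta-learner are placeholders for the generic scheme in Figure 2, not this study's final configuration (note that StackingRegressor builds the second-layer training set with internal cross-validation by default).

```python
# Two-layer stacking: first-layer predictions become the meta-learner's inputs.
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

stack = StackingRegressor(
    estimators=[('rf', RandomForestRegressor(random_state=0)),
                ('xgb', XGBRegressor(random_state=0))],   # heterogeneous base learners
    final_estimator=LinearRegression(),                    # second-layer meta-learner
)
stack.fit(X, y)            # trains layer one, then layer two on its outputs
print(stack.predict(X[:3]))
```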
The blending algorithm is very similar to stacking; both are effective ensemble learning methods designed to enhance prediction performance. Unlike stacking, blending trains the different base learners directly on non-overlapping data. Therefore, when sample data are insufficient, the blending method has less data available for the second layer, making it prone to overfitting. In contrast, the stacking method uses all the data at each training layer, resulting in more robust forecasting ability. With only 97 samples in this study, the data are relatively limited, so the stacking method was chosen for model fusion.

2.3. Base Learner

2.3.1. Random Forest

Random forest is an ensemble algorithm based on classification and regression trees, proposed by Breiman by combining the bagging method and the Random Subspace method [35]. By combining multiple individual trees to form a forest, random forest can achieve more reliable prediction results. The randomness of the random forest algorithm is mainly reflected in two aspects: first, samples are drawn from the original dataset with replacement using bootstrap sampling, forming training subsets to train decision trees. Next, the Random Subspace method is used to randomly select the feature variables required for each node split in the decision trees, generating multiple decision trees. Finally, the predictions of all decision trees are aggregated to produce the final prediction result. The random forest method requires few parameters to be tuned during implementation, and its randomness effectively prevents overfitting issues, making it widely used in the prediction of blasting effects [36,37,38]. The principle formula for generating random forests is as follows:
$RF(x) = \frac{1}{N_{trees}} \sum_{i=1}^{N_{trees}} f_i(x)$
where $RF(x)$ is the prediction result of the random forest method on sample $x$; $N_{trees}$ represents the number of decision trees in the random forest; and $f_i(x)$ indicates the prediction result of the $i$th decision tree on sample $x$.
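This averaging can be checked directly in code: a fitted RandomForestRegressor's prediction equals the plain average of its individual trees' predictions. A small sketch on the synthetic X, y from earlier follows; the parameter values are illustrative.

```python
# Verifying RF(x) = (1/N_trees) * sum_i f_i(x) for a fitted forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
manual = np.mean([tree.predict(X[:5]) for tree in rf.estimators_], axis=0)
assert np.allclose(manual, rf.predict(X[:5]))  # the two averages coincide
```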

2.3.2. XGBoost

Extreme Gradient Boosting (XGBoost) was proposed by Chen et al. based on improvements to the Gradient Boosting Decision Tree (GBDT) algorithm [39]. Based on decision trees, the XGBoost algorithm leverages advantages such as parallel tree boosting, regularization, and efficient tree pruning to handle regression and classification problems more effectively [40]. The decision trees in XGBoost are sequentially constructed, with each new tree considering the prediction errors of the previous tree. Adjustments are made based on the modified sample distribution to train the next tree. The core idea is to optimize the objective function. XGBoost can be seen as a soft computing library that combines new algorithms with gradient-boosting decision tree algorithms to improve prediction accuracy [41]. Figure 3 shows the pseudocode for the XGBoost algorithm.
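A small, hedged XGBRegressor example is shown below; the hyperparameter values are illustrative choices for demonstration, not the tuned settings reported in Section 4.1.

```python
# Gradient-boosted trees with shrinkage and L2 regularization on leaf weights.
from xgboost import XGBRegressor

xgb_model = XGBRegressor(
    n_estimators=200,     # number of sequentially built trees
    learning_rate=0.1,    # shrinkage applied to each new tree's contribution
    max_depth=4,          # limits individual tree complexity
    reg_lambda=1.0,       # L2 regularization term
    random_state=0,
).fit(X, y)
print(xgb_model.predict(X[:3]))
```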

2.3.3. SVM

Support Vector Machine (SVM) is a supervised learning technique suitable for classification and regression problems and is widely used to solve various engineering problems [42,43]. Support Vector Regression (SVR), a branch of SVM, is a non-probabilistic algorithm based on VC dimension theory. It fits the regression function by constructing an optimal hyperplane, optimizing the model by minimizing total loss and maximizing the margin. Its innovation lies in introducing an artificially set ‘margin band’, which transforms the regression problem into an optimization problem and tolerates small errors. In SVR, input features are mapped into a high-dimensional feature space, giving the method a strong capability to handle high-dimensional data [44]. The computational principle of SVR can be represented by the following formula [45]:
$f(x) = \omega \cdot \varphi(x) + b$
where $f(x)$ is the constructed linear regression function; $\omega$ is the weight coefficient; $\varphi(x)$ is a nonlinear mapping function; and $b \in \mathbb{R}$ is the deviation term.
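As a sketch of the formula above, scikit-learn's SVR realizes the mapping $\varphi(x)$ implicitly through a kernel, and the epsilon parameter sets the width of the tolerated ‘margin band’; the kernel choice and values here are illustrative assumptions.

```python
# Epsilon-SVR: fit f(x) = w . phi(x) + b while tolerating errors within epsilon.
from sklearn.svm import SVR

svr = SVR(kernel='rbf', C=1.0, epsilon=0.1)  # the kernel maps x into feature space
svr.fit(X, y)
print(svr.predict(X[:3]))
```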

3. Database

3.1. Data Description

The database used in this study was established by Hudaverdi et al. [4] based on the collection and evaluation of blasting tests in different mines and rock formations around the world, all involving open-pit bench blasting. It contains 97 samples of blasting data from various open-pit mines and considers five blasting design parameters: burden (B), spacing (S), bench height (H), stemming (T), and hole diameter (D). The blast design parameters are input as ratios to avoid the influence of differing parameter units on predictions. Additionally, explosive parameters, structural parameters, and the physical and mechanical properties of the rock were used as input parameters. Specifically, this study used bench height to drilled burden (H/B), spacing to burden (S/B), burden to hole diameter (B/D), stemming to burden (T/B), powder factor (Pf), in situ block size (XB), and modulus of elasticity (E) as input variables, with mean fragment size (X50) as the output variable.
To better characterize the data, Figure 4 shows the detailed distribution of the seven input parameters through a combination of bar charts and violin plots, with the mean and median of the data indicated by solid and hollow dots in the half violin plots. The sample points of most input parameters are relatively concentrated; in particular, input parameter E is mostly concentrated in the 0–20 and 40–70 intervals. Table 1 details four statistical characteristics of all input parameters: minimum, maximum, mean, and standard deviation. Figure 5 illustrates the degree of correlation between the input parameters using a heat map; a certain degree of correlation exists between the input parameters, which makes them reasonable as input variables for the model.

The dataset was randomly divided into two groups: 80% was used as the training set to construct the prediction model, and the remaining 20% was used as the testing set to evaluate the model’s performance. To reduce bias among the input variables and enhance the model’s accuracy, the data were standardized using the StandardScaler function before prediction. The standardization formula is as follows:
$x^* = \frac{x - \mu}{\sigma}$
where $\mu$ is the mean of the dataset, $\sigma$ is the standard deviation of the dataset, and $x$ and $x^*$ are the data values before and after standardization, respectively.
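In code, the 80/20 split and standardization described above might look as follows; the random seed is an assumption, since the paper does not report how the random split was seeded.

```python
# Split the data 80/20, then standardize with statistics fitted on the training set.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()                  # implements x* = (x - mu) / sigma
X_train_s = scaler.fit_transform(X_train)  # mu and sigma come from the training data only
X_test_s = scaler.transform(X_test)        # the same statistics are reused on the test set
```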

3.2. Model Validation and Evaluation

To ensure the accuracy and predictive effectiveness of the established model, validating and evaluating model performance is essential. This study compared and screened the predictive performance of candidate base learners and utilized the stacking strategy to establish the prediction model. Figure 6 illustrates the overall analysis and modelling process of the prediction model. To assess the effectiveness of the model, the mean square error (MSE), mean absolute error (MAE), coefficient of determination (R2), and root mean squared error (RMSE) were utilized as the evaluation metrics. The closer the value of R2 is to 1, and the closer the values of MSE, MAE, and RMSE are to 0, the better the model’s predictive performance. The evaluation indicators are calculated using the following formulas [26,46,47,48]:
$MSE = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$

$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|$

$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}$

$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$

where $n$ is the number of input–output data pairs in the algorithm, $\hat{y}_i$ is the predicted value of the model, $y_i$ is the true value, and $\bar{y}$ is the mean of the true values.
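These four indicators map directly onto scikit-learn functions; a small helper implementing the formulas above might look like this.

```python
# Compute the four evaluation indicators defined above.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    mse = mean_squared_error(y_true, y_pred)
    return {
        'MSE': mse,
        'RMSE': np.sqrt(mse),                        # RMSE is the square root of MSE
        'MAE': mean_absolute_error(y_true, y_pred),
        'R2': r2_score(y_true, y_pred),
    }
```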

4. Results and Discussion

4.1. Model Construction and Prediction Results

To construct the prediction model, this study followed three steps: determining the evaluation indexes, selecting the best-performing single model (random forest, SVM, or XGBoost) as the base learner, and training the meta-learner to predict the average fragment size (X50) of the blasted rock. Python 3.9 was used as the programming language and PyCharm 2023.3.6 as the development environment, with Pandas, Numpy, and Scikit-learn among the packages in the configured environment.
As mentioned earlier, after the models were constructed, their parameters were selected and tuned to improve the predictions of the three single learners. Specifically, the random forest parameters were set to random_state = 1 and n_estimators = 50; the SVM parameters were set to kernel = poly, degree = 5, and C = 1; and the XGBoost parameter was set to learning_rate = 1.9. Some of the prediction results of the three models are presented in Table 2.
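Putting these settings together, the three single learners might be constructed and scored as follows; all unstated parameters are left at library defaults (an assumption), and X_train_s, X_test_s, and evaluate come from the earlier sketches.

```python
# The three tuned single learners with the parameter values quoted above.
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

models = {
    'Random Forest': RandomForestRegressor(random_state=1, n_estimators=50),
    'SVM': SVR(kernel='poly', degree=5, C=1),
    'XGBoost': XGBRegressor(learning_rate=1.9),
}
for name, model in models.items():
    model.fit(X_train_s, y_train)
    print(name, evaluate(y_test, model.predict(X_test_s)))
```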
By drawing predictive regression diagrams, the predicted values of the four models can be compared with the actual values, facilitating a clearer judgment of prediction performance. To compare the prediction accuracy of each model on the training and testing sets, the results for both sets were plotted in the same regression plot, as shown in Figure 7. In these plots, the vertical axis represents the predicted values and the horizontal axis the true values. A diagonal dashed line in each regression plot indicates a perfect fit: points above the diagonal are over-predictions, points below it are under-predictions, and the closer a point lies to the diagonal, the closer the prediction is to the true value. The more points located on the diagonal, the better the predictive performance of the model.
The results in Figure 7 show that the XGBoost model has the best prediction accuracy in the training set, where the predicted values are essentially the same as the true values, and only a few sample points fall outside the perfect fit diagonal. The prediction accuracy of the random forest model is slightly weaker, but most of the sample points are distributed near the diagonal. The model is more accurate in predicting samples with an X50 range of 0–0.6. The SVR model’s predictions have a larger bias, and the sample points are very dispersed, making the model less accurate. However, excellent model performance in the training set does not represent the overall performance of the model. In the testing set, the predictions of all three models show certain deviations, and the prediction accuracy for the minimum and maximum sample points is lower than for those with intermediate values. Among them, the prediction bias of the XGBoost model in the testing set is more pronounced, and its prediction accuracy is much worse than in the training set, indicating that the model is overfitting.
Overall, the random forest and XGBoost models provide better predictions, while the SVR algorithm provides poorer predictions. Thus, random forest and XGBoost were chosen as the base learners for the integrated model. The parameters of the random forest and XGBoost models were continuously adjusted during the model construction process, and the final parameters were random_state = 27, n_estimators = 76 for the random forest model; random_state = 42, learning_rate = 0.5 for the XGBoost model.
When constructing the stacking integrated model, cross-validation was first attempted, considering the size of the dataset, to increase the model’s generalization ability. However, the cross-validated model predicted poorly on the testing set, and because the stacking fusion itself can effectively reduce residuals, cross-validation was abandoned to improve the prediction effect. At the same time, since the XGBoost base learner exhibited overfitting, linear regression, a relatively simple model, was chosen as the meta-learner to avoid overfitting caused by excessive complexity.
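A minimal sketch of the resulting stacking model as described, without cross-validation, follows: the base learners are fitted once on the full training set, and their predictions become the linear-regression meta-learner's features. The exact data flow is our reading of the text, so treat it as an assumption.

```python
# Stacking without cross-validation: base predictions feed a linear meta-learner.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

base = [RandomForestRegressor(random_state=27, n_estimators=76),
        XGBRegressor(random_state=42, learning_rate=0.5)]
for m in base:
    m.fit(X_train_s, y_train)

# First-layer outputs become the second layer's training and testing features
Z_train = np.column_stack([m.predict(X_train_s) for m in base])
Z_test = np.column_stack([m.predict(X_test_s) for m in base])

meta = LinearRegression().fit(Z_train, y_train)
y_pred = meta.predict(Z_test)
```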
The meta-model was trained, and the prediction results of the stacking integrated model were obtained, as shown in Figure 7d. For the training set, the prediction effect of the stacking integrated model is relatively close to that of the random forest, but the accuracy is not as good as that of XGBoost; for the testing set, the prediction effect of the stacking integrated model is the best, with the sample points closer to the diagonal. In summary, the stacking integrated model demonstrates better prediction capability in the training and testing sets, particularly in the testing set, indicating that the integrated model is more stable and can avoid overfitting.

4.2. Model Performance Analysis and Discussion

During the research process, four evaluation indicators were used to assess the predictive performance of the models. Table 3 presents the metric results for the training and testing sets of the stacking integrated model. The MSE and RMSE values of the integrated model on the testing set are lower than those on the training set, while the MAE value on the testing set is slightly higher, though very close. A comparison of the indicators of the four models is shown in Figure 8. The R2 value of the stacking integrated model is the highest, at 0.943. The R2 values of the random forest and XGBoost models are relatively close, at 0.797 and 0.758, respectively, while the R2 value of the SVR model is the lowest, at 0.578. The stacking ensemble thus improves R2 by 0.146, 0.185, and 0.365 over the random forest, XGBoost, and SVR models, respectively.
The comparison of the MSE, RMSE, and MAE results of the four prediction models, shown in Figure 9, reveals that the MSE values are relatively close, with the stacking integrated model having the lowest value, 0.00197. The MSE values of the random forest and XGBoost models are 0.00903 and 0.01081, respectively, and the SVR model has the highest MSE, at 0.01883. The distributions of the RMSE and MAE values are very similar to that of the MSE, with values of 0.04436 and 0.03688 for the stacking integrated model, 0.09503 and 0.0668 for the random forest model, 0.10397 and 0.05264 for the XGBoost model, and 0.13722 and 0.11388 for the SVR model. Figure 9 clearly shows that the MSE, RMSE, and MAE results of the stacking integrated model are the lowest among the models, demonstrating that the integrated model has the best predictive performance. Model integration effectively addresses the insufficient prediction accuracy of base algorithms such as random forest and SVR, as well as XGBoost’s overfitting on the training set. In summary, using the stacking strategy to construct an integrated model for predicting X50 can effectively improve predictive performance and better avoid overfitting.
Feature importance analysis can effectively assess the degree of influence of the input variables on X50 prediction. In this study, the importance assessment tools in random forest and XGBoost were used to extract the importance of the stacking integrated model’s input variables, and the two sets of results were compared. In the random forest algorithm, feature importance is based on impurity reduction: the scores are averaged over all trees and then normalized so that the importance values of all features sum to 1. The XGBoost algorithm determines feature importance from each feature’s ability to split the data at tree nodes; the stronger the splitting ability, the greater the contribution to the final result.
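Both rankings can be read straight from the fitted base learners, as in the short sketch below; the feature order follows Table 5 and is an assumption about how the columns were arranged.

```python
# Extract and rank the two importance scores described above.
features = ['S/B', 'H/B', 'B/D', 'T/B', 'Pf', 'XB', 'E']
rf_importance = dict(zip(features, base[0].feature_importances_))   # impurity-based, sums to 1
xgb_importance = dict(zip(features, base[1].feature_importances_))  # split-based score

for name, imp in [('Random forest', rf_importance), ('XGBoost', xgb_importance)]:
    print(name, sorted(imp.items(), key=lambda kv: kv[1], reverse=True))
```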
Figure 10 illustrates the extraction results of the two methods. The input parameter E is identified as the most important feature variable for predicting X50 in both methods, with importance values of 0.7129 and 0.4608, respectively. XB and T/B follow, exhibiting similar importance values. In the random forest method, the importance value of the input variable T/B is slightly higher, at 0.1183, while in the XGBoost method, XB has a relatively higher importance value of 0.1503. The importance of the input variables S/B, H/B, and B/D is low, with B/D having the lowest importance, at 0.0023 in the random forest method and S/B having the lowest importance, at 0.0335 in the XGBoost method. Among the seven input parameters in this study, it can be concluded that the input parameter E has the greatest effect on the X50 prediction results, followed by XB and T/B, while the three parameters S/B, H/B, and B/D have the least effect on the X50 prediction results.
In the field of rock engineering, the elastic modulus (E) reflects the stiffness of rock and indicates the material’s resistance to deformation. The elastic modulus influences wave velocity, which in turn affects the extent of energy dissipation, producing variations in fragment size during rock blasting. The in situ block size, representing the fundamental block size of the rock mass in its natural state, is determined by the characteristics of joints and fractures; rock masses with well-developed fractures are more susceptible to fragmentation. Stemming length and burden distance are critical blast design parameters, influencing the utilization and release of energy by constraining gas escape and controlling the distance between the blast hole and the free face, respectively; consequently, they affect fragmentation efficiency [49,50,51]. Related studies have validated that E, XB, and T/B are the primary factors influencing prediction outcomes [52], which further supports the above analysis.
All of the above results demonstrate that high prediction accuracy can be obtained by constructing a stacking integrated model for the prediction of blasting fragmentation. Previous research on the prediction of blasting effects mainly compared some single prediction models or optimized a single model [53,54,55]. Although these methods can achieve good prediction results, they lack the combination of various prediction methods. Additionally, the tuning process of these models is usually quite complex and time-consuming. One major advantage of utilizing the stacking integrated strategy is that it can combine the strengths of multiple single models and allows for the combination of non-homogeneous models. Moreover, the construction process of the model is relatively simple, and the training speed is faster.

4.3. Engineering Validation Results of the Model

In this study, field data from five blasting operations in a northeastern mining area were collected to validate the performance of the constructed model. The rocks in this mining area are mainly granite of moderate hardness, making the area relatively easy to blast. Based on the model’s required input parameters, the specific values of the collected parameters are shown in Table 4.
The fragmentation size X50 of the blasted rock was obtained using the fragmentation recognition software Split-Desktop v4.0.0.42 [56]. Figure 11 illustrates the software’s recognition of the blasting results on-site. Using a 1-meter-long ruler placed on-site as a scale reference, the software effectively maps the boundaries of the rock fragments and calculates the block size distribution for each blast.
To maintain consistency with the input parameters used during model training, the validation again uses ratios between parameters as the model inputs; Table 5 shows the input parameters and their values. Figure 12 compares the model-predicted results with the actual rock fragmentation results; a purple legend representing the true values has been added to ease comparison. The predicted values closely match the actual values, with prediction errors within 0.03 m, indicating that the prediction model developed in this study can accurately predict rock fragmentation in open-pit blasting projects, which is of practical significance. However, the cross-validation method commonly adopted by most scholars was not employed in constructing the stacking model; this may be because the small dataset in this study led to lower prediction accuracy under cross-validation. Additionally, during engineering verification, the Split-Desktop software inevitably introduces some error in recognizing block sizes. Therefore, future research should consider more diverse datasets and a larger number of base learners. In practical engineering applications, blasting outcomes can be predicted in advance from the design parameters, enabling iterative adjustment of parameter values until the desired fragmentation index is achieved.
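For reference, feeding the Table 5 ratios through the fitted pipeline might look like the sketch below; the values are copied from Table 5, and scaler, base, and meta are the objects fitted in the earlier sketches.

```python
# Predict X50 for the five field blasts using the Table 5 input ratios.
import numpy as np

field = np.array([
    [1.2143, 2.1429, 28, 1.0714, 0.62, 1.1, 15.6],
    [1.2857, 2.1429, 28, 1.0714, 0.64, 1.1, 15.6],
    [1.4286, 2.1429, 28, 1.0714, 0.61, 1.1, 15.6],
    [1.3571, 2.1429, 28, 1.0714, 0.57, 1.1, 15.6],
    [1.3571, 2.1429, 28, 1.0714, 0.62, 1.1, 15.6],
])
field_s = scaler.transform(field)  # reuse the training-set standardization
Z_field = np.column_stack([m.predict(field_s) for m in base])
print(meta.predict(Z_field))       # predicted mean fragment size (m) for blasts 1-5
```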
Based on the engineering verification results above, the model can be used to predict blast fragmentation size in advance, according to the blasting parameters, before production blasting. The parameters of on-site blasting operations can then be further optimized based on the predicted results to meet the fragmentation size required for production. Similarly, guided by a large volume of actual on-site data, the predictive accuracy of the stacking model established in this study can be further improved, offering the potential to develop an intelligent prediction system for on-site blasting.

5. Conclusion and Summary

During the blasting process, numerous factors, both controllable and uncontrollable, affect the blasting results, so it is difficult for a single fitting equation to comprehensively describe the relationship between the various influencing factors and fragmentation size. Compared to traditional empirical-equation prediction methods, artificial intelligence methods can effectively model the relationship between the input and output parameters. This study utilized a dataset containing 97 samples and seven input parameters to predict the fragmentation size X50. To improve prediction performance, the prediction effects of three single models—random forest, SVR, and XGBoost—were compared, and a stacking integration strategy was employed, with the random forest and XGBoost models as base learners, to construct the fusion model. This method has been applied in other engineering fields, but its application to predicting blasting fragmentation is still relatively limited. The predictive performance of the established models was evaluated using four mathematical indicators. The following conclusions can be drawn from this study:
  • The stacking integrated model outperformed the single models and had higher predictive accuracy. The model evaluation yielded an R2 value of 0.943, with MSE, RMSE, and MAE values of 0.00269, 0.05187, and 0.03320, respectively, on the training set; and 0.00197, 0.04435, and 0.03687, respectively, on the testing set;
  • The feature importance results of the two methods show that, in constructing the model, the input feature E has the greatest influence on the predicted fragmentation size, followed by T/B and XB;
  • Compared with the prediction methods of other researchers, the prediction method established in this study better integrates the advantages of individual algorithms. The engineering verification results also demonstrated that the constructed algorithm has good predictive accuracy, and its prediction results can provide references for blasting design.

Author Contributions

Validation, Y.S.; formal analysis, Y.S., R.Z. and Z.Y.; investigation, Y.S.; methodology, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Z.Z., R.Z., Z.Y. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The research presented in this paper was funded by the National Natural Science Foundation of China (Grant Nos. 12072376 and 52274105).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
R2: Coefficient of Determination
MSE: Mean Square Error
RMSE: Root Mean Square Error
MAE: Mean Absolute Error
KCO: Kuznetsov–Cunningham–Ouchterlony
BPNN: Back Propagation Neural Network
ANN: Artificial Neural Network
SVM: Support Vector Machine
ANFIS: Adaptive Network-based Fuzzy Inference System
PCA: Principal Component Analysis
GWO: Grey Wolf Optimization
CNN: Convolutional Neural Network
MLP: Multilayer Perceptron
PSO: Particle Swarm Optimization
SVR: Support Vector Regression
KNN: K-Nearest Neighbor
GP: Gaussian Process
RCS: Rock Compressive Strength

References

  1. Abbaspour, H.; Drebenstedt, C.; Badroddin, M.; Maghaminik, A. Optimized design of drilling and blasting operations in open pit mines under technical and economic uncertainties by system dynamic modelling. Int. J. Min. Sci. Technol. 2018, 28, 839–848.
  2. Kanchibotla, S.S.; Valery, W.; Morrell, S. Modeling fines in blast fragmentation and its impact on crushing and grinding. In Proceedings of the Explo’99: A Conference on Rock Breaking, Kalgoorlie, WA, Australia, 7–11 November 1999; AusIMM: Melbourne, Australia, 1999.
  3. Dimitraki, L.; Christaras, B.; Marinos, V.; Vlahavas, I.; Arampelos, N. Predicting the average size of blasted rocks in aggregate quarries using artificial neural networks. Bull. Eng. Geol. Environ. 2019, 78, 2717–2729.
  4. Hudaverdi, T.; Kulatilake, P.; Kuzu, C. Prediction of blast fragmentation using multivariate analysis procedures. Int. J. Numer. Anal. Methods Geomech. 2011, 35, 1318–1333.
  5. Chandrahas, N.S.; Choudhary, B.S.; Teja, M.V.; Venkataramayya, M.S.; Prasad, N.S.R.K. XGBoost Algorithm to Simultaneous Prediction of Rock Fragmentation and Induced Ground Vibration Using Unique Blast Data. Appl. Sci. 2022, 12, 5269.
  6. Peleg, M. Determination of the parameters of the Rosin-Rammler and beta distributions from their mode and variance using equation-solving software. Powder Technol. 1996, 87, 181–184.
  7. Kuznetsov, V.M. The mean diameter of the fragments formed by blasting rock. Sov. Min. Sci. 1973, 9, 144–148.
  8. Cunningham, C.V.B. The Kuz-Ram fragmentation model—20 years on. In Brighton Conference Proceedings; European Federation of Explosives Engineers: Brighton, UK, 2005.
  9. Ouchterlony, F. The Swebrec function: Linking fragmentation by blasting and crushing. Min. Technol. 2005, 114, 29–44.
  10. Mehrdanesh, A.; Monjezi, M.; Sayadi, A.R. Evaluation of effect of rock mass properties on fragmentation using robust techniques. Eng. Comput. 2017, 34, 253–260.
  11. Ebrahimi, E.; Monjezi, M.; Khalesi, M.R.; Armaghani, D.J. Prediction and optimization of back-break and rock fragmentation using an artificial neural network and a bee colony algorithm. Bull. Eng. Geol. Environ. 2016, 75, 27–36.
  12. Hasanipanah, M.; Amnieh, H.B.; Arab, H.; Zamzam, M.S. Feasibility of PSO–ANFIS model to estimate rock fragmentation produced by mine blasting. Neural Comput. Appl. 2018, 30, 1015–1024.
  13. Armaghani, D.J. Rock Fragmentation Prediction through a New Hybrid Model Based on Imperial Competitive Algorithm and Neural Network. Smart Constr. Res. 2018, 2, 1–12.
  14. Cui, X.J.; Li, Q.Y.; Tao, M.; Hong, Z.X.; Zhao, M.S.; Li, J.; Zhou, J.M.; Yu, H.B. Research on Explosive-Rock Matching System based on XGBoost. Blasting 2023, 40, 31–38+58.
  15. Li, C.Q.; Zhou, J.; Du, K. Towards lightweight excavation: Machine learning exploration of rock size distribution prediction after tunnel blasting. J. Comput. Sci. 2024, 78, 20.
  16. Rui, Y.C.; Chen, J.; Chen, J.K.; Qiu, J.D.; Zhou, Z.L.; Wang, W.Z.; Fan, J.Y. A robust triaxial localization method of AE source using refraction path. Int. J. Min. Sci. Technol. 2024, 34, 521–530.
  17. Chen, J.; Chen, J.K.; Rui, Y.C.; Pu, Y.Y. Joint Inversion of AE/MS Sources and Velocity with Full Measurements and Residual Estimation. Rock Mech. Rock Eng. 2024, 57, 7371–7386.
  18. Chen, J.; Tong, J.; Rui, Y.C.; Cui, Y.; Pu, Y.Y.; Du, J.S.; Apel, D.B. Step-path failure mechanism and stability analysis of water-bearing rock slopes based on particle flow simulation. Theor. Appl. Fract. Mech. 2024, 131, 14.
  19. Kulatilake, P.; Qiong, W.; Hudaverdi, T.; Kuzu, C. Mean particle size prediction in rock blast fragmentation using neural networks. Eng. Geol. 2010, 114, 298–311.
  20. Monjezi, M.; Mohamadi, H.A.; Barati, B.; Khandelwal, M. Application of soft computing in predicting rock fragmentation to reduce environmental blasting side effects. Arab. J. Geosci. 2014, 7, 505–511.
  21. Philip, D.; Abbaspour, H.; Kansake, B.A.; Drebenstedt, C. Unraveling the Capability of Artificial Intelligence for Prediction of Rock Fragmentation. In Proceedings of the REAL TIME MINING—Conference on Innovation on Raw Material Extraction, Freiberg, Germany, 27 March 2019.
  22. Esmaeili, M.; Salimi, A.; Drebenstedt, C.; Abbaszadeh, M.; Bazzazi, A.A. Application of PCA, SVR, and ANFIS for modeling of rock fragmentation. Arab. J. Geosci. 2015, 8, 6881–6893.
  23. Asl, P.F.; Monjezi, M.; Hamidi, J.K.; Armaghani, D.J. Optimization of flyrock and rock fragmentation in the Tajareh limestone mine using metaheuristics method of firefly algorithm. Eng. Comput. 2018, 34, 241–251.
  24. Jia, Z.Z.; Song, Z.L.; Fan, J.F.; Jiang, J.Y. Prediction of Blasting Fragmentation Based on GWO-ELM. Shock. Vib. 2022, 2022, 7385456.
  25. Rong, K.; Xu, X.; Wang, H.B.; Yang, J. Prediction of the mean fragment size in mine blasting operations by deep learning and grey wolf optimization algorithm. Earth Sci. Inform. 2024, 17, 2903–2919.
  26. Li, E.M.; Yang, F.H.; Ren, M.H.; Zhang, X.L.; Zhou, J.; Khandelwal, M. Prediction of blasting mean fragment size using support vector regression combined with five optimization algorithms. J. Rock Mech. Geotech. Eng. 2021, 13, 1380–1397.
  27. Barkhordari, M.S.; Armaghani, D.J.; Fakharians, P. Ensemble machine learning models for prediction of flyrock due to quarry blasting. Int. J. Environ. Sci. Technol. 2022, 19, 8661–8676.
  28. Khan, K.; Ahmad, W.; Amin, M.N.; Ahmad, A.; Nazar, S.; Alabdullah, A.A.; Arab, A.M.A. Exploring the Use of Waste Marble Powder in Concrete and Predicting Its Strength with Different Advanced Algorithms. Materials 2022, 15, 4108.
  29. Koopialipoor, M.; Asteris, P.G.; Mohammed, A.S.; Alexakis, D.E.; Mamou, A.; Armaghani, D.J. Introducing stacking machine learning approaches for the prediction of rock deformation. Transp. Geotech. 2022, 34, 100756.
  30. Kadingdi, F.A.; Ayawah, P.E.A.; Azure, J.W.A.; Bruno, K.A.; Kaba, A.G.A.; Frimpong, S. Stacked Generalization for Improved Prediction of Ground Vibration from Blasting in Open-Pit Mine Operations. Min. Metall. Explor. 2022, 39, 2351–2363.
  31. Wu, L.Y.; Li, J.H.; Zhang, J.W.; Wang, A.F.; Tong, J.B.; Ding, F.; Li, M.; Feng, Y.; Li, H. Prediction model for the compressive strength of rock based on stacking ensemble learning and shapley additive explanations. Bull. Eng. Geol. Environ. 2024, 83, 439.
  32. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140.
  33. Yu, L.; Wu, T.J. Assemble learning: A survey of boosting algorithms. Pattern Recognit. Artif. Intell. 2004, 17, 52–59.
  34. Breiman, L. Stacked regressions. Mach. Learn. 1996, 24, 49–64.
  35. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  36. Han, H.; Armaghani, D.J.; Tarinejad, R.; Zhou, J.; Tahir, M.M. Random Forest and Bayesian Network Techniques for Probabilistic Prediction of Flyrock Induced by Blasting in Quarry Sites. Nat. Resour. Res. 2020, 29, 655–667.
  37. Yu, Z.; Shi, X.Z.; Qiu, X.Y.; Zhou, J.; Chen, X.; Gou, Y.G. Optimization of postblast ore boundary determination using a novel sine cosine algorithm-based random forest technique and Monte Carlo simulation. Eng. Optim. 2021, 53, 1467–1482.
  38. Zhou, J.; Asteris, P.G.; Armaghani, D.J.; Pham, B. Prediction of ground vibration induced by blasting operations through the use of the Bayesian Network and random forest models. Soil. Dyn. Earthq. Eng. 2020, 139, 106390.
  39. Chen, T.; He, T.; Benesty, M. XGBoost: Extreme Gradient Boosting. arXiv 2016, arXiv:1603.02754.
  40. Nabavi, Z.; Mirzehi, M.; Dehghani, H.; Ashtari, P. A Hybrid Model for Back-Break Prediction using XGBoost Machine learning and Metaheuristic Algorithms in Chadormalu Iron Mine. J. Min. Environ. 2023, 14, 689–712.
  41. Qiu, Y.G.; Zhou, J.; Khandelwal, M.; Yang, H.T.; Yang, P.X.; Li, C.Q. Performance evaluation of hybrid WOA-XGBoost, GWO-XGBoost and BO-XGBoost models to predict blast-induced ground vibration. Eng. Comput. 2022, 38 (Suppl. S5), 4145–4162.
  42. Yu, Q.; Monjezi, M.; Mohammed, A.S.; Dehghani, H.; Armaghani, D.J.; Ulrikh, D.V. Optimized Support Vector Machines Combined with Evolutionary Random Forest for Prediction of Back-Break Caused by Blasting Operation. Sustainability 2021, 13, 12797.
  43. Mehrdanesh, A.; Monjezi, M.; Khandelwal, M.; Bayat, P. Application of various robust techniques to study and evaluate the role of effective parameters on rock fragmentation. Eng. Comput. 2023, 39, 1317–1327.
  44. Rad, H.N.; Hasanipanah, M.; Rezaei, M.; Eghlim, A.L. Developing a least squares support vector machine for estimating the blast-induced flyrock. Eng. Comput. 2018, 34, 709–717.
  45. Xu, G.Q.; Wang, X.Y. Support vector regression optimized by black widow optimization algorithm combining with feature selection by MARS for mining blast vibration prediction. Measurement 2023, 218, 113106.
  46. Amoako, R.; Jha, A.; Zhong, S. Rock Fragmentation Prediction Using an Artificial Neural Network and Support Vector Regression Hybrid Approach. Mining 2022, 2, 233–247.
  47. Zhou, J.; Chen, Y.X.; Chen, H.; Khandelwal, M.; Monjezi, M.; Peng, K. Hybridizing five neural-metaheuristic paradigms to predict the pillar stress in bord and pillar method. Front. Public Health 2023, 11, 1119580.
  48. Zhou, J.; Zhang, R.; Qiu, Y.G.; Khandelwal, M. A true triaxial strength criterion for rocks by gene expression programming. J. Rock Mech. Geotech. Eng. 2023, 15, 2508–2520.
  49. Jern, M. Determination of the in situ block size distribution in fractured rock, an approach for comparing in-situ rock with rock sieve analysis. Rock Mech. Rock Eng. 2004, 37, 391–401.
  50. Tao, M.; Xu, Y.Q.; Zhao, R.; Liu, Y.L.; Wu, C.Q. Energy control and block performance optimization of bench blasting. Int. J. Rock. Mech. Min. Sci. 2024, 180, 105830.
  51. Mpofu, M.; Ngobese, S.; Maphalala, B.; Roberts, D.; Khan, S. The influence of stemming practice on ground vibration and air blast. J. S. Afr. Inst. Min. Metall. 2021, 121, 1–10.
  52. Zhao, J.; Li, D.; Zhou, J.; Armaghani, D.J.; Zhou, A.H. Performance evaluation of rock fragmentation prediction based on RF-BOA, AdaBoost-BOA, GBoost-BOA, and ERT-BOA hybrid models. Deep. Undergr. Sci. Eng. 2024; early view.
  53. Gao, W.; Karbasi, M.; Hasanipanah, M.; Zhang, X.; Guo, J. Developing GPR model for forecasting the rock fragmentation in surface mines. Eng. Comput. 2018, 34, 339–345.
  54. Shams, S.; Monjezi, M.; Majd, V.J.; Armaghani, D.J. Application of fuzzy inference system for prediction of rock fragmentation induced by blasting. Arab. J. Geosci. 2015, 8, 10819–10832.
  55. Mojtahedi, S.F.F.; Ebtehaj, I.; Hasanipanah, M.; Bonakdari, H.; Amnieh, H.B. Proposing a novel hybrid intelligent model for the simulation of particle size distribution resulting from blasting. Eng. Comput. 2019, 35, 47–56.
  56. Xie, C.Y.; Nguyen, H.; Bui, X.N.; Choi, Y.S.; Zhou, J.; Thao, N.T. Predicting rock size distribution in mine blasting using various novel soft computing models based on meta-heuristics and machine learning algorithms. Geosci. Front. 2021, 12, 101108.
Figure 1. Schematic diagram of the bagging and boosting algorithms: (a) the bagging algorithm; (b) the boosting algorithm.
Figure 2. Stacking strategy schematic diagram.
Figure 3. Splitting pseudocode of a single leaf node in the XGBoost algorithm.
Figure 4. Distribution of input parameters: (a) spacing to burden; (b) bench height to drilled burden; (c) burden to hole diameter; (d) stemming to burden; (e) powder factor; (f) in situ block size; (g) modulus of elasticity.
Figure 5. Heat map of input parameter correlation distribution.
Figure 6. Stacking integrated model prediction flowchart.
Figure 7. Comparison of the prediction results between the training and testing sets of the four models: (a) random forest; (b) SVR; (c) XGBoost; (d) stacking integrated model.
Figure 8. Comparison of R2 values of the four models.
Figure 9. Comparison of assessment indicators for the four models.
Figure 10. Comparison of importance value results for input parameters.
Figure 11. Block size recognition results from Split-Desktop software.
Figure 12. Comparison of results from engineering validation.
Table 1. Statistical characterization of model input parameters.

Parameters | Minimum | Maximum | Mean | Standard Deviation
S/B | 1.00 | 1.75 | 1.20 | 0.109
H/B | 1.33 | 6.82 | 3.44 | 1.64
B/D | 17.98 | 39.47 | 27.21 | 4.77
T/B | 0.50 | 4.67 | 1.27 | 0.688
Pf (kg/m3) | 0.22 | 1.26 | 0.53 | 0.238
XB (m) | 0.02 | 2.35 | 1.17 | 0.479
E (GPa) | 9.57 | 60.00 | 30.74 | 17.72
Table 2. Partial prediction results of the testing sets.

Model | True Value | Predicted Value
Random Forest | 0.44 | 0.4154
Random Forest | 0.1 | 0.1259
Random Forest | 0.35 | 0.4042
Random Forest | 0.25 | 0.2578
Random Forest | 0.74 | 0.6014
Support Vector Machine | 0.44 | 0.2504
Support Vector Machine | 0.1 | 0.1826
Support Vector Machine | 0.35 | 0.2503
Support Vector Machine | 0.25 | 0.3287
Support Vector Machine | 0.74 | 0.4342
XGBoost | 0.44 | 0.3273
XGBoost | 0.1 | 0.2011
XGBoost | 0.35 | 0.2740
XGBoost | 0.25 | 0.2437
XGBoost | 0.74 | 0.7453
Table 3. Results of stacking integrated model evaluation indicators.

Set | MSE | RMSE | MAE
Training set | 0.00269 | 0.05187 | 0.03320
Testing set | 0.00197 | 0.04435 | 0.03687
Table 4. Blasting parameter statistics.

No. | B (m) | S (m) | H (m) | D (mm) | T (m) | Pf (kg/m3) | XB (m) | E (GPa) | X50 (m)
1 | 7 | 8.5 | 15 | 250 | 7.5 | 0.62 | 1.1 | 15.6 | 0.1456
2 | 7 | 9 | 15 | 250 | 7.5 | 0.64 | 1.1 | 15.6 | 0.163
3 | 7 | 10 | 15 | 250 | 7.5 | 0.61 | 1.1 | 15.6 | 0.1962
4 | 7 | 9.5 | 15 | 250 | 7.5 | 0.57 | 1.1 | 15.6 | 0.1997
5 | 7 | 9.5 | 15 | 250 | 7.5 | 0.62 | 1.1 | 15.6 | 0.1786
Table 5. Validation process input parameters and values.

No. | S/B | H/B | B/D | T/B | Pf | XB | E
1 | 1.2143 | 2.1429 | 28 | 1.0714 | 0.62 | 1.1 | 15.6
2 | 1.2857 | 2.1429 | 28 | 1.0714 | 0.64 | 1.1 | 15.6
3 | 1.4286 | 2.1429 | 28 | 1.0714 | 0.61 | 1.1 | 15.6
4 | 1.3571 | 2.1429 | 28 | 1.0714 | 0.57 | 1.1 | 15.6
5 | 1.3571 | 2.1429 | 28 | 1.0714 | 0.62 | 1.1 | 15.6