Article

Developing a New Computational Intelligence Approach for Approximating the Blast-Induced Ground Vibration

1 School of Mines, China University of Mining and Technology, Xuzhou 221116, China
2 Key Laboratory of Deep Coal Resource Mining, Ministry of Education of China, Xuzhou 221116, China
3 Department of Civil Engineering, National Institute of Technology Patna, Patna, Bihar 800005, India
4 College of Computer Science, Tabari University of Babol, Babol 4713575689, Iran
5 Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, Bihar 800005, India
6 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 434; https://doi.org/10.3390/app10020434
Submission received: 23 November 2019 / Revised: 22 December 2019 / Accepted: 24 December 2019 / Published: 7 January 2020
(This article belongs to the Special Issue Meta-heuristic Algorithms in Engineering)

Abstract

Ground vibration induced by blasting operations is an important undesirable effect in surface mines and has significant environmental impacts on surrounding areas. Therefore, the precise prediction of blast-induced ground vibration is a challenging task for engineers and managers. This study explores and evaluates the use of two stochastic metaheuristic algorithms, namely biogeography-based optimization (BBO) and particle swarm optimization (PSO), as well as one deterministic optimization algorithm, namely the DIRECT method, to improve the performance of an artificial neural network (ANN) for predicting ground vibration. It is worth mentioning that this is the first time that BBO-ANN and DIRECT-ANN models have been applied to predict ground vibration. To demonstrate model reliability and effectiveness, a minimax probability machine regression (MPMR), extreme learning machine (ELM), and three well-known empirical methods were also tested. To collect the required datasets, two quarry mines in the Shur river dam region, located in the southwest of Iran, were monitored, and the values of the input and output parameters were measured. Five statistical indicators, namely the root mean square error (RMSE), coefficient of determination (R2), ratio of RMSE to the standard deviation of the observations (RSR), mean absolute error (MAE), and degree of agreement (d), were taken into account for the model assessment. According to the results, BBO-ANN provided a better generalization capability than the other predictive models. In conclusion, BBO, as a robust evolutionary algorithm, can be successfully linked to the ANN for better performance.

1. Introduction

Drilling and blasting constitute an effective and widely used method of excavating hard rock in the mining industry. One of the fundamental problems induced by blasting is ground vibration (see Figure 1). Therefore, the ability to make accurate predictions of ground vibration is a crucial need in this field. As shown in Figure 2, blasting generates body waves, including the compressional P-wave and the transverse S-wave, and surface waves, including the Love wave (Q-wave) and the Rayleigh wave (R-wave). It is worth mentioning that the majority of the energy is transmitted by the R-wave, while the P-wave travels fastest.
According to the literature [1,2,3,4], the intensity of ground vibration can be measured using several descriptors, including the frequency and the peak particle velocity (PPV). Among them, PPV is accepted globally and is used in many studies to evaluate blast-induced ground vibration [5,6]. In the production cycle of mines, PPV plays a significant role during blasting operations and may cause undesirable effects in the form of anthropogenic hazards. Thus, a precise prediction of PPV is crucial in terms of process and health safety. This key parameter is highly dynamic and non-linear because it depends on various process attributes. Therefore, estimating PPV through modeling is a challenging task.
A review of previous studies [7,8] revealed that two categories of parameters significantly affect PPV: the blasting design and the rock properties. As might be expected, the blasting design parameters, such as burden, weight charge per delay, stemming, and powder factor, are controllable, while the rock properties, such as the tensile strength of the rock mass, are uncontrollable and cannot be changed by engineers. In the past, various physics-based models have been used to study and simulate blasting operations in order to minimize environmental damage. These physical models use governing formulations to account for the blasting parameters. However, these governing equations have some limitations. The main limitation is that they consider ideal conditions rather than the conditions that exist in real-life situations. Additionally, physics-based studies are typically costlier, since they require an elaborate experimental set-up as well as experts who understand complex mathematical formulations. In such circumstances, when the number of variables is limited and forecasting is more crucial than understanding the underlying causative mechanism, pattern-recognition-based models are a viable tool. To overcome the problems stated above and establish the relationships underlying the blasting process through pattern recognition, researchers have used different ML techniques that can mimic these relationships and obtain a higher prediction accuracy.
A literature review showed that a wide range of recently published papers have demonstrated soft computing methods for prediction purposes in different fields [9,10,11,12,13,14,15,16,17,18], and especially for predicting the PPV, such as the artificial neural network (ANN), support vector machine (SVM), and adaptive neuro-fuzzy inference system (ANFIS). Khandelwal [19] compared SVM and empirical models for predicting the PPV, with better results for the former. In another study, Mohamadnejad et al. [20] predicted the blast-induced PPV using SVM and a general regression neural network (GRNN), concluding that SVM is a promising alternative to both empirical and GRNN models. Ghasemi et al. [21] explored the potential application of a fuzzy inference system (FIS) and empirical models using 120 samples. The results showed that FIS predicts PPV more accurately than the empirical models. Jahed Armaghani et al. [22] compared ANFIS and ANN, two well-known soft computing models, for predicting PPV, and then compared the results to those of empirical models. The results confirmed that ANFIS and ANN outperformed the empirical models, while ANFIS was found to be more feasible than ANN in this regard. Amiri et al. [8] predicted the PPV using an ANN combined with the k-nearest neighbors (KNN) method. In their study, a common empirical model introduced by the United States Bureau of Mines (USBM) was also applied. Based on their results, the hybrid of ANN and KNN predicted PPV with higher accuracy than the ANN and USBM models. Nguyen et al. [23] used an extreme gradient boosting (XGBoost) model to predict ground vibration. For comparison purposes, SVM, random forest (RF), and KNN models were also used in their study. Based on the results, the XGBoost model was found to be more accurate than the SVM, RF, and KNN models in predicting ground vibration.
In another study, a Gaussian process regression (GPR) model was employed to predict ground vibration by Arthur et al. [24]. They showed the superiority of the GPR for predicting the blast-induced ground vibration compared to empirical models. Recently, Nguyen et al. [25] predicted blast-induced ground vibration using a hybrid model of ANN and the k-means clustering algorithm (HKM). Their results were then compared with support vector regression (SVR) results. They showed that the HKM–ANN model can predict ground vibration more effectively than ANN and SVR models. Overall, it could be seen that machine learning models are capable of producing better prediction results.
However, ML techniques such as the ANN have some disadvantages, including slow convergence and entrapment in local minima. Optimization algorithms are generally used to improve the convergence rate and to escape local optima. Two categories of derivative-free global optimization algorithms can be distinguished: stochastic metaheuristic algorithms and deterministic optimization algorithms. Stochastic metaheuristics are simple to state and implement and are therefore applied to many engineering problems, whereas deterministic algorithms offer theoretical convergence guarantees on complex problems. Deterministic methods, such as the DIRECT method [26], have the advantage of analytical rigor, while heuristic methods are more flexible and efficient [27,28]. In this paper, both metaheuristic and deterministic algorithms are used to improve the ANN results.
The aim of the present study is to develop two stochastic metaheuristic algorithms, namely PSO and biogeography-based optimization (BBO), as well as one deterministic optimization algorithm, namely the DIRECT method, to improve the performance of the ANN model in predicting the PPV. For comparison purposes, minimax probability machine regression (MPMR), the extreme learning machine (ELM), and three common empirical methods were also employed. It is worth mentioning that this is the first time that BBO-ANN and DIRECT-ANN models have been applied to predict PPV, which constitutes the contribution of the present paper to the body of knowledge in this field of study.

2. Methods

2.1. DIRECT-ANN

One of the important deterministic optimization algorithms, DIRECT, was developed by Jones et al. [26]. The DIRECT algorithm can discover the global optimum of the objective function for complex problems using an extremely robust direct search approach. It evaluates the objective function without needing any extra information, such as derivatives. Although the DIRECT algorithm is based on a very powerful search, it needs a certain number of iterations to obtain the global minimum, especially when the target points lie on certain boundaries.
In real-world problems, the global solution is not known in advance, so the quality of a candidate solution cannot be checked directly. Therefore, finding approaches that come close to the global solution is very important for improving the optimization algorithm. In complex problems, the objective function f(x) can have many local optima. In global optimization, it is essential to find the global optimum x* and, accordingly, a value f* such that
f^* = f(x^*) \le f(x), \quad \forall x \in D \subset \mathbb{R}^N
where D is the search space, so that f* is the global minimum, and the objective function f(x) satisfies the Lipschitz condition
\lvert f(x') - f(x'') \rvert \le L \,\lVert x' - x'' \rVert, \quad \forall x', x'' \in D, \quad 0 < L < \infty
where L is an unknown Lipschitz constant. This condition implies that any bounded variation in the parameters yields a bounded variation in the values of the objective function. The global optimization problem (1), where f(x) satisfies (2) and can be non-differentiable, multi-extremal, hard to evaluate, and given as a “black box”, is the one considered for combination with the ANN in this paper.
In this algorithm, the weights and biases of the ANN are represented by the initial solution set. In the next step, the initial solution is optimized over many iterations of the DIRECT algorithm to fix the weights of the ANN and to converge to the lowest error.
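The weights-as-vector encoding described above can be sketched in Python. Everything below is an illustrative assumption rather than the authors' implementation: the toy dataset stands in for the real blasting records, and a simple shrinking-box search stands in for DIRECT's full rectangle-division scheme, showing only how a derivative-free optimizer scores a flat parameter vector by the network's MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

def unpack(v, n_in=7, n_hid=5):
    """Split a flat parameter vector into the weights/biases of a 7-5-1 net."""
    i = n_in * n_hid
    W1 = v[:i].reshape(n_in, n_hid)
    b1 = v[i:i + n_hid]
    W2 = v[i + n_hid:i + 2 * n_hid]
    b2 = v[i + 2 * n_hid]
    return W1, b1, W2, b2

def mse(v, X, y):
    """Objective for the optimizer: network MSE for parameter vector v."""
    W1, b1, W2, b2 = unpack(v)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    return float(np.mean((h @ W2 + b2 - y) ** 2))

# Toy stand-in for the normalized blasting dataset (illustrative only).
X = rng.random((56, 7))
y = rng.random(56)

dim = 7 * 5 + 5 + 5 + 1                        # 46 free parameters
lo, hi = -np.ones(dim), np.ones(dim)

best_v = np.zeros(dim)                         # box midpoint as first guess
best_f = f0 = mse(best_v, X, y)
for it in range(100):
    # sample the current box, keep the incumbent best
    for c in rng.uniform(lo, hi, size=(20, dim)):
        f = mse(c, X, y)
        if f < best_f:
            best_f, best_v = f, c
    # contract the box around the incumbent (DIRECT divides rectangles instead)
    span = (hi - lo) * 0.95
    lo = np.maximum(best_v - span / 2, -1.0)
    hi = np.minimum(best_v + span / 2, 1.0)

print(best_f <= f0)  # → True
```

In the paper's setting, mse would be evaluated on the 56 normalized training records, and the final vector would be unpacked back into the ANN's weights and biases.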

2.2. PSO-ANN

Particle swarm optimization (PSO) is a metaheuristic approach inspired by the flocking behavior of birds, developed by Kennedy and Eberhart [29]. The PSO approach is a decision-making process carried out by a populated swarm. In this research, PSO was used to search for and optimize the weights of the model: once the ANN model was configured, its input weights, biases, and output weights were transformed into the coordinates of each particle in the swarm. Herein, each particle is a candidate solution for the ANN model. Consequently, all particles search a defined space to find the best position, in which the difference between the measured PPV and the predicted PPV is as low as possible. Theoretically, each member of the swarm makes a decision depending on the following factors:
  • The best results are obtained through the personal experience of each individual during the search completed in each iteration.
  • Experienced individuals in the swarm help the others to achieve the best results in the generated entire swarm population.
During the initialization phase, a certain number of individuals (i.e., particles, each of which encodes a feasible solution) are placed in a random pattern within the search domain. The optimization of the objective function is driven by the pre-defined coefficients C1 and C2, which weight the attraction towards the personal best position ( p b e s t ) of each individual particle and the global best position ( g b e s t ) among the populated particles, respectively [30].
The hybrid PSO-ANN ensemble method starts with the initialization of random particles. In this process, the ANN connection factors (weights and biases) are represented by the positions of the particles. In the subsequent step, the initial particles (biases and weights) are trained over different iterations to stabilize the weights and to converge the computing error (measured using different statistical indices). The convergence of the computing error is achieved by updating the positions of the particles through the velocity equation (Equation (1)). The values of g b e s t (the lowest computing error achieved so far by the swarm) and p b e s t (the lowest computing error achieved so far by each particle) were updated in each iteration using Equation (1) to obtain the best solution of the problem until the relevant condition (the lowest error) was satisfied.
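The two update factors above combine into PSO's standard velocity rule. The sketch below is a toy illustration, not the authors' MATLAB code: it minimizes a simple stand-in objective with the coefficient values later selected in this study (C1 = C2 = 1.75, inertia weight 0.75); in PSO-ANN the objective would be the network's prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Stand-in cost; in PSO-ANN this would be the ANN's prediction error."""
    return float(np.sum(x ** 2))

dim, n_particles = 5, 30
c1, c2, w = 1.75, 1.75, 0.75          # coefficient values used in this study
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))

pbest = pos.copy()                                    # personal best positions
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()              # global best position
gbest_f = float(pbest_f.min())
init_best = gbest_f

for it in range(100):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # velocity rule: inertia + cognitive pull (pbest) + social pull (gbest)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -1.0, 1.0)                     # common velocity clamp
    pos = pos + vel
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    if pbest_f.min() < gbest_f:                       # update the global best
        gbest_f = float(pbest_f.min())
        gbest = pbest[np.argmin(pbest_f)].copy()

print(gbest_f <= init_best)  # → True
```

Since gbest is only replaced when a particle improves on it, the best error is monotone non-increasing over the iterations.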

2.3. MPMR

The basic aim of the MPMR model is to maximize the minimum probability of correctly handling future data points, as in the classification setting. The advantage of this model is that it makes minimal assumptions about the underlying distribution of the true function in the regression problem within its bounds [31]. MPMR maximizes the minimum probability that future predictions lie within some bound of the true regression function; hence, it has control over future predictions. It uses only two tuning parameters (the width of the radial basis function and the error-insensitive zone), and it also reduces the chance of over-fitting. MPMR provides an alternative justification for discriminative approaches. Furthermore, this model closely follows the formulation of classification proposed by Marshall and Olkin [32], which was later improved by Bertsimas and Popescu [33].

2.4. ELM

ELM is an advancement of single-layer feed-forward networks (SLFNs) developed by Huang et al. [34]. Its fast learning speed and small training error make it a non-linear model at the cost of a linear one. ELM assigns its hidden-layer weights randomly and determines the output weights analytically, meaning it is semi-random and the weights are not tuned through back-propagation. The background theory of ELM shows that although the neurons present in the hidden layer are important, it is not necessary for them to be tuned, and the learning process can be carried out without tuning the hidden neurons [34]. The basic topological structure of ELM is presented in Figure 3, comprising an input layer, a feature optimization space, and an output layer. For a detailed discussion of this issue, refer to the studies conducted by Huang et al. [35].
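The random-then-analytic training described above fits in a few lines. The sketch below is our own minimal illustration on toy data (the paper's final ELM used 15 hidden sigmoid neurons): it draws the hidden weights at random and solves for the output weights with a Moore-Penrose pseudo-inverse instead of back-propagation.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=15):
    """Train an ELM: random hidden weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never tuned
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy stand-in for the normalized 7-input dataset (illustrative only).
X = rng.random((56, 7))
y = X @ rng.random(7)                             # a learnable smooth target
W, b, beta = elm_fit(X, y)
train_mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(train_mse < np.var(y))  # the analytic fit drives training error down
```

The single pseudo-inverse solve is what gives ELM its speed: training cost is one linear least-squares problem, with no iterative weight updates.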

2.5. BBO-ANN

The evolutionary algorithm BBO was proposed by Simon [36]. The basic idea of BBO is based on biogeography concepts: (i) the migration of habitants (species) from one habitat (island) to another, (ii) the emergence of habitants, and (iii) the extinction of habitants. BBO is a popular optimization algorithm used to solve complex, non-linear real-world problems [37]. In this paper, BBO is used to optimize the network weights and biases of the ANN. Each habitat contains the network weights and biases as its features (habitants).
The main operation of BBO is the migration process, which involves an immigration rate (I) and an emigration rate (E). The exploration and exploitation tasks of BBO depend on the migration operators; this is a successful technique for combining local search with good convergence towards a global optimum. The BBO algorithm modifies the features of a selected habitat based on its immigration rate, and then chooses other habitats, based on their emigration rates, from which an inhabitant migrates to the selected habitat. Initially, BBO generates a number of candidate solutions (ANN configurations) equal to the number of habitats. After each run of the main loop, the habitat with the best (minimum) fitness value is stored, and this process continues for a maximum number of iterations or until an accepted fitness score is reached. The mean-square error (MSE) is used as the fitness function for the BBO algorithm. A more detailed review of the BBO algorithm can be found in references [37,38]. Algorithm 1 shows a rudimentary structure of BBO-ANN.
Algorithm 1: Basic structure of the BBO-ANN for predicting blast-induced ground vibration. The rand function produces a random number uniformly distributed in [0, 1]; the jth dimensional lower and upper bounds are lj and uj, respectively.
  •  Select calibration and validation dataset
  • Begin ANN calibration period
  •     Get ANN learning operators in BBO (decision variables (N));
  •     Set objective/fitness function (MSE);
  •     Initialize habitats (say S);
  •     Set mutation probability (mk);
  •     Calculate E and I.
  •     Evaluate the fitness measure for every habitat;
  •     Sort habitats (ascending order) according to the fitness value;
  •     B = best so far habitat (least fitness valued habitat);
  •     for it = 1 to maximum iteration do
  •       for i = 1 to S do
  •         for j =1 to N do
  •           if rand() < I of ith habitat then
  •             choose an emigrating habitat with a probability proportional to E;
  •             Replace jth habitant of the immigrating habitat with a corresponding value of the emigrating habitat;
  •           end if
  •         end for
  •         for j = 1 to N do
  •           if rand() < mk then
  •             Update jth habitant with lj + rand() × (uj − lj);
  •           end if
  •         end for
  •         Evaluate fitness value of the ith habitat;
  •       end for
  •       Elitism
  •       Sort habitats.
  •       B = Keep the good solution;
  •     end for
  •  End
  •  Acquire the optimal parameter set for ANN using B;
  •  ANN test
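The migration and mutation steps of Algorithm 1 can be sketched as follows. This is a hedged, minimal illustration rather than the authors' implementation: the cost function is a stand-in for the ANN's MSE, and the rates I and E are taken as simple linear functions of habitat rank.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(h):
    """Fitness to minimize; in BBO-ANN this is the network's MSE."""
    return float(np.sum(h ** 2))

S, N = 20, 6                # habitats and features (ANN weights/biases)
mk = 0.015                  # mutation probability, as tuned in this study
lj, uj = -1.0, 1.0          # lower/upper bound of each feature
pop = rng.uniform(lj, uj, (S, N))
f_init = min(cost(h) for h in pop)

for it in range(100):
    order = np.argsort([cost(h) for h in pop])
    pop = pop[order]                               # best habitat first
    elite = pop[0].copy()                          # elitism: remember the best
    rank = np.arange(S)
    I = (rank + 1) / S                             # worse habitat: immigrates more
    E = 1.0 - I                                    # better habitat: emigrates more
    for i in range(S):
        for j in range(N):
            if rng.random() < I[i]:
                # roulette-wheel choice of the emigrating habitat by E
                src = rng.choice(S, p=E / E.sum())
                pop[i, j] = pop[src, j]
            if rng.random() < mk:                  # mutation
                pop[i, j] = lj + rng.random() * (uj - lj)
    pop[-1] = elite                                # re-insert the elite habitat

best = min(cost(h) for h in pop)
print(best <= f_init)  # elitism prevents losing the best solution → True
```

Because the elite habitat is re-inserted after every generation, the best fitness found can never regress, which is what the "Elitism" step in Algorithm 1 guarantees.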

3. Field Investigation

Two quarry mines in the Shur river dam region, located in the southwest of Iran, were investigated in this study. Andesite and tuff were the types of bed rock in these mines. The blasting method was performed to fragment rock mass in the monitored mines. Controlling blasting environmental issues like flyrock and ground vibration was considered to be an important task there.
A great deal of equipment was used for construction purposes and to assist workers at the mine sites. In addition, a residential area lies very close to the mines. As a result, there was always a high risk of blast-induced ground vibration damaging nearby residents and building structures, and the blasting engineers therefore needed to predict, monitor, and control the effects of ground vibrations on them.
Based on the recommendations of the mining engineers on site, ground vibration was measured for every single blasting operation. Given the above, the environmental risk due to ground vibration at the blasting mines in the study area was high, and therefore the development of any model or technique that can minimize this risk is useful. To this end, a research program was carried out to predict blast-induced ground vibration in these quarry mines. A total of 80 blasting events were monitored and 80 sets of data were prepared. In this database, several parameters affecting the PPV were recorded, including the burden (B), spacing (S), stemming (ST), powder factor (PF), maximum weight charge per delay (W), rock mass rating (RMR), and distance from the blasting point (D). These parameters were used as the input parameters in the modeling process of the predictive models, and PPV was used as the output parameter.
To measure the PPV values, a MR2002-CE SYSCOM seismograph was installed on the site. This instrument can measure PPV values in the range of 0.001 to 115 mm/s. Also, the values of B, S, ST, PF, and W were measured by controlling the blast-hole charge.
To measure D, a GPS (global positioning system) device was also used. The ranges of the model inputs and outputs, together with some other information, are provided in Table 1. Furthermore, the histograms of the input and output parameters are shown in Figure 4. According to this figure, for the B parameter, 23, 20, 19, and 18 data values fell in the ranges of 0–3.2 m, 3.2–3.5 m, 3.5–3.8 m, and 3.8–4.5 m, respectively.

4. Empirical Methods to Predict PPV

In this study, three well-known empirical methods, namely the US Bureau of Mines (USBM, Washington, DC, USA) [39], Indian Standard [40], and Ambraseys–Hendron [41] models, were used to predict PPV. These methods are only related to W and D parameters, and are formulated as:
PPV = Q \left[ \frac{D}{\sqrt{W}} \right]^{z}
PPV = Q \left[ \frac{W}{D^{2/3}} \right]^{z}
PPV = Q \left[ \frac{D}{W^{0.33}} \right]^{z}
Equations (3)–(5) show the USBM, Indian Standard, and Ambraseys–Hendron methods. In these equations, Q and z are the site constants, and can be computed using the SPSS software. Using the database and our analysis, Equations (6)–(8) are updated as follows:
PPV = 0.367 \left[ \frac{D}{\sqrt{W}} \right]^{0.228}
PPV = 0.447 \left[ \frac{W}{D^{2/3}} \right]^{0.283}
PPV = 0.351 \left[ \frac{D}{W^{0.33}} \right]^{0.237}
Note that to construct the empirical methods, the datasets used were first normalized, in the same way as for the AI models. In other words, Equations (6)–(8) are based on the normalized datasets. The performance of these empirical methods is evaluated in Section 6.
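For reference, Equations (6)–(8) can be wrapped as plain functions. As in the text, D, W, and the returned PPV are in normalized units; the helper names are ours, and the default site constants are the values fitted in this study.

```python
def ppv_usbm(D, W, Q=0.367, z=0.228):
    """USBM predictor (Equation (6)), square-root charge scaling."""
    return Q * (D / W ** 0.5) ** z

def ppv_indian_standard(D, W, Q=0.447, z=0.283):
    """Indian Standard predictor (Equation (7))."""
    return Q * (W / D ** (2.0 / 3.0)) ** z

def ppv_ambraseys_hendron(D, W, Q=0.351, z=0.237):
    """Ambraseys-Hendron predictor (Equation (8)), cube-root charge scaling."""
    return Q * (D / W ** 0.33) ** z

# Example with normalized inputs (illustrative values).
print(round(ppv_usbm(0.5, 0.8), 4))
```

Because these predictors depend only on W and D, they ignore the other five measured parameters, which is one reason the AI models outperform them in Section 6.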

5. Development of BBO-ANN, PSO-ANN, MPMR, and ELM to Predict PPV

The algorithms for the prediction of PPV were developed with the help of MATLAB sub-routines. The structure of the models drew on an input matrix (x) defined by x = (B, S, ST, PF, W, RMR, D), which provided the predictor variables, while the PPV induced by blasting was denoted as the target variable (y). In any modeling process, an important task is to find the appropriate sizes of the training and testing datasets. Therefore, in this research, 70% of the total dataset was randomly selected and used to develop the models, and the developed models were tested on the remaining data. In other words, 56 and 24 datasets were used to develop and test the models, respectively. Prior to model development, the whole dataset was normalized to the range of zero to one. All the models (DIRECT-ANN, PSO-ANN, BBO-ANN, MPMR, and ELM) were tuned using trial and error in order to optimize the PPV prediction. The values of the tuning parameters of the models were selected initially and thereafter varied in trials until the best fitness measures were achieved.
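The data preparation just described (min-max normalization to [0, 1], then a random 70/30 split of the 80 records) can be sketched as follows; the random matrix merely stands in for the measured dataset.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the 80 records: 7 inputs plus PPV in the last column.
data = rng.random((80, 8))

# Min-max normalization of every column to the range [0, 1].
mn, mx = data.min(axis=0), data.max(axis=0)
norm = (data - mn) / (mx - mn)

# Random 70/30 split: 56 training and 24 testing records.
idx = rng.permutation(len(norm))
train_set, test_set = norm[idx[:56]], norm[idx[56:]]

print(train_set.shape, test_set.shape)  # → (56, 8) (24, 8)
```

Keeping the per-column minima and maxima (mn, mx) is important in practice, since predictions on the normalized scale must be mapped back to physical PPV units with the same constants.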
As stated in the literature [5,6,13], the most important task in ANN modeling is to choose a proper number of neurons in the hidden layer. In this study, different ANN models were constructed using 2–12 hidden neurons, as shown in Table 2, and according to the results, the best performance (highest R2) was obtained by the 7 × 5 × 1 architecture (seven inputs, one hidden layer with five neurons, and one output layer). Additionally, the sigmoid activation function was used in the ANN modeling. It is worth mentioning that this architecture was employed in the DIRECT-ANN, PSO-ANN, and BBO-ANN modeling of this study.
For the DIRECT-ANN algorithm, DIRECT uses an initial solution set that is assigned as a qualified n-dimensional vector (a set of simplex vertices) between the upper and lower boundaries. DIRECT is driven by a set of operations that depend on the cost function value. The termination condition of this algorithm occurs when the simplex vertices are close enough to each other. Note that the number of generic iterations and the mesh size were 1724 and 4096, respectively.
The most important work in PSO-ANN modeling is to select the appropriate values for the PSO parameters. By reviewing the literature, it was found that the cognitive acceleration (C1), social acceleration (C2), number of particles, number of iterations, and inertia weight are the most important parameters in PSO-ANN. The first step was to determine the most appropriate values of C1 and C2. For this work, different values of C1 and C2 were tested and their performances were evaluated based on R2, as shown in Table 3. Note that the values of 1.333, 2.667, 1.5, 2, and 1.75 were selected as the values of C1 and C2 in some studies. Hence, these values were tested in the present study. From Table 3, it can be seen that model number 5 has the best R2, therefore the values of 1.75 and 1.75 are selected as the C1 and C2 values, respectively.
In order to select the appropriate value of inertia weight, some previous studies were reviewed. Based on the literature, the value of 0.75 was used as the inertia weight. In this step, the number of particles (swarm size) was determined. For this work, different values for the number of particles were tested and their performances were checked based on R2, as shown in Table 4. According to Table 4, model number 4 with the number of particles = 350 had the best performance. Hence, the value of 350 was selected as the number of particles of this study. In the next step, the number of iterations was determined. In the literature, different values such as 400 and 450 were selected as the number of iterations. To determine the maximum number of iterations of this study, a 7 × 5 × 1 structure of ANN, C1 = 1.75, C2 = 1.75, inertia weight = 0.75 and an iteration number of 1000 were used. According to the obtained results, after an iteration number of 400, there were no significant changes in the network results. Therefore, the value of 400 was selected as the maximum number of iterations of this study.
Regarding the modeling process of BBO-ANN, the BBO has a number of parameters to initially set for a better ANN model performance. After an initial trial and error process for BBO parameters, the final parameter values were as follows: (i) Elitism rate: 0.3, (ii) Mutation probability: 0.015, (iii) Maximum value of immigration and emigration rate: 1. Figure 5 shows the process flow chart of the proposed BBO-ANN. Note that the different values for the number of habitats were tested in this study, as shown in Table 5. Based on this Table, model number 8 had the best performance with the highest R2 value. Hence, 350 was selected as the number of habitats in the BBO-ANN modeling for this study.
After the trial process, the final architecture of ELM consisted of 15 hidden neurons with the sigmoid activation function for training and validation. Meanwhile, the MPMR model had an error tube width of ε = 0.002 and C = 0.9.
To investigate the performance of the models, the root mean square error (RMSE), R2, the ratio of RMSE to the standard deviation of the observations (RSR), the mean absolute error (MAE), and the degree of agreement (d) were taken into account; RMSE, MAE, R2, and d are shown in Equations (9)–(12) [42,43,44,45,46,47,48,49,50,51,52,53,54,55,56]:
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(PPV_a - PPV_p\right)^2}
MAE = \frac{1}{n}\sum_{i=1}^{n}\left| PPV_a - PPV_p \right|
R^2 = \frac{\sum_{i=1}^{n}\left(PPV_a - PPV_{mean}\right)^2 - \sum_{i=1}^{n}\left(PPV_a - PPV_p\right)^2}{\sum_{i=1}^{n}\left(PPV_a - PPV_{mean}\right)^2}
d = 1 - \frac{\sum_{i=1}^{n}\left(PPV_a - PPV_p\right)^2}{\sum_{i=1}^{n}\left(\left|PPV_p - PPV_{mean}\right| + \left|PPV_a - PPV_{mean}\right|\right)^2}
where PPV_p is the predicted PPV obtained using the proposed models, PPV_a is the actual PPV, PPV_mean is the average of the PPV data, and n is the number of data values.
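Equations (9)–(12), plus the RSR ratio, translate directly into code. The function below is our own phrasing of these standard indices, not the authors' script; for a perfect prediction it returns RMSE = MAE = RSR = 0 and R2 = d = 1.

```python
import numpy as np

def metrics(actual, predicted):
    """RMSE, MAE, R2, RSR, and degree of agreement d (Equations (9)-(12))."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    err = a - p
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_tot = float(np.sum((a - a.mean()) ** 2))
    r2 = (ss_tot - float(np.sum(err ** 2))) / ss_tot
    rsr = rmse / float(a.std())               # RMSE over observation std
    d = 1.0 - float(np.sum(err ** 2)) / float(
        np.sum((np.abs(p - a.mean()) + np.abs(a - a.mean())) ** 2))
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "RSR": rsr, "d": d}

actual = [0.2, 0.4, 0.6, 0.8]
print(metrics(actual, actual)["RMSE"])  # perfect prediction → 0.0
```

Note the R2 form used here matches Equation (11): the explained fraction of the total sum of squares, which can differ slightly from the squared correlation coefficient reported by some software.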

6. Results and Discussion

The PPV values predicted by all the models for the testing phase are given in Table 6. In this table, k is the ratio of the actual PPV to the predicted PPV for each dataset ( k = Actual PPV / Predicted PPV ). Based on the predicted values, Table 7 and Table 8 show the values of the different statistical indices of the models for both the training and testing phases. Additionally, Figure 6 shows the scatter plots of the actual and predicted PPV for the four soft computing techniques and the empirical methods in the testing phase. From these results, it is evident that all the models performed efficiently in predicting PPV in terms of the statistical indices. Regarding the prediction accuracy, R2 was found to be highest for BBO-ANN (R2 = 0.988) compared to the other seven models: DIRECT-ANN (R2 = 0.981), PSO-ANN (R2 = 0.972), MPMR (R2 = 0.971), ELM (R2 = 0.965), USBM (R2 = 0.747), Indian Standard (R2 = 0.799), and Ambraseys–Hendron (R2 = 0.724).
Furthermore, in terms of the prediction error (i.e., the lower the error, the better the model), the lowest value was found for BBO-ANN (MAE = 0.022, RMSE = 0.026, RSR = 0.109) compared to DIRECT-ANN (MAE = 0.024, RMSE = 0.036, RSR = 0.151), PSO-ANN (MAE = 0.034, RMSE = 0.041, RSR = 0.174), MPMR (MAE = 0.034, RMSE = 0.040, RSR = 0.169), ELM (MAE = 0.037, RMSE = 0.045, RSR = 0.188), USBM (MAE = 0.100, RMSE = 0.117, RSR = 0.494), Indian standard (MAE = 0.087, RMSE = 0.105, RSR = 0.444), and Ambraseys–Hendron (MAE = 0.105, RMSE = 0.123, RSR = 0.517).
To check the consistency of the developed models, the degree of agreement (d) was calculated using Equation (12), and the highest value was recorded for the BBO-ANN model. Therefore, it can be concluded that the BBO-ANN model (d = 0.997) had the best performance, followed by DIRECT-ANN (d = 0.994), MPMR (d = 0.992), PSO-ANN (d = 0.991), ELM (d = 0.991), Indian Standard (d = 0.943), USBM (d = 0.924), and Ambraseys–Hendron (d = 0.914).
Moreover, for a better representation in terms of model deviations, the receiver operating characteristic (ROC) curve was plotted (Figure 7). It is evident that all the models captured a good relationship when determining the PPV during training, and the lowest deviation was recorded for BBO-ANN followed by DIRECT-ANN, PSO-ANN, MPMR, and ELM. During the testing period, the BBO-ANN model outperformed the rivals in terms of all the fitness parameters. The results analyzed showed a consistent performance of BBO-ANN during both training and testing periods.
Furthermore, during the training period, the convergence curves of the two metaheuristic-optimized ANN models (Figure 8) showed that BBO achieves a lower MSE (0.000899) than PSO (0.001777). Figure 8 shows the convergence plot of both hybridized ANN models. From this analysis, it is evident that the BBO-ANN model reduces the fitness measure (MSE) significantly compared to PSO-ANN for the same number of iterations. Therefore, based on the above analysis, BBO-ANN can be considered a new reliable technique for PPV analysis. In the present study, a sensitivity analysis was also performed using Yang and Zang's [57] method to assess the impact of the input parameters on PPV. This method has been used in several studies [58,59,60], and is formulated as:
r_{ij} = \frac{\sum_{k=1}^{n} \left( y_{ik} \times y_{ok} \right)}{\sqrt{\sum_{k=1}^{n} y_{ik}^{2} \sum_{k=1}^{n} y_{ok}^{2}}}
where n is the number of data values (80 in this study), and y_{ik} and y_{ok} are the input and output parameters, respectively. The value of r_{ij} for each input parameter varies between zero and one, with higher r_{ij} values indicating a stronger influence on the output parameter (PPV in this study). Figure 9 shows the r_{ij} values for all input parameters; it can be seen that W, with an r_{ij} of 0.986, was the main parameter influencing PPV.
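The r_{ij} measure above is the cosine similarity between each input column and the output vector; a vectorized sketch (the function name is illustrative):

```python
import numpy as np

def cosine_sensitivity(X, y):
    """r_ij of Yang and Zang: cosine similarity between each input
    column of X and the output vector y; values near 1 indicate a
    strong influence of that input on the output."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    num = X.T @ y                                       # sum of y_ik * y_ok
    den = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())
    return num / den
```

Note that the measure is scale-invariant: multiplying an input column by a constant leaves its r_{ij} unchanged, so it reflects directional alignment rather than magnitude.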

7. Conclusions

Ground vibration is considered to be the most adverse effect induced by blasting. Accordingly, predicting ground vibration is necessary and requires appropriate prediction models. In this study, PPV was used as a descriptor to evaluate ground vibration. To predict the blast-induced PPV, this study proposed two novel hybrid AI models, BBO-ANN and DIRECT-ANN; in other words, one stochastic metaheuristic algorithm (BBO) and one deterministic optimization algorithm (DIRECT) were combined with the ANN model. To the best of our knowledge, this is the first work to predict blast-induced PPV using the DIRECT-ANN and BBO-ANN models. To demonstrate model reliability and effectiveness, the PSO-ANN, MPMR, and ELM models, together with three empirical models (USBM, Indian Standard, and Ambraseys–Hendron), were also employed. In the first step, the empirical models were used to predict PPV; their performance was not good enough, as the R2 values of 0.799, 0.747, and 0.724 obtained from the Indian Standard, USBM, and Ambraseys–Hendron models showed that more accurate predictions were needed. To consider all the attributes needed to predict the PPV, seven input parameters, namely B, S, ST, PF, W, RMR, and D, were used in the modeling. Although the ELM, MPMR, and PSO-ANN models, with R2 values of 0.963, 0.971, and 0.972, respectively, predicted PPV with reasonable performance, the DIRECT-ANN and BBO-ANN models were the most accurate: with R2 values of 0.981 and 0.988, respectively, they possessed superior predictive ability compared to the other models. In other words, the effectiveness of the BBO and DIRECT methods for improving the ANN model's performance was confirmed. Additionally, the sensitivity analysis showed that W was the main parameter influencing PPV.
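The three empirical models compared above are scaled-distance attenuation laws. The sketch below uses their commonly cited functional forms; the site constants K and B are hypothetical placeholders that must be fitted by regression to measured blast data, and the exact forms and constants used by the authors are not reproduced in this excerpt.

```python
import numpy as np

# D: distance from the blast point (m); W: maximum charge per delay (kg)
# K, B: site constants (hypothetical; fit them to monitored PPV data)

def ppv_usbm(D, W, K, B):
    """USBM (Duvall & Petkof) square-root scaled-distance law."""
    return K * (D / np.sqrt(W)) ** (-B)

def ppv_indian_standard(D, W, K, B):
    """Indian Standard (IS-6922) form."""
    return K * (W ** (2.0 / 3.0) / D) ** B

def ppv_ambraseys_hendron(D, W, K, B):
    """Ambraseys-Hendron cube-root scaled-distance law."""
    return K * (D / W ** (1.0 / 3.0)) ** (-B)
```

In all three forms, the predicted PPV decreases with distance and increases with the charge per delay, which is the physical behavior the regression constants encode.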
These findings confirm that the BBO-ANN and DIRECT-ANN models are significant and reliable artificial intelligence techniques for producing precise predictions of PPV and can be used in various fields. Additionally, the use of deterministic optimization algorithms, such as the DIRECT method, to improve the ANN performance and other soft computing methods can be recommended.
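As a high-level illustration of the hybridization endorsed here, the sketch below uses a simplified BBO loop to search the weight space of a small one-hidden-layer network by minimizing training MSE. It is a conceptual sketch, not the authors' implementation: rank-based migration with elitism and Gaussian mutation replaces the emigration-rate roulette selection of standard BBO, and all function names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_forward(w, X, n_hidden):
    """One-hidden-layer network (tanh hidden units, linear output)
    with all weights packed into the flat vector w."""
    n_in = X.shape[1]
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden]; i += n_hidden
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y, n_hidden):
    return np.mean((ann_forward(w, X, n_hidden) - y) ** 2)

def bbo_train(X, y, n_hidden=5, n_habitats=30, n_iter=100, p_mut=0.05):
    """Simplified BBO: each habitat is a candidate weight vector; good
    habitats emigrate coordinates to poor ones, plus random mutation."""
    dim = X.shape[1] * n_hidden + 2 * n_hidden + 1
    pop = rng.normal(0.0, 1.0, (n_habitats, dim))
    for _ in range(n_iter):
        fit = np.array([mse(w, X, y, n_hidden) for w in pop])
        pop = pop[np.argsort(fit)]              # best (lowest MSE) first
        lam = np.arange(n_habitats) / (n_habitats - 1)  # immigration rates
        new_pop = pop.copy()
        for i in range(1, n_habitats):          # elitism: habitat 0 is kept
            for d in range(dim):
                if rng.random() < lam[i]:       # immigrate this coordinate
                    src = rng.integers(0, i)    # from a better-ranked habitat
                    new_pop[i, d] = pop[src, d]
                if rng.random() < p_mut:        # random mutation
                    new_pop[i, d] += rng.normal(0.0, 0.1)
        pop = new_pop
    fit = np.array([mse(w, X, y, n_hidden) for w in pop])
    return pop[np.argmin(fit)], fit.min()
```

Because the best habitat is carried over unchanged each generation, the best training MSE is non-increasing over iterations, which is the monotone convergence behavior shown in Figure 8.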

Author Contributions

Conceptualization, M.H.; Data curation, M.H.; Investigation, G.L.; Methodology, G.L., D.K., H.N.R. and B.R.; Supervision, P.S.; Validation, G.L.; Writing—review & editing, M.H., G.L., D.K., H.N.R., B.R. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the National Key Research and Development Program (2016YFC0600901) and by Project 51574224 of the National Natural Science Foundation of China (NSFC).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ANFIS: Adaptive neuro-fuzzy inference system
ANN: Artificial neural network
BBO: Biogeography-based optimization
B: Burden
R2: Coefficient of determination
d: Degree of agreement
D: Distance from the blasting point
XGBoost: Extreme gradient boosting
ELM: Extreme learning machine
FIS: Fuzzy inference system
GPR: Gaussian process regression
GRNN: General regression neural network
HKM: K-means clustering algorithm
KNN: K-nearest neighbors
W: Maximum weight charge per delay
MAE: Mean absolute error
MSE: Mean-square error
MPMR: Minimax probability machine regression
PSO: Particle swarm optimization
PPV: Peak particle velocity
PF: Powder factor
RF: Random forest
RSR: Ratio of RMSE to the standard deviation of the observations
RMR: Rock mass rating
RMSE: Root mean square error
SLFN: Single-layer feed-forward network
S: Spacing
ST: Stemming
SVM: Support vector machine
SVR: Support vector regression
USBM: United States Bureau of Mines

References

  1. Hustrulid, W. Blasting Principles for Open Pit Mining: General Design Concepts; Balkema: Rotterdam, The Netherlands, 1999. [Google Scholar]
  2. Hasanipanah, M.; Monjezi, M.; Shahnazar, A.; Jahed Armaghani, D.; Farazmand, A. Feasibility of indirect determination of blast induced ground vibration based on support vector machine. Measurement 2015, 75, 289–297. [Google Scholar] [CrossRef]
  3. Hasanipanah, M.; Faradonbeh, R.S.; Amnieh, H.B.; Armaghani, D.J.; Monjezi, M. Forecasting blast-induced ground vibration developing a CART model. Eng. Comput. 2017, 33, 307–316. [Google Scholar] [CrossRef]
  4. Yang, H.; Hasanipanah, M.; Tahir, M.M.; Bui, D.T. Intelligent Prediction of Blasting-Induced Ground Vibration Using ANFIS Optimized by GA and PSO. Nat. Resour. Res. 2019. [Google Scholar] [CrossRef]
  5. Khandelwal, M.; Singh, T.N. Prediction of blast-induced ground vibration using artificial neural network. Int. J. Rock Mech. Min. Sci. 2009, 46, 1214–1222. [Google Scholar] [CrossRef]
  6. Hajihassani, M.; Jahed Armaghani, D.; Monjezi, M.; Mohamad, E.T.; Marto, A. Blast-induced air and ground vibration prediction: A particle swarm optimization-based artificial neural network approach. Environ. Earth Sci. 2015, 74, 2799–2817. [Google Scholar] [CrossRef]
  7. Zhou, J.; Shi, X.; Li, X. Utilizing gradient boosted machine for the prediction of damage to residential structures owing to blasting vibrations of open pit mining. J. Vib. Control 2016, 22, 3986–3997. [Google Scholar] [CrossRef]
  8. Amiri, M.; Bakhshandeh Amnieh, H.; Hasanipanah, M.; Mohammad Khanli, L. A new combination of artificial neural network and K-nearest neighbors models to predict blast-induced ground vibration and air-overpressure. Eng. Comput. 2016, 32, 631–644. [Google Scholar] [CrossRef]
  9. Chen, H.; Asteris, P.G.; Armaghani, D.J.; Gordan, B.; Pham, B.T. Assessing dynamic conditions of the retaining wall using two hybrid intelligent models. Appl. Sci. 2019, 9, 1042. [Google Scholar] [CrossRef] [Green Version]
  10. Asteris, P.G.; Nikoo, M. Artificial bee colony-based neural network for the prediction of the fundamental period of infilled frame structures. Neural Comput. Appl. 2019. [Google Scholar] [CrossRef]
  11. Asteris, P.G.; Tsaris, A.K.; Cavaleri, L.; Repapis, C.C.; Papalou, A.; Di Trapani, F.; Karypidis, D.F. Prediction of the fundamental period of infilled RC frame structures using artificial neural networks. Comput. Intell. Neurosci. 2016, 2016, 5104907. [Google Scholar] [CrossRef] [Green Version]
  12. Asteris, P.G.; Kolovos, K.G.; Douvika, M.G.; Roinos, K. Prediction of self-compacting concrete strength using artificial neural networks. Eur. J. Environ. Civ. Eng. 2016, 20, s102–s122. [Google Scholar] [CrossRef]
  13. Asteris, P.G.; Armaghani, D.J.; Hatzigeorgiou, G.; Karayannis, C.G.; Pilakoutas, K. Predicting the shear strength of reinforced concrete beams using Artificial Neural Networks. Comput. Concr. 2019, 24, 469–488. [Google Scholar]
  14. Sarir, P.; Chen, J.; Asteris, P.G.; Armaghani, D.J.; Tahir, M.M. Developing GEP tree-based, neuro-swarm, and whale optimization models for evaluation of bearing capacity of concrete-filled steel tube columns. Eng. Comput. 2019. [Google Scholar] [CrossRef]
  15. Asteris, P.G.; Apostolopoulou, M.; Skentou, A.D.; Antonia Moropoulou, A. Application of Artificial Neural Networks for the Prediction of the Compressive Strength of Cement-based Mortars. Comput. Concr. 2019, 24, 329–345. [Google Scholar]
  16. Samui, P.; Hoang, N.D.; Nhu, V.H.; Nguyen, M.L.; Ngo, P.T.T.; Bui, D.T. A New Approach of Hybrid Bee Colony Optimized Neural Computing to Estimate the Soil Compression Coefficient for a Housing Construction Project. Appl. Sci. 2019, 9, 4912. [Google Scholar] [CrossRef] [Green Version]
  17. Bui, H.B.; Nguyen, H.; Choi, Y.; Bui, X.N.; Nguyen-Thoi, T.; Zandi, Y. A Novel Artificial Intelligence Technique to Estimate the Gross Calorific Value of Coal Based on Meta-Heuristic and Support Vector Regression Algorithms. Appl. Sci. 2019, 9, 4868. [Google Scholar] [CrossRef] [Green Version]
  18. Nguyen, H.L.; Pham, B.T.; Son, L.H.; Thang, N.T.; Ly, H.B.; Le, T.T.; Ho, L.S.; Le, T.H.; Bui, D.T. Adaptive Network Based Fuzzy Inference System with Meta-Heuristic Optimizations for International Roughness Index Prediction. Appl. Sci. 2019, 9, 4715. [Google Scholar] [CrossRef] [Green Version]
  19. Khandelwal, M. Blast-induced ground vibration prediction using support vector machine. Eng. Comput. 2011, 27, 193–200. [Google Scholar] [CrossRef]
  20. Mohamadnejad, M.; Gholami, R.; Ataei, M. Comparison of intelligence science techniques and empirical methods for prediction of blasting vibrations. Tunn. Undergr. Space Technol. 2012, 28, 238–244. [Google Scholar] [CrossRef]
  21. Ghasemi, E.; Ataei, M.; Hashemolhosseini, H. Development of a fuzzy model for predicting ground vibration caused by rock blasting in surface mining. J. Vib. Control 2013, 19, 755–770. [Google Scholar] [CrossRef]
  22. Jahed Armaghani, D.; Momeni, E.; Abad, S.V.A.N.K.; Khandelwal, M. Feasibility of ANFIS model for prediction of ground vibrations resulting from quarry blasting. Environ. Earth Sci. 2015, 74, 2845–2860. [Google Scholar] [CrossRef] [Green Version]
  23. Nguyen, H.; Bui, X.N.; Bui, H.B.; Cuong, D.T. Developing an XGBoost model to predict blast-induced peak particle velocity in an open-pit mine: A case study. Acta Geophys. 2019. [Google Scholar] [CrossRef]
  24. Arthur, C.K.; Temeng, V.A.; Ziggah, Y.Y. Novel approach to predicting blast-induced ground vibration using Gaussian process regression. Eng. Comput. 2019. [Google Scholar] [CrossRef]
  25. Nguyen, H.; Drebenstedt, C.; Bui, X.N.; Bui, D.T. Prediction of Blast-Induced Ground Vibration in an Open-Pit Mine by a Novel Hybrid Model Based on Clustering and Artificial Neural Network. Nat. Resour. Res. 2019. [Google Scholar] [CrossRef]
  26. Jones, D.R.; Perttunen, C.D.; Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 1993, 79, 157–181. [Google Scholar] [CrossRef]
  27. Kvasov, D.E.; Sergeyev, Y.D. Deterministic approaches for solving practical black-box global optimization problems. Adv. Eng. Softw. 2015, 80, 58–66. [Google Scholar] [CrossRef] [Green Version]
  28. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453. [Google Scholar] [CrossRef] [Green Version]
  29. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  30. Hajihassani, M.; Jahed Armaghani, D.; Kalatehjari, R. Applications of particle swarm optimization in geotechnical engineering: A comprehensive review. Geotech. Geol. Eng. 2018, 36, 705–722. [Google Scholar] [CrossRef]
  31. Lanckriet, G.; Ghaoui, L.E.; Bhattacharyya, C.; Jordan, M.I. Minimax probability machine. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge MA, USA, 2002; pp. 801–807. [Google Scholar]
  32. Marshall, A.W.; Olkin, I. Multivariate chebyshev inequalities. Ann. Math. Stat. 1960, 31, 1001–1014. [Google Scholar] [CrossRef]
  33. Bertsimas, D.; Popescu, I. Optimal inequalities in probability theory: A convex optimization approach. SIAM J. Optim. 2005, 15, 780–804. [Google Scholar] [CrossRef]
  34. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the IEEE International Joint Conference on Neural Networks, Budapest, Hungary, 25–29 July 2004; pp. 985–990. [Google Scholar]
  35. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 513–529. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  37. Haiping, M.; Simon, D.; Siarry, P.; Yang, Z.; Fei, M. Biogeography-based optimization: A 10-year review. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 391–407. [Google Scholar]
  38. Roy, B.; Singh, M.P. An empirical-based rainfall-runoff modelling using optimization technique. Int. J. River Basin Manag. 2019, 1–19. [Google Scholar] [CrossRef]
  39. Duvall, W.I.; Petkof, B. Spherical Propagation of Explosion Generated Strain Pulses in Rock; US Bureau of Mines Report of Investigation 5483; U.S. Department of the Interior, Bureau of Mines: Washington, DC, USA, 1959.
  40. Indian Standard Institute. Criteria for Safety and Design of Structures Subjected to Underground Blast; ISI Bull IS-6922; Bureau of Indian Standards: New Delhi, India, 1973. [Google Scholar]
  41. Ambraseys, N.R.; Hendron, A.J. Dynamic Behavior of Rock Masses: Rock Mechanics in Engineering Practices; Wiley: London, UK, 1968. [Google Scholar]
  42. Asteris, P.G.; Roussis, P.C.; Douvika, M.G. Feed-forward neural network prediction of the mechanical properties of sandcrete materials. Sensors 2017, 17, 1344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Qi, C.; Fourie, A. Cemented paste backfill for mineral tailings management: Review and future perspectives. Miner. Eng. 2019, 144, 106025. [Google Scholar] [CrossRef]
  44. Asteris, P.G.; Argyropoulos, I.; Cavaleri, L.; Rodrigues, H.; Varum, H.; Thomas, J.; Lourenço, P.B. Masonry Compressive Strength Prediction using Artificial Neural Networks. In Proceedings of the International Conference on Transdisciplinary Multispectral Modeling and Cooperation for the Preservation of Cultural Heritage, Athens, Greece, 10–13 October 2018; Springer: Cham, Switzerland, 2018; pp. 200–224. [Google Scholar]
  45. Qi, C.; Fourie, A.; Chen, Q.; Tang, X.; Zhang, Q.; Gao, R. Data-driven modelling of the flocculation process on mineral processing tailings treatment. J. Clean. Prod. 2018, 196, 505–516. [Google Scholar] [CrossRef]
  46. Xu, H.; Zhou, J.; Asteris, P.J.; Jahed Armaghani, D.; Tahir, M.M. Supervised Machine Learning Techniques to the Prediction of Tunnel Boring Machine Penetration Rate. Appl. Sci. 2019, 9, 3715. [Google Scholar] [CrossRef] [Green Version]
  47. Le, L.T.; Nguyen, H.; Dou, J.; Zhou, J. A Comparative Study of PSO-ANN, GA-ANN, ICA-ANN, and ABC-ANN in Estimating the Heating Load of Buildings’ Energy Efficiency for Smart City Planning. Appl. Sci. 2019, 9, 2630. [Google Scholar] [CrossRef] [Green Version]
  48. Hajihassani, M.; Shah Abdullah, S.; Asteris, P.G.; Jahed Armaghani, D. A Gene Expression Programming Model for Predicting Tunnel Convergence. Appl. Sci. 2019, 9, 4650. [Google Scholar] [CrossRef] [Green Version]
  49. Ren, Q.; Li, M.; Zhang, M.; Shen, Y.; Si, W. Prediction of Ultimate Axial Capacity of Square Concrete-Filled Steel Tubular Short Columns Using a Hybrid Intelligent Algorithm. Appl. Sci. 2019, 9, 2802. [Google Scholar] [CrossRef] [Green Version]
  50. Cavaleri, L.; Asteris, P.G.; Psyllaki, P.P.; Douvika, M.G.; Skentou, A.D.; Vaxevanidis, N.M. Prediction of Surface Treatment Effects on the Tribological Performance of Tool Steels Using Artificial Neural Networks. Appl. Sci. 2019, 9, 2788. [Google Scholar] [CrossRef] [Green Version]
  51. Zhou, J.; Li, E.; Wei, H.; Li, C.; Qiao, Q.; Jahed Armaghani, D. Random Forests and Cubist Algorithms for Predicting Shear Strengths of Rockfill Materials. Appl. Sci. 2019, 9, 1621. [Google Scholar] [CrossRef] [Green Version]
  52. Dao, D.V.; Trinh, S.H.; Ly, H.B.; Pham, B.T. Prediction of Compressive Strength of Geopolymer Concrete Using Entirely Steel Slag Aggregates: Novel Hybrid Artificial Intelligence Approaches. Appl. Sci. 2019, 9, 1113. [Google Scholar] [CrossRef] [Green Version]
  53. Asteris, P.G.; Moropoulou, A.; Skentou, A.D.; Apostolopoulou, M.; Mohebkhah, A.; Cavaleri, L.; Rodrigues, H.; Varum, H. Stochastic Vulnerability Assessment of Masonry Structures: Concepts, Modeling and Restoration Aspects. Appl. Sci. 2019, 9, 243. [Google Scholar] [CrossRef] [Green Version]
  54. Qi, C.; Tang, X.; Dong, X.; Chen, Q.; Fourie, A.; Liu, E. Towards Intelligent Mining for Backfill: A genetic programming-based method for strength forecasting of cemented paste backfill. Miner. Eng. 2019, 133, 69–79. [Google Scholar] [CrossRef]
  55. Asteris, P.G.; Nozhati, S.; Nikoo, M.; Cavaleri, L.; Nikoo, M. Krill herd algorithm-based neural network in structural seismic reliability evaluation. Mech. Adv. Mater. Struct. 2019, 26, 1146–1153. [Google Scholar] [CrossRef]
  56. Huang, L.; Asteris, P.G.; Koopialipoor, M.; Armaghani, D.J.; Tahir, M.M. Invasive Weed Optimization Technique-Based ANN to the Prediction of Rock Tensile Strength. Appl. Sci. 2019, 9, 5372. [Google Scholar] [CrossRef] [Green Version]
  57. Yang, Y.; Zang, O. A hierarchical analysis for rock engineering using artificial neural networks. Rock Mech. Rock Eng. 1997, 30, 207–222. [Google Scholar] [CrossRef]
  58. Faradonbeh, R.S.; Armaghani, D.J.; Majid, M.Z.A.; Tahir, M.M.D.; Murlidhar, B.R.; Monjezi, M.; Wong, H.M. Prediction of ground vibration due to quarry blasting based on gene expression programming: A new model for peak particle velocity prediction. Int. J. Environ. Sci. Technol. 2018, 13, 1453–1464. [Google Scholar] [CrossRef] [Green Version]
  59. Chen, W.; Hasanipanah, M.; Rad, H.N.; Armaghani, D.J.; Tahir, M.M. A new design of evolutionary hybrid optimization of SVR model in predicting the blast-induced ground vibration. Eng. Comput. 2019. [Google Scholar] [CrossRef]
  60. Rad, H.N.; Bakhshayeshi, I.; Jusoh, W.A.W.; Tahir, M.M.; Foong, L.K. Prediction of Flyrock in mine blasting: A new computational intelligence approach. Nat. Resour. Res. 2019. [Google Scholar] [CrossRef]
Figure 1. Waves generated by blasting.
Figure 2. A view of the P, S, and R waves generated by blasting.
Figure 3. A simple architecture of the extreme learning machine (ELM).
Figure 4. Histograms of the input and output parameters.
Figure 5. Process flow chart of the proposed BBO-ANN.
Figure 6. Scatter plot displaying the actual PPV values versus the predicted PPV values on the testing dataset using the (a) BBO-ANN, (b) MPMR, (c) ELM, (d) PSO-ANN, (e) DIRECT-ANN, (f) USBM, (g) Indian Standard, and (h) Ambraseys–Hendron methods.
Figure 7. ROC plot displaying the accuracy versus deviation for predicted PPV values on the training and testing dataset using the predictive models.
Figure 8. Convergence curve for PSO-ANN and BBO-ANN.
Figure 9. Sensitivity analysis results.
Table 1. The ranges of the model inputs and outputs and some other information.

| Parameter | Unit | Min | Max | Mean |
|---|---|---|---|---|
| B | m | 2.7 | 4.1 | 3.50 |
| S | m | 3.4 | 5.3 | 4.37 |
| ST | m | 1.8 | 3.4 | 2.70 |
| PF | gr/cm3 | 153 | 213 | 172.24 |
| W | kg | 180 | 1450 | 791 |
| RMR | - | 38 | 55 | 45.38 |
| D | m | 308 | 944 | 563.45 |
| PPV | mm/s | 3.3 | 9.9 | 6.37 |
Table 2. Testing the different architectures of ANN with their R2.

| Model Architecture | R2 (Train) | R2 (Test) |
|---|---|---|
| 7 × 2 × 1 | 0.895 | 0.887 |
| 7 × 3 × 1 | 0.915 | 0.914 |
| 7 × 4 × 1 | 0.927 | 0.918 |
| 7 × 5 × 1 | 0.931 | 0.922 |
| 7 × 6 × 1 | 0.927 | 0.913 |
| 7 × 7 × 1 | 0.929 | 0.910 |
| 7 × 8 × 1 | 0.915 | 0.912 |
| 7 × 9 × 1 | 0.918 | 0.901 |
| 7 × 10 × 1 | 0.921 | 0.881 |
| 7 × 11 × 1 | 0.925 | 0.869 |
| 7 × 12 × 1 | 0.930 | 0.854 |
Table 3. Testing the different values of C1 and C2 with their R2.

| Model No. | C1 | C2 | R2 (Train) | R2 (Test) |
|---|---|---|---|---|
| 1 | 1.333 | 2.667 | 0.922 | 0.917 |
| 2 | 2.667 | 1.333 | 0.929 | 0.924 |
| 3 | 1.5 | 1.5 | 0.932 | 0.918 |
| 4 | 2 | 2 | 0.935 | 0.931 |
| 5 | 1.75 | 1.75 | 0.938 | 0.935 |
| 6 | 1.5 | 1.75 | 0.930 | 0.922 |
| 7 | 1.75 | 1.5 | 0.925 | 0.921 |
Table 4. Testing the different values of number of particles with their R2.

| Model No. | Number of Particles | R2 (Train) | R2 (Test) |
|---|---|---|---|
| 1 | 50 | 0.909 | 0.905 |
| 2 | 100 | 0.915 | 0.904 |
| 3 | 150 | 0.919 | 0.917 |
| 4 | 200 | 0.926 | 0.920 |
| 5 | 250 | 0.929 | 0.922 |
| 6 | 300 | 0.935 | 0.930 |
| 7 | 350 | 0.943 | 0.935 |
| 8 | 400 | 0.939 | 0.934 |
| 9 | 450 | 0.934 | 0.933 |
| 10 | 500 | 0.930 | 0.924 |
Table 5. Testing the different values of number of habitats with their R2.

| Model No. | Number of Habitats | R2 (Train) | R2 (Test) |
|---|---|---|---|
| 1 | 30 | 0.889 | 0.834 |
| 2 | 50 | 0.972 | 0.984 |
| 3 | 100 | 0.96 | 0.968 |
| 4 | 150 | 0.974 | 0.976 |
| 5 | 200 | 0.97 | 0.974 |
| 6 | 250 | 0.976 | 0.98 |
| 7 | 300 | 0.976 | 0.982 |
| 8 | 350 | 0.976 | 0.984 |
| 9 | 400 | 0.976 | 0.978 |
| 10 | 450 | 0.974 | 0.974 |
| 11 | 500 | 0.974 | 0.978 |
Table 6. Predicted PPV values (normalized) obtained from the models for only the testing phase.

| No. | BBO-ANN P | k | MPMR P | k | ELM P | k | PSO-ANN P | k | DIRECT-ANN P | k | USBM P | k | Indian Standard P | k | Ambraseys–Hendron P | k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.114 | 1.039 | 0.080 | 1.484 | 0.087 | 1.363 | 0.110 | 1.075 | 0.127 | 0.934 | 0.261 | 0.455 | 0.183 | 0.649 | 0.280 | 0.423 |
| 2 | 0.150 | 1.195 | 0.141 | 1.270 | 0.093 | 1.920 | 0.197 | 0.909 | 0.171 | 1.047 | 0.275 | 0.653 | 0.191 | 0.941 | 0.296 | 0.607 |
| 3 | 0.049 | 0.000 | 0.052 | 0.000 | 0.033 | 0.000 | 0.012 | 0.000 | 0.001 | 0.000 | 0.076 | 0.000 | 0.000 | 0.000 | 0.119 | 0.000 |
| 4 | 0.475 | 0.953 | 0.454 | 0.997 | 0.466 | 0.972 | 0.432 | 1.049 | 0.482 | 0.939 | 0.442 | 1.025 | 0.412 | 1.100 | 0.447 | 1.012 |
| 5 | 0.414 | 1.027 | 0.424 | 1.004 | 0.380 | 1.119 | 0.379 | 1.121 | 0.410 | 1.037 | 0.393 | 1.081 | 0.374 | 1.138 | 0.396 | 1.073 |
| 6 | 0.455 | 1.095 | 0.464 | 1.074 | 0.438 | 1.137 | 0.429 | 1.162 | 0.510 | 0.977 | 0.409 | 1.218 | 0.386 | 1.291 | 0.413 | 1.207 |
| 7 | 0.270 | 0.930 | 0.274 | 0.916 | 0.224 | 1.119 | 0.267 | 0.941 | 0.258 | 0.971 | 0.352 | 0.712 | 0.322 | 0.778 | 0.358 | 0.701 |
| 8 | 0.371 | 0.958 | 0.375 | 0.950 | 0.322 | 1.105 | 0.362 | 0.981 | 0.337 | 1.056 | 0.378 | 0.941 | 0.341 | 1.042 | 0.385 | 0.924 |
| 9 | 0.343 | 0.939 | 0.383 | 0.840 | 0.369 | 0.873 | 0.318 | 1.015 | 0.327 | 0.986 | 0.384 | 0.840 | 0.336 | 0.960 | 0.394 | 0.819 |
| 10 | 0.412 | 1.025 | 0.347 | 1.217 | 0.405 | 1.043 | 0.371 | 1.139 | 0.367 | 1.151 | 0.371 | 1.139 | 0.326 | 1.295 | 0.380 | 1.112 |
| 11 | 0.455 | 1.095 | 0.464 | 1.074 | 0.438 | 1.137 | 0.429 | 1.162 | 0.510 | 0.977 | 0.409 | 1.218 | 0.386 | 1.291 | 0.413 | 1.207 |
| 12 | 0.564 | 1.018 | 0.638 | 0.900 | 0.669 | 0.859 | 0.627 | 0.916 | 0.669 | 0.858 | 0.659 | 0.871 | 0.645 | 0.891 | 0.661 | 0.869 |
| 13 | 0.247 | 0.850 | 0.252 | 0.832 | 0.215 | 0.975 | 0.249 | 0.842 | 0.196 | 1.071 | 0.355 | 0.591 | 0.315 | 0.667 | 0.363 | 0.578 |
| 14 | 0.986 | 1.014 | 0.973 | 1.028 | 0.984 | 1.016 | 0.957 | 1.045 | 1.044 | 0.958 | 1.249 | 0.800 | 1.231 | 0.813 | 1.252 | 0.798 |
| 15 | 0.780 | 0.955 | 0.749 | 0.994 | 0.770 | 0.967 | 0.747 | 0.997 | 0.751 | 0.992 | 0.630 | 1.182 | 0.642 | 1.161 | 0.626 | 1.189 |
| 16 | 0.720 | 1.068 | 0.724 | 1.062 | 0.733 | 1.050 | 0.692 | 1.111 | 0.749 | 1.027 | 0.606 | 1.269 | 0.625 | 1.230 | 0.600 | 1.281 |
| 17 | 0.646 | 0.983 | 0.659 | 0.965 | 0.658 | 0.966 | 0.648 | 0.981 | 0.703 | 0.903 | 0.563 | 1.129 | 0.591 | 1.075 | 0.556 | 1.143 |
| 18 | 0.686 | 1.016 | 0.616 | 1.132 | 0.606 | 1.151 | 0.670 | 1.042 | 0.690 | 1.011 | 0.441 | 1.583 | 0.473 | 1.476 | 0.433 | 1.612 |
| 19 | 0.731 | 1.021 | 0.738 | 1.011 | 0.735 | 1.015 | 0.704 | 1.060 | 0.749 | 0.997 | 0.575 | 1.299 | 0.589 | 1.267 | 0.570 | 1.309 |
| 20 | 0.518 | 0.947 | 0.512 | 0.960 | 0.513 | 0.957 | 0.526 | 0.933 | 0.509 | 0.964 | 0.436 | 1.126 | 0.468 | 1.048 | 0.428 | 1.147 |
| 21 | 0.294 | 1.074 | 0.309 | 1.022 | 0.281 | 1.124 | 0.303 | 1.045 | 0.317 | 0.996 | 0.369 | 0.856 | 0.389 | 0.812 | 0.364 | 0.868 |
| 22 | 0.301 | 0.964 | 0.332 | 0.874 | 0.322 | 0.900 | 0.365 | 0.796 | 0.381 | 0.762 | 0.377 | 0.771 | 0.395 | 0.735 | 0.372 | 0.781 |
| 23 | 0.309 | 1.024 | 0.278 | 1.139 | 0.318 | 0.993 | 0.308 | 1.028 | 0.322 | 0.981 | 0.381 | 0.831 | 0.401 | 0.788 | 0.375 | 0.843 |
| 24 | 0.282 | 0.958 | 0.296 | 0.915 | 0.313 | 0.864 | 0.330 | 0.820 | 0.305 | 0.888 | 0.388 | 0.697 | 0.401 | 0.674 | 0.384 | 0.705 |

P: predicted PPV; k: actual PPV/predicted PPV.
Table 7. Statistical indices obtained from the applied predictive models using the training phase.

| Statistical Index | BBO-ANN | PSO-ANN | MPMR | ELM | DIRECT-ANN | USBM | Indian Standard | Ambraseys–Hendron |
|---|---|---|---|---|---|---|---|---|
| MAE | 0.024 | 0.034 | 0.036 | 0.035 | 0.036 | 0.112 | 0.103 | 0.115 |
| RMSE | 0.029 | 0.042 | 0.043 | 0.044 | 0.051 | 0.135 | 0.125 | 0.138 |
| RSR | 0.129 | 0.181 | 0.185 | 0.191 | 0.218 | 0.579 | 0.539 | 0.594 |
| d | 0.996 | 0.991 | 0.991 | 0.990 | 0.988 | 0.865 | 0.891 | 0.856 |
| R2 | 0.983 | 0.967 | 0.965 | 0.963 | 0.953 | 0.684 | 0.724 | 0.668 |
Table 8. Statistical indices obtained from the applied predictive models using the testing phase.

| Statistical Index | BBO-ANN | PSO-ANN | MPMR | ELM | DIRECT-ANN | USBM | Indian Standard | Ambraseys–Hendron |
|---|---|---|---|---|---|---|---|---|
| MAE | 0.022 | 0.034 | 0.034 | 0.037 | 0.024 | 0.100 | 0.087 | 0.105 |
| RMSE | 0.026 | 0.041 | 0.040 | 0.045 | 0.036 | 0.117 | 0.105 | 0.123 |
| RSR | 0.109 | 0.174 | 0.169 | 0.188 | 0.151 | 0.494 | 0.444 | 0.517 |
| d | 0.997 | 0.991 | 0.992 | 0.991 | 0.994 | 0.924 | 0.943 | 0.914 |
| R2 | 0.988 | 0.972 | 0.971 | 0.965 | 0.981 | 0.747 | 0.799 | 0.724 |

Li, G.; Kumar, D.; Samui, P.; Nikafshan Rad, H.; Roy, B.; Hasanipanah, M. Developing a New Computational Intelligence Approach for Approximating the Blast-Induced Ground Vibration. Appl. Sci. 2020, 10, 434. https://doi.org/10.3390/app10020434