Article

Machine-Learning-Based Classification Approaches toward Recognizing Slope Stability Failure

1 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City, Vietnam
2 Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam
3 Institute of Research and Development, Duy Tan University, Da Nang, Vietnam
4 Geographic Information System Group, Department of Business and IT, University of South-Eastern Norway, N-3800 Bø i Telemark, Norway
5 RIKEN Center for Advanced Intelligence Project, Goal-Oriented Technology Research Group, Disaster Resilience Science Team, Tokyo 103-0027, Japan
6 Centre of Tropical Geoengineering (Geotropik), School of Civil Engineering, Faculty of Engineering, Universiti Teknologi Malaysia, Johor Bahru, 81310 Johor, Malaysia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2019, 9(21), 4638; https://doi.org/10.3390/app9214638
Submission received: 7 October 2019 / Revised: 23 October 2019 / Accepted: 26 October 2019 / Published: 31 October 2019
(This article belongs to the Special Issue Artificial Intelligence in Smart Buildings)

Abstract

In this paper, the authors investigated the applicability of machine-learning-based models for slope stability assessment. To do this, several well-known machine-learning-based methods, namely multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT), were selected to evaluate the stability of a slope through estimating the factor of safety (FOS). Subsequently, a comparative classification was carried out based on five stability categories. Based on the respective total scores (the summation of the scores obtained for the training and testing stages) of 15, 35, 48, 15, 50, 60, and 57, acquired for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, it was concluded that RF outperformed the other intelligent models. The results of the statistical indices also prove the excellent predictions of the optimized structures of the ANN (MLP) and RF techniques.

1. Introduction

The stability of local slopes is a critical issue that needs to be meticulously investigated due to its great impact on adjacent engineering projects (excavations and transportation roads, for instance). In addition, slope failures cause great damage (e.g., the loss of property and human life) all over the world every year. In Iran, for example, based on a rough estimation announced by the Iranian Landslide Working Party (2007), slope failures kill an average of 187 people per year in this country [1]. The saturation degree, as well as other intrinsic properties of the soil, significantly affects the likelihood of slope failure [2,3]. Up to now, many scientists have attempted to provide effective modeling for slope stability problems. Drawbacks of traditional methods, such as the necessity of using laboratory equipment and their high level of complexity, prevent them from being a simple solution [4]. Moreover, they cannot be used as a comprehensive solution, because each analysis is limited to a specific slope condition (e.g., slope angle, height, groundwater level, soil properties, etc.). Different types of limit equilibrium methods (LEMs), the finite element method (FEM), and numerical solutions have been extensively employed for this engineering problem [5,6,7,8]. The development of design charts has been investigated for years in order to provide a reliable tool for slope stability assessment [9]. However, this approach is not devoid of defects either: generating an efficient design chart is time-consuming and costly, and determining the precise mechanical parameters is a difficult task [10,11]. Hence, the application of artificial intelligence techniques has been increasingly highlighted due to their quick performance and sufficient accuracy. A prominent advantage of these approaches is that they are able to detect the non-linear relationship between the target parameter(s) and its key factors. Also, an ANN can be implemented using any chosen number of hidden nodes [12,13]. For example, the suitable efficiency of artificial neural networks (ANNs) [14,15,16] and support vector machines (SVMs) [17] has been proven in many geotechnical studies. A review of the literature makes the complexity of the slope stability problem evident. What makes the issue even more complicated and critical is the construction of various structures in the vicinity of slopes (i.e., a noticeable number of loads applied on a rigid footing). The amount of surcharge and the distance from the slope’s crest are two parameters influencing the stability of the slope in question [18]. This has encouraged scholars to present equations for calculating the factor of safety (FOS) of pure slopes or slopes receiving a static load [19,20,21,22,23]. Prior to this study, Chakraborty and Goswami estimated the FOS of 200 slopes with different geometric and shear strength parameters using multiple linear regression (MLR) and ANN models. They also compared the obtained results with an FEM model, and an acceptable rate of accuracy was obtained for both applied models; in that research, the ANN outperformed the MLR. Also, Pei et al. [23] effectively used random forest (RF) and regression tree predictive models in functional soil-landscape modeling for regionalizing the depth of the failure plane and the soil bulk density. They developed a classification for detecting safe and hazardous slopes by means of FOS calculations.
In some cases, random forest and logistic regression techniques have been implemented for susceptibility modeling of shallow landslide phenomena, and the applied models were found to have an excellent capability for this task. The SVM model synthesized with a sequential minimal optimization (SMO) algorithm has been used for surface grinding quality prediction [24] and many medical classifications [25]. Also, radial basis function regression (RBFR) has been successfully employed in genetics for predicting quantitative traits [26] and in agriculture for the discrimination of cover crops in olive orchards [27]. Another machine learning method used in the current paper is the lazy k-nearest neighbor (IBK) tool. As the name implies, lazy learning is the essence of this model, which has shown good performance in many fields, such as spam e-mail filtering [28] and land cover assessment [29].
Although plenty of effort has been devoted to employing various soft computing tools for evaluating the stability of slopes [30,31], no prior study was found to address a comprehensive comparison of the applicability of the mentioned models within the same paper. Therefore, in this paper, we investigated the efficiency of seven machine-learning models (i.e., in their optimal structures), namely multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT), for appraising the stability of a cohesive soil slope. Furthermore, since the authors did not find any former application of IBK, RBFR, or SMO-SVM for estimating the FOS of a slope, a further novelty of the current study is the application of these models for the first time in this field. In this regard, the Optum G2 software was used to compute the FOS of 630 different slopes. Four conditioning factors that affect the values of the FOS were chosen: the undrained shear strength (Cu), slope angle (β), setback distance ratio (b/B), and applied surcharge on the shallow foundation installed over the slope (w). The utilized models were implemented in the Waikato Environment for Knowledge Analysis (WEKA) software (stable version, Waikato, New Zealand) using the training dataset (i.e., 80% of the entire dataset). Then, the performance of each model was evaluated using the testing samples (i.e., the remaining 20%). The results are presented by means of several statistical indices. In the end, a new classification based on different degrees of stability was carried out to evaluate and compare the results of the models used. In addition, design solution charts were developed using the outputs of the most efficient model.

2. Data Collection and Methodology

To acquire a reliable dataset, the use of a single-layer slope was proposed. It was supposed that a purely cohesive soil, which only has undrained cohesive strength (Cu), comprised the body of this slope (see Figure 1). The main factors that affect the strength of the slope against failure (i.e., the FOS) were considered to be the slope angle (β), setback distance ratio (b/B), and applied surcharge on the footing installed over the slope (w). These parameters are illustrated in Figure 1. To calculate the FOS, the Optum G2 software, a comprehensive finite element program for geotechnical stability and deformation analysis, was used [32]. Regarding the possible values for the strength of the soil (Cu) and applied surcharge (w), the different geometries of the slope (β), and the positions of the rigid foundation (b/B), 630 possible stages were modelled and analyzed in Optum G2 to calculate the FOS of each stage as the output. In the modeling process, the mechanical parameters of Poisson’s ratio, soil unit weight, and internal friction angle were assigned to be 0.35, 18 kN/m³, and 0°, respectively. Also, Young’s modulus (E) was varied for every value of Cu. In this sense, E was set to 1000, 2000, 3500, 5000, 9000, 15,000, and 30,000 kPa for the following respective values of Cu: 25, 50, 75, 100, 200, 300, and 400 kPa.
The graphical relationship between the FOS and its influential factors is depicted in Figure 2a–d. In these charts, Cu, β, b/B, and w are each placed on the horizontal (x) axis versus the obtained FOS (i.e., from the Optum G2 analyses) on the vertical (y) axis. The FOS ranges within [0.8, 28.55] for all graphs. Not surprisingly, as a general view, the slope experienced more stability when a higher value of Cu was assigned to the soil (see Figure 2a). As Figure 2b,d illustrate, the parameters β (15°, 30°, 45°, 60°, and 75°; see Table 1) and w (50, 100, and 150 kN/m²) were inversely proportional to the FOS. This means the slope was more likely to fail when the model was run with higher values of β and w. According to Figure 2c, the FOS did not show any significant sensitivity to changes in the b/B ratio (0, 1, 2, 3, 4, and 5).
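As a sketch of how such a full-factorial design can be enumerated, the short Python snippet below reproduces the count of 630 stages from the parameter levels reported above and the Cu-to-E pairing given in the previous paragraph (the variable names are illustrative; the actual FOS of each stage still has to come from the Optum G2 analyses):

```python
from itertools import product

# Parameter levels reported in the text; Young's modulus E is tied to Cu.
cu_to_E = {25: 1000, 50: 2000, 75: 3500, 100: 5000,
           200: 9000, 300: 15000, 400: 30000}   # kPa
betas = [15, 30, 45, 60, 75]                    # slope angle, degrees
b_over_B = [0, 1, 2, 3, 4, 5]                   # setback distance ratio
surcharges = [50, 100, 150]                     # w, kN/m^2

stages = [
    {"Cu": cu, "E": cu_to_E[cu], "beta": beta, "b/B": r, "w": w}
    for cu, beta, r, w in product(cu_to_E, betas, b_over_B, surcharges)
]
print(len(stages))  # 7 * 5 * 6 * 3 = 630 stages
```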
Table 1 provides an example of the dataset used in this paper. The dataset was randomly divided into training and testing sub-groups, with respective ratios of 0.8 (504 samples) and 0.2 (126 samples), based on previous studies [33]. Note that the training samples were used to train the MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT models, and their performance was validated by means of the testing dataset. A 10-fold cross-validation process was used to select the training and testing datasets (Figure 3). The results achieved were based on the best average accuracy for each classifier using 10-fold cross-validation (TFCV) with an 80%/20% training/testing split. In short, TFCV (a specific operation of k-fold cross-validation) divided our dataset into ten partitions. In the first fold, a specific set of eight partitions of the data was used to train each model, whereas the remaining two partitions were used for testing, and the accuracy for this fold was recorded. This process was repeated with different 80%/20% combinations of the data for training/testing, whose accuracies were likewise recorded. In the end, after all 10 folds were finished, the overall average accuracy was calculated [34,35].
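A minimal sketch of this validation scheme is given below, assuming scikit-learn as a stand-in for WEKA and using ShuffleSplit to approximate ten repeated 80%/20% training/testing splits (the random placeholder arrays stand in for the real input-FOS table):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import ShuffleSplit

# Placeholder data: X holds [Cu, beta, b/B, w]; y holds the Optum G2 FOS.
rng = np.random.default_rng(0)
X, y = rng.random((630, 4)), rng.random(630)

# Ten folds, each an 80%/20% training/testing split, then averaged.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = []
for train_idx, test_idx in cv.split(X):
    model = RandomForestRegressor(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))
print(f"mean R2 over the 10 folds: {np.mean(scores):.4f}")
```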
In the following, the aim was to evaluate the competency of the applied models through a novel classification-based procedure. In this sense, the actual values of the safety factor (i.e., obtained from the Optum G2 analysis) were stratified into five stability classes. Since the safety factor values varied from 0.8 to 28.55, the classification was carried out with respect to the ranges given below (Table 2):
As previously expressed, this study aimed to appraise the applicability of the seven most-employed machine learning models, namely multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT), toward evaluating the stability of a cohesive slope. For this purpose, the factor of safety (FOS) of the slope was estimated using the above-mentioned methods. The four slope stability effective factors chosen for this study were the undrained shear strength (Cu), slope angle (β), setback distance ratio (b/B), and applied surcharge on the footing installed over the slope (w). To prepare the input-target dataset, 630 different stages were developed and analyzed in the Optum G2 finite element software. As a result of this process, an FOS was derived for each performed stage. To create the required dataset, the values of Cu, β, b/B, and w were listed with the corresponding FOS. In the following, 80% of the prepared dataset (the training phase, consisting of 504 samples) was randomly selected for training the utilized models (MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT), and their effectiveness was evaluated by means of the remaining 20% of the dataset (the testing phase, consisting of 126 samples). The training process was carried out in the WEKA software, which is a widely used tool for classification and data mining tasks; many scholars have employed WEKA before for various simulation objectives [36,37]. To evaluate the efficiency of the implemented models, five statistical indices, namely the coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), relative absolute error (RAE in %), and root relative squared error (RRSE in %), were used to develop a color intensity ranking and present a visualized comparison of the results. It should be noted that these criteria have been broadly used in earlier studies [38]. Equations (1)–(5) describe the formulations of R2, MAE, RMSE, RAE, and RRSE, respectively.
$$R^{2} = 1 - \frac{\sum_{i=1}^{S} \left( Y_{i}^{\mathrm{predicted}} - Y_{i}^{\mathrm{observed}} \right)^{2}}{\sum_{i=1}^{S} \left( Y_{i}^{\mathrm{observed}} - \bar{Y}^{\mathrm{observed}} \right)^{2}} \quad (1)$$

$$\mathrm{MAE} = \frac{1}{S} \sum_{i=1}^{S} \left| Y_{i}^{\mathrm{observed}} - Y_{i}^{\mathrm{predicted}} \right| \quad (2)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{S} \sum_{i=1}^{S} \left( Y_{i}^{\mathrm{observed}} - Y_{i}^{\mathrm{predicted}} \right)^{2}} \quad (3)$$

$$\mathrm{RAE} = \frac{\sum_{i=1}^{S} \left| Y_{i}^{\mathrm{predicted}} - Y_{i}^{\mathrm{observed}} \right|}{\sum_{i=1}^{S} \left| Y_{i}^{\mathrm{observed}} - \bar{Y}^{\mathrm{observed}} \right|} \quad (4)$$

$$\mathrm{RRSE} = \sqrt{\frac{\sum_{i=1}^{S} \left( Y_{i}^{\mathrm{predicted}} - Y_{i}^{\mathrm{observed}} \right)^{2}}{\sum_{i=1}^{S} \left( Y_{i}^{\mathrm{observed}} - \bar{Y}^{\mathrm{observed}} \right)^{2}}} \quad (5)$$
In all the above equations, $Y_i^{\mathrm{observed}}$ and $Y_i^{\mathrm{predicted}}$ stand for the actual and predicted values of the FOS, respectively, the term S denotes the number of data points, and $\bar{Y}^{\mathrm{observed}}$ is the average of the actual values of the FOS. The outcomes of this paper are presented in various ways. In the next part, the competency of the applied models (i.e., MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT) for the approximation of the FOS is extensively presented, compared, and discussed.
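For reference, the five indices of Equations (1)–(5) can be computed directly from paired observed/predicted FOS vectors, as in the sketch below (the helper name and the sample numbers are illustrative):

```python
import numpy as np

def evaluation_indices(y_obs, y_pred):
    """Compute R2, MAE, RMSE, RAE (%), and RRSE (%) per Equations (1)-(5)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_pred - y_obs                  # prediction errors
    dev = y_obs - y_obs.mean()              # deviations from the observed mean
    return {
        "R2":    1.0 - np.sum(resid**2) / np.sum(dev**2),
        "MAE":   np.mean(np.abs(resid)),
        "RMSE":  np.sqrt(np.mean(resid**2)),
        "RAE%":  100.0 * np.sum(np.abs(resid)) / np.sum(np.abs(dev)),
        "RRSE%": 100.0 * np.sqrt(np.sum(resid**2) / np.sum(dev**2)),
    }

print(evaluation_indices([1.872, 1.196, 0.8154], [1.80, 1.25, 0.90]))
```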

2.1. Machine Learning Techniques

The present research outlines the application of various machine-learning-based predictive models, namely multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT), in slope stability assessment. Analysis was undertaken using WEKA software [39]. The mentioned models are described below.

2.1.1. Multiple Linear Regression (MLR)

Fitting a linear equation to the data samples (Figure 4) is the essence of the multiple linear regression (MLR) model. It tries to reveal the general relationship between two or more explanatory (independent) variables and a response (dependent) variable. Equation (6) defines the general form of the MLR [40]:
$$y = \rho_0 + \rho_1 x_1 + \rho_2 x_2 + \cdots + \rho_s x_s + \chi \quad (6)$$
where x and y are the independent and dependent variables, respectively. The terms ρ0, ρ1, ..., ρs indicate the unknown parameters of the MLR. Also, χ is the random variable in the MLR generic formula that has a normal distribution.
The main task of MLR is to estimate the unknown terms (i.e., ρ0, ρ1, ..., ρs) of Equation (6). By applying the least-squares technique, the practical form of the statistical regression method is represented as follows [40]:
$$y = p_0 + p_1 x_1 + p_2 x_2 + \cdots + p_s x_s + \mu \quad (7)$$
In the above equation, p0, p1, ..., ps are the approximated regression coefficients of ρ0, ρ1, ..., ρs, respectively. Also, the term μ gives the estimated error for the sample. Assuming that μ is the difference between the real and modelled y, the estimation of y is performed as follows:
$$\hat{y} = p_0 + p_1 x_1 + p_2 x_2 + \cdots + p_s x_s \quad (8)$$
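A compact least-squares sketch of Equations (6)–(8) in NumPy (the function and variable names are illustrative):

```python
import numpy as np

def fit_mlr(X, y):
    """Estimate [p0, p1, ..., ps] of Equation (8) by ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_mlr(coeffs, X):
    """Evaluate y_hat = p0 + p1*x1 + ... + ps*xs (Equation (8))."""
    return coeffs[0] + np.asarray(X) @ coeffs[1:]
```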

2.1.2. Multi-Layer Perceptron (MLP)

Imitating the human neural network, the artificial neural network (ANN) was first suggested by McCulloch and Pitts [41]. The non-linear rules between each set of inputs and their specific target(s) can be easily revealed using an ANN [42]. Many scholars have used different types of ANNs to model various engineering problems [14,16,20,38,43,44]. Despite the explicit benefits of ANNs, their most significant disadvantages lie in the risk of being trapped in local minima and of overfitting. A highly connected pattern forms the structure of an ANN, and the multi-layer perceptron is one of the most prevalent types of this model. A minimum of three layers is needed to form the MLP structure, and every layer contains a number of computational nodes called “neurons.” Figure 5a illustrates the schematic view of the MLP neural network that was used in this study. This network had two hidden layers comprising 10 neurons each. Every line in this structure is indicative of an interconnection weight (W). A more detailed view of the performance of a typical neuron is shown in Figure 5b:
Note that, in Figure 5b, the parameter hj is calculated as follows:
$$h_j = \sum_{i=1}^{s} W_{ji} X_i + b_j \quad (9)$$
where the terms W, X, and b stand for the weight, input, and bias of the utilized neuron, respectively. Also, the function f denotes the activation function that the neuron applies to produce its output.
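A sketch of the forward pass of the 4-10-10-1 structure in Figure 5a, with Equation (9) applied layer by layer, is given below (the random weights are placeholders; the actual weights of the paper's MLP were learned in WEKA):

```python
import numpy as np

def layer(x, W, b, f):
    # Equation (9): h_j = sum_i(W_ji * X_i) + b_j, then the activation f.
    return f(W @ x + b)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 4)), rng.normal(size=10)    # input -> hidden 1
W2, b2 = rng.normal(size=(10, 10)), rng.normal(size=10)   # hidden 1 -> hidden 2
W3, b3 = rng.normal(size=(1, 10)), rng.normal(size=1)     # hidden 2 -> output

x = np.array([25.0, 15.0, 0.0, 50.0])     # one sample: [Cu, beta, b/B, w]
h1 = layer(x, W1, b1, np.tanh)
h2 = layer(h1, W2, b2, np.tanh)
fos_hat = layer(h2, W3, b3, lambda z: z)  # linear output neuron
```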

2.1.3. Radial Basis Function Regression (RBFR)

The radial basis function (RBF) approach (shown in Figure 6) was first presented by Broomhead and Lowe [45] for exact function interpolation [46]. The RBF regression model attempts to fit a set of local kernels in a high-dimensional space to solve problems [45,47,48]. The positions of noisy instances are considered when executing the fitting, and the number of kernels is often estimated using a batch algorithm.
RBF regression operates by establishing an expectation function with the overall form of Equation (10):
$$F(x) = \sum_{i=1}^{m} k\!\left( \left\| x - x_i \right\| \right) r_i \quad (10)$$
where ‖x − x_i‖ represents the Euclidean norm and {k(‖x − x_i‖) | i = 1, 2, …, m} is a set of m RBFs, which are characterized by their non-linearity. Also, the regression coefficient is denoted by r_i in this formula. Every training sample (i.e., x_i) determines a center of the RBFs. In other words, the function is called “radial” because it depends only on the distance ψ between the center (x_i) and the evaluation point (x) (i.e., ψ = ‖x − x_i‖). The Gaussian k(ψ) = exp(−σψ²) for ψ ≥ 0 [49,50], the multi-quadric k(ψ) = √(δ² + ψ²) for δ ≥ 0 [51], and the thin-plate-spline k(ψ) = ψ² log ψ [52,53] functions are three extensively applied examples of kernels following the RBF rules.
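The sketch below illustrates Equation (10) with the Gaussian kernel in its exact-interpolation form, where every training sample is a kernel center and the coefficients r_i come from a linear system (the kernel width and the direct solver are assumptions; WEKA's RBFR additionally handles noise and kernel selection):

```python
import numpy as np

def gaussian_kernel(dist, sigma=1.0):
    # k(psi) = exp(-sigma * psi^2), one of the kernels listed above
    return np.exp(-sigma * dist**2)

def fit_rbf(X, y, sigma=1.0):
    # Pairwise distances psi = ||x - x_i||; solve K r = y for r (Eq. (10)).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(gaussian_kernel(dists, sigma), y)

def predict_rbf(X_train, r, X_new, sigma=1.0):
    dists = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return gaussian_kernel(dists, sigma) @ r  # F(x) = sum_i k(||x - x_i||) r_i
```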

2.1.4. Improved Support Vector Machine using Sequential Minimal Optimization Algorithm (SMO-SVM)

Support Vector Machine (SVM)

Separating different classes is the pivotal aim during the performance of the support vector machine (SVM). This happens through detecting the decision boundaries. SVMs have been shown to have suitable accuracy in various classification problems when dealing with high-dimensional and non-separable datasets [54,55,56]. Theoretically, SVMs act according to statistical learning theory [57] and aim to transform the problem from a non-linear to a linear feature space [58]. More specifically, assessing the support vectors that are set at the edge of the class descriptors will lead to locating the most appropriate hyperplane between the classes. Note that training cases that cannot be support vectors are known as neglected vectors [59]. A graphical description of the mentioned procedure is depicted in Figure 7:
In the SVMs, the formulation of the hyperplane (i.e., the decision surface) is as follows:
$$\sum_{j=1}^{V} l_j \, f_j \, K(x, x_j) = 0 \quad (11)$$
where lj, x, and fj represent the Lagrange coefficient, the vector drawn from the input space, and the jth output, respectively. Also, K(x, xj) is indicative of the inner-product kernel (i.e., the linear kernel function) and operates as given in Equation (12):
$$K(x, x_j) = x \cdot x_j \quad (12)$$
where x and xj stand for the input vector and input pattern related to the jth instance, respectively.
The vector w and the biasing threshold b are obtained from Equations (13) and (14) [60]:
$$w = \sum_{j=1}^{V} l_j \, f_j \, x_j \quad (13)$$
$$b = w \cdot x_k - f_k \quad (14)$$
Also, Q(l), which is a quadratic function (in li), is given using the following formula:
$$\min Q(l) = \min \left[ \frac{1}{2} \sum_{j=1}^{V} \sum_{h=1}^{V} f_j f_h K(x_j, x_h) \, l_j l_h - \sum_{j=1}^{V} l_j \right], \quad \text{where } 0 \le l_j \le C \text{ and } \sum_{j=1}^{V} l_j f_j = 0 \quad (15)$$

Sequential Minimal Optimization (SMO)

Sequential minimal optimization (SMO) is a simple algorithm that was used in this study to improve the performance of the SVM. When SMO relates the linear weight vector to the optimal values of lj, neither storing an extra kernel matrix nor setting up an iterative numerical routine is required. Each pair of Lagrange multipliers, such as l1 and l2 (l1 < l2), varies between 0 and C. During the SMO performance, l2 is computed first, and the extent of the diagonal constraint segment is determined based on the obtained l2 [60]. If the targets f1 and f2 are not equal, the bounds of Equation (18) apply; if they are equal, those of Equation (19) apply. Equation (16) defines the minimum along the direction of the constraint segment obtained using SMO:
$$l_2' = l_2 + \frac{f_2 \left( E_1 - E_2 \right)}{\lambda} \quad (16)$$
where Ej is the difference between the jth SVM output and the target. The parameter λ is given by the following equation:
$$\lambda = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2) \quad (17)$$
In the following, l2′ is clipped to the range [A, B]:
$$A = \max\left( 0,\ l_2 - l_1 \right), \qquad B = \min\left( C,\ C + l_2 - l_1 \right) \quad (18)$$
$$A = \max\left( 0,\ l_2 + l_1 - C \right), \qquad B = \min\left( C,\ l_2 + l_1 \right) \quad (19)$$
$$l_2'' = \begin{cases} B & \text{if } l_2' \ge B \\ l_2' & \text{if } A < l_2' < B \\ A & \text{if } l_2' \le A \end{cases} \quad (20)$$
$$l_1' = l_1 + \upsilon \left( l_2 - l_2'' \right), \qquad \upsilon = f_1 f_2 \quad (21)$$
Finally, l1 and l2 are moved to the bound with the lowest value of the objective function.
After optimizing the Lagrange multipliers, the SVM is eligible to give the output f1 using the threshold b1 if 0 < l1′ < C [60]. Equations (22) and (23) show the way that b1 and b2 are calculated:
$$b_1 = E_1 + f_1 \left( l_1' - l_1 \right) K(x_1, x_1) + f_2 \left( l_2'' - l_2 \right) K(x_1, x_2) + b \quad (22)$$
$$b_2 = E_2 + f_1 \left( l_1' - l_1 \right) K(x_1, x_2) + f_2 \left( l_2'' - l_2 \right) K(x_2, x_2) + b \quad (23)$$
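As a usage sketch, scikit-learn's SVR, whose LIBSVM backend solves the dual problem of Equation (15) with an SMO-type decomposition, can stand in for the WEKA implementation used in the paper (the kernel and C value are illustrative):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# C bounds the Lagrange multipliers l_j as in Equation (15); scaling the
# inputs first is standard practice for SVMs.
svm = make_pipeline(StandardScaler(), SVR(kernel="linear", C=10.0))
# svm.fit(X_train, y_train)
# fos_hat = svm.predict(X_test)
```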

2.1.5. Lazy k-Nearest Neighbor (IBK)

In the lazy learning procedure, generalization of the training samples is deferred until a query is produced; learners do not operate until classification time. Generating the objective function through a local estimation can be stated as the main advantage of the models associated with lazy learning. One of the most prevalent techniques following these rules is the k-nearest neighbor algorithm [61]. Multiple problems can be simulated with this approach because a local estimation is accomplished for every query to the system. Moreover, this feature keeps such systems capable of working even when the problem conditions change [62,63]. Even though lazy learning is quick to train, validation takes place over a longer time. Furthermore, since the complete training set is stored in this method, a large memory space is required. The k-nearest neighbor (IBK) classifier is a non-parametric model with a good background in classification and regression applications [61]. The number of nearest neighbors can be specified exactly in the object editor or set automatically by applying cross-validation up to a certain upper bound. IBK utilizes various search algorithms (linear by default) to expedite finding the nearest neighbors. Also, the default function for evaluating the position of samples is the Euclidean distance, which can be substituted by the Minkowski, Manhattan, or Chebyshev distance functions [64]. As is well-discussed in References [62,65], the distance from the validation data can be a valid criterion for weighting the estimations from more than one neighbor. Note that replacing the oldest training data with new samples takes place by determining a window size, which helps to keep the number of stored samples constant [66].
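A minimal IBK-style counterpart in scikit-learn is sketched below (k = 5 and distance weighting are illustrative choices, not the settings tuned in the paper):

```python
from sklearn.neighbors import KNeighborsRegressor

# Euclidean distance is the default metric, as in WEKA's IBK; "distance"
# weighting lets closer neighbours count more, as discussed above.
ibk = KNeighborsRegressor(n_neighbors=5, metric="euclidean",
                          weights="distance")
# ibk.fit(X_train, y_train)      # "training" merely stores the samples
# fos_hat = ibk.predict(X_test)  # all the work is deferred to query time
```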

2.1.6. Random Forest (RF)

The random forest (RF) is a capable tool used for classification problems [67]. RF follows ensemble learning rules by employing several classification trees [68]. For such an ensemble to show better results than its individual member trees, the individual classifiers must achieve greater-than-random accuracy and the ensemble members must be diverse [69]. During the RF implementation, the predictive relations are changed randomly to increase the diversity of the trees in the utilized forest. Developing the forest requires the user to specify the number of trees (t) and the number of variables that split the nodes (g). Note that either categorical or numerical variables are eligible for this task. The parameter t needs to be high enough to guarantee the reliability of the model. An unbiased estimate of the generalization error (GE), called the out-of-bag (OOB) error, is obtained while the random forest is being formed. Breiman [67] showed that RF produces a limiting value of the GE. The OOB error is computed as the ratio of the misclassification error over the total number of out-of-bag elements. RF appraises the prominence of the predictive variables by considering the increase of the OOB error caused by permuting the OOB data for the examined variable while leaving the other variables unchanged; the prominence of a predictive variable rises as the corresponding OOB error increases [70]. Since each tree is grown as a completely independent random case, the abundance of random trees reduces the likelihood of overfitting, which can be considered a key advantage of RF. The algorithm can also handle missing values automatically and shows suitable resistance against outliers when making predictions.
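The sketch below exposes the quantities discussed above, t (number of trees), g (variables tried per split), and the OOB error, using scikit-learn's RandomForestRegressor (the parameter values are illustrative):

```python
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=500,  # t: large enough for stability
                           max_features=2,    # g: variables tried at each node
                           oob_score=True,    # out-of-bag generalization check
                           random_state=0)
# rf.fit(X_train, y_train)
# print(rf.oob_score_)            # OOB estimate of predictive quality
# print(rf.feature_importances_)  # prominence of the predictive variables
```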

2.1.7. Random Tree (RT)

The random tree (RT) is a supervised classification model first introduced by Leo Breiman and Adele Cutler [67]. Similar to RF, ensemble learning forms the essence of RT, in which many individually operating learners are involved. To develop a decision tree, a bagging idea is used to provide a randomly chosen group of samples. The pivotal difference between a standard tree and a random forest (RF) is found in the splitting of the nodes: in RF, the split is executed using the best among a random subgroup of predictors, while a standard tree uses the elite split among all variables. The RT is an ensemble of tree forecasters (i.e., a forest) and can handle both regression and classification tasks. During the RT execution, the input data are given to each tree classifier, every existing tree performs the classification of the inputs, and the category with the highest frequency is the output of the system. Notably, the training error is calculated internally; neither cross-validation nor a separate bootstrap is needed to estimate the accuracy of the training stage. For regression problems, the output is computed as the average of the responses of all forest members [71]. The error of this scheme is the proportion of misclassified vectors to all vectors in the original data.
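scikit-learn has no direct RandomTree class; the sketch below approximates the behaviour described above by growing a single tree on a bootstrap sample while restricting each split to a random subset of the variables (all settings are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# idx = rng.integers(0, len(X_train), size=len(X_train))  # bagging draw

# max_features=2: the split is chosen among a random subgroup of predictors
# rather than the elite split over all variables, as in a random tree.
rt = DecisionTreeRegressor(max_features=2, random_state=0)
# rt.fit(X_train[idx], y_train[idx])
# fos_hat = rt.predict(X_test)
```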

3. Results and Discussion

The calculated values of R2, MAE, RMSE, RAE, and RRSE for estimating the FOS are tabulated in Table 3 and Table 4 for the training and testing datasets, respectively. Also, the total ranking obtained for the models is featured in Table 5. A color intensity scheme is used to denote the quality of the results graphically: a red scale is applied to the individual index results, where a higher value of R2 and lower values of MAE, RMSE, RAE, and RRSE are represented by a more intense red. In the last column, the intensity of a green scale is inversely proportional to the rank gained by each model.
At a glance, RF can be seen to be the outstanding model due to its highest total score. However, based on the acquired R2 for the training (0.9586, 0.9937, 0.9948, 0.9529, 1.000, 0.9997, and 0.9998, calculated for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, respectively) and testing (0.9649, 0.9939, 0.9955, 0.9653, 0.9837, 0.9985, and 0.9950, respectively) datasets, an acceptable correlation can be seen for all models. Referring to the RMSE (1.7366, 0.7131, 0.6231, 1.9183, 0.0000, 0.1486, and 0.1111) and RRSE (28.4887%, 11.6985%, 10.2221%, 31.4703%, 0.0000%, 2.4385%, and 1.8221%) obtained for the training dataset, the IBK predictive model presented the most effective training compared to the other models. Also, RT and RF can be seen as the second- and third-most accurate models, respectively, in the training stage. In addition, RBFR outperformed MLP, MLR, and SMO-SVM (as shown in Table 3).
The results of evaluating the performance of the models used are available in Table 4. Despite the fact that IBK showed an excellent performance during the training phase, a better validation was achieved using the RF, RBFR, RT, and MLP methods in this part. This claim is supported by the obtained values of R2 (0.9649, 0.9939, 0.9955, 0.9653, 0.9837, 0.9985, and 0.9950), MAE (1.1939, 0.5149, 0.3936, 1.0360, 0.8184, 0.2152, and 0.3492), and RAE (24.1272%, 10.4047%, 7.9549%, 20.9366%, 16.5388%, 4.3498%, and 7.0576%) for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, respectively. The respective values of 0.9985, 0.2152, 0.3312, 4.3498%, and 5.5145% computed for the R2, MAE, RMSE, RAE, and RRSE indices show that the highest level of accuracy in the validation phase was achieved by RF. Similar to the training phase, the lowest reliability was achieved by the SMO-SVM and MLR models, compared to the other utilized techniques.
Furthermore, a comprehensive comparison (regarding the results of both the training and testing datasets) of the applied methods is conducted in Table 5. In this table, by aggregating the individual ranks obtained by each model (based on the R2, MAE, RMSE, RAE, and RRSE in Table 3 and Table 4), a new total ranking is provided. According to this table, RF (total score = 60) was the most efficient model in the current study due to its good performance in both the training and testing phases. After RF, RT (total score = 57) was the second-most reliable model. The other employed methods, including IBK (total score = 50), RBFR (total score = 48), and MLP (total score = 35), were the third-, fourth-, and fifth-most accurate tools, respectively. Also, the poorest estimations were given by the SMO-SVM and MLR approaches (total score = 15 for both models).
The application of multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT) was found to be viable for appraising the stability of a single-layered cohesive slope. The models that use ensemble learning (i.e., RF and RT) outperformed the models based on lazy learning (IBK), regression rules (MLR and RBFR), neural learning (MLP), and statistical theories (SMO-SVM). The acquired total scores (the summation of the scores obtained for the training and testing stages) of 15, 35, 48, 15, 50, 60, and 57 for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, respectively, show the superiority of the RF algorithm. After that, RT, IBK, and RBFR were shown to have an excellent performance when estimating the FOS, and satisfactory reliability was found for the MLP, SMO-SVM, and MLR methods. Based on the results obtained for the classification of the risk of failure, the closest approximations were achieved using the RF, RT, IBK, and SMO-SVM methods. In this regard, 14.60% of the dataset was classified as hazardous slopes (i.e., the “unstable” and “low stability” categories). This value was estimated as 18.25%, 12.54%, 12.70%, 14.76%, 14.60%, 13.81%, and 14.13% by the MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT models, respectively.
It is well established that, in a regression chart, the best relationship between the data on the horizontal and vertical axes is demonstrated by the line x = y. According to Figure 8 and Figure 9, IBK produced the outputs closest to the actual values of the safety factor in the training phase (R2 = 1). Also, the highest correlation for the testing data was achieved by the RF outputs (R2 = 0.9985). The higher scattering of samples observed for the MLR (R2 = 0.9586 and 0.9649) and SMO-SVM (R2 = 0.9529 and 0.9653) predictions indicates the lower accuracy of these models in both the training and testing datasets.
The percentage of each stability class (i.e., the number of safety factor values in the utilized class over the whole number of data points) was calculated. The results of this part are depicted in the form of a column chart in Figure 10.
In addition, the regression between the actual and estimated values of the FOS is depicted in Figure 8 and Figure 9 for the training and testing data, respectively.
Based on this chart, more than half of the safety factor values indicated a safe condition for the examined slope. Almost all employed models showed a good approximation for the classification of the risk of failure. Considering the categories “unstable” and “low stability” as the dangerous situations of the slope (i.e., the slope is more likely to fail with these safety factors), a total of 14.60% of the actual dataset was classified as dangerous. This value was estimated as 18.25%, 12.54%, 12.70%, 14.76%, 14.60%, 13.81%, and 14.13% by the MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT models, respectively.
FOSMLR = 0.042 × Cu − 0.0525 × β + 0.1718 × b/B − 0.0395 × w + 5.9289
FOSSMO-SVM = + 0.5385 × Cu − 0.0685 × β + 0.0213 × b/B − 0.0925 × w + 0.0825
FOSMLP = −0.2956 × Z1 + 0.0456 × Z2 + 0.9411 × Z3 + 1.3988 × Z4 + 0.1028 × Z5 − 0.0499 × Z6 + 0.0303 × Z7 − 0.1266 × Z8 − 0.7543 × Z9 − 0.0020 × Z10 − 1.0782
where the parameters Z1, Z2, …, Z10 are defined in Table 6, and the corresponding weights and biases (Y1, Y2, …, Y10) are tabulated in Table 7.
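For illustration, the two closed-form expressions above can be evaluated directly, as sketched below; note that these are the raw reported coefficients, which may presuppose the same input preprocessing applied inside WEKA, so, given that MLR and SMO-SVM ranked last in the comparison, their outputs may deviate noticeably from the FEM-derived FOS values:

```python
def fos_mlr(cu, beta, b_over_B, w):
    """Evaluate the reported MLR formula for the factor of safety."""
    return 0.042 * cu - 0.0525 * beta + 0.1718 * b_over_B - 0.0395 * w + 5.9289

def fos_smo_svm(cu, beta, b_over_B, w):
    """Evaluate the reported SMO-SVM linear formula."""
    return 0.5385 * cu - 0.0685 * beta + 0.0213 * b_over_B - 0.0925 * w + 0.0825

print(fos_mlr(cu=25, beta=15, b_over_B=0, w=50))  # first input row of Table 1
```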

4. Conclusions

In the past, many scholars have investigated the slope failure phenomenon due to its huge impact on many civil engineering projects. Accordingly, many traditional techniques offering numerical and finite element solutions have been developed; the appearance of machine learning, however, has made these techniques seem antiquated. Therefore, the main incentive of this study was to evaluate the proficiency of various machine-learning-based models, namely multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT), in estimating the factor of safety (FOS) of a cohesive slope. To provide the required dataset, the Optum G2 software was effectively used to calculate the FOS of 630 different slope conditions. The inputs (i.e., the slope failure conditioning factors) chosen were the undrained shear strength (Cu), slope angle (β), setback distance ratio (b/B), and applied surcharge on the shallow foundation installed over the slope (w). To train the intelligent models, 80% of the provided dataset (i.e., 504 samples) was randomly selected. Then, the results of each model were validated using the remaining 20% (i.e., 126 samples). Five well-known statistical indices, namely the coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), relative absolute error (RAE in %), and root relative squared error (RRSE in %), were used to evaluate the results of the MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT predictive models. A color intensity rating along with the total ranking method (i.e., based on the results of the above indices) was also developed. In addition to this, the results were compared by classifying the risk of slope failure using five classes: unstable, low (dangerous), moderate, high, and very high stability. The outcomes of this paper are as follows:
  • The application of multiple linear regression (MLR), multi-layer perceptron (MLP), radial basis function regression (RBFR), improved support vector machine using the sequential minimal optimization algorithm (SMO-SVM), lazy k-nearest neighbor (IBK), random forest (RF), and random tree (RT) was viable for appraising the stability of a single-layered cohesive slope.
  • A good level of accuracy was achieved by all applied models based on the obtained values of R2 (0.9586, 0.9937, 0.9948, 0.9529, 1.000, 0.9997, and 0.9998), RMSE (1.7366, 0.7131, 0.6231, 1.9183, 0.0000, 0.1486, and 0.1111), and RRSE (28.4887%, 11.6985%, 10.2221%, 31.4703%, 0.0000%, 2.4385%, and 1.8221%) for the training dataset, and R2 (0.9649, 0.9939, 0.9955, 0.9653, 0.9837, 0.9985, and 0.9950), MAE (1.1939, 0.5149, 0.3936, 1.0360, 0.8184, 0.2152, and 0.3492), and RAE (24.1272%, 10.4047%, 7.9549%, 20.9366%, 16.5388%, 4.3498%, and 7.0576%) for the testing dataset, obtained for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, respectively.
  • The models that use ensemble-learning (i.e., RF and RT) outperformed the models that are based on lazy-learning (IBK), regression rules (MLR and RBFR), neural-learning (MLP), and statistical theories (SMO-SVM).
  • Referring to the acquired total scores (the summation of scores obtained for the training and testing stages) of 15, 35, 48, 15, 50, 60, and 57 for MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT, respectively, the superiority of the RF algorithm can be seen. After that, RT, IBK, and RBFR showed an excellent performance in estimating the FOS. Also, satisfactory reliability was achieved for the MLP, SMO-SVM, and MLR methods.
  • Based on the results obtained for the classification of the risk of failure, the closest approximations were achieved by the RF, RT, IBK, and SMO-SVM methods. In this regard, 14.60% of the dataset was classified as hazardous slopes (i.e., the “unstable” and “low stability” categories). This value was estimated as 18.25%, 12.54%, 12.70%, 14.76%, 14.60%, 13.81%, and 14.13% by the MLR, MLP, RBFR, SMO-SVM, IBK, RF, and RT models, respectively.

Author Contributions

H.M. and B.K. performed the experiments and field data collection; H.M. and L.K.F. wrote the manuscript; L.K.F. analyzed the data and contributed to the discussion; B.K. and D.T.B. edited, restructured, and professionally optimized the manuscript.

Funding

This work was financially supported by Ton Duc Thang University and the University of South-Eastern Norway.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pourghasemi, H.R.; Pradhan, B.; Gokceoglu, C. Application of fuzzy logic and analytical hierarchy process (AHP) to landslide susceptibility mapping at Haraz watershed, Iran. Nat. Hazards 2012, 63, 965–996. [Google Scholar] [CrossRef]
  2. Latifi, N.; Rashid, A.S.A.; Siddiqua, S.; Majid, M.Z.A. Strength measurement and textural characteristics of tropical residual soil stabilised with liquid polymer. Measurement 2016, 91, 46–54. [Google Scholar] [CrossRef]
  3. Moayedi, H.; Huat, B.B.; Kazemian, S.; Asadi, A. Optimization of tension absorption of geosynthetics through reinforced slope. Electron. J. Geotechn. Eng. B 2010, 15, 1–12. [Google Scholar]
  4. Marto, A.; Latifi, N.; Janbaz, M.; Kholghifard, M.; Khari, M.; Alimohammadi, P.; Banadaki, A.D. Foundation size Effect on modulus of subgrade Reaction on sandy soils. Electron. J. Geotech. Eng. 2012, 17, 2523–2530. [Google Scholar]
  5. Gao, W.; Wang, W.; Dimitrov, D.; Wang, Y. Nano properties analysis via fourth multiplicative ABC indicator calculating. Arabian J. Chem. 2018, 11, 793–801. [Google Scholar]
  6. Gao, W.; Dimitrov, D.; Abdo, H. Tight independent set neighborhood union condition for fractional critical deleted graphs and ID deleted graphs. Discrete Continuous Dyn. Syst. 2018, 12, 711–721. [Google Scholar]
  7. Raftari, M.; Kassim, K.A.; Rashid, A.S.A.; Moayedi, H. Settlement of shallow foundations near reinforced slopes. Electron. J. Geotech. Eng. 2013, 18, 797–808. [Google Scholar]
  8. Nazir, R.; Ghareh, S.; Mosallanezhad, M.; Moayedi, H. The influence of rainfall intensity on soil loss mass from cellular confined slopes. Measurement 2016, 81, 13–25. [Google Scholar] [CrossRef]
  9. Javankhoshdel, S.; Bathurst, R.J. Simplified probabilistic slope stability design charts for cohesive and cohesive-frictional (c-ϕ) soils. Can. Geotech. J. 2014, 51, 1033–1045. [Google Scholar] [CrossRef]
  10. Duncan, J.M. State of the art: Limit equilibrium and finite-element analysis of slopes. J. Geotech. Eng. 1996, 122, 577–596. [Google Scholar] [CrossRef]
  11. Kang, F.; Xu, B.; Li, J.; Zhao, S. Slope stability evaluation using Gaussian processes with various covariance functions. Appl. Soft Comput. 2017, 60, 387–396. [Google Scholar] [CrossRef]
  12. Basarir, H.; Kumral, M.; Karpuz, C.; Tutluoglu, L. Geostatistical modeling of spatial variability of SPT data for a borax stockpile site. Eng. Geol. 2010, 114, 154–163. [Google Scholar] [CrossRef]
  13. Secci, R.; Foddis, M.L.; Mazzella, A.; Montisci, A.; Uras, G. Artificial Neural Networks and Kriging Method for Slope Geomechanical Characterization. In Engineering Geology for Society and Territory-Volume 2; Springer: Berlin/Heidelberg, Germany, 2015; pp. 1357–1361. [Google Scholar]
  14. Moayedi, H.; Nazir, R.; Mosallanezhad, M.; Noor, R.B.M.; Khalilpour, M. Lateral deflection of piles in a multilayer soil medium. Case study: The Terengganu seaside platform. Measurement 2018, 123, 185–192. [Google Scholar] [CrossRef]
  15. Gao, W.; Guirao, J.L.; Abdel-Aty, M.; Xi, W. An independent set degree condition for fractional critical deleted graphs. Discrete Continuous Dyn. Syst. 2019, 12, 877–886. [Google Scholar] [Green Version]
  16. Gao, W.; Guirao, J.L.; Basavanagoud, B.; Wu, J. Partial multi-dividing ontology learning algorithm. Inf. Sci. 2018, 467, 35–58. [Google Scholar]
  17. Zhang, Y.; Dai, M.; Ju, Z. Preliminary discussion regarding SVM kernel function selection in the twofold rock slope prediction model. J. Comput. Civ. Eng. 2015, 30, 04015031. [Google Scholar] [CrossRef]
  18. Moayedi, H.; Tien Bui, D.; Gör, M.; Pradhan, B.; Jaafari, A. The feasibility of three prediction techniques of the artificial neural network, adaptive neuro-fuzzy inference system, and hybrid particle swarm optimization for assessing the safety factor of cohesive slopes. ISPRS Int. J. Geo-Inf. 2019, 8, 9. [Google Scholar]
  19. Gao, W.; Wu, H.; Siddiqui, M.K.; Baig, A.Q. Study of biological networks using graph theory. Saudi J. Biol. Sci. 2018, 25, 1212–1219. [Google Scholar]
  20. Moayedi, H.; Hayati, S. Modelling and optimization of ultimate bearing capacity of strip footing near a slope by soft computing methods. Appl. Soft Comput. 2018, 66, 208–219. [Google Scholar] [CrossRef]
  21. Bui, D.T.; Moayedi, H.; Gör, M.; Jaafari, A.; Foong, L.K. Predicting slope stability failure through machine learning paradigms. ISPRS Int. J. Geo-Inf. 2019, 8, 395. [Google Scholar]
  22. Jellali, B.; Frikha, W. Constrained particle swarm optimization algorithm applied to slope stability. Int. J. Geomech. 2017, 17, 06017022. [Google Scholar] [CrossRef]
  23. Pei, H.; Zhang, S.; Borana, L.; Zhao, Y.; Yin, J. Slope Stability Analysis Based on Real-time Displacement Measurements. Measurement 2019, 131, 686–693. [Google Scholar] [CrossRef]
  24. Wang, L.; Li, H.; Chi, Y. Prediction of Surface Grinding Quality Based on SMO-SVM. Electron. Sci. Technol. 2017, 4, 29. [Google Scholar]
  25. Enyinnaya, R. Predicting Cancer-Related Proteins in Protein-Protein Interaction Networks using Network Approach and SMO-SVM Algorithm. Int. J. Comput. Appl. 2015, 115, 5–9. [Google Scholar] [CrossRef]
  26. Long, N.; Gianola, D.; Rosa, G.J.M.; Weigel, K.A.; Kranis, A.; Gonzalez-Recio, O. Radial basis function regression methods for predicting quantitative traits using SNP markers. Genet. Res. 2010, 92, 209–225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Hervás-Martínez, C.; Gutiérrez, P.A.; Peñá-Barragán, J.M.; Jurado-Expósito, M.; López-Granados, F. A logistic radial basis function regression method for discrimination of cover crops in olive orchards. Expert Syst. Appl. 2010, 37, 8432–8444. [Google Scholar] [CrossRef] [Green Version]
  28. Panigrahi, P.K. A Comparative Study of Supervised Machine Learning Techniques for Spam E-Mail Filtering. In Proceedings of the Fourth International Conference on Computational Intelligence and Communication Networks, Mathura, India, 3–5 November 2012; pp. 506–512. [Google Scholar]
  29. Michel, J.; Malik, J.; Inglada, J. Lazy yet efficient land-cover map generation for HR optical images. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 1863–1866. [Google Scholar]
  30. Bui, D.T.; Tuan, T.A.; Klempe, H.; Pradhan, B.; Revhaug, I. Spatial prediction models for shallow landslide hazards: A comparative assessment of the efficacy of support vector machines, artificial neural networks, kernel logistic regression, and logistic model tree. Landslides 2016, 13, 361–378. [Google Scholar]
  31. Yilmaz, I. Comparison of landslide susceptibility mapping methodologies for Koyulhisar, Turkey: Conditional probability, logistic regression, artificial neural networks, and support vector machine. Environ. Earth Sci. 2010, 61, 821–836. [Google Scholar] [CrossRef]
  32. Krabbenhoft, K.; Lyamin, A.; Krabbenhoft, J. Optum Computational Engineering (Optum G2). Available online: https://www.optumce.com (accessed on 1 September 2015).
  33. Bui, X.-N.; Muazu, M.A.; Nguyen, H. Optimizing Levenberg–Marquardt backpropagation technique in predicting factor of safety of slopes after two-dimensional OptumG2 analysis. Eng. Comput. 2019, 36, 1–12. [Google Scholar] [CrossRef]
  34. Kalantar, B.; Al-Najjar, H.A.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Naghibi, S.A. Optimized Conditioning Factors Using Machine Learning Techniques for Groundwater Potential Mapping. Water 2019, 11, 1909. [Google Scholar] [CrossRef]
  35. Schaffer, C. Selecting a classification method by cross-validation. Mach. Learn. 1993, 13, 135–143. [Google Scholar] [CrossRef]
  36. Naji, H.; Ali, R.H. Qualitative Analysis of Cost Risk Using WEKA Program. In Proceedings of the International Scientific Conference of Engineering Sciences—3rd Scientific Conference of Engineering Science, Diyala, Iraq, 10–11 January 2018; pp. 271–274. [Google Scholar]
  37. Nagar, A. A Comparative Study of Data Mining Algorithms for Decision Tree Approaches using WEKA Tool. Adv. Nat. Appl. Sci. 2017, 11, 230–243. [Google Scholar]
  38. Seyedashraf, O.; Mehrabi, M.; Akhtari, A.A. Novel approach for dam break flow modeling using computational intelligence. J. Hydrol. 2018, 559, 1028–1038. [Google Scholar] [CrossRef]
  39. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  40. Makridakis, S.; Wheelwright, S.C.; Hyndman, R.J. Forecasting Methods and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  41. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  42. ASCE Task Committee. Artificial neural networks in hydrology. II: Hydrologic applications. J. Hydrol. Eng. 2000, 5, 124–137. [Google Scholar] [CrossRef]
  43. Moayedi, H.; Hayati, S. Artificial intelligence design charts for predicting friction capacity of driven pile in clay. Neural Comput. Appl. 2018, 1–17. [Google Scholar] [CrossRef]
  44. Moayedi, H.; Rezaei, A. An artificial neural network approach for under-reamed piles subjected to uplift forces in dry sand. Neural Comput. Appl. 2017, 31, 327–336. [Google Scholar] [CrossRef]
  45. Broomhead, D.S.; Lowe, D. Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks; Royal Signals and Radar Establishment Malvern (United Kingdom): Malvern, UK, 1988. [Google Scholar]
  46. Schaback, R. Multivariate interpolation by polynomials and radial basis functions. Constr. Approx. 2005, 21, 293–317. [Google Scholar] [CrossRef]
  47. Powell, M.J.D. Radial basis functions for multivariable interpolation: A review. Algorithms Approx. 1987, 9, 143–167. [Google Scholar]
  48. Poggio, T.; Girosi, F. Regularization algorithms for learning that are equivalent to multilayer networks. Science 1990, 247, 978–982. [Google Scholar] [CrossRef] [PubMed]
  49. Jayasumana, S.; Hartley, R.; Salzmann, M.; Li, H.; Harandi, M. Kernel methods on Riemannian manifolds with Gaussian RBF kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 2464–2477. [Google Scholar] [CrossRef] [PubMed]
  50. Rashidinia, J.; Fasshauer, G.E.; Khasi, M. A stable method for the evaluation of Gaussian radial basis function solutions of interpolation and collocation problems. Comput. Math. Appl. 2016, 72, 178–193. [Google Scholar] [CrossRef]
  51. Soleymani, F.; Ullah, M.Z. A multiquadric RBF–FD scheme for simulating the financial HHW equation utilizing exponential integrator. Calcolo 2018, 55, 51. [Google Scholar] [CrossRef]
  52. Boztosun, I.; Charafi, A.; Zerroukat, M.; Djidjeli, K. Thin-plate spline radial basis function scheme for advection-diffusion problems. Electron. J. Bound. Elem. 2002, 2, 267–282. [Google Scholar]
  53. Xiang, S.; Shi, H.; Wang, K.M.; Ai, Y.T.; Sha, Y.D. Thin plate spline radial basis functions for vibration analysis of clamped laminated composite plates. Eur. J. Mech. A/Solids 2010, 29, 844–850. [Google Scholar] [CrossRef]
  54. Kavzoglu, T.; Colkesen, I. A kernel functions analysis for support vector machines for land cover classification. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 352–359. [Google Scholar] [CrossRef]
  55. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  56. Kavzoglu, T.; Sahin, E.K.; Colkesen, I. Landslide susceptibility mapping using GIS-based multi-criteria decision analysis, support vector machines, and logistic regression. Landslides 2014, 11, 425–439. [Google Scholar] [CrossRef]
  57. Vapnik, V. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  58. Cherkassky, V.S.; Mulier, F. Learning from Data: Concepts, Theory, and Methods; John Wiley and Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  59. Patil, S.; Soma, S.; Nandyal, S. Identification of growth rate of plant based on leaf features using digital image processing techniques. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 266–275. [Google Scholar]
  60. Platt, J. Sequential minimal optimization: A fast algorithm for training support vector machines. CiteSeerX 1998, 10, 4376. [Google Scholar]
  61. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185. [Google Scholar]
  62. Raviya, K.H.; Gajjar, B. Performance Evaluation of different data mining classification algorithm using WEKA. Indian J. Res. 2013, 2, 19–21. [Google Scholar] [CrossRef]
  63. Bin Othman, M.F.; Yau, T.M.S. Comparison of Different Classification Techniques Using WEKA for Breast Cancer. In Proceedings of the 3rd Kuala Lumpur International Conference on Biomedical Engineering, Kuala Lumpur, Biomed 2006, Kuala Lumpur, Malaysia, 11–14 December 2006. [Google Scholar] [CrossRef]
  64. Ghosh, S.; Roy, S.; Bandyopadhyay, S. A tutorial review on Text Mining Algorithms. Int. J. Adv. Res. Comput. Commun. Eng. 2012, 1, 7. [Google Scholar]
  65. Vijayarani, S.; Sudha, S. Comparative analysis of classification function techniques for heart disease prediction. Int. J. Innov. Res. Comput. Commun. Eng. 2013, 1, 735–741. [Google Scholar]
  66. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-based learning algorithms. Mach. Learn. 1991, 6, 37–66. [Google Scholar] [CrossRef] [Green Version]
  67. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  68. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  69. Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef] [Green Version]
  70. Breiman, L.; Cutler, A. Random Forests Machine Learning; Kluwer Academic Publishers: Norwell, MA, USA, 2001; Volume 45, pp. 5–32. [Google Scholar] [CrossRef]
  71. Pfahringer, B. Random Model Trees: An Effective and Scalable Regression Method; Department of Computer Science, The University of Waikato, Private Bag 3105: Hamilton, New Zealand, 2010. [Google Scholar]
Figure 1. A view of the geometry of the slope.
Figure 2. The relationship between the input parameters (a) strength of the soil (Cu), (b) slope angle (β), (c) setback distance ratio (b/B), and (d) applied surcharge on the footing installed over the slope (w) versus the obtained factor of safety (FOS).
Figure 3. The k-fold cross-validation process.
Figure 4. A simple view of linear fitting.
Figure 5. The schematic view of (a) the proposed model for the multilayer perceptron and (b) the operation of computational neurons.
Figure 6. The structure of the radial basis function.
Figure 7. The graphical depiction of the mechanism of an SVM.
Figure 8. The network result for the training dataset in (a) multiple linear regression (MLR), (b) multi-layer perceptron (MLP), (c) radial basis function regression (RBFR), (d) improved support vector machine using sequential minimal optimization algorithm (SMO-SVM), (e) lazy k-nearest neighbor (IBK), (f) random forest (RF), and (g) random tree (RT).
Figure 9. The network result for the testing dataset in (a) multiple linear regression (MLR), (b) multi-layer perceptron (MLP), (c) radial basis function regression (RBFR), (d) improved support vector machine using sequential minimal optimization algorithm (SMO-SVM), (e) lazy k-nearest neighbor (IBK), (f) random forest (RF), and (g) random tree (RT).
Figure 10. The column chart showing the percentage of stability classes.
Table 1. Example of inputs and output dataset used for training and validating the applied models.

| No. | Cu (kPa) | β (°) | b/B | w | FOS | Stability Description | No. | Cu (kPa) | β (°) | b/B | w | FOS | Stability Description |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 25 | 15 | 0 | 50 | 1.872 | Moderate | 26 | 100 | 60 | 1 | 150 | 2.682 | High |
| 2 | 25 | 15 | 0 | 100 | 1.196 | Low | 27 | 100 | 60 | 2 | 50 | 4.671 | Very high |
| 3 | 25 | 15 | 0 | 150 | 0.8154 | Unstable | 28 | 100 | 60 | 2 | 100 | 3.674 | High |
| 4 | 25 | 15 | 1 | 50 | 1.852 | Moderate | 29 | 100 | 60 | 2 | 150 | 2.904 | High |
| 5 | 25 | 15 | 1 | 100 | 1.324 | Low | 30 | 100 | 60 | 3 | 50 | 4.72 | Very high |
| 6 | 25 | 15 | 1 | 150 | 0.8981 | Unstable | 31 | 100 | 60 | 3 | 100 | 3.753 | High |
| 7 | 25 | 15 | 2 | 50 | 1.834 | Moderate | 32 | 100 | 60 | 3 | 150 | 3.017 | High |
| 8 | 25 | 15 | 2 | 100 | 1.354 | Low | 33 | 100 | 60 | 4 | 50 | 4.833 | Very high |
| 9 | 25 | 15 | 2 | 150 | 0.9133 | Unstable | 34 | 100 | 60 | 4 | 100 | 3.904 | High |
| 10 | 25 | 15 | 3 | 50 | 1.824 | Moderate | 35 | 100 | 60 | 4 | 150 | 3.152 | High |
| 11 | 25 | 15 | 3 | 100 | 1.356 | Low | 36 | 100 | 60 | 5 | 50 | 4.996 | Very high |
| 12 | 25 | 15 | 3 | 150 | 0.9169 | Unstable | 37 | 100 | 60 | 5 | 100 | 4.059 | Very high |
| 13 | 25 | 15 | 4 | 50 | 1.825 | Moderate | 38 | 100 | 60 | 5 | 150 | 3.297 | High |
| 14 | 25 | 15 | 4 | 100 | 1.356 | Low | 39 | 100 | 75 | 0 | 50 | 4.227 | Very high |
| 15 | 25 | 15 | 4 | 150 | 0.9171 | Unstable | 40 | 100 | 75 | 0 | 100 | 2.425 | High |
| 16 | 25 | 15 | 5 | 50 | 1.832 | Moderate | 41 | 100 | 75 | 0 | 150 | 1.68 | Moderate |
| 17 | 25 | 15 | 5 | 100 | 1.357 | Low | 42 | 100 | 75 | 1 | 50 | 4.154 | Very high |
| 18 | 25 | 15 | 5 | 150 | 0.9179 | Unstable | 43 | 100 | 75 | 1 | 100 | 3.174 | High |
| 19 | 25 | 30 | 0 | 50 | 1.597 | Moderate | 44 | 100 | 75 | 1 | 150 | 2.344 | High |
| 20 | 25 | 30 | 0 | 100 | 1.035 | Low | 45 | 100 | 75 | 2 | 50 | 4.065 | Very high |
| 21 | 25 | 30 | 0 | 150 | 0.8 | Unstable | 46 | 100 | 75 | 2 | 100 | 3.185 | High |
| 22 | 25 | 30 | 1 | 50 | 1.581 | Moderate | 47 | 100 | 75 | 2 | 150 | 2.545 | High |
| 23 | 25 | 30 | 1 | 100 | 1.229 | Low | 48 | 100 | 75 | 3 | 50 | 4.116 | Very high |
| 24 | 25 | 30 | 1 | 150 | 0.8449 | Unstable | 49 | 100 | 75 | 3 | 100 | 3.272 | High |
| 25 | 25 | 30 | 2 | 50 | 1.556 | Moderate | 50 | 100 | 75 | 3 | 150 | 2.655 | High |
Table 2. Description of stability classification.

| Class Number | Stability Description | FOS Range | Class Value |
|---|---|---|---|
| 1 | Unstable | FOS < 1 | 1 |
| 2 | Low | 1 < FOS < 1.5 | 2 |
| 3 | Moderate | 1.5 < FOS < 2 | 3 |
| 4 | High | 2 < FOS < 4 | 4 |
| 5 | Very high | FOS > 4 | 5 |
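Since the class boundaries in Table 2 are explicit, converting a predicted FOS into a stability class reduces to a simple thresholding step. The following Python sketch illustrates the mapping (the function name is ours, introduced only for illustration):

```python
# A minimal sketch mapping a predicted factor of safety to the stability
# classes of Table 2; the function name is ours, for illustration.
def stability_class(fos: float) -> tuple[int, str]:
    if fos < 1.0:
        return 1, "Unstable"
    if fos < 1.5:
        return 2, "Low"
    if fos < 2.0:
        return 3, "Moderate"
    if fos < 4.0:
        return 4, "High"
    return 5, "Very high"

# Sample 1 of Table 1 (FOS = 1.872) falls in the "Moderate" class:
print(stability_class(1.872))  # -> (3, 'Moderate')
```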
Table 3. Total ranking of the training dataset in predicting the factor of safety.

| Proposed Model | R2 | MAE | RMSE | RAE (%) | RRSE (%) | Index Scores (R2, MAE, RMSE, RAE, RRSE) | Total Ranking Score | Rank |
|---|---|---|---|---|---|---|---|---|
| MLR | 0.9586 | 1.2527 | 1.7366 | 25.0515 | 28.4887 | 2, 1, 2, 1, 2 | 8 | 6 |
| MLP | 0.9937 | 0.4940 | 0.7131 | 9.8796 | 11.6985 | 3, 3, 3, 3, 3 | 15 | 5 |
| RBFR | 0.9948 | 0.3976 | 0.6231 | 7.9522 | 10.2221 | 4, 4, 4, 4, 4 | 20 | 4 |
| SMO-SVM | 0.9529 | 1.1610 | 1.9183 | 23.2182 | 31.4703 | 1, 2, 1, 2, 1 | 7 | 7 |
| IBK | 1.0000 | 0 | 0 | 0 | 0 | 7, 7, 7, 7, 7 | 35 | 1 |
| RF | 0.9997 | 0.1063 | 0.1486 | 2.1254 | 2.4385 | 5, 5, 5, 5, 5 | 25 | 3 |
| RT | 0.9998 | 0.0811 | 0.1111 | 1.6220 | 1.8221 | 6, 6, 6, 6, 6 | 30 | 2 |

Note: the ranking is color-coded in the original table; a higher color intensity (i.e., a higher score) indicates a better value. R2: correlation coefficient; MAE: mean absolute error; RMSE: root mean squared error; RAE: relative absolute error; RRSE: root relative squared error.
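The per-index scores behind these totals can be reproduced directly from the reported results: for each of the five indexes, the seven models are ordered from worst to best and scored 1 through 7, and the five scores are summed. The following Python sketch (our reconstruction of the scoring logic, not code from the study) reproduces the Table 3 totals:

```python
# A short sketch reproducing the total ranking scores of Table 3 (our
# reconstruction of the scoring logic): for each index, the seven models
# are ordered from worst to best and scored 1 (worst) to 7 (best).
metrics = {  # training-set results from Table 3: (R2, MAE, RMSE, RAE %, RRSE %)
    "MLR":     (0.9586, 1.2527, 1.7366, 25.0515, 28.4887),
    "MLP":     (0.9937, 0.4940, 0.7131,  9.8796, 11.6985),
    "RBFR":    (0.9948, 0.3976, 0.6231,  7.9522, 10.2221),
    "SMO-SVM": (0.9529, 1.1610, 1.9183, 23.2182, 31.4703),
    "IBK":     (1.0000, 0.0000, 0.0000,  0.0000,  0.0000),
    "RF":      (0.9997, 0.1063, 0.1486,  2.1254,  2.4385),
    "RT":      (0.9998, 0.0811, 0.1111,  1.6220,  1.8221),
}
totals = {model: 0 for model in metrics}
for i in range(5):
    # R2 (i == 0) is better when larger; the four error indexes are better
    # when smaller, so they are sorted descending to put the worst first.
    order = sorted(metrics, key=lambda m: metrics[m][i], reverse=(i != 0))
    for score, model in enumerate(order, start=1):
        totals[model] += score
print(totals)
# -> {'MLR': 8, 'MLP': 15, 'RBFR': 20, 'SMO-SVM': 7, 'IBK': 35, 'RF': 25, 'RT': 30}
```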
Table 4. Total ranking of the testing dataset in predicting the factor of safety.

| Proposed Model | R2 | MAE | RMSE | RAE (%) | RRSE (%) | Index Scores (R2, MAE, RMSE, RAE, RRSE) | Total Ranking Score | Rank |
|---|---|---|---|---|---|---|---|---|
| MLR | 0.9649 | 1.1939 | 1.5891 | 24.1272 | 26.4613 | 1, 1, 2, 1, 2 | 7 | 7 |
| MLP | 0.9939 | 0.5149 | 0.7093 | 10.4047 | 11.8116 | 4, 4, 4, 4, 4 | 20 | 4 |
| RBFR | 0.9955 | 0.3936 | 0.5667 | 7.9549 | 9.4376 | 6, 5, 6, 5, 6 | 28 | 2 |
| SMO-SVM | 0.9653 | 1.0360 | 1.6362 | 20.9366 | 27.247 | 2, 2, 1, 2, 1 | 8 | 6 |
| IBK | 0.9837 | 0.8184 | 1.1066 | 16.5388 | 17.247 | 3, 3, 3, 3, 3 | 15 | 5 |
| RF | 0.9985 | 0.2152 | 0.3312 | 4.3498 | 5.5145 | 7, 7, 7, 7, 7 | 35 | 1 |
| RT | 0.9950 | 0.3492 | 0.6033 | 7.0576 | 10.047 | 5, 6, 5, 6, 5 | 27 | 3 |
Table 5. Total ranking of both training and testing datasets in predicting the factor of safety.

| Proposed Model | Training Scores (R2, MAE, RMSE, RAE, RRSE) | Testing Scores (R2, MAE, RMSE, RAE, RRSE) | Total Score | Total Rank |
|---|---|---|---|---|
| MLR | 2, 1, 2, 1, 2 | 1, 1, 2, 1, 2 | 15 | 6 |
| MLP | 3, 3, 3, 3, 3 | 4, 4, 4, 4, 4 | 35 | 5 |
| RBFR | 4, 4, 4, 4, 4 | 6, 5, 6, 5, 6 | 48 | 4 |
| SMO-SVM | 1, 2, 1, 2, 1 | 2, 2, 1, 2, 1 | 15 | 6 |
| IBK | 7, 7, 7, 7, 7 | 3, 3, 3, 3, 3 | 50 | 3 |
| RF | 5, 5, 5, 5, 5 | 7, 7, 7, 7, 7 | 60 | 1 |
| RT | 6, 6, 6, 6, 6 | 5, 6, 5, 6, 5 | 57 | 2 |
Table 6. Weights and biases of the ANN model, where Zi = tansig(Wi1 × Y1 + Wi2 × Y2 + … + Wi10 × Y10 + bi).

| Neuron (i) | Wi1 | Wi2 | Wi3 | Wi4 | Wi5 | Wi6 | Wi7 | Wi8 | Wi9 | Wi10 | bi |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8014 | 0.2141 | −0.9739 | 0.3018 | −1.0577 | 0.3676 | 0.0819 | 0.0814 | 0.1832 | −0.2797 | −0.1420 |
| 2 | 0.7150 | −1.0819 | 2.6119 | 1.4173 | −1.7753 | −0.6882 | −0.3629 | −0.3758 | 0.3733 | 3.8024 | −0.8470 |
| 3 | 0.0539 | −0.4013 | −0.0124 | −0.1376 | 1.5404 | −0.1659 | −0.1569 | 0.0965 | 0.1123 | −1.0206 | 0.7256 |
| 4 | 0.0975 | 0.2092 | −0.2620 | −0.0167 | −0.2081 | 0.1904 | 0.0662 | 0.0342 | −0.0256 | 1.3529 | 0.6269 |
| 5 | 0.3407 | −0.2303 | 0.6002 | 0.7437 | −0.0743 | 0.2708 | 0.6381 | −0.4040 | −0.2603 | 1.6380 | 0.0664 |
| 6 | 0.2808 | −1.1512 | 1.6090 | 0.0751 | −0.9023 | 1.0409 | −0.1826 | 1.3060 | −0.0688 | −0.6791 | −0.8171 |
| 7 | −1.4209 | −1.3960 | −0.6175 | 0.0115 | 0.4415 | 1.3083 | −0.3546 | 1.1381 | −0.5560 | −1.3212 | −0.4100 |
| 8 | −0.8622 | −1.4606 | 1.0247 | −1.4288 | −2.0929 | 0.8496 | 0.1114 | 0.1647 | −0.4303 | −0.4968 | −0.1472 |
| 9 | 0.0532 | −0.0747 | −0.5253 | 0.0773 | −1.0538 | −0.1710 | −0.1178 | −0.0304 | 0.0095 | −0.7900 | −1.2037 |
| 10 | 1.6879 | −1.7526 | −0.5477 | 2.4484 | −0.1138 | −1.9688 | 0.5682 | 2.5085 | −4.1707 | −0.5766 | −0.5541 |
Table 7. Weights and biases of the ANN model, where Yi = tansig(Wi1 × Cu + Wi2 × β + Wi3 × (b/B) + Wi4 × w + bi).

| Neuron (i) | Wi1 | Wi2 | Wi3 | Wi4 | bi |
|---|---|---|---|---|---|
| 1 | 1.1179 | 2.1216 | −0.9307 | −1.9126 | −3.1819 |
| 2 | −1.6540 | 0.7670 | 0.5226 | 0.5905 | 1.9420 |
| 3 | −1.7577 | 0.1682 | 1.0751 | −0.8448 | 1.5533 |
| 4 | −1.1786 | −1.2008 | 1.5102 | −1.4182 | 1.2264 |
| 5 | 0.3132 | 0.0403 | 0.0188 | −0.0171 | −0.0729 |
| 6 | 0.3103 | −0.1702 | 0.8267 | −2.1844 | 0.1736 |
| 7 | −0.2284 | −1.6277 | 1.7207 | 1.2254 | −0.0037 |
| 8 | 1.9007 | −1.8777 | 0.9780 | 2.9271 | 1.4079 |
| 9 | 0.1767 | −1.2933 | −1.2124 | −0.5631 | 1.4512 |
| 10 | 0.5156 | −0.3139 | −0.0902 | −0.3256 | −1.2137 |

Cu—undrained cohesion strength (kPa), β—slope angle (°), b/B—setback distance ratio, and w—applied surcharge on the footing.
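Tables 6 and 7 define the two tansig layers explicitly, so the reported network can be exercised directly; only the final mapping from the Zi values to the FOS output is not listed here. The following NumPy sketch of the forward pass is ours (the `output_layer` callable is a hypothetical placeholder, and any input scaling used in the original model is assumed):

```python
# A minimal sketch of the ANN forward pass defined by Tables 6 and 7.
# W1 (10x4) and b1 (10,) hold the Table 7 weights/biases; W2 (10x10) and
# b2 (10,) hold the Table 6 values. The final mapping from Z to the FOS
# output is not reported in the tables, so `output_layer` is hypothetical.
import numpy as np

def tansig(x):
    # MATLAB's tansig is the hyperbolic tangent sigmoid
    return np.tanh(x)

def forward(cu, beta, b_over_B, w, W1, b1, W2, b2, output_layer):
    x = np.array([cu, beta, b_over_B, w])  # inputs, assumed pre-scaled
    y = tansig(W1 @ x + b1)                # Table 7: Yi
    z = tansig(W2 @ y + b2)                # Table 6: Zi
    return output_layer(z)                 # final FOS estimate
```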
