Article

Tunnel Surface Settlement Forecasting with Ensemble Learning

1 Department of Building, School of Design and Environment, National University of Singapore, Architecture Drive, Singapore 117566, Singapore
2 Zhejiang Geely Holding Group Co., Ltd., 1760 Jiangling Road, Binjiang District, Hangzhou 310051, China
3 Nanhu College, Jiaxing University, Jiaxing 314001, China
4 Fujian Province University Key Laboratory of Computational Science, School of Mathematical Sciences, Huaqiao University, Quanzhou 362021, China
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(1), 232; https://doi.org/10.3390/su12010232
Submission received: 7 December 2019 / Revised: 19 December 2019 / Accepted: 20 December 2019 / Published: 26 December 2019
(This article belongs to the Special Issue Smart Urban Planning and Land Management)

Abstract

Ground surface settlement forecasting during tunnel construction is one of the most important techniques for sustainable city development and for preventing serious damage, such as landscape collapse. Modern artificial intelligence (AI) models, such as the artificial neural network, extreme learning machine, and support vector regression, are capable of providing reliable forecasts of tunnel surface settlement. However, current forecasting techniques face two limitations. First, the data provided by the construction company are usually univariate (i.e., they contain only the settlement data). Second, surface settlement forecasts are needed immediately after construction begins, so the number of available training samples is limited. Targeting these two limitations, this study proposes a novel ensemble machine learning model that forecasts tunnel surface settlement from a short period of univariate real-world settlement data. The proposed Adaboost.RT framework fully utilizes the existing data points with three base machine learning models and iteratively updates its hyperparameters using the current surface point locations. Experimental results show that, compared with existing machine learning techniques and algorithms, the proposed ensemble learning method provides higher prediction accuracy with acceptable computational efficiency.

1. Introduction

Modern sustainable city development involves various tunnel constructions, such as subway rail tunnels, city underpass tunnels, and highway tunnels. Ground settlement in the process of tunnel construction is inevitable. Tunnel settlement not only affects the development of urban rail transit but also poses a great threat to the lives and property of urban residents [1]. Data-driven forecasting of tunnel ground surface settlement helps prevent serious damage and also supports the sustainable use of the constructed tunnels [2,3,4].
Machine learning (ML), one of the hottest topics in data-driven methods, has achieved great success in recent research. Compared to model-based methods, which require expert knowledge and experience to build physical or mathematical models, ML methods predict future tunnel surface settlement purely from historical data [5]. The established ML models are usually much more complex than conventional mathematical models and can hardly be interpreted with a few mathematical equations. The advantages of ML methods are that they are easy to implement and directly applicable to real-world use. Moreover, because the models are more complex, their forecasting accuracy usually reaches a relatively high level and outperforms most existing model-based methods.
In 2019, Hu et al. [5] compared most existing ML techniques for ground surface settlement prediction in real-world tunnel construction processes and pointed out that deep learning techniques, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) neural networks, are not suitable for analyzing short-sequence time series data, such as tunnel settlement data [6]. This study is a follow-up work of [5]. There are two main difficulties in tunnel settlement forecasting. First, the lengths of the historical records of tunnel constructions are limited. Second, the collected data are usually univariate. Deep learning methods are generally unsuitable under these conditions [7]. In contrast, traditional ML techniques are more reliable for short-term tunnel settlement prediction. In the study by Hu et al. [5], through a series of comparative studies, three machine learning techniques, namely, the back-propagation neural network (BPNN) [8], extreme learning machine (ELM) [9], and support vector regression (SVR) [10], were selected as the best short-term predictors of tunnel settlement for various cases. Following that study [5] and using the same dataset, we show that there exists a complex nonlinear relationship between tunnel settlement and many random, uncertain factors, which makes it difficult to predict tunnel settlement with a single machine learning technique.
In this research, an ensemble learning approach that integrates multiple machine learning models is proposed. To verify the performance of the proposed method, a real-world tunnel surface settlement dataset from a tunnel construction corporation is used. The motivation of this study is to search for the most suitable data-driven forecasting technique for short-term tunnel surface settlement. The study makes the following contributions to the literature.
(1) Utilizing multiple machine learning models for improving the tunnel settlement prediction accuracy: Based on the literature review conducted in Section 2, using one single model to predict the tunnel settlement accurately is indeed challenging. In this study, multiple different types of models, such as SVR, BPNN, and ELM, were integrated to construct a powerful ensemble learning framework for tunnel settlement forecasting.
(2) A customized ensemble learning algorithm based on traditional Adaboost.RT algorithm: The traditional Adaboost.RT algorithm was altered to best suit the short-term forecasting with a limited number of training data samples from the ground sensors during the tunnel construction period. The traditional Adaboost.RT algorithm usually integrates one type of prediction model for ensemble learning. In this study, we reimplement the Adaboost.RT algorithm integrating three types of different prediction models.
(3) Illustrating the prediction performance improvement through a comprehensive comparative study: In the experimental section, results are illustrated to compare the proposed method with existing works. First, we show that the ensemble learning framework outperforms individual base classifiers. Next, more experiments were carried out to compare the proposed framework with traditional ensemble learning approaches (i.e., the Adaboost.RT framework integrating a single type of classifiers). The performance of the proposed ensemble learning framework is proven through a series of comparative studies.

2. Related Works

Short-term time series forecasting is a popular topic in the fields of regression analysis, machine learning, and intelligent computing. In general, the research methods can be divided into two categories: model-based methods and data-driven methods [11,12]. The model-based methods include numerical analysis, numerical simulation, semi-theoretical analytical methods, and stochastic theoretical models. For example, an empirical method based on the normal distribution function was proposed by Fang et al. to estimate the magnitude of tunnel surface settlement [13]. An elastic-visco-plastic (EVP) constitutive model in triaxial space and general stress space, used mainly for the study of isotropic clays, was proposed by Islam and Gnanendran [14]. They carried out many experimental studies on Kaolin clay, Hong Kong marine deposit clay, and Fukakusa clay, and all experiments achieved good prediction results. Mei et al. [15] started from the relationship between time and settlement and mathematically proved that the settlement curve under a linear load applied in construction and embankment engineering has an "S" shape. On this basis, a new settlement prediction model, called the 'Poisson model', was proposed, which provides effective prediction of building and surface settlement.
The data-driven methods mainly refer to ML methods. Recent studies show that ML-based forecasting strategies usually provide highly accurate predictions for time series forecasting problems [3,16]. Ocak and Seker [17] compared the artificial neural network (ANN), support vector machine (SVM), and Gaussian processes (GPs) for forecasting surface settlement caused by earth pressure balance machines (EPBMs). Their experimental results show that SVM has the best performance. Azadi and Pourakbar [18] used the finite element method to study the settlement of buildings around a tunnel, then used a neural network to analyze the various settlements, and finally obtained predictions of tunnel settlement. Moghaddasi et al. [19] utilized an imperialist competitive algorithm (ICA)-enhanced ANN to predict tunnel settlement and achieved outstanding prediction results.
Ensemble or hybrid machine learning techniques are important solutions for raising the prediction accuracy of traditional machine learning techniques [20,21,22,23,24]. Tang et al. [25] proposed a hybrid ensemble learning framework to forecast nuclear energy consumption patterns; their experimental results show that the hybrid ensemble learning framework outperforms the single LSSVR learning method. Li et al. [26] introduced a wavelet-based ensemble learning framework for the short-term load forecasting problem, combining the wavelet transform with multiple ELMs to boost forecasting performance. Wang et al. [27] combined the wavelet transform with a convolutional neural network to forecast wind power data as time series; the results showed that the volatility and irregularity of wind power can be adaptively learned by the proposed hybrid learning method. In the current context of machine learning technology, the most popular ensemble learning algorithm is Adaboost and its extensions, including the Adaboost.M1, Adaboost.M2 [11], Adaboost.R, Adaboost.R2 [28], and Adaboost.RT [29] algorithms.

3. Materials and Methods

Unlike traditional forecasting problems, tunnel surface settlement forecasting faces the challenges of a short period of available data, univariate training data, and various hidden factors that are missing from the dataset. Existing works, such as [5], have shown that prediction models are unstable when only a single type of model is used. To improve the robustness of the final prediction model, an integration of multiple models using ensemble learning algorithms is desired.

3.1. Selection of Base Prediction Models

To obtain an ensemble learner with better generalization performance and high prediction accuracy, the individual base prediction models have to be effective themselves and independent from each other. Nevertheless, completely independent models are hard to achieve in actual tasks. Traditional Adaboost algorithms utilize the same type of base classifier; the base models differ from each other only through the different sample distributions used during training. In this study, the original Adaboost.RT algorithm was altered by selecting different types of base learners to obtain the final ensemble learning framework. According to our previous study on this topic [5], three base classifiers, namely, BPNN, ELM, and SVR, were selected to build the ensemble learner.
Among the three base classifiers, SVR has strong generalization ability and can easily handle small-sample, over-fitting, high-dimensional, and local-minimum problems. BPNN has strong adaptive, self-organizing, self-learning, and non-linear mapping abilities; it overcomes the shortcomings of traditional feedback methods and has been used widely in recent years. ELM is a single-hidden-layer neural network with a fast convergence rate, high prediction accuracy, and strong generalization properties.
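As an illustration (not the authors' original code), the three heterogeneous base learners can be instantiated as follows: SVR and BPNN via scikit-learn, and the ELM as a minimal random-hidden-layer regressor. All hyperparameter values here are chosen only for demonstration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor


class SimpleELM:
    """Minimal extreme learning machine sketch: random hidden layer,
    closed-form least-squares output weights (illustrative only)."""

    def __init__(self, n_hidden=20, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random hidden-layer features
        self.beta = np.linalg.pinv(H) @ y     # output weights by pseudo-inverse
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta


# Three heterogeneous base learners (hyperparameters are illustrative)
base_learners = [
    SVR(kernel="rbf", C=10.0, epsilon=0.01),                                # SVR
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),  # BPNN
    SimpleELM(n_hidden=20),                                                 # ELM
]
```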

3.2. The Proposed Method

The proposed Adaboost.RT algorithm is an improved algorithm based on Adaboost.R2, introduced by Solomatine and Shrestha [29]. They added a threshold to the Adaboost algorithm and compared the training error of every sample against it; the training set is thereby divided into two categories, and the regression problem is translated into a simple two-class problem that can be handled by the traditional AdaBoost algorithm. In converting the regression problem into a binary classification problem, AdaBoost.RT employs the absolute relative error (ARE) as the metric that differentiates samples into true/false predictions; when the ARE value of a sample exceeds the threshold ϕ, the current predictor is deemed unsuitable for that sample, and the sample is tested again using other prediction models. The traditional Adaboost.RT algorithm integrates multiple base predictors of the same type, and the threshold value ϕ is pre-set and manually adjusted.
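The ARE-based conversion of a regression problem into a binary correct/incorrect decision can be sketched as follows. This is a minimal illustration; the threshold value 0.05 and the sample values are arbitrary.

```python
import numpy as np


def classify_by_are(y_pred, y_true, phi):
    """AdaBoost.RT-style conversion: compute the absolute relative error (ARE)
    of each sample and mark predictions with ARE <= phi as 'correct'."""
    are = np.abs((y_pred - y_true) / y_true)   # absolute relative error
    return are, are <= phi                     # True = acceptable prediction


y_true = np.array([10.0, 20.0, 5.0, 8.0])
y_pred = np.array([10.5, 18.0, 5.1, 9.6])
are, correct = classify_by_are(y_pred, y_true, phi=0.05)
# ARE = [0.05, 0.10, 0.02, 0.20] -> correct = [True, False, True, False]
```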
In this study, we altered the traditional Adaboost.RT algorithm to use three different types of base prediction models, namely, SVR, BPNN, and ELM. The initial weight, or distribution, of each data sample was equal: with m training samples, each starts at 1/m. The distribution was updated after evaluating the prediction result of SVR against the threshold value ϕ, and updated twice more for BPNN and ELM. The threshold value ϕ of Adaboost.RT was calculated adaptively using Zhang and Yang's method [31] instead of being pre-set.
The customized Adaboost.RT algorithm is elaborated in Algorithm 1. The overall flowchart is depicted in Figure 1.
Algorithm 1. A customized Adaboost.RT algorithm for tunnel settlement forecasting
Input: Training dataset M, weak learning algorithm (base learner) l, integer T specifying the number of iterations (machines), and threshold ϕ for differentiating correct and incorrect predictions.
Initialization: Error rate εt, sample distribution Dt(i) = 1/m, machine number or iteration t = 1.
Iteration: While t < T:
Step 1: Calling the base learner, providing it with the distribution Dt(i).
Step 2: Building a regression model:
$f_t(x) \to y$

Step 3: Calculating absolute relative error for each training example as
$$ARE_t(i) = \left| \frac{f_t(x_i) - y_i}{y_i} \right|$$

Step 4: Calculating the error rate:
For $f_t(x)$: $$\varepsilon_t = \sum_{i:\, ARE_t(i) > \phi} D_t(i)$$

Step 5: Setting $\beta_t = (\varepsilon_t)^h$, where h = 1, 2, or 3 (linear, square, or cubic).
Step 6: Updating distribution Dt(i)
$$D_{t+1}(i) = \frac{D_t(i)}{Z_t} \times \begin{cases} \beta_t, & \text{if } ARE_t(i) \le \phi \\ 1, & \text{otherwise} \end{cases}$$
where Zt is a normalization factor chosen such that Dt+1 will be a distribution.
Step 7: Adjusting ϕ according to the algorithm proposed in [31].
Step 8: Setting t = t + 1
Output: Outputting the ensemble learner:
$$f_{ensemble}(x) = \frac{\sum_{t=1}^{T} \log\!\left(\frac{1}{\beta_t}\right) f_t(x)}{\sum_{t=1}^{T} \log\!\left(\frac{1}{\beta_t}\right)}$$
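A minimal sketch of Algorithm 1 might look as follows. It assumes heterogeneous base learners with scikit-learn-style fit/predict interfaces, approximates "providing the learner with distribution D" by weighted resampling, and omits the adaptive threshold adjustment of Step 7; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np


def adaboost_rt(X, y, learners, phi=0.1, h=2):
    """Sketch of the customized AdaBoost.RT loop with heterogeneous learners.
    phi is the ARE threshold; h selects linear/square/cubic beta (Step 5)."""
    m = len(y)
    D = np.full(m, 1.0 / m)                    # initial sample distribution
    rng = np.random.default_rng(0)
    models, betas = [], []
    for learner in learners:                   # t = 1 .. T
        # Weighted resampling stands in for passing D to the learner (Step 1)
        idx = rng.choice(m, size=m, p=D)
        learner.fit(X[idx], y[idx])
        are = np.abs((learner.predict(X) - y) / y)   # Step 3: ARE per sample
        eps = D[are > phi].sum()               # Step 4: error rate w.r.t. phi
        eps = min(max(eps, 1e-10), 1 - 1e-10)  # keep beta well-defined
        beta = eps ** h                        # Step 5
        D = np.where(are <= phi, D * beta, D)  # Step 6: down-weight easy samples
        D = D / D.sum()                        # normalization factor Z_t
        models.append(learner)
        betas.append(beta)

    def predict(Xq):
        # Output: log(1/beta_t)-weighted average of the base predictions
        w = np.log(1.0 / np.array(betas))
        preds = np.stack([mdl.predict(Xq) for mdl in models])
        return (w[:, None] * preds).sum(axis=0) / w.sum()

    return predict
```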

4. Results

A real-world dataset that was collected by a tunnel construction company located in Zhuhai City, China in 2015 was used. The data was recorded in chronological order for each collection point of the subway tunnel construction (attached as Supplementary File: Table S1). It is a single-dimension time series dataset. The tunnel ground surface points are indexed from 180 to 250 (see Supplementary Table S1). Among the 70 ground surface points, nine representative points are selected along the tunnel medial axis. The selected points are indexed: 181, 182, 184, 188, 189, 190, 210, 225, and 235.
Since the sample data form a one-dimensional time series, the sample dimension is too low. A suitable rolling window size was therefore selected to expand the original single-dimensional data into multi-dimensional data [5]. According to the dataset scale and experience, the best rolling window size lies between 1 and 20; in the actual experiments, it was set to 3. The original one-dimensional samples were thus expanded into three-dimensional samples, solving the low-dimension problem. In addition, for each tunnel ground surface point, 5/6 of the records were used as the training dataset and the remaining 1/6 were treated as testing data for verification purposes.
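The rolling-window expansion and the chronological 5/6–1/6 split described above can be sketched as follows (function names are illustrative):

```python
import numpy as np


def rolling_window_dataset(series, window=3):
    """Expand a univariate settlement series into (X, y) pairs:
    each sample uses `window` past values to predict the next one."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y


def train_test_split_chrono(X, y, train_frac=5 / 6):
    """Chronological split: first 5/6 of records for training, last 1/6 for testing."""
    n_train = int(len(y) * train_frac)
    return X[:n_train], y[:n_train], X[n_train:], y[n_train:]
```

For example, a settlement record of 24 measurements with window size 3 yields 21 three-dimensional samples, of which the first 17 train the model and the last 4 verify it.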
The subway tunnel surface settlement records from Zhuhai rail transit were selected for simulation, and the threshold value was adjusted by a bisection method over multiple training runs to obtain the final results.
To show the superiority of the proposed Adaboost.RT framework integrating multiple models, SVR, BPNN, and ELM were also developed for predicting the tunnel surface settlement. Figures 2–7 show the forecasting results for surface points 181, 184, 188, 189, 201, and 220. The testing results are enlarged in subfigures.
The testing computer consists of an Intel Core (TM) i7-8700K CPU @ 3.70 GHz, an NVIDIA GeForce GTX 1080 graphics card with 8 GB of graphical memory, and 16 GB of RAM, running Python 3.7 (64-bit) and Keras 2.0.3. Since all tested methods are traditional machine learning models, each test finished within one minute.
All results are shown in different colors: the proposed ensemble learning algorithm in red, the actual data points in black, and the remaining compared methods in other colors. The figures clearly show that integrating multiple models with Adaboost.RT yields visibly higher prediction accuracy.
The performance of each method was evaluated by three error measurement metrics, namely, root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The formulas of the three error metrics are listed in Equations (1)–(3):
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i' - y_i\right)^2} \quad (1)$$

$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left| y_i' - y_i \right| \quad (2)$$

$$MAPE = \frac{1}{n}\sum_{i=1}^{n}\left| \frac{y_i' - y_i}{y_i} \right| \times 100\% \quad (3)$$
where y’ is the predicted value and y is the actual value. The results of the three error metrics (RMSE, MAE, and MAPE) for the different methods on tunnel surface settlement are listed in Table 1. For all cases, the Adaboost.RT integrating multiple models achieved the best RMSE, MAE, and MAPE values according to Table 1.
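The three metrics in Equations (1)–(3) are straightforward to compute; for reference, a NumPy sketch:

```python
import numpy as np


def rmse(y_true, y_pred):
    """Root mean squared error, Equation (1)."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))


def mae(y_true, y_pred):
    """Mean absolute error, Equation (2)."""
    return np.mean(np.abs(y_pred - y_true))


def mape(y_true, y_pred):
    """Mean absolute percentage error in %, Equation (3)."""
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100
```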
In this research, to further validate the performance and robustness of the proposed method, a comparative experiment with traditional Adaboost.RT models integrating one type of classifier (SVR, BPNN, or ELM) was conducted (Table 2). For example, the Adaboost.RT + SVR classifier is built by integrating three SVRs in the Adaboost.RT framework. The compared methods in Table 2 were trained on the same sets of data to obtain their final ensemble learners. The results show that the proposed extended Adaboost.RT algorithm outperforms the traditional Adaboost.RT models integrating a single type of classifier.
From Table 1 and Table 2, it is evident that the Adaboost.RT method integrating multiple models performs better than the rest of the compared methods. The reason is that there exists a nonlinear relationship between time and tunnel settlement data, which can hardly be captured by a single machine learning technique. In summary, the proposed Adaboost.RT algorithm is a more appropriate method for short-term tunnel surface settlement forecasting than traditional predictive models.

5. Conclusions, Limitations, and Future Works

This study proposed a novel extended Adaboost.RT algorithm to forecast ground surface settlement in tunnel construction processes. The proposed extended Adaboost.RT algorithm integrates different types of base classifiers instead of a single type (as in the traditional setting). The base classifiers were selected according to our previous study on the same dataset. In the training phase, the Adaboost.RT algorithm improves the different types of models through sample distributions that are constantly updated.
Comprehensive experiment results were shown in Section 4 to demonstrate the effectiveness of the proposed ensemble learning framework. First, we compared the proposed framework with individual machine learning classifiers, including SVR, BPNN, and ELM. Next, more experiments were carried out to compare the proposed framework with traditional ensemble learning approaches (i.e., the Adaboost.RT framework integrating a single type of classifier). The performance of the proposed ensemble learning framework is the best among all compared methods.
The main limitation of this study is that only one tunnel settlement dataset was tested. The dataset, attached to this manuscript as a Supplementary File (Table S1), contains data for only 70 surface points. Of these 70, we selected nine representative points along the tunnel medial axis to show the prediction results of the proposed Adaboost.RT forecasting framework. As future work, the proposed method will be applied to more real-world tunnel settlement datasets to verify its robustness. In addition, more extensions of the Adaboost algorithms will be implemented and evaluated.

Supplementary Materials

The following are available online at https://www.mdpi.com/2071-1050/12/1/232/s1. Table S1: Tunnel settlement data collected from Zhuhai subway construction project in China.

Author Contributions

Conceptualization, K.Y. and Y.D.; methodology, K.Y.; software, Y.D.; validation, K.Y., M.X. and Y.M.; formal analysis, Y.M.; investigation, K.Y.; resources, K.Y.; data curation, Y.M.; writing—original draft preparation, K.Y.; writing—review and editing, K.Y.; visualization, M.X.; supervision, K.Y.; project administration, K.Y.; funding acquisition, K.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the faculty start-up research grant of the National University of Singapore under grant number R-296-000-208-133 (K.Y.); the research project on the "13th Five-Year Plan" of higher education reform in Zhejiang Province under grant number JG20180526 (M.X.); the National Natural Science Foundation of China under grant 61972156; and the Program for Innovative Research Team in Science and Technology in Fujian Province University (Y.M.).

Acknowledgments

The authors would like to thank Shanghai Tunnel Engineering Co., Ltd for providing the tunnel settlement data from Zhuhai subway construction project in China.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Zhang, L.; Wu, X.; Ding, L.; Skibniewski, M.J. A novel model for risk assessment of adjacent buildings in tunneling environments. Build. Environ. 2013, 65, 185–194.
2. Hu, M.; Feng, X.; Ji, Z.; Yan, K.; Zhou, S. A novel computational approach for discord search with local recurrence rates in multivariate time series. Inf. Sci. 2019, 477, 220–233.
3. Yan, K.; Wang, X.; Du, Y.; Jin, N.; Huang, H.; Zhou, H. Multi-Step Short-Term Power Consumption Forecasting with a Hybrid Deep Learning Strategy. Energies 2018, 11, 3089.
4. Li, H.Z.; Guo, S.; Li, C.J.; Sun, J.Q. A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm. Knowl. Based Syst. 2013, 37, 378–387.
5. Hu, M.; Li, W.; Yan, K.; Ji, Z.; Hu, H. Modern Machine Learning Techniques for Univariate Tunnel Settlement Forecasting: A Comparative Study. Math. Probl. Eng. 2019, 2019.
6. Yan, K.; Li, W.; Ji, Z.; Qi, M.; Du, Y. A Hybrid LSTM Neural Network for Energy Consumption Forecasting of Individual Households. IEEE Access 2019, 7, 157633–157642.
7. Yan, X.; Cui, B.; Xu, Y.; Shi, P.; Wang, Z. A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019.
8. Moghaddam, M.A.; Golmezergi, R.; Kolahan, F. Multi-variable measurements and optimization of GMAW parameters for API-X42 steel alloy using a hybrid BPNN–PSO approach. Measurement 2016, 92, 279–287.
9. Yan, K.; Ji, Z.; Lu, H.; Huang, J.; Shen, W.; Xue, Y. Fast and accurate classification of time series data using extended ELM: Application in fault diagnosis of air handling units. IEEE Trans. Syst. Man Cybern. Syst. 2017, 49, 1349–1356.
10. Yan, K.; Zhong, C.; Ji, Z.; Huang, J. Semi-supervised learning for early detection and diagnosis of various air handling unit faults. Energy Build. 2018, 181, 75–83.
11. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139.
12. Xu, Y.; Gao, W.; Zeng, Q.; Wang, G.; Ren, J.; Zhang, Y. A feasible fuzzy-extended attribute-based access control technique. Secur. Commun. Netw. 2018, 2018.
13. Fang, Y.S.; Wu, C.T.; Chen, S.F.; Liu, C. An estimation of subsurface settlement due to shield tunneling. Tunn. Undergr. Space Technol. 2014, 44, 121–129.
14. Islam, M.N.; Gnanendran, C.T. Elastic-viscoplastic model for clays: Development, validation, and application. J. Eng. Mech. 2017, 143, 04017121.
15. Mei, G.; Zhao, X.; Wang, Z. Settlement prediction under linearly loading condition. Mar. Georesour. Geotechnol. 2015, 33, 92–97.
16. Hu, M.; Ji, Z.; Yan, K.; Guo, Y.; Feng, X.; Gong, J.; Dong, L. Detecting anomalies in time series data via a meta-feature based approach. IEEE Access 2018, 6, 27760–27776.
17. Ocak, I.; Seker, S.E. Calculation of surface settlements caused by EPBM tunneling using artificial neural network, SVM, and Gaussian processes. Environ. Earth Sci. 2013, 70, 1263–1276.
18. Azadi, M.; Pourakbar, S.; Kashfi, A. Assessment of optimum settlement of structure adjacent urban tunnel by using neural network methods. Tunn. Undergr. Space Technol. 2013, 37, 1–9.
19. Moghaddasi, M.R.; Noorian-Bidgoli, M. ICA-ANN, ANN and multiple regression models for prediction of surface settlement caused by tunneling. Tunn. Undergr. Space Technol. 2018, 79, 197–209.
20. Yan, K.; Ji, Z.; Shen, W. Online fault detection methods for chillers combining extended kalman filter and recursive one-class SVM. Neurocomputing 2017, 228, 205–212.
21. Yan, K.; Ma, L.; Dai, Y.; Shen, W.; Ji, Z.; Xie, D. Cost-sensitive and sequential feature selection for chiller fault detection and diagnosis. Int. J. Refrig. 2018, 86, 401–409.
22. Du, Y.; Yan, K.; Ren, Z.; Xiao, W. Designing localized MPPT for PV systems using fuzzy-weighted extreme learning machine. Energies 2018, 11, 2615.
23. Lu, H.; Meng, Y.; Yan, K.; Xue, Y.; Gao, Z. Classifying Non-linear Gene Expression Data Using a Novel Hybrid Rotation Forest Method. In International Conference on Intelligent Computing, Proceedings of the 13th International Conference, ICIC 2017, Liverpool, UK, 7–10 August 2017; Springer: Cham, Switzerland, 2017.
24. Zhong, C.; Yan, K.; Dai, Y.; Jin, N.; Lou, B. Energy Efficiency Solutions for Buildings: Automated Fault Diagnosis of Air Handling Units Using Generative Adversarial Networks. Energies 2019, 12, 527.
25. Tang, L.; Yu, L.; Wang, S.; Li, J.; Wang, S. A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting. Appl. Energy 2012, 93, 432–443.
26. Li, S.; Wang, P.; Goel, L. A novel wavelet-based ensemble method for short-term load forecasting with hybrid neural networks and feature selection. IEEE Trans. Power Syst. 2015, 31, 1788–1798.
27. Wang, H.Z.; Li, G.Q.; Wang, G.B.; Peng, J.C.; Jiang, H.; Liu, Y.T. Deep learning based ensemble approach for probabilistic wind power forecasting. Appl. Energy 2017, 188, 56–70.
28. Drucker, H. Improving regressors using boosting techniques. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML), Nashville, TN, USA, 8–12 July 1997; Volume 97, pp. 107–115.
29. Solomatine, D.P.; Shrestha, D.L. AdaBoost.RT: A boosting algorithm for regression problems. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Budapest, Hungary, 25–29 July 2004; Volume 2, pp. 1163–1168.
30. Yan, K.; Lu, H. Evaluating ensemble learning impact on gene selection for automated cancer diagnosis. In International Workshop on Health Intelligence; Springer: Cham, Switzerland, 2019; pp. 183–186.
31. Zhang, P.; Yang, Z. A robust AdaBoost.RT based ensemble extreme learning machine. Math. Probl. Eng. 2015, 2015.
Figure 1. Overall flowchart of the customized Adaboost.RT algorithm integrating multiple models. Abbreviations: SVR, support vector regression; BPNN, back-propagation neural network; ELM, extreme learning machine.
Figure 2. Prediction results of tunnel surface point number 181. The testing results are enlarged in the subfigure.
Figure 3. Prediction results of tunnel surface point number 184.
Figure 4. Prediction results of tunnel surface point number 188.
Figure 5. Prediction results of tunnel surface point number 189.
Figure 6. Prediction results of tunnel surface point number 201.
Figure 7. Prediction results of tunnel surface point number 220.
Table 1. Prediction results of all tunnel surface point settlements by different methods.
| Point | Proposed RMSE | Proposed MAE | Proposed MAPE (%) | SVR RMSE | SVR MAE | SVR MAPE (%) | BPNN RMSE | BPNN MAE | BPNN MAPE (%) | ELM RMSE | ELM MAE | ELM MAPE (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 181 | 0.2641 | 0.2041 | 0.5488 | 0.4158 | 0.3684 | 0.9950 | 1.7444 | 1.6349 | 4.4094 | 2.1531 | 2.0166 | 5.4360 |
| 182 | 0.2945 | 0.2470 | 1.2853 | 0.3675 | 0.3113 | 1.6157 | 0.3807 | 0.3168 | 1.6391 | 0.3067 | 0.2548 | 1.3304 |
| 184 | 0.2001 | 0.1587 | 0.4796 | 0.3960 | 0.3653 | 1.1017 | 0.4342 | 0.4027 | 1.2115 | 0.5547 | 0.5169 | 1.5559 |
| 188 | 0.1953 | 0.1370 | 0.7826 | 0.3773 | 0.3071 | 1.6968 | 0.4038 | 0.3261 | 1.7950 | 0.2463 | 0.1985 | 1.1090 |
| 189 | 0.1597 | 0.1325 | 0.5083 | 0.1702 | 0.1484 | 0.5587 | 0.4949 | 0.4084 | 1.5241 | 0.3200 | 0.2825 | 1.0639 |
| 190 | 0.1375 | 0.0867 | 0.3058 | 0.1610 | 0.1075 | 0.3793 | 0.1455 | 0.0973 | 0.3448 | 0.1650 | 0.1166 | 0.4132 |
| 210 | 0.1530 | 0.1239 | 0.4873 | 0.3358 | 0.2157 | 0.8560 | 0.2274 | 0.1741 | 0.6898 | 0.2254 | 0.1786 | 0.7066 |
| 225 | 0.1821 | 0.1473 | 0.3585 | 0.1986 | 0.1587 | 0.3862 | 0.2822 | 0.2132 | 0.5193 | 0.2624 | 0.1974 | 0.4808 |
| 235 | 0.2619 | 0.1979 | 0.9103 | 0.3779 | 0.3272 | 1.4988 | 0.5462 | 0.4614 | 2.1122 | 0.2867 | 0.2498 | 1.1457 |

Abbreviations: RMSE, root mean squared error; MAE, mean absolute error; MAPE, mean absolute percentage error.
Table 2. Prediction results of all tunnel surface point settlements by different Adaboost.RT algorithms.
| Point | Proposed RMSE | Proposed MAE | Proposed MAPE (%) | Adaboost.RT+SVR RMSE | Adaboost.RT+SVR MAE | Adaboost.RT+SVR MAPE (%) | Adaboost.RT+BPNN RMSE | Adaboost.RT+BPNN MAE | Adaboost.RT+BPNN MAPE (%) | Adaboost.RT+ELM RMSE | Adaboost.RT+ELM MAE | Adaboost.RT+ELM MAPE (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 181 | 0.2641 | 0.2041 | 0.5488 | 0.3901 | 0.3395 | 0.9179 | 0.4201 | 0.3616 | 0.9723 | 1.2479 | 1.1167 | 3.0138 |
| 182 | 0.2945 | 0.2470 | 1.2853 | 0.3082 | 0.2645 | 1.3765 | 0.3019 | 0.2629 | 1.3661 | 0.3305 | 0.2862 | 1.4857 |
| 184 | 0.2001 | 0.1587 | 0.4796 | 0.1943 | 0.1618 | 0.4901 | 0.2392 | 0.1847 | 0.5615 | 0.4215 | 0.3913 | 1.1773 |
| 188 | 0.1953 | 0.1370 | 0.7826 | 0.2193 | 0.1569 | 0.8887 | 0.2981 | 0.2406 | 1.3338 | 0.1976 | 0.1475 | 0.8324 |
| 189 | 0.1597 | 0.1325 | 0.5083 | 0.2609 | 0.2131 | 0.8072 | 0.3616 | 0.2963 | 1.1064 | 0.3674 | 0.3031 | 1.1320 |
| 190 | 0.1375 | 0.0867 | 0.3058 | 0.1700 | 0.1168 | 0.4126 | 0.1569 | 0.1190 | 0.4271 | 0.1424 | 0.0933 | 0.3305 |
| 210 | 0.1530 | 0.1239 | 0.4873 | 0.2158 | 0.1829 | 0.7137 | 0.1884 | 0.1538 | 0.6063 | 0.1665 | 0.1410 | 0.5518 |
| 225 | 0.1821 | 0.1473 | 0.3585 | 0.1969 | 0.1565 | 0.3811 | 0.2090 | 0.1620 | 0.3945 | 0.2142 | 0.1649 | 0.4011 |
| 235 | 0.2619 | 0.1979 | 0.9103 | 0.3590 | 0.3061 | 1.3999 | 0.3327 | 0.2865 | 1.3114 | 0.2701 | 0.2362 | 1.0834 |

Abbreviations: RMSE, root mean squared error; MAE, mean absolute error; MAPE, mean absolute percentage error.

Share and Cite

MDPI and ACS Style

Yan, K.; Dai, Y.; Xu, M.; Mo, Y. Tunnel Surface Settlement Forecasting with Ensemble Learning. Sustainability 2020, 12, 232. https://doi.org/10.3390/su12010232
