Article

A New Algorithm for Predicting Dam Deformation Using Grey Wolf-Optimized Variational Mode Long Short-Term Neural Network

1 National Key Laboratory of Uranium Resource Exploration-Mining and Nuclear Remote Sensing, Nanchang 330013, China
2 School of Surveying and Geoinformation Engineering, East China University of Technology, Nanchang 330013, China
3 Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake of Ministry of Natural Resources, East China University of Technology, Nanchang 330013, China
4 Key Laboratory of Poyang Lake Wetland and Watershed Research, Ministry of Education, Jiangxi Normal University, Nanchang 330022, China
5 Hebei Institute of Investigation and Design of Water Conservancy and Hydropower Co., Ltd., Shijiazhuang 050085, China
6 Xuzhou Surveying & Mapping Research Institute Co., Ltd., Xuzhou 221000, China
7 School of Civil and Surveying & Mapping Engineering, Jiangxi University of Science and Technology, No. 86, Hongqi Ave., Ganzhou 341000, China
8 Hebei Water Conservancy Engineering Bureau Group Limited, Shijiazhuang 050021, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 3978; https://doi.org/10.3390/rs16213978
Submission received: 27 September 2024 / Revised: 19 October 2024 / Accepted: 23 October 2024 / Published: 26 October 2024
(This article belongs to the Special Issue Dam Stability Monitoring with Satellite Geodesy II)

Abstract: To address the problems of difficult model parameter selection, useful signal extraction, and improper signal decomposition in nonlinear, non-stationary dam displacement time series prediction methods, we propose a new prediction model combining grey wolf optimization, variational mode decomposition, and long short-term memory (GVLSTM). Firstly, we used the grey wolf optimization (GWO) algorithm to optimize the parameters of variational mode decomposition (VMD), obtaining the optimal parameter combination. Secondly, we used multiscale permutation entropy (MPE) as the criterion for signal screening, determining and reconstructing the effective modal components. Finally, a long short-term memory (LSTM) neural network was used to learn the dam deformation characteristics. The results show that the GVLSTM model can effectively reduce the estimation deviation of the prediction model. Compared with the VMDGRU and VMDANN models, the average RMSE and MAE values for each station improve by 19.11%~28.58% and 27.66%~29.63%, respectively. We used the coefficient of determination (R2) to judge the performance of the prediction model; R2 values of 0.95~0.97 indicate that our method performs well in predicting dam deformation. The proposed method offers high accuracy, reliability, and stability for dam deformation prediction.

1. Introduction

The high-precision prediction of the surface displacement and deformation of reservoir dams plays a vitally important role in the prevention of geological disasters [1,2]. According to their underlying theory, dam deformation prediction models can be divided into deterministic models, statistical regression models, and artificial intelligence models, among others [3]. However, these methods have limitations in the prediction process: commonly used single prediction models, including back propagation (BP) [4], grey models (GM) [5], and support vector machines (SVM) [6], suffer from complex structures, multicollinearity, sensitivity to parameters or noise, and poor prediction performance [7]. It is therefore imperative to integrate new intelligent deep learning methods to predict dam deformation.
In recent years, intelligent deep learning and machine learning methods have been used to predict displacement and deformation of settlements [2], landslides [8,9], and dams [10]. Dal (2014) calibrated different deterministic and probabilistic methods using case studies in the middle reaches of the Noce River basin in Basilicata (Italy) and compared them within the basin [11]. Su et al. (2018) combined support vector machines (SVM), phase space reconstruction, wavelet analysis, and particle swarm optimization (PSO) to establish a dam deformation prediction model [12]. Ranković (2014) used support vector regression (SVR) to accurately predict the tangential displacement of concrete dams [13]. Kang (2017) developed a deformation prediction model for concrete dam health monitoring based on an extreme learning machine (ELM) [14]. Mengxin et al. (2024) used the sparrow search algorithm (SSA) combined with the extreme gradient boosting algorithm (XGBoost) to perform complete ensemble empirical mode decomposition with adaptive noise and wavelet packet denoising on the dam time series [15]. Qu (2019) developed single-point and multi-point concrete dam deformation prediction models for health monitoring based on rough set (RS) theory and a long short-term memory (LSTM) network [16]. In response to the nonlinear and non-stationary temporal characteristics of dams, Lu et al. (2021) proposed a prediction model that combines variational mode decomposition (VMD) and long short-term memory (LSTM) neural networks, which transforms dam deformation time series decomposition into a variational solution problem [17]. Many studies have confirmed that complex dam deformation can be decomposed into relatively simple subsequences in multiple different frequency bands [18,19,20]. However, the selection of the key parameter (K) and penalty factor ( α ) in VMD is often based on empirical judgement in practical applications [21,22].
Improper selection of K and the penalty factor α may cause over-decomposition or under-decomposition of the signal. Mirjalili et al. (2014) judged the intrinsic mode functions (IMFs) produced under given VMD parameters according to a composite index [23]. This method can effectively remove noise from the original sequence while preserving the characteristics of the original signal; however, it has not yet been applied to dam deformation prediction.
To solve the problems of difficult model parameter selection, improper extraction of useful signals, and improper signal decomposition in nonlinear and non-stationary dam displacement time series prediction methods, in this study we introduce the grey wolf optimization (GWO) algorithm to optimize the selection of the VMD parameters and input the decomposed and reconstructed signal into the LSTM prediction model as eigenvalues. We thereby propose a new prediction model combining grey wolf optimization, variational mode decomposition, and long short-term memory, abbreviated as GVLSTM, which improves the accuracy of dam deformation prediction. The mean absolute error (MAE) and root mean square error (RMSE) were used to evaluate the accuracy of the prediction model, and R2 was used to judge its performance.
The organization of this work is as follows: Section 2 describes the dam data and mathematical methods. Section 3 presents the GWO-optimized VMD parameters, the optimal parameter selection, and the results of GVLSTM compared with other hybrid models. Section 4 discusses the accuracy of the GVLSTM and VMDLSTM models and shows the improvement achieved by GVLSTM. Section 5 concludes the work.

2. Data and Methods

2.1. Data

In this study, we used data for dam displacement and deformation recorded at the YangHe reservoir between January 2018 and November 2022 (Figure 1). Observations were made at daily intervals in three directions: north (N), east (E), and vertical settlement (U). A dataset of six stations at the reservoir (Sub 29G, Ma 18F, Pai 20B, Ying 5E, Yuan 9C, and Yuan 10D) was used; owing to the differing station-naming methods, these were relabelled stations 001~006. There were 1800 sets of data for each station; these were divided into a training set, a validation set, and a test set of 1300, 200, and 300 sets of data, respectively.

2.2. Methods

2.2.1. Grey Wolf Optimization

GWO is a swarm intelligence optimization algorithm based on the social hierarchy and hunting behaviour of grey wolf populations [24]. The population is divided into a leader wolf α 0 and β , δ , and ω wolves; in the algorithm, the optimal solution corresponds to the leader wolf guiding the pack to hunt prey. The suboptimal solutions correspond to wolf β assisting the leader wolf α 0 ; wolf δ obeying the orders of wolves α 0 and β and taking responsibility for reconnaissance and sentry duty; wolf β being demoted to wolf δ under the supervision of wolf α 0 ; and the ω wolves surrounding wolves α 0 and β , or updating their positions [25,26].
(1)
Social hierarchy
The social hierarchy of wolves α 0 , β , δ , and ω in grey wolf populations is shown in Figure 2. In the figure, wolf α 0 represents the optimal solution, wolves β and δ are the wolves with the second- and third-best fitness, respectively, and the remaining ω wolves are the candidate solutions [27].
(2)
Surround
In the optimization process, wolf packs surround prey to search for the optimal hunting route; this may be expressed in a mathematical model, as follows [28]:
$$D = \left| C \cdot X_p(l) - X(l) \right| \tag{1}$$
$$X(l+1) = X_p(l) - A \cdot D \tag{2}$$
$$A = 2a \cdot r_1 - a \tag{3}$$
$$C = 2 \cdot r_2 \tag{4}$$
In the above equations, l is the current iteration count; " · " denotes multiplication; D is the distance between the grey wolf and its prey; X_p is the prey position vector; X is the grey wolf position vector; A and C are coefficient vectors whose values may be adjusted so that positions around the optimal solution can be searched, ensuring the local search capability of the algorithm; a = 2 − 2(l/l_max) is the convergence factor, where l_max is the maximum number of iterations, so a decreases linearly from 2 to 0 over the iterations; finally, r_1 and r_2 are random vectors within the range [0,1].
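A minimal sketch of how the convergence factor a and the coefficient vectors A and C behave, under our own function and variable names (not from the paper):

```python
import random

def gwo_coefficients(l, max_iter, dim, rng):
    """Convergence factor a and coefficient vectors A, C at iteration l."""
    a = 2 - 2 * (l / max_iter)                           # a decays linearly from 2 to 0
    A = [2 * a * rng.random() - a for _ in range(dim)]   # each component of A lies in [-a, a]
    C = [2 * rng.random() for _ in range(dim)]           # each component of C lies in [0, 2]
    return a, A, C

rng = random.Random(0)
a, A, C = gwo_coefficients(l=5, max_iter=10, dim=3, rng=rng)
# halfway through the iterations, a = 1, so A is confined to [-1, 1]
```

As a shrinks, the range of A shrinks with it, which is what drives the pack from exploration toward exploitation.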
(3)
Hunt
After surrounding the prey, wolves α 0 , β , and δ guide the pack to hunt. In each iteration, the position of wolf ω is updated according to the positions of wolves α 0 , β , and δ , which have the best fitness and gradually approach the prey; the position of wolf ω is thereby optimized. The position update may be expressed mathematically as follows [27]:
$$D_{\alpha_0} = \left| C_1 \cdot X_{\alpha_0}(l) - X(l) \right|, \quad D_{\beta} = \left| C_2 \cdot X_{\beta}(l) - X(l) \right|, \quad D_{\delta} = \left| C_3 \cdot X_{\delta}(l) - X(l) \right| \tag{5}$$
$$X_1 = X_{\alpha_0} - A_1 \cdot D_{\alpha_0}, \quad X_2 = X_{\beta} - A_2 \cdot D_{\beta}, \quad X_3 = X_{\delta} - A_3 \cdot D_{\delta} \tag{6}$$
$$X(l+1) = \frac{X_1 + X_2 + X_3}{3} \tag{7}$$
In the above, Equations (5) and (6) define the forward stride and direction of wolf ω towards wolves α 0 and β , δ . Equation (7) can determine the location of the prey.
(4)
Attack
After determining the final location of the prey, the grey wolves attack it to complete the hunting process. To simulate the approach of a grey wolf to its prey, the value of a decreases linearly, and the fluctuation range of A decreases accordingly; i.e., as a decreases from 2 to 0, the corresponding range [ − a , a ] of A also shrinks. The wolves act according to the value of A : when | A | < 1 , the wolves concentrate on attacking and may fall into a local optimal solution; otherwise, the wolves disperse and look for better prey, seeking the global optimal solution.
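Putting Equations (1)–(7) together, the whole GWO loop can be sketched as follows. This is a toy illustration on the sphere function with our own parameter choices and names, not the configuration used in the paper:

```python
import random

def gwo_minimize(f, dim, lo, hi, n_wolves=12, max_iter=60, seed=1):
    """Minimal grey wolf optimizer sketch: minimize f over [lo, hi]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for l in range(max_iter):
        wolves.sort(key=f)                      # alpha, beta, delta lead the pack
        leaders = [w[:] for w in wolves[:3]]    # copies, so updates don't shift the leaders
        a = 2 - 2 * l / max_iter                # convergence factor of Eq. (3)
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])   # distance to a leader, Eq. (5)
                    x += leader[d] - A * D          # candidate position, Eq. (6)
                w[d] = min(max(x / 3, lo), hi)      # average of the three, Eq. (7)
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])

# minimize the sphere function as a toy check
best, val = gwo_minimize(lambda x: sum(v * v for v in x), dim=2, lo=-5.0, hi=5.0)
```

In the paper's GWO-VMD pipeline, the fitness f would instead be the envelope entropy of the IMFs obtained from a candidate [ K , α ].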

2.2.2. Variational Mode Decomposition

The VMD algorithm decomposes the signal into K modal functions u_k [29], each with a centre frequency ω_k , by setting the mode number K , the penalty parameter α , and the ascent step τ of the dual update. The VMD algorithm exhibits strong robustness with respect to noise and sampling errors, and its constrained variational problem can be expressed as follows [30]:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k} u_k = f \tag{8}$$
In the above, t is the time, f is the original signal, δ ( t ) is the pulse (Dirac) function, u_k is the k-th modal function, ω_k is the centre frequency of each mode, e^{−jω_k t} shifts the spectrum of each mode's analytic signal to baseband at the estimated centre frequency, ‖·‖_2 is the L2 norm, s.t. denotes the constraint, and Σ_k u_k is the sum of all the modes.
By introducing a quadratic penalty factor α and the Lagrange multiplier λ ( t ) , the problem may be converted to an unconstrained variational problem, with the resulting augmented Lagrange expression as follows [31]:
$$L\left(\{u_k\},\{\omega_k\},\lambda\right) = \alpha \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k} u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_{k} u_k(t) \right\rangle \tag{9}$$
Iterative updates of û_k^{n+1}, ω_k^{n+1}, and λ̂^{n+1} are made using the alternate direction method of multipliers (ADMM); the saddle point of Equation (9) is thereby obtained, which is the optimal solution of Equation (8), as described in [29,32,33].
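For reference, the frequency-domain ADMM updates from Dragomiretskiy and Zosso's original VMD formulation are a mode update by Wiener filtering, a centre-frequency update as the spectral centre of gravity, and dual ascent with step τ:

```latex
\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}(\omega)/2}{1 + 2\alpha\,(\omega - \omega_k)^2}, \qquad
\omega_k^{n+1} = \frac{\int_0^{\infty} \omega\, \big|\hat{u}_k^{n+1}(\omega)\big|^2 \, d\omega}{\int_0^{\infty} \big|\hat{u}_k^{n+1}(\omega)\big|^2 \, d\omega}, \qquad
\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^{n}(\omega) + \tau \left( \hat{f}(\omega) - \sum_{k} \hat{u}_k^{n+1}(\omega) \right)
```

Here hats denote Fourier transforms; the denominator of the mode update shows directly how the penalty α controls the bandwidth of each extracted mode.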

2.2.3. Long Short-Term Memory

LSTM overcomes the shortcoming of recurrent neural networks (RNNs), which cannot memorize data at distant positions in a sequence [34,35,36]. The LSTM network structure, called a cell, includes an input layer, a hidden layer, and an output layer [37]. Each hidden layer controls the storage and access of data through input gates, forget gates, and output gates; the LSTM network module is shown in Figure 3. More detailed information about LSTM can be obtained from Hochreiter and Schmidhuber (1997) [36,38].
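For reference, the standard LSTM gate equations of Hochreiter and Schmidhuber's formulation (with σ the sigmoid function, ⊙ the element-wise product, and [h_{t−1}, x_t] the concatenation of the previous hidden state and the current input) are:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) && \text{(forget gate)} \\
i_t &= \sigma\!\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) && \text{(input gate)} \\
\tilde{c}_t &= \tanh\!\left(W_c \cdot [h_{t-1}, x_t] + b_c\right) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)} \\
o_t &= \sigma\!\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```

The additive cell-state update is what lets gradients flow across long time lags, which is the property exploited here for long deformation series.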

2.2.4. GWO-VMD Model

The settings of the parameter K and the penalty factor α have a significant impact on the results of VMD when the other parameters are set to default values. When K is too small, the signal may be incompletely decomposed; setting too large a value for K may cause signal over-decomposition, as well as modal aliasing [15]. To solve this problem, we used GWO to optimize the VMD parameters. When using the GWO algorithm to optimize VMD, it is very important to choose a suitable fitness function as the optimization criterion. In this study, envelope entropy was selected as the fitness function optimized by GWO, as it best reflects the sparsity and uncertainty of the original signal. The envelope entropy may be expressed as in Equation (10), as follows [39,40]:
$$E_p = -\sum_{j=1}^{N} p_j \lg p_j, \qquad p_j = a(j) \Big/ \sum_{j=1}^{N} a(j) \tag{10}$$
In the above, N is the number of sampling points of the signal, p_j is the normalized form of a ( j ) , and a ( j ) is the envelope signal obtained by Hilbert demodulation of the signal x ( j ) .
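Equation (10) can be sketched directly in Python, assuming the Hilbert demodulation step has already produced the envelope a(j) (function names and example values below are ours, for illustration only):

```python
import math

def envelope_entropy(envelope):
    """E_p of Eq. (10): Shannon entropy (base 10) of the normalized envelope a(j)."""
    total = sum(envelope)
    p = [a / total for a in envelope]           # p_j = a(j) / sum_j a(j)
    return -sum(pj * math.log10(pj) for pj in p if pj > 0)

# a flat envelope (little structure) maximizes E_p, while an impulsive,
# sparse envelope gives a smaller E_p -- which is why minimum envelope
# entropy is a sensible GWO fitness criterion for choosing [K, alpha]
flat = [1.0] * 8
sparse = [7.0] + [0.2] * 7
```

For the flat envelope, E_p equals lg 8 ≈ 0.903, the maximum possible for N = 8; the sparse envelope scores lower.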
The modal components of VMD contain considerable noise and complex signals. After optimizing the VMD parameters with GWO, we adopted Mirjalili's concept and used multiscale permutation entropy (MPE) as the criterion for distinguishing noise from signal [23]. After the MPE [41,42] of each IMF component of the VMD is calculated, low-frequency signals and high-frequency noise may be separated by setting an MPE threshold. When there is less noise in an IMF component, the signal is more regular and the MPE value is smaller; conversely, when there is more noise in the IMF component, the MPE value is larger [43]. After multiple experiments, we found that the MPE threshold used in Lu's (2023) research could effectively filter out noise in deformation monitoring data; the MPE threshold was therefore set to 0.6, and the low-frequency IMF components below this threshold were reconstructed into new signals. The specific steps of the GWO optimization of VMD were as follows:
Step 1: Initialize the GWO algorithm parameters, setting the number of wolves to 30 and the maximum number of iterations to 10. Based on considerations of computational efficiency and algorithm accuracy, we set a K value range of [3,12] and an α value range of [100, 4000] to randomly generate the positions of the grey wolves [37].
Step 2: According to the optimal parameter combination [ K , α ] obtained in Step 1, calculate the IMF fitness value through envelope entropy after VMD, and update the positions of the α 0 , β , and δ wolves.
Step 3: The MPE threshold judgement determines the effective IMF components based on the threshold and reconstructs them into the signal, while the remaining components are reconstructed as noise. In order to avoid over-decomposition of the signal, the α 0 wolf, the β wolf, and any individual δ wolves that satisfy Equation (2) do not participate, and the wolf positions are then updated. Letting X_MPE denote the MPE value of an IMF, the judgement criterion is set thus:
$$X_{\mathrm{MPE}}(\mathrm{IMF}_i) > X_{\mathrm{MPE}}(\mathrm{IMF}_{i+1})$$
Step 4: Update the position of the grey wolf, iterate, and return to Step 2 until the optimal solution for [ K , α ] is obtained.
Step 5: Calculate the MPE value of IMF, reconstruct the sequence into a denoised signal, and end the optimization of VMD by GWO [32]. The GWO-VMD optimization process is shown in Figure 4.
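The MPE screening used in Steps 3 and 5 can be sketched in pure Python as a minimal Bandt-Pompe permutation entropy with coarse-graining (function names and defaults are ours, not the exact implementation used in the paper):

```python
import math

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy (Bandt-Pompe) of order m and delay tau."""
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        window = tuple(x[i + j * tau] for j in range(m))
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))  # normalize to [0, 1]

def multiscale_pe(x, m=3, scales=(1, 2, 3)):
    """MPE: permutation entropy of coarse-grained versions of the series."""
    values = []
    for s in scales:
        # coarse-grain: average non-overlapping windows of length s
        coarse = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        values.append(permutation_entropy(coarse, m))
    return values
```

A perfectly monotone series has entropy 0 (one ordinal pattern), while an irregular series approaches 1; thresholding these values at 0.6, as in the paper, separates regular low-frequency IMFs from noisy ones.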

2.2.5. Construction of a New GVLSTM Model

According to the above algorithm model, the GVLSTM model used in the study was constructed using the following steps:
(1)
Obtain the optimal parameter combination [ K , α ] using GWO.
(2)
Judge the effective IMF components and noise according to the MPE, and reconstruct the signal.
(3)
Input the reconstructed signal into the LSTM model as an eigenvalue for prediction.
(4)
Evaluate the accuracy of the prediction results. Figure 5 shows the framework diagram of the GVLSTM prediction model constructed in the study.

2.2.6. Evaluation Index

We used MAE and RMSE as evaluation indicators for model prediction accuracy. The mathematical expressions for MAE and RMSE are as follows [32,44,45]:
(1)
RMSE
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$
(2)
MAE
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
where y i represents the actual GNSS data value, y ^ i represents the predicted results of each model, and n represents the number of GNSS data. The smaller the values of RMSE and MAE, the higher the prediction accuracy of the model. Conversely, high RMSE and MAE values indicate that the prediction accuracy of the model is low [46,47].
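The two evaluation indexes can be sketched in a few lines of Python (function names are ours):

```python
import math

def rmse(y, y_hat):
    """Root mean square error between observed y and predicted y_hat."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(y, y_hat)) / len(y))

def mae(y, y_hat):
    """Mean absolute error between observed y and predicted y_hat."""
    return sum(abs(a - p) for a, p in zip(y, y_hat)) / len(y)

# a perfect prediction scores zero under both indexes
assert rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```

Because RMSE squares the residuals, it penalizes large prediction errors more heavily than MAE; reporting both, as the paper does, separates typical error size from outlier sensitivity.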

3. Results

3.1. GWO of VMD Parameter Selection

We used datasets from six GNSS stations at the YangHe reservoir. First, we performed VMD using the GWO method to determine the optimal combination of parameters [ K , α ]. The envelope entropy was used as the fitness function defining the optimization criterion. Taking station 001 as an example, the variation of the GWO fitness with iteration count during the optimization process is shown in Figure 6. The distribution of optimal parameter combinations in different directions for each station is shown in Figure 7.
It can be seen in Figure 7 that K values fluctuate considerably in the E direction, ranging mainly from 5 to 10, while corresponding values in the N and U directions range from 8 to 10. In addition, α values mainly range from 200 to 1000 in the E and U directions, but fluctuate more considerably in the N direction, ranging from 200 to 4000.
After decomposing the VMD by GWO, taking station 001 as an example, ten IMF values were decomposed in the N and E directions, and eight IMF values were obtained in the U direction, as shown in Figure 8. It is evident in Figure 8 that the low-frequency components are mainly concentrated in the first-order mode. After decomposing the six stations, the low-frequency components were mainly first- and second-order modes.
MPE is used as a standard for judging noise and signal. In the present study, to obtain the MPE value of each component, the IMF components decomposed by VMD were calculated. Taking station 001 as an example, the MPE values of components in different directions are shown in Table 1. The MPE threshold set in the present study was 0.6; components below the threshold were classified as low-frequency signal components, and those above 0.6 were classified as noise components. The higher the MPE value, the more noise in the IMF component, and the more irregular the signal. The MPE distribution of the components at each station is shown in Figure 9.
It can be seen in Figure 9 that the maximum number of IMFs after VMD at each site is 10 and the minimum is 5. The MPE values of IMF1~IMF10 fluctuate in a range of 0.5~0.8, indicating that the random fluctuation of the series increases, the noise component is considerable, and the signal is irregular. However, the MPE values of IMF1~IMF2 at most stations were less than 0.6, so IMF components with MPE values less than 0.6 were reconstructed into low-frequency signals.
After GWO optimized the parameters of VMD, the reconstructed signals of each station were brought into the LSTM model as eigenvalues for prediction, to complete the construction of the GVLSTM method. In view of the nonlinear displacement sequence of the GNSS station at the YangHe reservoir used in the present study, the applicability of the GVLSTM method was verified by a performance comparison with different prediction models and the incorporation of VMD fusion into a new combined model.

3.2. Analysis of Prediction Accuracy of GVLSTM Model

To effectively predict the displacement time series of dam monitoring stations, and verify the reliability of the GVLSTM method proposed in the present study, we decomposed each station through VMD, and input the reconstructed signal into the gated recurrent unit (GRU) and artificial neural network (ANN) prediction model as eigenvalues, and then combined these into VMDGRU, VMDANN, and GVLSTM models for comparison purposes. Taking station 001 as an example, the prediction curve of the combined model in different directions is shown in Figure 10.
It is evident from Figure 10 that the results of the GVLSTM prediction model align more closely with the original sequence. In the N direction, the VMDGRU and VMDANN prediction curves for station 001 are generally shifted upwards. In the E direction, the VMDGRU prediction curve shifts downwards overall compared with that of VMDANN, showing that the VMDGRU predictions fit the original sequence poorly, with larger RMSE values, and also predict poorly in the U direction; for station 001, the first half of the VMDANN prediction curve shifts downwards, the second half shifts upwards, and the VMDGRU prediction curve shifts upwards overall. In summary, the prediction results of the GVLSTM model match the original sequence more closely, while the predictions of the VMDGRU and VMDANN models in different directions are either overestimated or underestimated. The evaluation indexes of the different combined models in different directions are shown in Table 2.
In Table 2, it can be seen that the RMSE values of the GVLSTM model are considerably lower, and its accuracy is greatly improved. In the N, E, and U directions, the RMSE and MAE values of the GVLSTM model are smaller than those of the VMDGRU and VMDANN models. Taking station 001 as an example: compared with the VMDANN model, the RMSE accuracy of the GVLSTM predictions in the N, E, and U directions is increased by about 32.61%, 14.89%, and 22.73%, respectively, and the MAE accuracy by about 35.00%, 11.11%, and 22.86%, respectively. Compared with the VMDGRU model, the RMSE accuracy of the GVLSTM predictions in the N, E, and U directions is increased by about 73.50%, 50.00%, and 54.05%, respectively, and the MAE accuracy by about 75.70%, 50.77%, and 55.00%, respectively. At the other stations, compared with the VMDANN model, the RMSE and MAE accuracies of the GVLSTM predictions are increased by 3.77%~56.92% and 2.38%~58.17%, respectively; compared with the VMDGRU model, they are increased by 50.28%~79.63% and 46.37%~78.81%, respectively. These results demonstrate that the GVLSTM model has a better prediction ability than the combined VMDGRU and VMDANN models.
In order to reflect the quality of the prediction model, the coefficient of determination R2 was introduced to further evaluate the prediction effect of the model [48]. The mathematical expression of the coefficient of determination may be expressed as follows [49,50]:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}{\sum_{i=1}^{n} \left( \bar{y} - y_i \right)^2}$$
In the above equation, y i represents the original time series data, y ^ i represents the prediction results of each model, y ¯ represents the average value of the original time series data, and n represents the number of original time series data. RMSE and MAE can better reflect the accuracy of prediction results, while R2 can better reflect the quality of prediction models. The range of R2 values is [0,1], and the closer R2 values are to 1, the better the prediction model [51]. The R2 prediction results of the combined model are shown in Figure 11.
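The coefficient of determination can likewise be sketched in a few lines of Python (function name is ours):

```python
def r_squared(y, y_hat):
    """Coefficient of determination R^2; values closer to 1 mean a better fit."""
    y_bar = sum(y) / len(y)
    ss_res = sum((p - a) ** 2 for a, p in zip(y, y_hat))   # residual sum of squares
    ss_tot = sum((y_bar - a) ** 2 for a in y)              # total sum of squares
    return 1 - ss_res / ss_tot

# a perfect prediction gives R^2 = 1; always predicting the mean gives 0
assert r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 1.0
```

A model that does no better than predicting the mean of the series scores 0, which makes R^2 a scale-free complement to RMSE and MAE.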
The larger the hexagonal area of the R2 result in Figure 11, the larger the R2 value, and the better the prediction model. Taking station 001 as an example, the maximum R2 value of the GVLSTM prediction model in the N direction is 0.93, that of the VMDGRU prediction model is 0.03, and that of the VMDANN prediction model is 0.85. The maximum R2 values for the E-direction GVLSTM, VMDGRU, and VMDANN prediction models are 0.79, 0.16, and 0.71, respectively, while the maximum R2 values for the U-direction GVLSTM, VMDGRU, and VMDANN prediction models are 0.85, 0.25, and 0.73, respectively. The R2 values of the GVLSTM prediction model in different directions at the other sites are about 0.90~0.97, those of the VMDGRU prediction model are about 0.1~0.77, and those of the VMDANN prediction model are about 0.65~0.89. The combined results of Figure 11 and Table 2 indicate that the GVLSTM model has the best predictive performance, followed by the VMDANN model, with the VMDGRU model performing worst. In summary, the R2 region of the GVLSTM prediction model is the largest and its predictions fit the original sequence best; after decomposition and reconstruction of the original sequence, the GVLSTM model predicts with higher accuracy, proving the effectiveness of the GVLSTM model's prediction.

4. Discussion

4.1. Quality Analysis of GVLSTM and VMDLSTM Model Prediction Results

Unlike in the GVLSTM model, in the VMDLSTM model the key parameter K and penalty factor α of VMD are often chosen empirically in practical applications, without optimizing the parameter selection using indexes such as the RMSE value of each IMF. Improper selection of K and α may cause over-decomposition or under-decomposition of the signal [52,53]. The new intelligent deep learning model of GVLSTM proposed in this paper optimizes the parameter values of VMD through GWO, and can produce the optimal parameter combination [ K , α ] more accurately. Figure 12 shows the prediction results of GVLSTM and VMDLSTM, taking station 001 as an example.
In Figure 12, it can be seen that the VMDLSTM curve shifts upwards in the N and U directions, with a downward shift in the E direction; the GVLSTM model thus fits the original sequence better. The prediction results of the VMDLSTM and GVLSTM models were evaluated by RMSE and MAE, and the experimental results showed that the difference between the R2 values of the two models after LSTM training was small, with these values lying in a range of 0.95~0.97. Figure 13 shows the evaluation of the prediction results of GVLSTM and VMDLSTM for different stations in different directions.

4.2. Evaluation of Improvements in Accuracy Indexes for GVLSTM and VMDLSTM Models

To judge the prediction accuracy of the GVLSTM model, we set Q to denote the amplitude of the accuracy improvement, with O and O′ representing the accuracy evaluation indexes (RMSE and MAE) of the initial model and the combined (i.e., optimized) model, respectively. The improvement in accuracy delivered by GVLSTM in different directions, compared with VMDLSTM, is shown in Table 3. The mathematical expression for Q is as follows:
$$Q = \frac{O - O'}{O}$$
From Figure 13 and Table 3, it can be seen that the accuracy of GVLSTM in predicting RMSE and MAE at each station is considerably higher than that of the VMDLSTM model. In the N direction, the maximum increase in RMSE accuracy is 52.17% and the minimum is 7.41%, while the maximum increase in MAE accuracy is 36.36% and the minimum is 10.00%. In the E direction, the maximum increase in RMSE accuracy is 40.00% and the minimum is 0.56%, while the maximum increase in MAE accuracy is 42.11% and the minimum is 10.05%. In the U direction, the maximum increase in RMSE accuracy is 52.78% and the minimum is 10.53%, while the maximum increase in MAE accuracy is 48.00% and the minimum is 18.00%. In summary, these results prove that the RMSE and MAE accuracies of the GVLSTM model's predictions in different directions are improved by 0.56%~52.78% and 10.00%~48.00%, respectively, compared with the predictions of the VMDLSTM model; moreover, the modelling and prediction results of the GVLSTM model are closer to the measured dam deformation displacement, so the accuracy of prediction is improved.
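The computation of Q is a one-liner; as a sketch (the numeric inputs below are illustrative, not values taken from Table 3):

```python
def improvement(o_initial, o_optimized):
    """Q = (O - O') / O: relative reduction of an error index such as RMSE or MAE."""
    return (o_initial - o_optimized) / o_initial

# e.g. an error index falling from 10.0 to 5.0 is a 50% improvement
assert improvement(10.0, 5.0) == 0.5
```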

5. Conclusions

To improve the accuracy of dam deformation prediction, a new grey wolf optimized variational mode decomposition long short-term memory neural network prediction model was constructed to address the problems of difficult model parameter acquisition or useful signal extraction, and inappropriate signal decomposition in nonlinear, non-stationary dam displacement time series prediction methods. Our main conclusions are as follows:
(1)
After the optimization of VMD by GWO, GWO-VMD effectively weakens the influence of modal aliasing and endpoint effects while preserving the dynamic characteristics of the signal; additionally, the key parameters [ K , α ] of VMD can be accurately obtained by using envelope entropy as the fitness function.
(2)
The GVLSTM model proposed in this paper has higher prediction accuracy than the other models. Compared with the VMDLSTM prediction model, the accuracy of the RMSE value for each station is increased by 19.11%~28.58% on average, the accuracy of the MAE value is increased by 27.66%~29.63% on average, and the R2 value is between 0.95 and 0.97; this significant improvement in prediction accuracy proves the effectiveness and feasibility of GVLSTM model prediction.
(3)
GVLSTM has obvious advantages in dam deformation prediction compared with other methods, and the prediction results of the GVLSTM prediction model after original sequence decomposition and reconstruction have higher accuracy and precision, providing reliable engineering application data for research on intelligent prediction of dam deformation.

Author Contributions

X.S. and T.L., writing—original draft preparation; H.W., X.H. and Z.W., methodology, review, and editing the manuscript; S.H., H.D., and Y.Z., data processing and figure plotting. All authors have read and agreed to the published version of the manuscript.

Funding

This work is sponsored by National Natural Science Foundation of China (42374040, 42061077, 42104023), Jiangxi Academic and Technical Leaders Training Program for Major Disciplines (20225BCJ23014), Water Conservancy Research Project of Hebei Province (2022-28); Research and Application of Key Technologies for High-Precision Dam Intelligent Monitoring Based on “Beidou+5G” by the Graduate Innovation Fund of East China University of Technology (DHYC-202304).

Data Availability Statement

The processed station data can be obtained at http://slt.hebei.gov.cn/, accessed on 1 July 2024. The photographs of the dam in this article were provided by the Hebei Institute of Investigation and Design of Water Conservancy and Hydropower Co., Ltd.

Conflicts of Interest

Author Haicheng Wang was employed by the company Hebei Institute of Investigation and Design of Water Conservancy and Hydropower Co., Ltd. Author Ziyu Wang was employed by the company Xuzhou Surveying & Mapping Research Institute Co., Ltd. Authors Hongqiang Ding and Yuntao Zhang were employed by the company Hebei Water Conservancy Engineering Bureau Group Limited. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. GNSS monitoring station diagram of YangHe Reservoir dam.
Figure 2. Social hierarchy diagram of grey wolf population.
Figure 3. The repeating module in LSTM. (The tanh layer generates the new candidate cell state, which helps control the flow and storage of information. σ is the logistic sigmoid function; h_t is the current cell output; h_{t−1} is the output at the previous time step; x_t is the input at the current time step; f_t is the forget gate at time t, which combines h_{t−1} and x_t to selectively forget content; i_t is the input gate; C̃_t is the candidate vector; C_t is the current cell state; C_{t−1} is the previous cell state; o_t is the output gate.)
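The gate interactions described in the Figure 3 caption can be written out as a single LSTM cell update. This is a generic sketch of the standard equations (the parameter shapes and the stacking order of the four gates are illustrative conventions, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step, with gates named as in Figure 3.
    W (4n, d), U (4n, n), b (4n,) stack the parameters of the four
    gates in the order (forget, input, candidate, output)."""
    n = h_prev.size
    z = W @ x_t + U @ h_prev + b
    f_t = sigmoid(z[:n])                 # forget gate
    i_t = sigmoid(z[n:2 * n])            # input gate
    c_tilde = np.tanh(z[2 * n:3 * n])    # candidate cell state C~_t
    o_t = sigmoid(z[3 * n:])             # output gate
    c_t = f_t * c_prev + i_t * c_tilde   # new cell state C_t
    h_t = o_t * np.tanh(c_t)             # new cell output h_t
    return h_t, c_t

# tiny usage example with random (untrained) parameters
rng = np.random.default_rng(0)
n, d = 4, 3
W = rng.standard_normal((4 * n, d))
U = rng.standard_normal((4 * n, n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.standard_normal(d), np.zeros(n), np.zeros(n), W, U, b)
```

Because o_t and tanh(C_t) are both bounded, every component of h_t lies strictly inside (−1, 1), which is what keeps the recurrent signal well-scaled across time steps.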
Figure 4. GWO optimized VMD flowchart.
Figure 5. Frame diagram of GVLSTM prediction model.
Figure 6. Convergence diagram of fitness values in three directions for station 001.
Figure 7. Optimal parameter distribution.
Figure 8. Signal after decomposition of GWO-VMD in three directions for station 001.
Figure 9. Distribution of MPE value of IMF component at each station in different directions.
Figure 10. Prediction curves of combined models in different directions for station 001.
Figure 11. R2 results of the combined prediction model in different directions at each site.
Figure 12. Prediction results curve of GVLSTM and VMDLSTM.
Figure 13. Evaluation of GVLSTM and VMDLSTM prediction results for different directions.
Table 1. MPE values of IMF components in different directions at station 001.

| Direction | IMF1 | IMF2 | IMF3 | IMF4 | IMF5 | IMF6 | IMF7 | IMF8 | IMF9 | IMF10 |
| N | 0.48 | 0.63 | 0.66 | 0.65 | 0.69 | 0.66 | 0.66 | 0.73 | 0.70 | 0.65 |
| E | 0.52 | 0.63 | 0.70 | 0.69 | 0.73 | 0.69 | 0.71 | 0.74 | 0.65 | 0.66 |
| U | 0.54 | 0.61 | 0.69 | 0.70 | 0.75 | 0.75 | 0.77 | 0.70 | - | - |
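The MPE values in Table 1 come from multiscale permutation entropy. A sketch of one common formulation is below (normalized permutation entropy averaged over coarse-graining scales); the embedding order, delay, and scale settings here are illustrative assumptions, since the paper's exact settings are not restated in this section:

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of series x (order m, delay tau):
    count ordinal patterns of length m, then take Shannon entropy
    normalized by log(m!) so the result lies in [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

def multiscale_pe(x, m=3, scales=(1, 2, 3)):
    """Coarse-grain the series by non-overlapping means at each scale,
    compute PE per scale, and average (one common MPE variant)."""
    x = np.asarray(x, dtype=float)
    vals = []
    for s in scales:
        cg = np.array([x[i:i + s].mean() for i in range(0, len(x) - s + 1, s)])
        vals.append(permutation_entropy(cg, m=m))
    return float(np.mean(vals))
```

Noise-like components score close to 1 while smooth, regular components score lower, which is how an MPE threshold separates noisy IMFs from signal-dominated ones.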
Table 2. Evaluation indicators of different combination models (unit: mm).

| Station | Model | N RMSE | N MAE | E RMSE | E MAE | U RMSE | U MAE |
| 001 | VMDANN | 0.46 | 0.40 | 0.47 | 0.36 | 0.44 | 0.35 |
|     | VMDGRU | 1.17 | 1.07 | 0.80 | 0.65 | 0.74 | 0.60 |
|     | GVLSTM | 0.31 | 0.26 | 0.40 | 0.32 | 0.34 | 0.27 |
| 002 | VMDANN | 0.13 | 0.10 | 0.16 | 0.13 | 0.73 | 0.61 |
|     | VMDGRU | 0.54 | 0.41 | 0.31 | 0.24 | 1.90 | 1.38 |
|     | GVLSTM | 0.11 | 0.09 | 0.14 | 0.11 | 0.53 | 0.44 |
| 003 | VMDANN | 1.76 | 1.68 | 3.25 | 3.17 | 1.95 | 1.53 |
|     | VMDGRU | 2.89 | 2.72 | 3.56 | 3.17 | 4.07 | 3.02 |
|     | GVLSTM | 1.25 | 1.08 | 1.77 | 1.70 | 0.84 | 0.64 |
| 004 | VMDANN | 0.17 | 0.14 | 0.29 | 0.22 | 0.53 | 0.42 |
|     | VMDGRU | 0.39 | 0.29 | 0.69 | 0.58 | 1.59 | 1.23 |
|     | GVLSTM | 0.09 | 0.07 | 0.19 | 0.15 | 0.51 | 0.41 |
| 005 | VMDANN | 0.20 | 0.15 | 0.23 | 0.18 | 0.47 | 0.37 |
|     | VMDGRU | 0.55 | 0.41 | 0.83 | 0.65 | 1.44 | 1.08 |
|     | GVLSTM | 0.18 | 0.14 | 0.15 | 0.11 | 0.41 | 0.33 |
| 006 | VMDANN | 0.18 | 0.13 | 0.18 | 0.14 | 0.18 | 0.14 |
|     | VMDGRU | 0.37 | 0.25 | 0.53 | 0.41 | 0.52 | 0.36 |
|     | GVLSTM | 0.09 | 0.07 | 0.12 | 0.10 | 0.17 | 0.13 |
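The RMSE and MAE reported in Table 2 (and the R2 values quoted in the conclusions) follow the standard definitions, which can be computed directly:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Lower RMSE and MAE and an R2 closer to 1 indicate better agreement between the predicted and observed displacement series.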
Table 3. Degrees of improvement in evaluations of accuracy indexes.

| Station | RMSE N | RMSE E | RMSE U | MAE N | MAE E | MAE U |
| 001 | 31.11% | 21.57% | 34.62% | 27.78% | 21.95% | 41.30% |
| 002 | 52.17% | 22.22% | 18.46% | 30.77% | 42.11% | 22.81% |
| 003 | 7.41% | 0.56% | 12.50% | 10.00% | 10.05% | 20.99% |
| 004 | 30.77% | 13.64% | 10.53% | 36.36% | 11.76% | 18.00% |
| 005 | 10.00% | 16.67% | 33.87% | 30.00% | 38.89% | 26.67% |
| 006 | 40.00% | 40.00% | 52.78% | 36.36% | 41.18% | 48.00% |
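The percentages in Table 3 are relative reductions of each error metric with respect to the baseline model. The calculation, shown with hypothetical values (a baseline RMSE of 0.45 mm falling to 0.31 mm is assumed purely for illustration), is:

```python
def improvement(baseline, improved):
    """Relative reduction of an error metric, in percent."""
    return (baseline - improved) / baseline * 100.0

# hypothetical example: baseline RMSE 0.45 mm -> improved RMSE 0.31 mm
print(round(improvement(0.45, 0.31), 2))  # prints 31.11
```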
Share and Cite

Sun, X.; Lu, T.; Hu, S.; Wang, H.; Wang, Z.; He, X.; Ding, H.; Zhang, Y. A New Algorithm for Predicting Dam Deformation Using Grey Wolf-Optimized Variational Mode Long Short-Term Neural Network. Remote Sens. 2024, 16, 3978. https://doi.org/10.3390/rs16213978