Article

A Recurrent Neural Network-Based Method for Dynamic Load Identification of Beam Structures

1 State Key Laboratory of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2 School of Energy, Geoscience, Infrastructure and Society, Heriot-Watt University, Edinburgh EH14 4AS, UK
3 Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
* Authors to whom correspondence should be addressed.
Materials 2021, 14(24), 7846; https://doi.org/10.3390/ma14247846
Submission received: 19 November 2021 / Revised: 11 December 2021 / Accepted: 13 December 2021 / Published: 18 December 2021

Abstract:
The determination of structural dynamic characteristics can be challenging, especially for complex cases. This can be a major impediment for dynamic load identification in many engineering applications. Hence, avoiding the need to solve for the structural dynamic characteristics can significantly simplify dynamic load identification. To achieve this, we rely on machine learning. The recent developments in machine learning have fundamentally changed the way we approach problems in numerous fields. Machine learning models can be more easily established to solve inverse problems compared to standard approaches. Here, we propose a novel method for dynamic load identification, exploiting deep learning. The proposed algorithm is a time-domain solution for beam structures based on the recurrent neural network theory and the long short-term memory. A deep learning model, which contains one bidirectional long short-term memory layer, one long short-term memory layer and two fully connected layers, is constructed to identify the typical dynamic loads of a simply supported beam. The dynamic inverse model based on the proposed algorithm is then used to identify a sinusoidal, an impulsive and a random excitation. The accuracy, the robustness and the adaptability of the model are analyzed. Moreover, the effects of different architectures and hyperparameters on the identification results are evaluated. We show that the model can also identify multi-point excitations well. Ultimately, the impact of the number and the position of the measuring points is discussed, and it is confirmed that the identification errors are not sensitive to the layout of the measuring points. All the presented results indicate the advantages of the proposed method, which can be beneficial for many applications.

1. Introduction

External excitation is the main source of basic dynamic data for many engineering applications, such as structural dynamic characterization, vibration response analysis, health monitoring, vibration fatigue analysis and vibration fault diagnosis [1,2,3,4,5], among others. In the majority of these applications, measuring dynamic loads directly is not possible. Such measurements are often limited by the accuracy of the test technology and by the complexity of large equipment structures, where force sensors are difficult to install. How to identify various forms of dynamic load is therefore a fundamental question that has been discussed in numerous studies. However, traditional dynamic load identification methods depend heavily on first determining the dynamic characteristics of the structure [6].
Dynamic load identification is usually achieved in the time or the frequency domain. In the frequency domain, it is necessary to invert the structural dynamic characteristics matrix, which is often ill conditioned. This can have a major impact on the accuracy, especially in noisy environments [7]. Similarly, in the time domain, the identification results often diverge or deviate due to error accumulation over the time span of interest. Therefore, the accuracy of dynamic load identification is difficult to guarantee and the structural dynamic characteristic information is difficult to obtain, especially for large complex structures [8,9]. Nevertheless, this remains an important research field where many industrial and academic experts are committed to promoting the study of dynamic load identification [10].
Constructing the inverse model between a vibration response and the associated external excitation is the basic premise of dynamic load identification. Since the 1970s [11], dynamic load identification has developed in three diverse directions, namely, frequency-domain identification methods, time-domain identification methods and intelligent algorithms. Among these, frequency-domain methods are the earliest, and are considered by many to be the classical methods. These methods are usually based on building an inverse model between the response and the excitation [12,13]. Frequency-domain methods mainly rely on the direct inversion method, the least-squares approach or the modal coordinate transformation method. All these methods involve inverting the matrix of the frequency response function, which more often than not suffers from severe ill-conditioning issues [14,15]. Although scholars have studied numerous regularization methods for ill-conditioned problems [16,17,18,19,20], there are still many difficulties with the implementation details of frequency-domain methods. Nevertheless, frequency-domain methods are still considered by industrial users to be more mature than the other methods. Hence, they are intensively used to identify excitations in many engineering applications, such as wind loads, the six force factors and the loads on mining machinery [21,22,23].
Compared to the frequency domain, time-domain methods can be considered more intuitive as they take time into account as a variable. The general procedure of time-domain methods is to utilize the model parameters of a structure to establish the inverse model of the system and then identify the input based on the output of the system [24]. Most existing time-domain methods are based on modal decomposition and the Duhamel integral [25,26,27,28,29]. Most of the time-domain methods cannot identify dynamic loads with high accuracy due to the restrictions imposed by several factors, such as ill-posedness, cumulative error and the unclear parameters of the studied dynamic system [30,31,32]. Additionally, noisy environments, complex structures, structures with repeated frequencies, as well as the resonance and anti-resonance points of a structure, can also have a great impact on the identification accuracy [33,34,35].
As early as 1998, Cao et al. [36] used neural networks to solve the dynamic load identification problem facing aircraft wings. Nevertheless, neural networks were not further developed for load identification at the time, owing to limitations in computing technology. With the rapid development of deep learning in recent years, more and more intelligent algorithms have been developed for dynamic load identification. For instance, Liu et al. [37] presented a novel method based on support vector regression to establish the uncertain load caused by heterogeneous responses. Wang et al. [38] proposed a deep regression adaptation network method with model transfer learning to improve the accuracy and efficiency of neural networks for dynamic load identification. Zhou et al. [39] proposed a novel impact load identification method based on a deep recurrent neural network for nonlinear structures. Cooper et al. [40] developed an artificial neural network model to predict the static load applied on a wing rib. All the above-mentioned literature shows an increasing trend which suggests that intelligent algorithms will be very important for the future of dynamic load identification.
Given its features, dynamic load identification can be classified as a regression problem of deep learning. Both the vibration response signal and the external excitation signal change over time. The core idea of load identification is to establish the inverse model between a single-channel or multi-channel response and the force signal. According to different data characteristics and final objectives, different deep learning models can be applied under different engineering scenarios. For instance, the multilayer perceptron (MLP) is widely used in tabular data processing, convolutional neural networks play an important role in image processing and support vector machines have great advantages in learning from limited samples. Given that the initial vibration data are often collected in the time domain, we propose using a recurrent neural network (RNN) for dynamic load identification without needing the structural dynamic characteristics. RNN is essentially a model for establishing the nonlinear relationship between multiple variables [41], which is suitable for processing time-domain data. Additionally, in an RNN model, the input of the current time and the output of the previous time can be effectively connected by a basic operation [42]; that is, the amplitude of the vibration response data at each time point can be related through time. RNN models are therefore suitable for solving the identification problem faced by time series models, of which dynamic load identification is a representative problem. To solve the problem of gradient explosion or gradient disappearance in an ordinary RNN model [43], we propose applying the concept of long short-term memory (LSTM) here. Moreover, bidirectional long short-term memory (BLSTM) is also introduced, which can connect previous and future information in the time domain. These variants of the RNN model are useful for multi-series prediction problems. Compared with RNN, the structure of LSTM is more complex. Specifically, LSTM adds a structure that can remember longer sequences of information, adds an input gate, a forgetting gate and an output gate, and reduces the probability of gradient disappearance or gradient explosion [44]. LSTM was initially developed for natural language processing, but its application in other fields has more recently been explored by several scholars. For instance, Graves et al. [45] used bidirectional long short-term memory (BLSTM) networks for framewise phoneme classification. Ordóñez et al. [46] proposed a generic deep framework for activity recognition based on convolutional and LSTM recurrent units to capture the temporal dynamics of human activity recognition. Han et al. [47] proposed a novel architecture of neural networks, referred to as the long short-term memory neural network (LSTM NN), to capture nonlinear dynamic traffic in an effective manner. Liu et al. [48] proposed a tree structure-based traversal method, and introduced a new gating mechanism within LSTM to learn the reliability of the sequential input data. Li et al. [49] deployed LSTM networks to predict out-of-sample directional movements for the constituent stocks of the S&P 500 from 1992 until 2015.
Dynamic load identification is important to several areas of system engineering, including forward dynamics, system modelling, parameter identification and the inverse problem, among others. An error introduced in any of these segments will greatly affect the final identification result, which is similar to other fuzzy modelling fields [50,51,52]. Moreover, different material properties will affect the solution of the structural dynamic characteristics problem [53,54,55,56], which makes identification difficult. Scholars often use the metaphor of the "black box" to describe both dynamic load identification and neural networks. Here, we combine these two black-box problems to reduce the difficulty of dynamic load identification.
After considering a range of different aspects, we believe that deep learning has great potential in the field of dynamic load identification. However, there is currently no complete dynamic load identification theory based on deep learning. Starting with RNN and LSTM, we establish a complete dynamic load identification system in order to apply this method to engineering practice, and to successfully identify common dynamic loads. In this approach, sinusoidal, impact and random excitations are identified on a simply supported beam. Furthermore, the effects of changing the network structure and the hyperparameters on the identification results are also evaluated. We show the possibility of using this method for multi-point excitations. Notably, we find that the identification results are not sensitive to the layout of the measuring points. This is a significant advantage that can be beneficial if the proposed method is extended to other engineering applications.

2. Dynamic Load Identification Framework Based on RNN

2.1. Basic Description

A beam, which is the most primitive type of continuous structure, can be an efficient simplification for different applications. A Bernoulli–Euler beam with simply supported boundary conditions and a homogeneous material is shown in Figure 1. The cross-sectional area, the density and the elastic modulus are denoted by $A$, $\rho$ and $E$, respectively, and the moment of inertia of the cross-section is $I$.
The dynamic equation [25] of the beam can be written as:
$$EI\frac{\partial^4 u}{\partial x^4} + EIc_0\frac{\partial u}{\partial t} + EIc_1\frac{\partial^5 u}{\partial t\,\partial x^4} + \rho A\frac{\partial^2 u}{\partial t^2} = f$$
where $EI$ is the section stiffness, $\rho A$ is the mass per unit length, $u$ is the transverse deformation, $c_0$ is the viscous damping coefficient of the external medium, $c_1$ is the internal damping coefficient and $f$ is the external load on the beam. It is assumed that the beam is subjected to a concentrated simple harmonic load. Then, the above differential equation [57] can be depicted in modal coordinates as:
$$\ddot{q}_j + 2\xi_j\bar{\omega}_j^2\dot{q}_j + \bar{\omega}_j^2 q_j = Q_j(t)$$
in which $\bar{\omega}_j$, $\xi_j$ and $q_j$ are the natural frequency, the damping ratio and the modal coordinate, respectively. The terms in this equation are defined by:
$$2\xi_j\bar{\omega}_j^2 = \frac{c_0 EI}{\rho A} + c_1\bar{\omega}_j^2,\qquad Q_j(t) = \frac{f(x_a)\,\varphi_j(x_a)\sin(\omega t)}{M_j},\qquad M_j = \int_0^L \rho A\,\varphi_j^2(x)\,dx$$
Here, $M_j$, $\varphi_j(x)$ and $\varphi_j(x_a)$ are the modal mass, the $j$th modal shape and the value of the $j$th modal shape at the loading point, respectively. Additionally, $f(x_a)$ is the load value at point $a$ and $Q_j(t)$ is the modal force. Solving Equation (2), the convolution integral form of the solution can be detailed as:
$$q_j(t) = \int_0^t h_j(t-\tau)\,Q_j(\tau)\,d\tau$$
Therefore, we can derive the expression of displacement response as:
$$u(x,t) = \frac{2}{\rho A l}\sum_{n=1}^{\infty}\sin\left(\frac{n\pi x}{l}\right)\left[f(x_a,t)\sin\left(\frac{n\pi x_a}{l}\right) + \int_0^t \ddot{h}_n(t-\tau)\,f(x_a,\tau)\sin\left(\frac{n\pi x_a}{l}\right)d\tau\right]$$
where $\ddot{h}_n(t) = \dfrac{1}{\omega_n}e^{-\xi_n\omega_n t}\left\{\left[(\xi_n\omega_n)^2-\omega_n^2\right]\sin(\omega_n t) - 2\xi_n\omega_n^2\cos(\omega_n t)\right\}$.
The load is fitted by a set of orthogonal polynomials [58,59], which can be written as:
$$f(x_a,t) = \sum_{i=1}^{\infty}a_i P_i(t)$$
where $a_i$ and $P_i(t)$ are the coefficient and the $i$th element of the orthogonal polynomial basis, respectively. When the fitting accuracy is satisfactory, the series can be truncated and the last equation rewritten as:
$$f(x_a,t) = \sum_{i=1}^{n_f}a_i P_i(t) = \begin{Bmatrix}P_1 & P_2 & \cdots & P_{n_f}\end{Bmatrix}\begin{Bmatrix}a_1\\ a_2\\ \vdots\\ a_{n_f}\end{Bmatrix}$$
Assume that the number of quantities to be identified, the number of samples in the time domain and the sampling time are $n_f$, $N_s$ and $t_s$, respectively. The following relationships can be derived:
$$\begin{Bmatrix}\ddot{u}_k^{t_1}\\ \ddot{u}_k^{t_2}\\ \vdots\\ \ddot{u}_k^{t_s}\end{Bmatrix} = \frac{1}{M}\sum_{n=1}^{\infty}S_n^k\begin{bmatrix}H_1P^{t_1} & H_2P^{t_1} & \cdots & H_{n_f}P^{t_1}\\ H_1P^{t_2} & H_2P^{t_2} & \cdots & H_{n_f}P^{t_2}\\ \vdots & \vdots & \ddots & \vdots\\ H_1P^{t_s} & H_2P^{t_s} & \cdots & H_{n_f}P^{t_s}\end{bmatrix}\begin{Bmatrix}a_1\\ a_2\\ \vdots\\ a_{n_f}\end{Bmatrix}$$
where $\ddot{u}_k^{t_s}$ is the value of $\ddot{u}(x,t)$ at point $k$ and time $t_s$, and $S_n^k$ is the value of $S_n$ at point $k$. Subsequently, $H_{n_f}P^{t_s}$ is the value of $H_{n_f}P$ at time $t_s$. In addition, $S_n = \sin\left(\frac{n\pi x}{l}\right)\sin\left(\frac{n\pi x_a}{l}\right)$ and $H_{n_f}P = P_{n_f} + \int_0^t \ddot{h}(t-\tau)\,P_{n_f}(\tau)\,d\tau$, which are the elements of the transfer function. In Equation (8), the items to be identified are $a_1, a_2, \ldots, a_{n_f}$. Eventually, Equation (8) can be abbreviated as:
$$\{u\}_{t_s\times 1} = [H]_{t_s\times n_f}\{A\}_{n_f\times 1}$$
When $N_s = n_f$, the transfer matrix $[H]$ can be inverted directly and the coefficients of the equations are calculated as:
$$\{A\} = [H]^{-1}\{u\}$$
When $N_s > n_f$, the coefficients of the overdetermined (inconsistent) system of equations can be obtained through the generalized least-squares inverse, which can be derived as:
$$\{A\} = \left([H]^T[H]\right)^{-1}[H]^T\{u\}$$
Equation (11) is the mathematical model of dynamic load identification based on generalized orthogonal polynomials under the action of a time-varying concentrated force. In general, measuring the acceleration is easier than measuring the displacement or the velocity. Therefore, in this paper, we construct the identification model of the beam structure based on the acceleration response.
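To make this conventional identification step concrete, the following minimal NumPy sketch implements the least-squares solution of Equation (11); it assumes the transfer matrix H and the polynomial basis P have already been assembled from the modal data of the beam, and the function and variable names are illustrative rather than taken from any existing implementation.

```python
import numpy as np

def identify_load_coefficients(H, u_ddot):
    """Least-squares solution of Equation (11): {A} = ([H]^T [H])^(-1) [H]^T {u}.

    H      : (N_s, n_f) transfer matrix built from the orthogonal polynomial basis
    u_ddot : (N_s,)     measured acceleration response at one point
    Returns the n_f polynomial coefficients a_1, ..., a_nf.
    """
    # np.linalg.lstsq is numerically safer than forming (H^T H)^(-1) explicitly
    A, *_ = np.linalg.lstsq(H, u_ddot, rcond=None)
    return A

def reconstruct_load(P, A):
    """Equation (7): f(x_a, t) = sum_i a_i P_i(t), with P of shape (N_s, n_f)."""
    return P @ A
```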
It can be seen from the above derivation that most time-domain identification methods need the model parameters of the structure. Obtaining an impulse response function for complex structures is often an exhausting process. Due to this difficulty, and given the characteristics of RNN models, this paper combines a deep RNN model with dynamic load identification to reduce the difficulty of load identification in engineering applications.

2.2. Recurrent Neural Network Implementation

The selection of training data is the first step that needs to be considered for deep learning models [60]. With dynamic load identification, the application of RNN requires identifying the load type in advance. The types to be considered here include a simple harmonic load, an impact load, a random load or a superposition of sinusoidal loads. These types in general cover most of the dynamic loads to be identified in engineering applications. Taking a piece of rotating machinery as an example, its dynamic load is generally a quasi-harmonic signal, with the motor frequency as the main frequency and the coupling frequencies of other parts or noise interference as secondary components [61]. Therefore, when the motor parameters of a piece of rotating machinery are known, the shape of the force signal acting on the structure by the motor can be roughly inferred. Hence, the load type can be assumed, and the recorded dynamic load can be used for training. The vibration response and the assumed dynamic load are then used as the training data, and repeated training is carried out to establish the inverse model. Additionally, the historical data of real dynamic loads can also be used as the input for RNN. Taking the impact excitation as an example, we can obtain multiple impact loads by continuously knocking and recording the vibration response. Subsequently, the load data of the previous times can then be used as training data to identify the dynamic impact loads of the later impacts [39].
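As a rough illustration of how such training data could be organized, the sketch below slices long response/load records into aligned sub-sequences; the window length, stride and array shapes are assumptions made for illustration only, since the paper does not prescribe a specific slicing scheme.

```python
import numpy as np

def make_training_pairs(response, load, window=512, stride=256):
    """Slice long records into aligned (response, load) sub-sequences for training.

    response : (T, n_sensors) measured vibration responses
    load     : (T,)           known (recorded or assumed) dynamic load
    Returns X of shape (N, window, n_sensors) and Y of shape (N, window, 1).
    """
    X, Y = [], []
    for start in range(0, response.shape[0] - window + 1, stride):
        X.append(response[start:start + window])
        Y.append(load[start:start + window, None])
    return np.asarray(X), np.asarray(Y)
```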
The structure of a single hidden layer in RNN is shown in Figure 2, in which $x$, $s$ and $o$ are the discrete vibration response time series, the output of the hidden layers and the output, respectively. $U$, $V$ and $W$ are the weights of the input layer to the hidden layer, the hidden layer to the output layer and the self-recursion, respectively. Hence, the output of the hidden layer [62] can be written as:
$$s_t = f\left(Ux_t + Ws_{t-1}\right)$$
where f is the activation function. Moreover, the output of output layer can be described as:
$$o_t = g\left(Vs_t\right)$$
in which g is also an activation function.
The propagation process of a single hidden layer can be defined as:
$$o_t = g(Vs_t) = Vf\left(Ux_t + Ws_{t-1}\right) = Vf\left(Ux_t + Wf(Ux_{t-1} + Ws_{t-2})\right) = Vf\left(Ux_t + Wf\left(Ux_{t-1} + Wf(Ux_{t-2} + Ws_{t-3})\right)\right) = \cdots$$
Therefore, the predicted dynamic load can be derived using Equation (14). Stacking the single hidden layer in Figure 2 to establish a deep network, the output can be written as:
$$\begin{aligned} o_t &= g\left(V^i s_t^i\right)\\ s_t^i &= f\left(U^i s_t^{i-1} + W^i s_{t-1}^i\right)\\ s_t^{i-1} &= f\left(U^{i-1} s_t^{i-2} + W^{i-1} s_{t-1}^{i-1}\right)\\ &\ \ \vdots\\ s_t^1 &= f\left(U^1 x_t + W^1 s_{t-1}^1\right) \end{aligned}$$
The process of forward propagation can be described using Equation (12), which in matrix form is:
$$\begin{Bmatrix}s_1^t\\ s_2^t\\ \vdots\\ s_n^t\end{Bmatrix} = f\left(\begin{bmatrix}U_{11} & U_{12} & \cdots & U_{1m}\\ U_{21} & U_{22} & \cdots & U_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ U_{n1} & U_{n2} & \cdots & U_{nm}\end{bmatrix}\begin{Bmatrix}x_1\\ x_2\\ \vdots\\ x_m\end{Bmatrix} + \begin{bmatrix}W_{11} & W_{12} & \cdots & W_{1n}\\ W_{21} & W_{22} & \cdots & W_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ W_{n1} & W_{n2} & \cdots & W_{nn}\end{bmatrix}\begin{Bmatrix}s_1^{t-1}\\ s_2^{t-1}\\ \vdots\\ s_n^{t-1}\end{Bmatrix}\right)$$
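A minimal NumPy sketch of this single-layer forward pass is given below; the activation choices and the array shapes are assumptions made for illustration.

```python
import numpy as np

def rnn_forward(x_seq, U, W, V, f=np.tanh, g=lambda z: z):
    """Forward pass of a single recurrent hidden layer.

    x_seq : (T, m) response samples, U : (n, m), W : (n, n), V : (p, n)
    Returns hidden states s of shape (T, n) and outputs o of shape (T, p).
    """
    T = x_seq.shape[0]
    n = W.shape[0]
    s = np.zeros((T, n))
    o = np.zeros((T, V.shape[0]))
    s_prev = np.zeros(n)
    for t in range(T):
        s[t] = f(U @ x_seq[t] + W @ s_prev)   # s_t = f(U x_t + W s_{t-1})
        o[t] = g(V @ s[t])                    # o_t = g(V s_t)
        s_prev = s[t]
    return s, o
```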
The backpropagation calculation of RNN has two directions, namely, back through time and back through the layers. For the first, through-time, direction, the forward propagation [63] can be abbreviated as:
$$net_t = Ux_t + Ws_{t-1},\qquad s_{t-1} = f\left(net_{t-1}\right)$$
in which $net_t$ denotes the weighted input (pre-activation) corresponding to $s_t$. Specifically, the relationship between two adjacent moments of $net_t$ can be written as:
$$\frac{\partial net_t}{\partial net_{t-1}} = \frac{\partial net_t}{\partial s_{t-1}}\,\frac{\partial s_{t-1}}{\partial net_{t-1}}$$
where the two terms to the right of the equal sign can be described with the following equations:
$$\frac{\partial net_t}{\partial s_{t-1}} = \begin{bmatrix}\dfrac{\partial net_1^t}{\partial s_1^{t-1}} & \dfrac{\partial net_1^t}{\partial s_2^{t-1}} & \cdots & \dfrac{\partial net_1^t}{\partial s_n^{t-1}}\\ \dfrac{\partial net_2^t}{\partial s_1^{t-1}} & \dfrac{\partial net_2^t}{\partial s_2^{t-1}} & \cdots & \dfrac{\partial net_2^t}{\partial s_n^{t-1}}\\ \vdots & \vdots & \ddots & \vdots\\ \dfrac{\partial net_n^t}{\partial s_1^{t-1}} & \dfrac{\partial net_n^t}{\partial s_2^{t-1}} & \cdots & \dfrac{\partial net_n^t}{\partial s_n^{t-1}}\end{bmatrix} = \begin{bmatrix}W_{11} & W_{12} & \cdots & W_{1n}\\ W_{21} & W_{22} & \cdots & W_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ W_{n1} & W_{n2} & \cdots & W_{nn}\end{bmatrix} = W$$
$$\frac{\partial s_{t-1}}{\partial net_{t-1}} = \begin{bmatrix}\dfrac{\partial s_1^{t-1}}{\partial net_1^{t-1}} & \cdots & \dfrac{\partial s_1^{t-1}}{\partial net_n^{t-1}}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial s_n^{t-1}}{\partial net_1^{t-1}} & \cdots & \dfrac{\partial s_n^{t-1}}{\partial net_n^{t-1}}\end{bmatrix} = \begin{bmatrix}f'\!\left(net_1^{t-1}\right) & 0 & \cdots & 0\\ 0 & f'\!\left(net_2^{t-1}\right) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & f'\!\left(net_n^{t-1}\right)\end{bmatrix} = \mathrm{diag}\left[f'\!\left(net_{t-1}\right)\right]$$
Substituting Equation (19) into Equation (18), we can obtain the following:
$$\frac{\partial net_t}{\partial net_{t-1}} = \frac{\partial net_t}{\partial s_{t-1}}\,\frac{\partial s_{t-1}}{\partial net_{t-1}} = W\,\mathrm{diag}\left[f'\!\left(net_{t-1}\right)\right] = \begin{bmatrix}w_{11}f'\!\left(net_1^{t-1}\right) & w_{12}f'\!\left(net_2^{t-1}\right) & \cdots & w_{1n}f'\!\left(net_n^{t-1}\right)\\ w_{21}f'\!\left(net_1^{t-1}\right) & w_{22}f'\!\left(net_2^{t-1}\right) & \cdots & w_{2n}f'\!\left(net_n^{t-1}\right)\\ \vdots & \vdots & \ddots & \vdots\\ w_{n1}f'\!\left(net_1^{t-1}\right) & w_{n2}f'\!\left(net_2^{t-1}\right) & \cdots & w_{nn}f'\!\left(net_n^{t-1}\right)\end{bmatrix}$$
Therefore, $\delta_k^T$, which is the error per neuron, can be derived as:
$$\delta_k^T = \frac{\partial E}{\partial net_k} = \frac{\partial E}{\partial net_t}\,\frac{\partial net_t}{\partial net_k} = \frac{\partial E}{\partial net_t}\,\frac{\partial net_t}{\partial net_{t-1}}\,\frac{\partial net_{t-1}}{\partial net_{t-2}}\cdots\frac{\partial net_{k+1}}{\partial net_k} = \delta_t^T\prod_{i=k}^{t-1}W\,\mathrm{diag}\left[f'\!\left(net_i\right)\right]$$
in which $E$, $k$ and $l$ are the loss function, the initial moment and the ordinal number of the network layer, respectively. Just as with an ordinary multi-layer perceptron (MLP), the forward propagation of RNN between network layers can be written as:
$$net_t^l = Ua_t^{l-1} + Ws_{t-1},\qquad a_t^{l-1} = f^{l-1}\!\left(net_t^{l-1}\right)$$
Similarly, the relationship between two adjacent layers is:
$$\frac{\partial net_t^l}{\partial net_t^{l-1}} = \frac{\partial net_t^l}{\partial a_t^{l-1}}\,\frac{\partial a_t^{l-1}}{\partial net_t^{l-1}} = U\,\mathrm{diag}\left[f'^{\,l-1}\!\left(net_t^{l-1}\right)\right]$$
The derivation process is the same as that of Equation (19) to Equation (21). In this way, the gradient of each network layer can be detailed as:
$$\left(\delta_t^{l-1}\right)^T = \frac{\partial E}{\partial net_t^{l-1}} = \frac{\partial E}{\partial net_t^l}\,\frac{\partial net_t^l}{\partial net_t^{l-1}} = \left(\delta_t^l\right)^T U\,\mathrm{diag}\left[f'^{\,l-1}\!\left(net_t^{l-1}\right)\right]$$
As can be seen from the foregoing, $net_t$ is the key intermediate quantity in forward and back propagation. Furthermore, the expanded form of $net_t$ can be detailed as:
$$\begin{Bmatrix}net_1^t\\ net_2^t\\ \vdots\\ net_n^t\end{Bmatrix} = Ux_t + Ws_{t-1} = Ux_t + \begin{bmatrix}W_{11} & W_{12} & \cdots & W_{1n}\\ W_{21} & W_{22} & \cdots & W_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ W_{n1} & W_{n2} & \cdots & W_{nn}\end{bmatrix}\begin{Bmatrix}s_1^{t-1}\\ s_2^{t-1}\\ \vdots\\ s_n^{t-1}\end{Bmatrix} = Ux_t + \begin{Bmatrix}W_{11}s_1^{t-1} + W_{12}s_2^{t-1} + \cdots + W_{1n}s_n^{t-1}\\ W_{21}s_1^{t-1} + W_{22}s_2^{t-1} + \cdots + W_{2n}s_n^{t-1}\\ \vdots\\ W_{n1}s_1^{t-1} + W_{n2}s_2^{t-1} + \cdots + W_{nn}s_n^{t-1}\end{Bmatrix}$$
The gradient of the loss function $E$ with respect to the weight $W$ can be written as:
$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial net_j^t}\,\frac{\partial net_j^t}{\partial w_{ji}} = \delta_j^t\, s_i^{t-1}$$
Hence, the gradient of W at time t is:
$$\nabla_{W_t}E = \begin{bmatrix}\delta_1^t s_1^{t-1} & \delta_1^t s_2^{t-1} & \cdots & \delta_1^t s_n^{t-1}\\ \delta_2^t s_1^{t-1} & \delta_2^t s_2^{t-1} & \cdots & \delta_2^t s_n^{t-1}\\ \vdots & \vdots & \ddots & \vdots\\ \delta_n^t s_1^{t-1} & \delta_n^t s_2^{t-1} & \cdots & \delta_n^t s_n^{t-1}\end{bmatrix}$$
Further, the sum of the gradients of W at each time instant can be given by:
$$\nabla_W E = \sum_{i=1}^{t}\nabla_{W_i}E = \begin{bmatrix}\delta_1^t s_1^{t-1} & \cdots & \delta_1^t s_n^{t-1}\\ \vdots & \ddots & \vdots\\ \delta_n^t s_1^{t-1} & \cdots & \delta_n^t s_n^{t-1}\end{bmatrix} + \cdots + \begin{bmatrix}\delta_1^1 s_1^0 & \cdots & \delta_1^1 s_n^0\\ \vdots & \ddots & \vdots\\ \delta_n^1 s_1^0 & \cdots & \delta_n^1 s_n^0\end{bmatrix}$$
Equally, the gradient of the loss function with respect to the weight $U$ can be described as:
$$\nabla_{U_t}E = \begin{bmatrix}\delta_1^t x_1^t & \delta_1^t x_2^t & \cdots & \delta_1^t x_m^t\\ \delta_2^t x_1^t & \delta_2^t x_2^t & \cdots & \delta_2^t x_m^t\\ \vdots & \vdots & \ddots & \vdots\\ \delta_n^t x_1^t & \delta_n^t x_2^t & \cdots & \delta_n^t x_m^t\end{bmatrix},\qquad \nabla_U E = \sum_{i=1}^{t}\nabla_{U_i}E$$
Finally, the weights are updated using a gradient descent algorithm. The fundamental process of our work is summarized in Figure 3.
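The gradient accumulation described above can be sketched as follows for a single tanh hidden layer; for brevity the sketch propagates only the error from the final time step, whereas a full implementation would also add the local error at every intermediate step, and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def bptt_gradients(x_seq, s, delta_last, U, W):
    """Accumulate the gradients of the loss w.r.t. W and U over one sequence.

    x_seq      : (T, m) inputs, s : (T, n) hidden states from the forward pass
    delta_last : (n,)   error at the final time step
    Assumes a tanh activation, so f'(net_t) = 1 - s_t**2.
    """
    T = x_seq.shape[0]
    n = W.shape[0]
    dW = np.zeros_like(W)
    dU = np.zeros_like(U)
    delta = delta_last
    for t in range(T - 1, -1, -1):
        s_prev = s[t - 1] if t > 0 else np.zeros(n)
        dW += np.outer(delta, s_prev)     # contribution of time t to the W gradient
        dU += np.outer(delta, x_seq[t])   # contribution of time t to the U gradient
        if t > 0:
            # back-propagate one step in time: delta_{t-1} = f'(net_{t-1}) * (W^T delta_t)
            delta = (1.0 - s[t - 1] ** 2) * (W.T @ delta)
    return dU, dW

# gradient-descent update, as stated above:
#   W -= learning_rate * dW
#   U -= learning_rate * dU
```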
Taking sinusoidal excitation as an example, the RNN model predicts the dynamic load through a continuous process in which the amplitude at time $t$ is used as the previous information from which to infer the amplitude at time $t+1$. However, vibration data often have a long time record in which the excitation amplitude and frequency change with time. Under these circumstances, the simple RNN model is no longer suitable for the dynamic load identification process.
The method proposed here is based on LSTM, which is a variant of RNN. LSTM can save a long-term time record, which makes it possible to establish a relationship between the vibration response information and the dynamic load for the entire time domain. This feature enables the use of the change over time under a certain frequency and amplitude as a training data set to identify the dynamic loads under different frequencies and amplitudes. Moreover, LSTM can also better avoid the gradient disappearance problem caused by lengthy time histories compared to RNN.

2.3. Long Short-Term Memory Implementation

In vibration tests, the sampling rate is generally high and the acquisition time is long. Therefore, a vibration time series is often classified as a long series. In the process of calculating weight updates, the gradient may vanish because time-domain vibration data usually form a long time series [64]. Furthermore, the gradient may explode as the number of neural network layers increases [65]. Consequently, the use of long short-term memory is necessary in our work. The LSTM layer is shown in Figure 4.
In contrast to RNN, LSTM neurons add a forgetting gate, a memory gate, an input gate and an output gate. In Figure 4, $f_t$, $i_t$, $c_t$, $o_t$ and $h_t$ are the outputs of the forgetting gate, the input gate, the combination of the forgetting and input gates, the output gate and the final result, respectively. $\sigma$ and $\tanh$ are the sigmoid function and the hyperbolic tangent function, respectively. Furthermore, $W_f$, $W_i$, $W_c$ and $W_o$ are the weights of each part. These outputs can be written as:
$$\begin{aligned} f_t &= \sigma\left(W_f[h_{t-1}, x_t] + b_f\right)\\ i_t &= \sigma\left(W_i[h_{t-1}, x_t] + b_i\right)\\ c_t &= f_t \circ c_{t-1} + i_t \circ \tanh\left(W_c[h_{t-1}, x_t] + b_c\right)\\ o_t &= \sigma\left(W_o[h_{t-1}, x_t] + b_o\right)\\ h_t &= o_t \circ \tanh(c_t) \end{aligned}$$
in which $b_f$, $b_i$, $b_c$ and $b_o$ are the biases. Thus, the parameters to be learned in LSTM training are the weights and biases above. Just as with the RNN derivation, we again use $net$ as an intermediate variable. In this fashion, the intermediate output of each part can be obtained as:
$$\begin{aligned} net_{f,t} &= W_f[h_{t-1}, x_t] + b_f = W_{fh}h_{t-1} + W_{fx}x_t + b_f\\ net_{i,t} &= W_i[h_{t-1}, x_t] + b_i = W_{ih}h_{t-1} + W_{ix}x_t + b_i\\ net_{c,t} &= W_c[h_{t-1}, x_t] + b_c = W_{ch}h_{t-1} + W_{cx}x_t + b_c\\ net_{o,t} &= W_o[h_{t-1}, x_t] + b_o = W_{oh}h_{t-1} + W_{ox}x_t + b_o \end{aligned}$$
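A minimal sketch of one forward step of these gate equations is given below; the concatenated-weight layout mirrors the formulas above, and the dimensions and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, Wi, Wc, Wo, bf, bi, bc, bo):
    """One LSTM step; each weight matrix acts on the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(Wf @ z + bf)                        # forgetting gate
    i_t = sigmoid(Wi @ z + bi)                        # input gate
    c_t = f_t * c_prev + i_t * np.tanh(Wc @ z + bc)   # cell state update
    o_t = sigmoid(Wo @ z + bo)                        # output gate
    h_t = o_t * np.tanh(c_t)                          # final output
    return h_t, c_t
```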
Moreover, the chain structure of LSTM is similar to that of RNN. Finally, the error transmitted from time t to any time k can be depicted as:
$$\delta_k^T = \prod_{j=k}^{t-1}\left(\delta_{o,j}^T W_{oh} + \delta_{f,j}^T W_{fh} + \delta_{i,j}^T W_{ih} + \delta_{c,j}^T W_{ch}\right)$$
where the error of each part can be detailed as:
$$\begin{aligned} \delta_{o,t}^T &= \delta_t^T \circ \tanh(c_t) \circ o_t \circ (1 - o_t)\\ \delta_{f,t}^T &= \delta_t^T \circ o_t \circ \left(1 - \tanh(c_t)^2\right) \circ c_{t-1} \circ f_t \circ (1 - f_t)\\ \delta_{i,t}^T &= \delta_t^T \circ o_t \circ \left(1 - \tanh(c_t)^2\right) \circ \tilde{c}_t \circ i_t \circ (1 - i_t)\\ \delta_{c,t}^T &= \delta_t^T \circ o_t \circ \left(1 - \tanh(c_t)^2\right) \circ i_t \circ \left(1 - \tilde{c}_t^{\,2}\right),\qquad \tilde{c}_t = \tanh\left(W_c[h_{t-1}, x_t] + b_c\right) \end{aligned}$$
Similarly, the gradient between layers can be written as:
$$\frac{\partial E}{\partial net_t^{l-1}} = \left(\delta_{f,t}^T W_{fx} + \delta_{i,t}^T W_{ix} + \delta_{c,t}^T W_{cx} + \delta_{o,t}^T W_{ox}\right) \circ f'\!\left(net_t^{l-1}\right)$$
Eventually, the backpropagation through time (BPTT) algorithm is used to update the weights and biases and complete the training of the model, in the same manner as for the RNN.

3. Numerical Studies

The dynamic load identification steps of a beam structure based on RNN are:
  • Step A: Establish the deep network with BLSTM layers, LSTM layers and fully connected layers.
  • Step B: Two groups of vibration response data are prepared. The first is the vibration response under the unknown dynamic load which is to be identified. The second is the vibration response under a known dynamic load which is different from the first group and is used for training. The proposed algorithm is then trained using the second group with the known dynamic load, while the responses obtained from the first group are used to identify the unknown load. Furthermore, these data groups are divided into a training set, a validation set and a test set on the basis of the available computational capacity.
  • Step C: The backpropagation through time (BPTT) algorithm is used as a model training method to update the parameters of the model. In addition, the initial learning rate and batch size are set in the light of available computer memory. To accelerate the training speed, the training process is run on a GPU device.
  • Step D: The new vibration response data are used to test the identification effect of the model. In this paper, two methods are introduced to appraise the effect of identification: the peak relative error method (PREM) and the signal-to-noise ratio (SNR). PREM is the maximum value of the peak error of the load identification result and can be written in Equation (35) as:
$$PREM(X, Y) = \frac{\left|\max Y(i) - \max X(i)\right|}{\max X(i)} \times 100\%$$
in which X ( i ) and Y ( i ) are the actual load and the identified load signal, respectively. Moreover, SNR is the signal-to-noise ratio of dynamic load identification results, which describes the overall effect of dynamic load identification. The calculation of SNR can be detailed as:
$$SNR(X, Y) = 10\log_{10}\left[\frac{\sum_{i=1}^{n_s}X(i)^2}{\sum_{i=1}^{n_s}\left(X(i) - Y(i)\right)^2}\right]$$
where $n_s$ is the number of acquisition points in the analysis period.
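Both evaluation measures translate directly into a few lines of Python; the following sketch assumes discretized load signals stored as NumPy arrays and uses illustrative function names.

```python
import numpy as np

def prem(x, y):
    """Peak relative error in percent; x is the actual load, y the identified load."""
    return abs(np.max(y) - np.max(x)) / np.max(x) * 100.0

def snr(x, y):
    """Signal-to-noise ratio of the identification result in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))
```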
Additionally, we compare the results obtained with RNN against MLP. The two methods use the same vibration response data to identify the same dynamic load. Furthermore, the best-performing MLP structure is selected to identify the dynamic load, and the parameters shared by the two methods are set to the same values.

3.1. Model Parameters

We analyze the dynamic load identification cases of the simply supported beam under three kinds of excitation: a sinusoidal dynamic load, an impact dynamic load and a random load. All the loads are applied at one point, as shown in Figure 1. The simply supported beam is 5 m long, 0.25 m wide and 0.05 m thick. Moreover, the elastic modulus, Poisson's ratio and the density of the simply supported beam are 210 GPa, 0.31 and 7800 kg/m³, respectively, as shown in Table 1. In addition, the simply supported beam is divided into 10 sections and 11 nodes. Table 2 describes the distance from each node to the coordinate origin. We have carried out a modal analysis on the simply supported beam and the first ten natural frequencies are shown in Table 3.

3.2. Considered Cases

In order to evaluate the proposed method, three numerical cases are analyzed using RNN and MLP. Furthermore, noise at three SNR levels (10 dB, 20 dB and 30 dB) is added to the input vibration response data to compare the robustness of RNN and MLP. The dynamic load parameters of the three cases are:
Case 1: A sinusoidal excitation $F = 85\sin(30\pi t)$ is applied at $a = 1.5$ m. The time interval is 0.0001 s and the full considered time span is 1 s. The sinusoidal load function used for training is $F = 50\sin(15\pi t)$.
Case 2: An impact excitation is applied at $a = 1.5$ m and the function is:
$$F = \begin{cases}30\sin(50\pi t), & t\in[0.56,\ 0.58]\\ 50\sin(50\pi t), & t\in[0.64,\ 0.66]\end{cases}$$
The time interval is 0.0001 s and the full considered time span is 0.22 s. The dynamic load data for training are made up of continuous hammering from 0 s to 0.56 s using F and the vibration response data for training are obtained under this load.
Case 3: A random excitation is applied at a = 1.5 m. The variance of the excitation is 100 and the mean value is 0. Moreover, the time interval is 0.0001 s and the full considered time span is 0.2 s. The dynamic load used for training is a random excitation with a variance of 25 and a mean value of 0 while the vibration response data for training are obtained under this load. The parameters of the three dynamic loads are described in detail in Table 4.
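For reference, the three excitation signals and their training counterparts can be generated as in the following sketch; the array handling and the time vectors are illustrative simplifications of the case descriptions above, not the exact data-generation code used for the reported results.

```python
import numpy as np

dt = 1e-4  # time interval used in all three cases

# Case 1: sinusoidal excitation to be identified and its training counterpart
t1 = np.arange(0.0, 1.0, dt)
F_test_sin = 85.0 * np.sin(30.0 * np.pi * t1)
F_train_sin = 50.0 * np.sin(15.0 * np.pi * t1)

# Case 2: two half-sine impacts inside an otherwise zero record
t2 = np.arange(0.0, 0.9, dt)
F_impact = np.zeros_like(t2)
m1 = (t2 >= 0.56) & (t2 <= 0.58)
m2 = (t2 >= 0.64) & (t2 <= 0.66)
F_impact[m1] = 30.0 * np.sin(50.0 * np.pi * t2[m1])
F_impact[m2] = 50.0 * np.sin(50.0 * np.pi * t2[m2])

# Case 3: zero-mean Gaussian random excitations (variance 100 for testing, 25 for training)
t3 = np.arange(0.0, 0.2, dt)
F_test_rand = np.random.normal(0.0, np.sqrt(100.0), size=t3.size)
F_train_rand = np.random.normal(0.0, np.sqrt(25.0), size=t3.size)
```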
The RNN used for these cases has one BLSTM layer, one LSTM layer and two fully connected layers. BLSTM is a form of LSTM that allows the current output to be obtained from a combination of the previous output and the future output. Consequently, we added the BLSTM layer based on the statistical regularity of conventional excitation data. To allow a fair comparison, the number of neurons and layers in the fully connected layers of the RNN is the same as that in the MLP. Dropout regularization is used to improve the generalization ability of the model. The computations are performed on an Intel i5-9300H CPU and an NVIDIA GTX1650 GPU. In addition, we use both the GPU and the CPU to calculate the dynamic loads. The GPU is only used to improve the computing efficiency compared with the CPU, and its influence on the calculation accuracy is insignificant and can be ignored. Hence, the identification results are only presented for the GPU in this paper.
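The layer arrangement described above can be written compactly, for example with the Keras API; the layer widths, dropout rate and optimizer settings below are illustrative assumptions, not the exact values used for the reported results.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_sensors = 1  # number of response channels fed to the network (assumed)

model = models.Sequential([
    layers.Input(shape=(None, n_sensors)),                          # variable-length response sequences
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),   # BLSTM layer
    layers.LSTM(64, return_sequences=True),                         # LSTM layer
    layers.Dropout(0.2),                                            # dropout regularization
    layers.Dense(32, activation='relu'),                            # first fully connected layer
    layers.Dense(1),                                                # second fully connected layer: load at each step
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')
```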

3.3. Identification Results and Comparisons

Case 1: The data concerning the sinusoidal excitation applied on the beam were recovered with the RNN and MLP networks that were trained as described above. The results without noise and the absolute errors of the identification results are shown in Figure 5, in which we show comparisons of deep RNN, MLP and the actual load, as well as the absolute errors of deep RNN and MLP. The abscissa unit is s and the ordinate unit is N.
Figure 6 presents the results with different noise levels, specifically, 10 dB, 20 dB and 30 dB, and shows comparisons for deep RNN under these three noise levels, as well as the errors compared with the actual load. The abscissa unit is s and the ordinate unit is N. Table 5 describes the PREM and SNR of RNN and MLP without noise, as well as the PREM and SNR of RNN for the different considered noise levels. The results show that the error of the sinusoidal load identification based on RNN is smaller than that based on MLP without noise. Moreover, the PREM and SNR are within the acceptable range. After adding the noise, the error increases significantly when compared to a noise-free environment. Under 10 dB of noise, the PREM reaches a maximum of 6.18% and the SNR is 24.50. Overall, however, even when the noise is considered, the errors remain within an acceptable level for engineering accuracy. The obtained results indicate the accuracy and the robustness of the proposed method.
Case 2: The half-sine impact excitation $F = \begin{cases}30\sin(50\pi t), & t\in[0.56,\ 0.58]\\ 50\sin(50\pi t), & t\in[0.64,\ 0.66]\end{cases}$ is used as an excitation on the beam at $a = 1.5$ m. Again, we first compare the results of RNN and MLP where no noise is used. Next, the resilience of the RNN method is evaluated under noisy conditions. The presentation of the results is similar to Case 1. Figure 7 shows the comparison of the RNN and MLP identifications. The performance of RNN under noisy conditions is presented in Figure 8. Moreover, Table 6 shows the PREM and SNR for the different considered solutions. It can be inferred that the identification results of MLP concerning the impact load are unsatisfactory. This is especially so compared to the high accuracy of RNN and its ability to recover the load curve with good precision. Clearly, the MLP results capture the time instants of the two impacts, but the amplitude of the largest impact is significantly missed. Additionally, the robustness of the RNN model to the impact load is acceptable, given that its PREM reaches the maximum value of 9.37% at 10 dB, while the SNR is 22.52.
Case 3: The random load applied in this example is a Gaussian white noise with a variance of 100 and a mean value of 0. The dynamic load is again applied at $a = 1.5$ m. As before, the results of RNN and MLP are compared for a noise-free environment, while only the performance of RNN is evaluated under noise. The results without noise are plotted in Figure 9 and the results with noise in Figure 10, in which the abscissa unit is s and the ordinate unit is N. The PREM and SNR for the different solutions are presented in Table 7. The results are consistent with the previous cases and indicate the advantages of RNN in the time domain compared with MLP. The RNN shows a high level of identification accuracy in noise-free environments. When adding noise to the load identification input, the RNN results remain acceptable. The PREM reaches its maximum value of 1.69% under a 10 dB noise level, at which the SNR is 25.29.

4. Experimental Results

4.1. Experimental Setting

In this section, experimental results are used to further validate the reliability and feasibility of the proposed approach. A simply supported beam is set up with vibration analysis equipment, acceleration and force sensors, as well as a vibration exciter. The test setting is shown in Figure 11. Nine acceleration sensors (PCB unidirectional acceleration sensors) are arranged on the beam to collect the vibration response information. A vibration exciter (NTS vibration exciter) is applied 0.21 m away from the left end of the beam and a vibration analyzer (M+P VibMobile) is used to collect vibration data, as can be seen in the figure. Two types of load are applied, namely, sinusoidal excitation and random excitation. Additionally, the vibration responses under these two types of excitation are measured at the same location where the exciter is applied, i.e., 0.21 m from the left end. The function of the sinusoidal excitation is $F = 1.8\sin(150\pi t)$ and the random excitation is again Gaussian white noise with a variance of 100 and a mean value of 0. It should be noted that, for training purposes, the sinusoidal excitation is $F = 10\sin(100\pi t)$ and the random excitation is Gaussian white noise with a variance of 25 and a mean value of 0. The sampling rate is set to 6400 Hz. The time span of the calculations for the sinusoidal excitation and the random excitation is fixed at 0.5 s. Moreover, the experimental validation measures, PREM and SNR, are used to evaluate the identification accuracy. The parameters of the simply supported beam are shown in Table 8 and the parameters of the dynamic loads in Table 9.

4.2. Experimental Results

The excitations listed in Table 9 were applied to the simply supported beam. The vibration responses that were obtained were passed through the proposed model as described in Section 3.2. The results of the sinusoidal excitation are presented in Figure 12 and those of the random excitation in Figure 13, in which the abscissa unit is s and the ordinate unit is N, similar to the presentation of the numerical results. The PREM and SNR for the different obtained solutions are displayed in Table 10. The results again reflect the high reliability and accuracy of the proposed model. The PREMs of the sinusoidal excitation and the random excitation are 1.27% and 1.26%, respectively, while the SNRs of these two excitations are 36.42 and 46.28. The precision and robustness of the proposed method mean that it is of great practical value for many engineering applications. However, the proposed method requires a relatively large amount of vibration response data, which also means that an extended amount of time is needed to train the model. In addition, this method requires a priori data of the dynamic load, that is, historical data or similar data of the target dynamic load. For completely unknown dynamic loads, this method might face difficulties predicting the load accurately.

5. Implementation Factors

Taking the simply supported beam in Figure 11 with the sinusoidal excitations as an example, we now aim to perform a detailed analysis of the proposed method. We first evaluate the influence of the hyperparameters on the dynamic load identification results. The impact of the RNN models with different structures on the identification results is also studied. Then, we evaluate the proposed method of identifying dynamic loads under multi-point excitations. Finally, we check if the identification results are affected by the layout of the measuring points. To this end, we build three models using different layers, adjust the hyperparameters, namely the number of neurons and the learning rate, and compare the performance of the obtained results in terms of PREM, SNR and the training time.

5.1. Effect of Different Architectures and Hyperparameters

To discuss the influence of the model structure on the identification results, we constructed three different models: an RNN model (two layers without LSTM), an LSTM model (two layers with LSTM) and a BLSTM model (one BLSTM and one LSTM layer). Furthermore, the number of neurons, which affects the learning ability of the network, and the learning rate, which represents the calculation step size of the update algorithm, are changed to evaluate the influence of the hyperparameters on the identification results. All the considered variations and their relevant accuracy results are presented in Table 11. For reference, we also show in the table the respective GPU and CPU times, in minutes, needed to perform the computations.
In general, the proposed approach remains effective for the different cases shown in the table. However, changes in the structure and the hyperparameters show a meaningful impact on the identification results, especially for the structure without LSTM. Without LSTM, the PREM increases and the SNR decreases. This behavior is consistent with the fact that a plain RNN cannot deal with vanishing and exploding gradients. The model with BLSTM shows an improved identification ability, but it also requires longer training times. It should be noted that the BLSTM model is the structure considered in Section 3 and Section 4. Finally, increasing the number of neurons and reducing the learning rate improve the identification accuracy. Nevertheless, using an excessive number of neurons will result in over-fitting, while a very low learning rate will prevent the convergence of the network's gradients.

5.2. Effect of Multi-Point Excitations

To evaluate the effect of multi-point excitations, we apply two sinusoidal loads on the simply supported beam considered in Section 4. The first sinusoidal load (Excitation 1) is $F_1 = 1.8\sin(150\pi t)$ and is applied at $a = 0.21$ m from the left support, while the second (Excitation 2) is $F_2 = 2.5\sin(100\pi t)$ and is applied at $a = 0.56$ m. The BLSTM model in Section 5.1 is then used to identify the loads that are applied simultaneously. The identification results are presented in Figure 14 and the accuracy is shown in Table 12. Clearly, the identification results of the two-point excitation are worse than those under a single-point excitation, which is consistent with the research results for multi-point excitations in [66,67]. However, the proposed method can accurately identify the excitation curves, as can be seen in the figure. Moreover, the absolute error, PREM and SNR are acceptable.

5.3. Effect of Different Measuring Points

Finally, we want to evaluate the impact on the accuracy of using different numbers and positions of measuring points. Thus, we assess how choosing a specific measurement profile, compared to others, affects the identification accuracy of RNN. We propose five layouts of measurement points on the simply supported beam defined in Section 4. Each layout differs in the positions and the number of considered points, as shown in Figure 15. The distance from the measuring point to the left end, the distance from the excitation point to the right end and the distance between the measuring points are all fixed at 0.21 m. The BLSTM network defined in Section 5.1 is again used here. The impact of different layouts on the identification results is shown in Figure 16 and Table 13.
It can be inferred from the results that the identification accuracy is not affected by changing the measurement layout. Furthermore, the dynamic loads can be accurately identified even when using only one measuring point. In many engineering applications, this can be an important feature for the proposed approach, given that the accuracy of the RNN model is insensitive to the measurement layout. This feature can significantly reduce the complexity of the vibration measurements.

6. Conclusions

In this paper, a novel method based on a recurrent neural network is proposed for the dynamic load identification of a simply supported beam. The model is based on RNN and LSTM. The data needed to train and validate the model are created from different types of dynamic loads, i.e., sinusoidal, impact and random loads. The model is then used to identify the dynamic load using the vibration response from different excitations. The results show that the proposed method has a good identification accuracy and is reliable even when used with noisy measurements or when considering multiple excitations simultaneously. To evaluate the stability of the proposed algorithm, we also considered its performance using different network structures and different values for the hyperparameters. We finally analyze the sensitivity of the proposed algorithm to the number and the layout of the points where measurements are taken. Based on the presented results, the proposed model has the following advantages:
  • Compared with conventional methods, the proposed algorithm avoids the need to solve for the model parameters of the structure. This can significantly reduce the difficulty of dynamic load identification, as assessing the dynamic properties of a structure is not always possible.
  • The presented results show that the proposed algorithm for dynamic load identification is accurate, stable and robust.
  • The proposed method is suitable for single-point or multi-point excitations. Similarly, the method does not display sensitivity to changing the vibration measurement layouts.
  • Using different structures for the model network and the choice of the hyperparameters has a limited impact on the identification results. The choice of the structure and the hyperparameters can then be made based on balancing the required accuracy against the time available to train the network.
Although models built with RNN have clear advantages for applications that change over time, this work is the first to utilize such models to identify different dynamic loads applied to a simply supported beam. We hope this work can bring some fresh ideas to the dynamic load identification academic and industrial communities.

Author Contributions

Conceptualization, J.J. and H.Y.; methodology, J.J. and F.L.; software, M.S.M.; validation, G.C.; formal analysis, H.Y.; investigation, H.Y.; resources, M.S.M.; data curation, H.Y. and F.L.; writing—original draft preparation, H.Y.; writing—review and editing, M.S.M.; visualization, F.L.; supervision, G.C.; project administration, J.J. and M.S.M.; funding acquisition, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number: No. 51775270.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request; requests should be directed to the second co-author, Jinhui Jiang ([email protected]).

Acknowledgments

The author would like to acknowledge the support provided by the National Natural Science Foundation of China, No. 51775270, and Qinglan Project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mendrok, K.; Uhl, T. Load identification using a modified modal filter technique. J. Vib. Control 2010, 16, 89–105. [Google Scholar] [CrossRef]
  2. Alkhatib, R.; Golnaraghi, F.M. Active structural vibration control: A review. Shock Vib. Dig. 2003, 35, 367. [Google Scholar] [CrossRef]
  3. Aucejo, M.; Smet, O.D. An optimal Bayesian regularization for force reconstruction problems. Mech. Syst. Signal Process. 2019, 126, 98–115. [Google Scholar] [CrossRef] [Green Version]
  4. Xu, X.; Ou, J. Force identification of dynamic systems using virtual work principle. J. Sound Vib. 2015, 337, 71–94. [Google Scholar] [CrossRef]
  5. Jiang, J.; Kong, D. Joint user scheduling and MU-MIMO hybrid beamforming algorithm for mmWave FDMA massive MIMO system. Int. J. Antennas Propag. 2016, 2016 Pt 3, 1–10. [Google Scholar] [CrossRef] [Green Version]
  6. Jiang, J.; Seaid, M.; Mohamed, M.S.; Li, H. Inverse algorithm for real-time road roughness estimation for autonomous vehicles. Arch. Appl. Mech. 2020, 90, 1333–1348. [Google Scholar] [CrossRef]
  7. Yi, L.; Shepard, W.S. Dynamic force identification based on enhanced least squares and total least-squares schemes in the frequency domain. J. Sound Vib. 2005, 282, 37–60. [Google Scholar]
  8. Wu, E.; Tsai, C.Z.; Tseng, L.H. A deconvolution method for force reconstruction in rods under axial impact. J. Acoust. Soc. Am. 1998, 104, 1418–1426. [Google Scholar] [CrossRef]
  9. Zheng, S.; Lin, Z.; Lian, X.; Li, K. Technical note: Coherence analysis of the transfer function for dynamic force identification. Mech. Syst. Signal Process. 2011, 25, 2229–2240. [Google Scholar] [CrossRef]
  10. Liu, R.; Doriban, E.; Hou, Z.; Qian, K. Dynamic load identification for mechanical systems: A review. Arch. Comput. Methods Eng. 2021, 14, 1–33. [Google Scholar] [CrossRef]
  11. Bartlett, F.D.; Flannelly, W.G. Model verification of force determination for measuring vibratory loads. J. Am. Helicopter Soc. 1979, 24, 10–18. [Google Scholar] [CrossRef]
  12. Chao, M.; Hongxing, H.; Feng, X. The identification of external forces for a nonlinear vibration system in frequency domain. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2014, 228, 1531–1539. [Google Scholar] [CrossRef]
  13. Lage, Y.E.; Maia, N.M.M.; Neves, M.M.; Ribeiro, A.M.R. Force identification using the concept of displacement transmissibility. J. Sound Vib. 2013, 332, 1674–1686. [Google Scholar] [CrossRef]
  14. Pack, T.; Haug, E.J. Ill-Conditioned equations in kinematics and dynamics of machines. Int. J. Numer. Methods Eng. 2010, 26, 217–230. [Google Scholar]
  15. Zhou, L.; Sheng, S.F.; Wang, B.X.; Lian, X.M.; Li, K.Q. Coherence analysis method for dynamic force identification. J. Vib. Eng. 2011, 24, 14–19. [Google Scholar]
  16. Wang, L.; Peng, Y.; Xie, Y.; Chen, B.; Du, Y. A new iteration regularization method for dynamic load identification of stochastic structures. Mech. Syst. Signal Process. 2021, 156, 107586. [Google Scholar] [CrossRef]
  17. Jiang, J.; Tang, H.Z.; Mohamed, M.S.; Li, S.Y.; Chen, J.D. Augmented tikhonov regularization method for dynamic load identification. Appl. Sci. 2020, 10, 6348. [Google Scholar] [CrossRef]
  18. He, Z.C.; Zhang, Z.; Li, E. Random dynamic load identification for stochastic structural-acoustic system using an adaptive regularization parameter and evidence theory. J. Sound Vib. 2020, 471, 115188. [Google Scholar] [CrossRef]
  19. Feng, Y.; Gao, W. A quotient function method for selecting adaptive dynamic load identification optimal regularization parameter. DEStech Trans. Eng. Technol. Res. 2020. [Google Scholar] [CrossRef]
  20. Liu, C.; Ren, C.; Wang, N. Load identification method based on interval analysis and Tikhonov regularization and its application. Int. J. Electr. Comput. Eng. 2019, 11, 1–8. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, P.; Yang, G.L.; Xiao, H. Dynamic load identification theoretical summary and the application on mining machinery. Appl. Mech. Mater. 2013, 330, 811–814. [Google Scholar] [CrossRef]
  22. Hwang, J.S.; Lee, T.J.; Park, H.J.; Park, S.H. Dynamic force identification of a structure using state variable in the frequency domain. J. Wind. Eng. 2016, 20, 195–202. [Google Scholar]
  23. Zhang, J.; Li, W. Six-force-factor identification of helicopters. Acta Aeronaut. Astronaut. Sin. 1986, 1986, 2. [Google Scholar]
  24. Jie, L.; Meng, X.; Chao, J.; Xu, H.; Zhang, D. Time-domain Galerkin method for dynamic load identification. Int. J. Numer. Methods Eng. 2015, 105, 620–640. [Google Scholar]
  25. Jiang, J.; Ding, M.; Li, J. A novel time-domain dynamic load identification numerical algorithm for continuous systems. Mech. Syst. Signal Process. 2021, 160, 107881. [Google Scholar] [CrossRef]
  26. Chi, L.; Liu, J.; Jiang, C. Radial Basis Shape Function Method for Identification of Dynamic Load in Time Domain. Chin. J. Mech. Eng. 2013, 24, 285–289. [Google Scholar]
  27. Li, H.; Jiang, J.; Mohamed, M.S. Online dynamic load identification based on extended Kalman filter for structures with varying parameters. Symmetry 2021, 13, 1372. [Google Scholar] [CrossRef]
  28. Jiang, J.; Mohamed, M.S.; Seaid, M. Fast inverse solver for identifying the diffusion coefficient in time-dependent problems using noisy data. Arch. Appl. Mech. 2020, 4, 1623–1639. [Google Scholar] [CrossRef]
  29. Jiang, J.; Luo, S.Y.; Mohamed, M.S.; Liang, Z. Real-Time identification of dynamic loads using inverse Solution and Kalman filter. Appl. Sci. 2020, 10, 6767. [Google Scholar] [CrossRef]
  30. Liu, J.; Meng, X.; Zhang, D.; Jiang, C.; Han, X. An efficient method to reduce ill-posedness for structural dynamic load identification. Mech. Syst. Signal Process. 2017, 95, 273–285. [Google Scholar] [CrossRef]
  31. Cao, Y.; Wang, F.Q. Dynamic load identification in multi-freedom structure based on precise integration. J. Appl. Sci. 2006, 24, 547–550. [Google Scholar]
  32. Mxa, B.; Nan, J.B. Dynamic load identification for interval structures under a presupposition of ‘being included prior to being measured’. Appl. Math. Model. 2020, 85, 107–123. [Google Scholar]
  33. Yu, B.; Wu, Y.; Hu, P.; Ding, J.; Wang, B. A non-iterative identification method of dynamic loads for different structures. J. Sound Vib. 2020, 483, 115508. [Google Scholar] [CrossRef]
  34. Liu, J.; Li, K. Sparse identification of time-space coupled distributed dynamic load. Mech. Syst. Signal Process. 2021, 148, 107177. [Google Scholar] [CrossRef]
  35. Zhang, J.; Zhang, F.; Jiang, J. Identification of multi-point dynamic load positions based on filter coefficient method. J. Vib. Eng. Technol. 2021, 9, 563–573. [Google Scholar] [CrossRef]
  36. Cao, X.; Sugiyama, Y.; Mitsui, Y. Application of artificial neural networks to load identification. Comput. Struct. 1998, 69, 63–78. [Google Scholar] [CrossRef]
  37. Yl, A.; Lei, W.; Kg, C. A support vector regression (SVR)-based method for dynamic load identification using heterogeneous responses under interval uncertainties. Appl. Soft Comput. 2021, 110, 107599. [Google Scholar]
  38. Wang, C.; Chen, D.; Chen, J.; Lai, X.; He, T. Deep regression adaptation networks with model-based transfer learning for dynamic load identification in the frequency domain. Eng. Appl. Artif. Intell. 2021, 102, 104244. [Google Scholar] [CrossRef]
  39. Zhou, J.M.; Dong, L.; Guan, W.; Yan, J. Impact load identification of nonlinear structures using deep Recurrent Neural Network. Mech. Syst. Signal Process. 2019, 133, 106292. [Google Scholar] [CrossRef]
  40. Cooper, S.B.; Dimaio, D. Static load estimation using artificial neural network: Application on a wing rib. Adv. Eng. Softw. 2018, 125, 113–125. [Google Scholar] [CrossRef]
  41. Graves, A.; Mohamed, A.R.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics Speech, and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
  42. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef] [Green Version]
  43. Hao, N.; Yi, J.; Wen, Z.; Tao, J. Recurrent neural network based language model adaptation for accent mandarin speech. In Chinese Conference on Pattern Recognition; Springer: Singapore, 2016; pp. 607–617. [Google Scholar]
  44. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  45. Graves, A.; Schmidhuber, J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 2005, 18, 602–610. [Google Scholar] [CrossRef]
46. Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [Green Version]
  47. Han, J.; Zou, Y.; Zhang, S.; Tang, J.; Wang, Y. Short-Term speed prediction using remote microwave sensor data: Machine learning versus statistical model. Math. Probl. Eng. 2016, 2016, 1–13. [Google Scholar]
48. Liu, J.; Shahroudy, A.; Xu, D.; Wang, G. Spatio-Temporal LSTM with trust gates for 3D human action recognition. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 816–833. [Google Scholar]
  49. Li, C.; Zhang, Y.; Zhao, G. Deep learning with long short-term memory networks for air temperature predictions. In Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing, Dublin, Ireland, 16–18 October 2019. [Google Scholar]
  50. Haber, R.E.; Alique, J.R. Fuzzy logic-based torque control system for milling process optimization. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2007, 37, 941–950. [Google Scholar] [CrossRef]
  51. Haber, R.E.; Alique, J.R.; Alique, A.; Hernandez, J.; Uribe-Etxebarria, R. Embedded fuzzy-control system for machining processes: Results of a case study. Comput. Ind. 2003, 50, 353–366. [Google Scholar] [CrossRef]
  52. Villalonga, A.; Beruvides, G.; Castano, F.; Haber, R. Cloud-based industrial cyber-physical system for data-driven reasoning. A review and use case on an industry 4.0 pilot line. IEEE Trans. Industr. Inform. 2020, 16, 5975–5984. [Google Scholar] [CrossRef]
  53. Stachurski, Z.H. On structure and properties of amorphous materials. Materials 2011, 4, 1564–1598. [Google Scholar] [CrossRef]
54. Montanari, D.; Agostini, A.; Bonini, M.; Corti, G.; Del Ventisette, C. The use of empirical methods for testing granular materials in analogue modelling. Materials 2017, 10, 635. [Google Scholar] [CrossRef]
  55. Duan, Q.; An, J.; Mao, H.; Liang, D.; Li, H.; Wang, S.; Huang, C. Review about the application of fractal theory in the research of packaging materials. Materials 2021, 14, 860. [Google Scholar] [CrossRef]
  56. Zhang, N.; She, W.; Du, F.; Xu, K. Experimental study on mechanical and functional properties of reduced graphene Oxide/Cement composites. Materials 2020, 13, 3015. [Google Scholar] [CrossRef] [PubMed]
  57. Fang, Z. Identification of dynamic load based on series expansion. J. Vib. Eng. 1996, 1996, 6461427. [Google Scholar]
58. Dunkl, C.F.; Xu, Y. Orthogonal Polynomials of Several Variables; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  59. Bhat, R.B. Natural frequencies of rectangular plates using characteristic orthogonal polynomials in Rayleigh-Ritz method. J. Sound Vib. 1985, 102, 493–499. [Google Scholar] [CrossRef]
  60. Lee, D.S. Improved Activation Functions of Deep Convolutional Neural Networks for Image Classification. Master’s Thesis, Graduate School of UNIST, Ulsan, Korea, February 2017. [Google Scholar]
  61. Feng, L.I.Z.C. Identification of moving loads for complex bridge structures based on time finite element model. Trans. Tianjin Univ. 2006, 9, 1043–1047. [Google Scholar]
  62. Hwang, K.; Sung, W. Character-level incremental speech recognition with recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016. [Google Scholar]
  63. Shin, J.; Yeon, K.; Kim, S.; Sunwoo, M.; Han, M. Comparative study of Markov chain with recurrent neural network for short term velocity prediction implemented on an embedded system. IEEE Access 2021, 9, 24755–24767. [Google Scholar] [CrossRef]
  64. Munir, K.; Elahi, H.; Ayub, A.; Frezza, F.; Rizzi, A. Cancer diagnosis using deep learning: A bibliographic review. Cancers 2019, 11, 1235. [Google Scholar] [CrossRef] [Green Version]
  65. Ji, Z.; Wang, B.; Deng, S.P.; You, Z. Predicting dynamic deformation of retaining structure by LSSVR-based time series method. Neurocomputing 2014, 137, 165–172. [Google Scholar] [CrossRef]
  66. Shi, Z.L.; Li, Z.X. Methods of seismic response analysis for long-span bridges under multi-support excitations of random earthquake ground motion. Earthq. Eng. Eng. Vib. 2003, 23, 124–130. [Google Scholar]
  67. Bai, F.Y.; Liu, J.Q.; Li, L. Elasto-plastic analysis of large span reticulated shell structure under multi-support excitations. Adv. Mat. Res. 2012, 446, 54–58. [Google Scholar]
Figure 1. Dynamic load identification diagram of a simply supported beam.
Figure 2. RNN structure of a single hidden layer.
Figure 3. Dynamic load identification process based on RNN.
Figure 4. Structure of LSTM.
Figure 5. Identification results for a sinusoidal excitation: (a) comparison of the loads identified by the deep RNN and the MLP; (b) comparison of the corresponding identification errors.
Figure 6. Identification results for a sinusoidal excitation with noise: (a) comparison of the loads identified by the deep RNN under three noise levels; (b) comparison of the corresponding identification errors with respect to the actual load.
Figure 7. Identification results for an impact excitation: (a) comparison of the loads identified by the deep RNN and the MLP; (b) comparison of the corresponding identification errors.
Figure 8. Identification results for an impact excitation with noise: (a) comparison of the loads identified by the deep RNN under three noise levels; (b) comparison of the corresponding identification errors with respect to the actual load.
Figure 9. Identification results for a random excitation: (a) comparison of the loads identified by the deep RNN and the MLP; (b) comparison of the corresponding identification errors.
Figure 10. Identification results for a random excitation with noise: (a) comparison of the loads identified by the deep RNN under three noise levels; (b) comparison of the corresponding identification errors with respect to the actual load.
Figure 11. Experimental setup.
Figure 12. Identification results for the sinusoidal excitation in the practical experiment: (a) load identified by the deep RNN compared with the actual load; (b) absolute error of the deep RNN identification.
Figure 13. Identification results for the random excitation in the practical experiment: (a) load identified by the deep RNN compared with the actual load; (b) absolute error of the deep RNN identification.
Figure 14. Identification results for multi-point excitation: (a) identification curve for Excitation 1; (b) absolute error for Excitation 1; (c) identification curve for Excitation 2; (d) absolute error for Excitation 2.
Figure 15. The different layouts of measuring points considered.
Figure 16. Identification results for the different layouts of measuring points: (a) identification curve for layout 1; (b) absolute error for layout 1; (c) identification curve for layout 2; (d) absolute error for layout 2; (e) identification curve for layout 3; (f) absolute error for layout 3; (g) identification curve for layout 4; (h) absolute error for layout 4; (i) identification curve for layout 5; (j) absolute error for layout 5.
Table 1. Parameters of the simply supported beam.
Parameter            Value
Length l             5 m
Width a              0.25 m
Thickness b          0.05 m
Elastic modulus E    210 GPa
Poisson's ratio ε    0.31
Density ρ            7800 kg/m³
Table 2. Specific location of measuring points.
Measuring point    1     2     3     4     5     6     7     8     9
Position (m)       0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0   4.5
Table 3. Natural frequency of the simply supported beam.
Modal order       1      2      3      4      5      6        7        8        9        10
Frequency (Hz)    4.7    18.8   42.3   52.6   74.9   116.4    117.1    142.6    165.4    219.1
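As a cross-check on Table 3, the lowest bending frequencies follow directly from the geometry and material data in Table 1. The snippet below is a minimal sketch assuming an Euler–Bernoulli simply supported beam bending through its thickness; it is an illustration, not the authors' computation.

```python
# Minimal sketch (assumed Euler–Bernoulli, hinged–hinged beam) recovering the
# lowest bending frequencies of Table 3 from the parameters listed in Table 1.
import numpy as np

L = 5.0        # length l (m)
a = 0.25       # width (m)
b = 0.05       # thickness (m)
E = 210e9      # elastic modulus (Pa)
rho = 7800.0   # density (kg/m^3)

A = a * b               # cross-sectional area (m^2)
I = a * b**3 / 12.0     # second moment of area for bending through the thickness (m^4)

# f_n = (n^2 * pi / (2 L^2)) * sqrt(E I / (rho A))
n = np.arange(1, 4)
f = n**2 * np.pi / (2.0 * L**2) * np.sqrt(E * I / (rho * A))
print(f)   # ~4.71, 18.82, 42.35 Hz, consistent with the first entries of Table 3
```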
Table 4. Load conditions of the three considered cases.
Case 1, sinusoidal load: x = 1.5 m, F = 85 sin(30πt); the excitation for training is x = 1.5 m, F = 50 sin(15πt). Sampling: Δt = 0.0001 s, t = 1 s.
Case 2, impact load: x = 1.5 m, F = 30 sin(50πt) for t ∈ [0.56, 0.58] and F = 50 sin(50πt) for t ∈ [0.64, 0.66]; the excitation for training is x = 1.5 m, F = 30 sin(50πt) for t ∈ [0.08, 0.10], [0.24, 0.26], [0.40, 0.42] and F = 50 sin(50πt) for t ∈ [0.16, 0.18], [0.32, 0.34], [0.48, 0.50]. Sampling: Δt = 0.0001 s, t = 0.22 s.
Case 3, random load: white Gaussian noise at x = 1.5 m, variance = 100, mean = 0; the excitation for training is white Gaussian noise at x = 1.5 m, variance = 25, mean = 0. Sampling: Δt = 0.0001 s, t = 0.2 s.
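To make the load cases of Table 4 concrete, the three test excitations can be synthesised as sampled time histories. The sketch below is a hedged example only: the helper names, the random seed and the signal duration used for the impact case are ours, while the amplitudes, frequencies, burst windows and sampling step come from the table.

```python
# Hedged sketch generating the three test excitations of Table 4 as time series.
import numpy as np

dt = 1e-4  # sampling step from Table 4 (s)

def sinusoidal(amplitude, omega, duration):
    t = np.arange(0.0, duration, dt)
    return t, amplitude * np.sin(omega * t)

def impact(duration, bursts):
    """Sine bursts; 'bursts' is a list of (amplitude, t_start, t_end) from Table 4."""
    t = np.arange(0.0, duration, dt)
    F = np.zeros_like(t)
    for amp, t0, t1 in bursts:
        mask = (t >= t0) & (t <= t1)
        F[mask] = amp * np.sin(50.0 * np.pi * t[mask])
    return t, F

def random_load(variance, duration, seed=0):
    t = np.arange(0.0, duration, dt)
    rng = np.random.default_rng(seed)
    return t, rng.normal(0.0, np.sqrt(variance), size=t.size)  # white Gaussian noise

# Case 1: F = 85 sin(30πt) at x = 1.5 m, 1 s long
t1, F1 = sinusoidal(85.0, 30.0 * np.pi, 1.0)
# Case 2: bursts of 30 N and 50 N in [0.56, 0.58] s and [0.64, 0.66] s (duration chosen to cover them)
t2, F2 = impact(0.7, [(30.0, 0.56, 0.58), (50.0, 0.64, 0.66)])
# Case 3: white Gaussian noise, variance 100, 0.2 s long
t3, F3 = random_load(100.0, 0.2)
```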
Table 5. The evaluation of sinusoidal excitation identification results.
         No noise             RNN with noise
         RNN       MLP        10 dB    20 dB    30 dB
PREM     0.22%     29.51%     6.18%    4.24%    2.66%
SNR      55.07     13.39      24.50    22.73    32.11
Table 6. Evaluation of impact excitation identification results.
         No noise             RNN with noise
         RNN       MLP        10 dB    20 dB    30 dB
PREM     2.94%     19.16%     9.37%    5.70%    4.64%
SNR      34.10     17.15      22.52    27.01    28.62
Table 7. Evaluation of random excitation identification results.
         No noise             RNN with noise
         RNN       MLP        10 dB    20 dB    30 dB
PREM     1.71%     40.65%     1.69%    1.43%    0.82%
SNR      43.27     14.85      25.29    35.98    47.20
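Tables 5–7 summarise each case with two scalar measures, PREM and SNR, and the noisy cases correspond to responses contaminated at 10, 20 and 30 dB. The snippet below is only an assumed reading of these quantities (PREM as the relative error at the load peak, SNR as the actual-to-error power ratio in decibels, and additive Gaussian noise scaled to a target SNR); it should be checked against the definitions given in the paper.

```python
# Hedged sketch of the evaluation metrics reported in Tables 5–7 and of noise
# injection at a prescribed SNR; the exact definitions in the paper may differ.
import numpy as np

def prem(actual, identified):
    k = np.argmax(np.abs(actual))                 # sample where the actual load peaks
    return abs(identified[k] - actual[k]) / abs(actual[k])

def snr_db(actual, identified):
    err = actual - identified
    return 10.0 * np.log10(np.sum(actual**2) / np.sum(err**2))

def add_noise(signal, target_snr_db, seed=0):
    rng = np.random.default_rng(seed)
    noise_power = np.mean(signal**2) / 10.0**(target_snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
```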
Table 8. Parameters for the simply supported beam experiment.
Parameter            Value
Length l             0.7 m
Width a              0.04 m
Thickness b          0.008 m
Elastic modulus E    210 GPa
Poisson's ratio ε    0.3
Density ρ            7800 kg/m³
Table 9. Load conditions for two cases of experiments.
Case 1, sinusoidal load: x = 0.21 m, F = 1.8 sin(150πt); the excitation for training is x = 0.21 m, F = 10 sin(100πt). Sampling: Δt = 1/6400 s, t = 0.5 s.
Case 2, random load: white Gaussian noise at x = 0.21 m, variance = 100, mean = 0; the excitation for training is white Gaussian noise at x = 0.21 m, variance = 25, mean = 0. Sampling: Δt = 1/6400 s, t = 0.5 s.
Table 10. Evaluation of experimental results.
         Sinusoidal excitation    Random excitation
PREM     1.27%                    1.26%
SNR      36.42                    46.28
Table 11. Identification results with changing structures and hyperparameters.
Structure                  Number of Neurons   Learning Rate   Training Time by GPU (CPU) in min   PREM     SNR
1 BLSTM + 1 LSTM + 2 FC    128                 0.01            18 (25)                              1.35%    33.28
                           128                 0.005           41 (68)                              1.27%    44.29
                           128                 0.001           52 (86)                              1.27%    45.42
                           256                 0.01            29 (48)                              1.25%    46.42
                           256                 0.005           58 (77)                              1.27%    45.20
                           256                 0.001           85 (132)                             1.22%    45.58
2 LSTM + 2 FC              128                 0.01            12 (17)                              2.86%    28.17
                           128                 0.005           28 (36)                              1.52%    35.74
                           128                 0.001           46 (66)                              1.48%    37.55
                           256                 0.01            25 (40)                              4.79%    30.74
                           256                 0.005           65 (88)                              2.87%    38.26
                           256                 0.001           85 (125)                             2.76%    38.65
2 RNN + 2 FC               128                 0.01            12 (15)                              8.78%    22.93
                           128                 0.005           21 (32)                              6.42%    25.66
                           128                 0.001           32 (47)                              7.29%    27.31
                           256                 0.01            22 (25)                              5.31%    23.75
                           256                 0.005           34 (47)                              4.10%    28.12
                           256                 0.001           41 (56)                              4.08%    31.07
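To illustrate the first architecture of Table 11, a minimal Keras sketch of the 1 BLSTM + 1 LSTM + 2 FC configuration with 128 recurrent units and a learning rate of 0.001 is given below. The input shape, the width of the first fully connected layer and the sequence-to-sequence output are assumptions made for this example, not details taken from the paper.

```python
# Minimal Keras sketch of the "1 BLSTM + 1 LSTM + 2 FC" configuration of Table 11
# (128 recurrent units, Adam with learning rate 0.001); shapes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

n_steps, n_sensors = 2000, 9   # assumed: response samples x measuring points

model = models.Sequential([
    layers.Input(shape=(n_steps, n_sensors)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),   # BLSTM layer
    layers.LSTM(128, return_sequences=True),                         # LSTM layer
    layers.TimeDistributed(layers.Dense(64, activation='relu')),     # FC layer 1 (width assumed)
    layers.TimeDistributed(layers.Dense(1)),                         # FC layer 2: load at each time step
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='mse')
```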
Table 12. Evaluation of the results of multi-point excitation.
         Excitation 1    Excitation 2
PREM     7.07%           3.52%
SNR      20.52           24.27
Table 13. Evaluation of the results for the different layouts of measuring points.
         Layout 1    Layout 2    Layout 3    Layout 4    Layout 5
PREM     1.60%       2.42%       0.90%       2.35%       1.61%
SNR      34.02       29.41       30.47       30.32       28.43
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
