1. Introduction
In today’s rapidly advancing bridge industry, the number of cable-stayed bridges, suspension bridges, arch bridges, and other cable-supported structures is increasing. Cables are crucial load-bearing components, and their failure can lead to severe bridge accidents. Incidents such as the cable rupture on the Machchhu River bridge in Gujarat, India, resulting in at least 132 fatalities, and the collapse of the Nanfang’ao Bridge in Taiwan, causing at least 3 deaths, underscore the importance of accurately determining cable tension to ensure bridge safety.
Ghasemi et al. [
1] used the lumped plasticity approach to model moment-resisting frames in finite element software, investigating a cost-effective and easy-to-install displacement-restraint cable bracing technique (the cable cylinder). Zhang et al. [
2] explored the enhanced seismic performance of cable brackets. Building on the concrete-filled steel tube reinforced concrete plate (CFST-RCP) column, their work proposed replacing the reinforced concrete (RC) plate material with ultra-high-performance concrete (UHPC), which has higher compressive strength, to form a concrete-filled steel tube composite column with ultra-high-performance concrete plates (CFST-UHPC column). This could reduce casualties caused by bridge failures in earthquakes. Wei et al. [
3] also analyzed the flexural performance of steel fiber-reinforced concrete–normal concrete (SFRC-NC) by experimental and numerical analysis.
Utilizing cable vibration frequencies to determine tension is common in current cable tension testing for bridges. The vibration frequency method, also known as the spectral method, involves exciting cables through artificial or environmental means, collecting acceleration signals from sensors fixed to the cables, analyzing the signals, and plotting a spectrum to determine the cables’ vibration frequency. The tension is then calculated based on the relationship between tension and natural frequency. Overseas scholars have studied this previously. In 1938, German engineer Franz Dischinger [
4] proposed the first theoretical calculation model of a cable-stayed bridge. Abdel-Ghaffar’s [
5] study on the vibration characteristics of the suspension cables of the Golden Gate Bridge paved the way for investigating cable structure vibrations under environmental excitations. Zhao Di et al. [
6] employed the Monte Carlo method (MCM) to calculate the optimal estimate and uncertainty of tension in a measurement model. Irvine, H.M. and Caughey, T.K. [
7] deduced the equilibrium equation and proposed that the influence of cable sag on cable tension should be considered via the dimensionless parameter λ², which is also one of the basic parameters in the theory of cable string vibration. Chen and Dong [
8], Ren and Chen [
9], Shao et al. [
10], Wang, J. [
11], and others further considered the influence of dimensionless parameters. Yi and Liu [
12], based on the basic principles of the frequency method and employing mathematical elimination, derived a tension calculation formula that considers bending stiffness but does not explicitly reflect it, offering a simple and rapid calculation method, albeit with ambiguity regarding cable length selection. Wu et al. [
13] proposed an optimization method based on the response surface method, which provides a new idea for the analysis of large-span arch bridges.
An analysis of the current frequency method reveals that, although there are formulas that consider factors such as effective cable length, boundary conditions, and bending stiffness, the complexity of actual engineering situations often makes it difficult to determine these factors accurately. Consequently, incorrect judgments can lead to inaccuracies in cable tension calculation results, undermining their reliability.
Neural networks are capable of adapting to various types of regression problems, handling complex nonlinear relationships, and accurately predicting outcomes by learning from large datasets. Scholars have already integrated neural networks into tension prediction research: Xiao Liang [
14] utilized a large amount of finite element simulation data to train neural networks but simplified the boundary conditions to hinged and fixed, which significantly oversimplifies the actual situation. Gai and Zeng [
15] used ANSYS to carry out a numerical simulation of cable vibration, verified the cable tension against an existing cable tension calculation formula, and, combining this with vibration model data, used a generalized regression neural network to predict bridge cable tension, thereby avoiding the influence of misjudged cable boundary conditions on the accuracy of tension identification.
Although a large amount of data can be obtained using finite element simulations, such approaches still rely on the frequency method. The authors of this paper depart from this tradition and use field-measured data to train an artificial neural network model. In the proposed cable tension prediction method, line density, cable length, and fundamental frequency are used as the input parameters of the neural network, and cable tension is used as the output parameter. The results show that the method proposed in this paper has good accuracy and provides a reference for the prediction of cable tension in engineering.
2. Establishment of BP Neural Network
2.1. Principles of the BP Neural Network
The BP neural network [
16] is a feed-forward neural network model based on the back propagation algorithm. Its core idea is as follows: data are input into the neural network through the input layer, then the data are progressively processed through hidden layers, and, finally, results are obtained in the output layer. During the training process, the first step is to initialize the network parameters; that is, the weights and biases of the neural network need to be initialized. These parameters are typically randomly initialized, but they can also be initialized based on experience or other methods.
The second step is forward propagation: given input data, the predicted values are calculated through the forward propagation of the network. The forward propagation process involves a series of linear transformations and nonlinear transformations (activation functions), resulting in the output values. The specific steps are as follows:
Input layer: input data are fed into the input layer of the neural network.
Hidden layer: the input data undergo linear transformations through weights and biases and pass through activation functions, and the output of the hidden layer is obtained.
Output layer: the output of the hidden layer undergoes linear transformations through weights and biases, and the final output of the neural network is obtained.
The third step is to compute the loss function: the output values of the neural network are compared with the true label values, and the value of the loss function is calculated. Common loss functions include the mean square error (MSE) and the mean absolute percentage error (MAPE).
The fourth step is back propagation: the gradients of each parameter are calculated with respect to the loss function using the back propagation algorithm. The specific steps are as follows:
Output layer: the partial derivatives of the output layer parameters (weights and biases) are computed with respect to the loss function.
Hidden layer: the chain rule is used to propagate the gradients backward, and the partial derivatives of the hidden layer parameters (weights and biases) are computed with respect to the loss function.
Update parameters: the network parameters are updated based on the gradients using the gradient descent algorithm or other optimization algorithms to minimize the loss function.
The fifth step is to repeat the iteration: the forward propagation and back propagation process is continuously repeated, and the network parameters are continuously adjusted until reaching the predetermined stopping conditions (such as reaching the maximum number of iterations and convergence of the loss function).
The sixth step is prediction: after network training is completed, the trained model can be used to predict new input data, with the prediction results obtained through forward propagation. This continual adjustment of weights and biases to minimize the error between predicted and actual values is the essence of the back propagation algorithm, and it enables the learning and representation of complex nonlinear relationships. The topological structure of a multi-layer neural network model [
17] is illustrated in the diagram below (
Figure 1).
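The six training steps above can be condensed into a toy example. The following is a minimal sketch (fixed initial weights, a 1-2-1 network fitting y = 2x), not the paper’s actual 3-4-6-1 model:

```python
import math

def train_step(x, t, W1, b1, W2, b2, lr=0.1):
    """One forward + backward pass for a 1-input, 2-hidden-neuron (TanH), 1-output (linear) net."""
    # step 2: forward propagation
    h = [math.tanh(W1[j] * x + b1[j]) for j in range(2)]   # hidden layer
    y = sum(W2[j] * h[j] for j in range(2)) + b2           # output layer (linear)
    # step 3: squared-error loss
    loss = (y - t) ** 2
    # step 4: back propagation via the chain rule
    dy = 2.0 * (y - t)
    for j in range(2):
        dh = dy * W2[j] * (1.0 - h[j] ** 2)                # tanh'(z) = 1 - tanh(z)^2
        W2[j] -= lr * dy * h[j]                            # step 5: gradient-descent updates
        W1[j] -= lr * dh * x
        b1[j] -= lr * dh
    b2 -= lr * dy
    return loss, b2

# step 1: initialize parameters (fixed here for reproducibility)
W1, b1 = [0.5, -0.4], [0.0, 0.0]
W2, b2 = [0.3, -0.2], 0.0

# steps 5-6: repeat forward/backward passes until the loss shrinks; target function is y = 2x
for _ in range(2000):
    for x in (0.1, 0.3, 0.5):
        loss, b2 = train_step(x, 2.0 * x, W1, b1, W2, b2)
```

After training, the per-sample loss is driven close to zero, illustrating how repeated iteration adjusts the weights and biases.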
2.2. Establishment of the Neural Network
According to string vibration theory, when bending stiffness is not considered and the boundary conditions of the cable are regarded as hinged, the explicit relationship between the cable tension and its natural frequency is [18]
T = 4mL²fₙ²/n², (1)
where T is the cable tension, m is the mass per unit length (line density) of the cable, L is the cable length, fₙ is the natural frequency of the n-th mode, and n is the mode number.
In the above equation, it can be observed that the main factors influencing cable tension are cable line density, cable length, and natural frequency.
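Under this taut-string relationship, a quick numerical sketch (with hypothetical cable parameters, not values from this study) is:

```python
def cable_tension(m, L, f_n, n=1):
    """Taut-string estimate (no bending stiffness, hinged ends): T = 4*m*L^2*(f_n/n)^2."""
    return 4.0 * m * L ** 2 * (f_n / n) ** 2

# hypothetical cable: line density 80 kg/m, length 30 m, fundamental frequency 2.5 Hz
T1 = cable_tension(80.0, 30.0, 2.5)        # tension in N, from the first mode
T2 = cable_tension(80.0, 30.0, 5.0, n=2)   # same cable, estimated from the second mode
```

For an ideal taut string, every mode yields the same tension estimate, which is why the mode number n appears in the denominator.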
Therefore, cable line density, cable length, and frequency are chosen as the input layer of the neural network. The accuracy of neural network training can be improved by adjusting the number of hidden layers and the number of neurons in each hidden layer. The learning rate, number of training iterations, target error, and target gradient are set to 0.01, 2000, 1 × 10−6, and 1 × 10−7, respectively. Testing showed that with two hidden layers, the model’s gradient reaches 9.58 × 10−8, close to the target gradient, indicating that the parameters are close enough to the optimal solution, at which point the algorithm stops iterating; with more than two hidden layers, the gradient only reaches 6 × 10−4 and more iterations are needed; with fewer than two hidden layers, the prediction performance is poorer and the target gradient is difficult to approach. Therefore, two hidden layers are chosen. The number of nodes was determined by repeatedly testing the model and observing the relationship between the error and the number of hidden-layer nodes, yielding four and six nodes, respectively. More nodes increase the risk of overfitting, while with fewer nodes the model often failed to reach the target gradient even after more than 2000 iterations. In summary, the neural network model was determined to be 3-4-6-1.
There are various activation functions for neural networks, such as sigmoid, rectified linear unit (ReLU), and hyperbolic tangent (TanH). In this study, the TanH function is used between the input layer and the hidden layers, as well as between the hidden layers. TanH is a nonlinear activation function with an output range between [−1,1], and it is commonly used in the hidden layers of neural networks to normalize data and avoid inaccuracies in training results caused by significant differences in data magnitudes. The Purelin function is used between the hidden layers and the output layer. It is a linear function with no output range restrictions and is commonly used for model denormalization, as it allows the model to output any real number, making it suitable for linear problems in the output layer.
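As an illustration (not the authors’ implementation), the 3-4-6-1 topology with TanH hidden layers and a Purelin (identity) output can be sketched in pure Python:

```python
import math, random

random.seed(0)

# layer sizes of the model described above: 3 inputs -> 4 -> 6 -> 1 output
SIZES = [3, 4, 6, 1]
# randomly initialized weights and biases (step 1 of training)
weights = [[[random.uniform(-1, 1) for _ in range(SIZES[k])]
            for _ in range(SIZES[k + 1])] for k in range(3)]
biases = [[random.uniform(-1, 1) for _ in range(SIZES[k + 1])] for k in range(3)]

def forward(x):
    """Forward pass: TanH on both hidden layers, Purelin (identity) on the output."""
    a = x
    for k in range(3):
        z = [sum(w * ai for w, ai in zip(row, a)) + b
             for row, b in zip(weights[k], biases[k])]
        a = [math.tanh(zj) for zj in z] if k < 2 else z   # Purelin on the last layer
    return a[0]

# inputs would be (normalized) line density, cable length, and frequency
y = forward([0.2, -0.5, 0.9])   # a single scalar tension prediction
```

Because TanH bounds the hidden activations to (−1, 1) while Purelin is unbounded, the output layer can produce any real-valued (denormalized) tension.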
2.3. Training and Prediction of the BP Model
In this study, 200 sets of data obtained from an actual bridge survey were randomly arranged, and 149 sets of measured data were selected to train the model (see
Section 4.1 Overview of the Project for the specific survey scheme), with the first 110 groups used as the training dataset and the remaining 39 groups used as the test dataset.
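The shuffling and splitting described above can be sketched as follows (integer indices stand in for the measured data sets; the seed is an arbitrary choice for reproducibility):

```python
import random

records = list(range(200))      # stand-ins for the 200 measured data sets
random.seed(42)                 # arbitrary seed, for reproducibility only
random.shuffle(records)         # random arrangement of the survey data
selected = records[:149]        # 149 sets selected for the model
train_set = selected[:110]      # first 110 groups: training dataset
test_set = selected[110:]       # remaining 39 groups: test dataset
```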
During the model training process, as the neural network continuously adjusted the initial weights and biases, scatter plots of the training and testing set prediction results were obtained, as shown in
Figure 2 and
Figure 3.
3. Establishment of PSO-BP Neural Network
3.1. Principles of the PSO Algorithm
The particle swarm optimization (PSO) algorithm, also known as the bird flocking algorithm, is a population-based intelligent algorithm. In PSO, solutions are represented as particles in the solution space, with each particle having a position and a velocity. The core idea of the algorithm, inspired by the behavior of biological populations such as flocks of birds and schools of fish, is to find the optimal solution by simulating the search process of particles in the solution space. The position of each particle i is represented as
Xi = (xi1, xi2, …, xid),
where d represents the dimension of the problem. The velocity of each particle is represented as
Vi = (vi1, vi2, …, vid).
The fitness of each particle is represented as f(Xi).
Each particle has two most important positions: the individual best position Pi and the global best position Pg.
The process of particle updating is as follows:
vid = w·vid + c1·rand1·(pid − xid) + c2·rand2·(pgd − xid),
xid = xid + vid,
where w is the inertia weight, c1 and c2 are acceleration coefficients, and rand1 and rand2 are random numbers uniformly distributed in [0, 1].
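The update rule can be sketched per particle as follows; the inertia weight and clamping limits below are illustrative placeholders rather than the study’s tuned values:

```python
import random

random.seed(7)

def pso_step(x, v, p_best, g_best, w=0.8, c1=2.0, c2=2.0, v_lim=1.0, x_lim=2.0):
    """One velocity/position update for a single particle (per-dimension form)."""
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, p_best, g_best):
        vd = (w * vd
              + c1 * random.random() * (pd - xd)    # pull toward individual best
              + c2 * random.random() * (gd - xd))   # pull toward global best
        vd = max(-v_lim, min(v_lim, vd))            # clamp velocity
        xd = max(-x_lim, min(x_lim, xd + vd))       # clamp position
        new_v.append(vd)
        new_x.append(xd)
    return new_x, new_v

x, v = pso_step([0.0, 1.5], [0.2, -0.3], p_best=[0.5, 1.0], g_best=[1.0, 0.0])
```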
With the incorporation of the BP neural network, each particle’s position Xi corresponds to the weights and biases of the BP neural network. The PSO algorithm optimizes the performance of the BP neural network by adjusting these weights and biases. In each iteration, PSO updates the particles’ positions and uses them to update the weights and biases of the BP neural network.
During the optimization process, the BP neural network computes outputs through forward propagation and calculates the loss based on the loss function (mean square error). The formula for the mean square error is
MSE = (1/n) Σᵢ (Yi − yi)²,
where n represents the number of samples in the training set, Yi is the predicted value of the ith sample, and yi is the actual value of the ith sample.
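The mean-square-error formula translates directly to code:

```python
def mse(predicted, actual):
    """Mean square error over the training set: (1/n) * sum((Yi - yi)^2)."""
    n = len(predicted)
    return sum((Yi - yi) ** 2 for Yi, yi in zip(predicted, actual)) / n

err = mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])   # (0 + 0.25 + 1.0) / 3
```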
Then, the gradient is calculated using the backpropagation algorithm, and the weights and biases of the network are adjusted based on the positions updated by the PSO algorithm. This process iterates continuously until a stopping condition is met, such as reaching the maximum number of iterations or satisfying a certain performance metric.
The convergence rate of the BP neural network algorithm is mainly influenced by the initial weights and biases of the network, which are arbitrarily set and highly random. Even in the subsequent process of weight and bias updates, the network may become stuck in local optima and fail to reach the global optimum. To address this issue, this study adopts a PSO-based method to optimize the BP neural network model.
When optimizing the BP neural network with PSO, the first step is to initialize the particle swarm: randomly generate a certain number of particles, with each representing a possible solution of the BP neural network, i.e., the weights and biases of the neural network. The second step is to calculate the fitness: for each particle, compute its fitness corresponding to the neural network, i.e., the error of the neural network on the training set. The third step is to update the velocity and position: based on the current position and velocity of the particle, as well as the influence of its individual best position and global best position, move the particle towards a better direction in the search space. The fourth step is to iterate: repeat steps 2 and 3 until the predetermined number of iterations is reached or a stopping condition is met. The fifth step is to obtain the optimal solution: after a certain number of iterations, one or more optimal solutions will emerge in the particle swarm, representing the values of the weights and biases of the neural network.
These optimal solutions can be used to update the parameters of the BP neural network, thereby improving the fitting effect of the neural network on the training data [
19].
Combining these two algorithms can overcome their corresponding limitations by effectively searching for the global optimal solution using the particle swarm algorithm and optimizing the connection weights and biases of the neural network, thereby improving the overall search efficiency [
20]. The local search capability of the BP neural network algorithm can make the search process more refined. Therefore, combining these two algorithms can allow for a simultaneous search for the optimal solution in both global and local scopes, thus improving the performance of the neural network.
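The five-step PSO-BP workflow above can be illustrated with a minimal PSO loop. Here a simple sphere function stands in for the BP network’s training-set error (an assumption purely for illustration; in the actual scheme the fitness is the network error for the weights/biases encoded by each particle):

```python
import random

random.seed(3)

def fitness(x):
    """Stand-in for the BP network's training-set error; here a simple sphere function."""
    return sum(xi ** 2 for xi in x)

DIM, N_PARTICLES, ITERS = 4, 5, 50
# step 1: initialize the swarm (each particle encodes one candidate weight/bias vector)
X = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(N_PARTICLES)]
V = [[0.0] * DIM for _ in range(N_PARTICLES)]
P = [x[:] for x in X]                     # individual best positions
g = min(P, key=fitness)[:]                # global best position
init_error = fitness(g)

for _ in range(ITERS):                    # step 4: iterate
    for i in range(N_PARTICLES):
        for d in range(DIM):              # step 3: update velocity and position
            V[i][d] = (0.8 * V[i][d]
                       + 2.0 * random.random() * (P[i][d] - X[i][d])
                       + 2.0 * random.random() * (g[d] - X[i][d]))
            V[i][d] = max(-1.0, min(1.0, V[i][d]))
            X[i][d] = max(-2.0, min(2.0, X[i][d] + V[i][d]))
        if fitness(X[i]) < fitness(P[i]): # step 2: evaluate fitness
            P[i] = X[i][:]
            if fitness(P[i]) < fitness(g):
                g = P[i][:]

best_error = fitness(g)                   # step 5: best weights/biases found so far
```

The global best `g` would then be used to initialize the BP network’s weights and biases before the usual gradient-based training.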
3.2. Establishment of PSO-BP
The output layer, hidden layers, and input layer of PSO-BP remain consistent with the neural network. The total number of nodes in the particle swarm equals the number of parameters to be optimized, which, in this model, are the weights and biases of the BP neural network:
N = I·H1 + H1·H2 + H2·E + H1 + H2 + E,
where N is the total number of parameters, I is the number of nodes in the input layer, E is the number of nodes in the output layer, and H1 and H2 are the numbers of nodes in the two hidden layers. The learning factor represents the update speed of the particles; after testing, it is set to 2 so that the particles search the space evenly, both locally and globally. Considering computer performance and optimization accuracy, the population size is set to 5, with the maximum and minimum velocities set to 1 and −1 and the maximum and minimum position boundaries set to 2 and −2, respectively. The population is updated for 50 iterations. The iteration history of the PSO-BP neural network is shown in
Figure 4. The dataset used for the neural network is also used for the PSO-BP neural network.
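For reference, the standard count of weights and biases (weights between consecutive layers plus one bias per non-input node; the source text does not spell out its counting convention, so this is an assumption) gives the parameter dimension for the 3-4-6-1 model:

```python
def n_parameters(sizes):
    """Weights between consecutive layers plus one bias per non-input node."""
    weights = sum(a * b for a, b in zip(sizes, sizes[1:]))
    biases = sum(sizes[1:])
    return weights + biases

n = n_parameters([3, 4, 6, 1])   # 3*4 + 4*6 + 6*1 weights, plus 4 + 6 + 1 biases
```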
3.3. Training and Prediction of the PSO-BP Model
A scatter plot of the prediction results obtained through training and testing is presented below (
Figure 5 and
Figure 6):
A comparison of the training results of the two models shows that the neural network refined by the particle swarm optimization algorithm outperforms the unoptimized BP neural network on both the training and testing datasets.
4. Engineering Case Verification
4.1. Overview of the Project
The East Ring Road Bridge is located in Luoyang City, Henan Province, with a total length of 1444 m; the main bridge is 320 m long, and the approach bridge is 1124 m long. The main span of the main bridge adopts a 160 m bearing steel truss tie rod arch, with a beam height of 2.6 m, a calculated span of 156.8 m, and a vector span ratio of 1/4.618; the side span adopts a 240 m cast-in-place pre-stressed concrete box girder, with a beam height of 2.2 m; and the approach bridge adopts a 40 m span cast-in-place pre-stressed concrete box girder, with a beam height of 2.2 m.
The full width of the main bridge is 47 m, the lane widths are 3.75 m and 3.5 m, and a two-way six-lane layout has been implemented. The transverse arrangement of the bridge deck is 4.25 m (sidewalk and tower area) + 6.5 m (non-motorized lane) + 0.5 m (side strip) + 12 m (motorway) + 0.5 m (middle strip) + 12 m (motorway) + 0.5 m (side strip) + 6.5 m (non-motorized lane) + 4.25 m (sidewalk and tower area). The approach bridge is 25.5 m wide, and the main bridge is reserved for a forward ramp connection. The planned approach-bridge lane widths are 3.75 m and 3.5 m, and the current transverse layout of the bridge deck is 2 m (sidewalk) + 2.25 m (non-motorized lane) + 8.25 m (motor vehicle lane) + 0.5 m (middle strip) + 8.25 m (motor vehicle lane) + 2.25 m (non-motorized lane) + 2 m (sidewalk).
An elevation view of the main bridge is shown in
Figure 7.
The bridge uses low-stress, corrosion-resistant cables with a spacing of 6 m, 4 × 25 = 100 cables in total. The cable wire adopts an epoxy-coated parallel wire strand with good anti-corrosion performance and strong bonding with the cold-cast anchors, and the elastic modulus is 1.95 × 10¹¹ Pa.
In the measurement experiment, the cable length, line density, outer diameter, and elastic modulus were obtained from construction drawings. The frequency of the cable was measured using a DH5906W Wi-Fi cable tension test instrument combined with a DHDAS dynamic signal acquisition and analysis system. At the same time, a hydraulic jack was used to measure the accurate cable tension of the cable. The Wi-Fi cable tension test instrument is shown in
Figure 8.
When measuring, the instrument was fixed to all 100 cables in order to measure their frequency in different tension stages under different working conditions.
Figure 9 shows measurements being taken of the vibration frequency of the cable and of the cable tension using the hydraulic jack.
After measurement, detailed cable data were obtained, as shown in
Table 1 and
Table A1.
4.2. Prediction of Cable Tension Using the Trained Models
The tension in the suspension cables of the bridge was predicted using the trained BP neural network model. The length, linear density, and frequency of the cables were taken as the input values, and the cable tension was predicted as the output value. The obtained cable tension values are as shown in
Table 2.
The predicted cable tensions were compared with the jack readings, the percentage errors were calculated, and the precision of the model’s predictions was analyzed.
The percentage errors of the BP neural network model’s predictions are depicted in
Figure 10.
Through an analysis, it was found that the maximum error was 38% and that 31 groups of data had errors greater than 10%, accounting for 22% of the total data. The model was evaluated using the mean absolute percentage error (MAPE), and the calculation formula is shown in Equation (7):
MAPE = (1/n) Σᵢ |(Yi − yi)/yi| × 100%. (7)
Here, Yi and yi represent the predicted value and the actual value of the cable tension, respectively.
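Equation (7) translates directly to code:

```python
def mape(predicted, actual):
    """Mean absolute percentage error: (100/n) * sum(|(Yi - yi)/yi|)."""
    n = len(predicted)
    return 100.0 / n * sum(abs((Yi - yi) / yi) for Yi, yi in zip(predicted, actual))

# e.g., predictions 10% high and 5% low against actual values of 100
err = mape([110.0, 95.0], [100.0, 100.0])   # (10% + 5%) / 2
```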
Through calculations, it was found that the average absolute percentage error for all prediction sets was 7.93%, indicating that the BP model has a relatively large predictive error and that there is room for improvement. Similarly, the predicted error percentage of the model optimized by the particle swarm algorithm for data prediction is shown in
Figure 11.
Through an analysis, it was found that the maximum prediction error was 9.82%; no data points had errors exceeding 10%, and the data points with errors exceeding 5% accounted for 16.2% of all data. The calculated MAPE was 2.78%, indicating that the PSO-BP neural network model has good predictive performance and is superior to the BP neural network prediction model.
To examine the fitting effectiveness, the results obtained from the two models were compared with the results obtained from Equation (1) and the actual jack readings. Additionally,
Figure 12,
Figure 13 and
Figure 14 were constructed.
In the above figures, it can be observed that the neural network model optimized by particle swarm optimization outperforms the unoptimized model, and its accuracy far exceeds that of the results obtained from the string vibration theory equation. This indicates that the PSO-BP neural network prediction model has high engineering application value.
4.3. Comparison between the Literature Cable Tension Calculation Methods and Neural Network Prediction Methods
Many scholars both domestically and internationally have proposed their own optimized formulas for calculating cable tension based on their research. In this study, cable tensions were calculated using the formulas proposed by ZUI et al. [
21], Chen and Dong [
8], Ren and Chen [
9], Shao et al. [
10], Wang, J. [
11], and others, and the results were analyzed. In these formulas, EI represents the flexural rigidity of the cable, and the remaining symbols are consistent with those detailed above.
After substituting the parameters of the measured cables into the above formulas and combining them with the neural network prediction results, the absolute average percentage error between the calculated and actual cable tensions was calculated. The result is shown in
Figure 15.
In
Figure 15, it can be observed that the results predicted by the neural network model have a higher accuracy than the results calculated using the formulas mentioned in the literature.
5. Discussion
In the measurement and prediction of bridge cable tension in actual engineering, the ideas and methods of this paper make it possible to predict the tension of all cables from measurements of only some of them, without measuring every cable. This not only saves labor and time costs but also yields more accurate data, improves the efficiency of bridge inspection work, and aids in regular bridge safety inspections.
However, the factors affecting cable tension prediction in real engineering go far beyond the cable length, line density, and frequency used as inputs in this study, which is a limitation of this work. On the other hand, there is also room for optimization in the algorithm of this scheme.
In subsequent studies, the authors will focus on changes in cable sag due to the self-weight of the cable and on the influence on tension prediction of lateral cable vibration caused by external factors such as wind load. For high-dimensional data, the relevance vector machine (RVM) has better generalization capabilities and can better handle highly correlated features [
22]. The dendritic neural model (DNM) is better able to handle uncertainty in data and may, in this way, improve the memory function of the model [
23]. At the same time, the algorithms used now are also improved and optimized, either through distributed parallel cooperative co-evolution particle swarm optimization (DPCCPSO) [
24], where the inertia weights and learning factors are adjusted during the evolutionary process, or through deep learning, eliminating the requirement for manual feature engineering [
25]. Alternatively, the measured data can be converted into images and analyzed using window-based convolutional neural networks (CNN), integrated recurrent neural networks (RNN), and autoencoders (AutoE), where pixels serve as inputs to analyze the visual appearance of the cable [
26].
We will continue to refine the methodology proposed in this study in order to more accurately predict the cable tension under more complex cable structures or more complex systems.