Article

Research on Cable Tension Prediction Based on Neural Network

School of Civil Engineering, Dalian University, Dalian 116622, China
*
Author to whom correspondence should be addressed.
Buildings 2024, 14(6), 1723; https://doi.org/10.3390/buildings14061723
Submission received: 8 April 2024 / Revised: 13 May 2024 / Accepted: 20 May 2024 / Published: 8 June 2024
(This article belongs to the Section Building Structures)

Abstract

Conventional methods for calculating cable tension suffer from an excessive simplification of boundary conditions and a vague definition of effective cable length, both of which cause inaccurate cable tension calculations. Therefore, this study utilizes bridge field data to establish a BP neural network for tension prediction, with design cable length, line density, and frequency as the input parameters and cable tension as the output parameter, thereby sidestepping the selection of an effective cable length; a particle swarm optimization–back propagation (PSO-BP) neural network is then innovatively introduced for tension prediction. The mean absolute percentage error (MAPE) between the predictions of the BP neural network and the actual tension values is found to be 7.93%; after optimization with the particle swarm optimization algorithm, the MAPE of the neural network prediction is reduced to 2.78%. Both values significantly outperform those obtained from the theoretical equation of string vibration, and the MAPE of PSO-BP is also lower than that of the optimized calculation formulas in the literature. Utilizing the PSO-BP neural network for tension prediction avoids the inaccuracies in tension calculation caused by an excessive simplification of boundary conditions and a vague definition of effective cable length; thus, it possesses practical engineering value.

1. Introduction

In today’s rapidly advancing bridge industry, the number of cable-stayed bridges, suspension bridges, arch bridges, and other types of bridges is increasing. Cables, as crucial components bearing structural loads, can lead to severe bridge accidents if damaged. Incidents such as the cable rupture on the Machchhu River bridge in Gujarat, India, resulting in at least 132 fatalities, and the collapse of the Nanfang’ao Bridge in Taiwan, causing at least 3 deaths, underscore the importance of accurately determining cable tension to ensure bridge safety.
Ghasemi et al. [1] used the lumped plasticity approach to model moment-resisting frames in finite element software and investigated a cost-effective and easy-to-install displacement-restraint cable bracing technique (cable cylinder). Zhang et al. [2] analyzed the flexural behavior of steel fiber-reinforced concrete–normal concrete (SFRC-NC) composite beams through experimental and numerical studies. Wei et al. [3] proposed, starting from the concrete-filled steel tubular reinforced concrete plate (CFST-RCP) column, replacing the reinforced concrete (RC) plate material with ultra-high-performance concrete (UHPC) of higher compressive strength, forming a concrete-filled steel tubular composite column with ultra-high-performance concrete plates (CFST-UHPC column); such structural improvements could reduce the casualties caused by bridge failures in seismic events.
Utilizing cable vibration frequencies to determine tension is common in current cable tension testing for bridges. The vibration frequency method, also known as the spectral method, involves exciting cables through artificial or environmental means, collecting acceleration signals from sensors fixed to the cables, analyzing the signals, and plotting a spectrum to determine the cables' vibration frequency. The tension is then calculated based on the relationship between tension and natural frequency. Scholars abroad studied this early on. In 1938, the German engineer Franz Dischinger [4] proposed the first theoretical calculation model of a cable-stayed bridge. Abdel-Ghaffar's [5] study on the vibration characteristics of the suspension cables of the Golden Gate Bridge paved the way for investigating cable structure vibrations under environmental excitations. Zhao et al. [6] employed the Monte Carlo method (MCM) to calculate the optimal estimate and uncertainty of tension in a measurement model. Irvine and Caughey [7] derived the equilibrium equation and proposed that the influence of cable sag on cable tension should be considered through the dimensionless parameter λ², which is also one of the basic parameters in the theory of cable string vibration. Chen and Dong [8], Ren and Chen [9], Shao et al. [10], Wang, J. [11], and others further considered the influence of dimensionless parameters. Yi and Liu [12], based on the basic principles of the frequency method and employing mathematical elimination, derived a tension calculation formula that considers bending stiffness without explicitly containing it, offering a simple and rapid calculation method, albeit with ambiguity regarding the selection of cable length. Wu et al. [13] proposed an optimization method based on the response surface method, which provides a new idea for the analysis of large-span arch bridges.
An analysis of the current frequency method reveals that, although there are formulas that consider factors such as effective cable length, boundary conditions, and bending stiffness, the complexity of actual engineering situations often makes it difficult to determine these factors accurately. Consequently, incorrect judgments can lead to inaccuracies in cable tension calculation results, undermining their reliability.
Neural networks are capable of adapting to various types of regression problems, handling complex nonlinear relationships, and accurately predicting outcomes by learning from large datasets. Scholars have already integrated neural networks into tension prediction research. Xiao Liang [14] used a large amount of finite element simulation data to train neural networks but simplified the boundary conditions to hinged and fixed, which significantly oversimplifies the actual situation. Gai and Zeng [15] used ANSYS to carry out a numerical simulation of cable vibration, verified the cable tension against an existing cable tension calculation formula, and, combining the vibration model data, used a generalized regression neural network to predict bridge cable tension, thereby avoiding the influence of judging the cables' boundary conditions on the accuracy of the cable tension identification results.
Although a large amount of data can be obtained from finite element simulations, such approaches still rely on the frequency method. The authors of this paper instead abandon this traditional idea and use field-measured data to train an artificial neural network model for cable tension prediction, with the line density, cable length, and fundamental frequency as the input parameters and the cable tension as the output parameter. The final results show that the proposed method has good accuracy and provides a reference for the prediction of cable tension in engineering.

2. Establishment of BP Neural Network

2.1. Principles of the BP Neural Network

The BP neural network [16] is a feed-forward neural network model based on the back propagation algorithm. Its core idea is as follows: data are input into the neural network through the input layer, then the data are progressively processed through hidden layers, and, finally, results are obtained in the output layer. During the training process, the first step is to initialize the network parameters; that is, the weights and biases of the neural network need to be initialized. These parameters are typically randomly initialized, but they can also be initialized based on experience or other methods.
The second step is forward propagation: given input data, the predicted values are calculated through the forward propagation of the network. The forward propagation process involves a series of linear transformations and nonlinear transformations (activation functions), resulting in the output values. The specific steps are as follows:
Input layer: input data are fed into the input layer of the neural network.
Hidden layer: the input data undergo linear transformations through weights and biases and pass through activation functions, and the output of the hidden layer is obtained.
Output layer: the output of the hidden layer undergoes linear transformations through weights and biases, and the final output of the neural network is obtained.
The third step is to compute the loss function: the output values of the neural network are compared with the true label values, and the value of the loss function is calculated. Common loss functions include the mean squared error (MSE) and the mean absolute percentage error (MAPE).
The fourth step is back propagation: the gradients of each parameter are calculated with respect to the loss function using the back propagation algorithm. The specific steps are as follows:
Output layer: the partial derivatives of the output layer parameters (weights and biases) are computed with respect to the loss function.
Hidden layer: the chain rule is used to propagate the gradients backward, and the partial derivatives of the hidden layer parameters (weights and biases) are computed with respect to the loss function.
Update parameters: the network parameters are updated based on the gradients using the gradient descent algorithm or other optimization algorithms to minimize the loss function.
The fifth step is to repeat the iteration: the forward propagation and back propagation process is continuously repeated, and the network parameters are continuously adjusted, until a predetermined stopping condition is reached (such as reaching the maximum number of iterations or the convergence of the loss function).
The sixth step is prediction: After the network training is completed, the trained model can be used to predict new input data. Through forward propagation, the prediction results are obtained. The neural network continuously adjusts weights and biases to minimize the error between the predicted values and the actual values, representing the process of the back propagation algorithm. This enables the learning and representation of complex nonlinear relationships. The topological structure of a multi-layer neural network model [17] is illustrated in the diagram below (Figure 1).
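As an illustration of the six steps above, the following is a minimal NumPy sketch of such a network, using the 3-4-6-1 topology, TanH hidden layers, linear output, and 0.01 learning rate adopted later in Section 2.2. It is a simplified sketch rather than the authors' implementation, and the training arrays are random placeholders, not the measured bridge data.

```python
import numpy as np

# Minimal BP sketch: 3 inputs -> 4 -> 6 tanh hidden neurons -> 1 linear output.
# Topology and learning rate follow Section 2.2; the data below are placeholders.
rng = np.random.default_rng(0)
sizes = [3, 4, 6, 1]
W = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]
lr, epochs = 0.01, 2000

def forward(x):
    """Step 2: forward propagation (tanh in hidden layers, linear output)."""
    a, acts = x, [x]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = a @ Wi + bi
        a = z if i == len(W) - 1 else np.tanh(z)   # purelin on the output layer
        acts.append(a)
    return acts

X = rng.uniform(-1, 1, (32, 3))   # stand-in for (length, line density, frequency)
y = rng.uniform(-1, 1, (32, 1))   # stand-in for the cable tension

for epoch in range(epochs):
    acts = forward(X)
    delta = acts[-1] - y                            # Step 3: gradient of 0.5*MSE
    for i in reversed(range(len(W))):               # Step 4: back propagation
        gW = acts[i].T @ delta / len(X)
        gb = delta.mean(axis=0)
        if i > 0:                                   # propagate through tanh: (1 - a**2)
            delta = (delta @ W[i].T) * (1.0 - acts[i] ** 2)
        W[i] -= lr * gW                             # Step 5: gradient descent update
        b[i] -= lr * gb

print("final MSE:", float(np.mean((forward(X)[-1] - y) ** 2)))   # Step 6: predict/evaluate
```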

2.2. Establishment of the Neural Network

According to string vibration theory, when the bending stiffness is not considered and the boundary condition of the cable is regarded as hinged, the explicit relationship between the cable tension and its natural frequency is [18].
$$T = 4 m L^2 \left( \frac{f_n}{n} \right)^2$$
where T is the cable tension, m is the mass per unit length (line density) of the cable, L is the cable length, $f_n$ is the n-th order natural frequency of the cable, and n represents the mode number.
In the above equation, it can be observed that the main factors influencing cable tension are cable line density, cable length, and natural frequency.
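As a quick illustration of Equation (1), substituting one cable from Table 1 (L = 7.78 m, m = 28 kg/m, measured frequency 12.158 Hz, taken here as the fundamental frequency with n = 1) reproduces the "Formula Calculation" value of roughly 1002 kN listed in Table 2 for that cable, far above its jack reading of 393.69 kN; the size of this gap for short, stiff hangers is precisely the boundary-condition problem that motivates the data-driven approach.

```python
# Worked example of Equation (1) for one Table 1 cable, taking n = 1.
m = 28.0          # line density, kg/m
L = 7.78          # cable length, m
f, n = 12.158, 1  # measured frequency (Hz), assumed mode number

T = 4 * m * L**2 * (f / n) ** 2    # tension in N
print(f"T = {T / 1000:.2f} kN")     # about 1002 kN, vs. a jack reading of 393.69 kN
```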
Therefore, cable line density, cable length, and frequency are chosen as the input layer of the neural network. The learning rate, the maximum number of training iterations, the training target, and the target gradient are set to 0.01, 2000, 1 × 10−6, and 1 × 10−7, respectively. The training accuracy can then be improved by adjusting the number of hidden layers and the number of neurons in each hidden layer. In testing, it was found that with two hidden layers the prediction gradient reaches 9.58 × 10−8, close to the target gradient, indicating that the parameters are sufficiently close to the optimal solution, at which point the algorithm stops iterating; with more than two hidden layers the gradient only reaches 6 × 10−4 and more iterations are needed, while with fewer than two hidden layers the prediction is poorer and the target gradient is difficult to approach. Therefore, two hidden layers are chosen. The number of nodes was determined by repeatedly testing the model and observing the relationship between the prediction error and the number of hidden-layer nodes, which led to four and six nodes for the two hidden layers. More nodes increase the risk of overfitting, while in the tests with fewer nodes the model often failed to reach the target gradient even after more than 2000 iterations. In summary, the topology of the neural network was determined to be 3-4-6-1.
There are various activation functions for neural networks, such as sigmoid, rectified linear unit (ReLU), and hyperbolic tangent (TanH). In this study, the TanH function is used between the input layer and the hidden layers, as well as between the hidden layers. TanH is a nonlinear activation function with an output range between [−1,1], and it is commonly used in the hidden layers of neural networks to normalize data and avoid inaccuracies in training results caused by significant differences in data magnitudes. The Purelin function is used between the hidden layers and the output layer. It is a linear function with no output range restrictions and is commonly used for model denormalization, as it allows the model to output any real number, making it suitable for linear problems in the output layer.
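Because TanH saturates outside [−1, 1], the inputs and the target tension are usually rescaled into that range before training and mapped back after prediction. The min-max scaling below is a common way of doing this; it is an assumption for illustration, since the paper does not state its exact normalization scheme.

```python
import numpy as np

def to_range(x, lo, hi):
    """Map each column linearly from [lo, hi] to [-1, 1] (mapminmax-style)."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def from_range(x, lo, hi):
    """Inverse mapping, used to denormalize the network's linear output."""
    return (x + 1.0) / 2.0 * (hi - lo) + lo

# Two example rows from Table 1: columns are length (m), density (kg/m), frequency (Hz).
X = np.array([[7.78, 28.0, 12.158],
              [30.55, 24.2, 2.734]])
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = to_range(X, lo, hi)                       # fed to the tanh hidden layers
assert np.allclose(from_range(Xn, lo, hi), X)  # round trip recovers the raw values
```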

2.3. Training and Prediction of the BP Model

In this study, 200 sets of data obtained from an actual bridge survey were randomly arranged, and 149 sets of measured data were selected to train the model (see Section 4.1 Overview of the Project for the specific survey scheme), with the first 110 groups used as the training dataset and the remaining 39 groups used as the test dataset.
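A minimal sketch of this data arrangement (shuffle the 200 measured sets, keep 149, then split them 110/39) is shown below; the data array is a random placeholder for the field measurements.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.random((200, 4))                    # placeholder: length, density, frequency, tension
data = data[rng.permutation(len(data))][:149]  # randomly arrange and keep 149 sets
train, test = data[:110], data[110:]           # first 110 for training, remaining 39 for testing
print(train.shape, test.shape)                 # (110, 4) (39, 4)
```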
During the model training process, as the neural network continuously adjusted the initial weights and biases, scatter plots of the training and testing set prediction results were obtained, as shown in Figure 2 and Figure 3.

3. Establishment of PSO-BP Neural Network

3.1. Principles of the PSO Algorithm

The particle swarm optimization (PSO) algorithm, also known as the bird flocking algorithm, is a population-based intelligent algorithm. In PSO, solutions are represented as particles in the solution space, with each particle having a position and a velocity. The core idea of the algorithm is to find the optimal solution by simulating the search process of particles in the solution space, inspired by the behavior of biological populations such as flocks of birds and schools of fish. The position of each particle i is represented as
$$X_i = (x_{i1}, x_{i2}, \ldots, x_{id}),$$
where d represents the dimension of the problem.
The velocity of each particle is represented as
$$V_i = (v_{i1}, v_{i2}, \ldots, v_{id}).$$
The fitness of each particle is represented as $f(X_i)$.
Each particle has two most important positions: the individual best position Pi and the global best position Pg.
The process of particle updating is as follows:
$$V_i(t+1) = w \cdot V_i(t) + c_1 \cdot rand_1 \cdot \left( P_i(t) - X_i(t) \right) + c_2 \cdot rand_2 \cdot \left( P_g(t) - X_i(t) \right),$$
$$X_i(t+1) = X_i(t) + V_i(t+1),$$
where w is the inertia weight, c1 and c2 are acceleration coefficients, and rand1 and rand2 are random numbers.
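The velocity and position updates above translate directly into code. In the sketch below, the learning factors and the velocity/position limits follow the values given later in Section 3.2, while the inertia weight value is an assumption, since the paper does not report it.

```python
import numpy as np

rng = np.random.default_rng(1)
w, c1, c2 = 0.8, 2.0, 2.0    # inertia weight (assumed value), learning factors from Section 3.2
v_max, x_max = 1.0, 2.0      # velocity and position limits from Section 3.2

def pso_step(X, V, P_best, G_best):
    """One update of particle velocities and positions."""
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V_new = np.clip(w * V + c1 * r1 * (P_best - X) + c2 * r2 * (G_best - X), -v_max, v_max)
    X_new = np.clip(X + V_new, -x_max, x_max)
    return X_new, V_new
```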
With the incorporation of the BP neural network, each particle’s position Xi corresponds to the weights and biases of the BP neural network. The PSO algorithm optimizes the performance of the BP neural network by adjusting these weights and biases. In each iteration, PSO updates the particles’ positions and uses them to update the weights and biases of the BP neural network.
During the optimization process, the BP neural network computes outputs through forward propagation and calculates the loss based on the loss function (mean square error). The formula for the mean square error calculation is
$$fitness = \frac{1}{n} \sum_{i=1}^{n} (Y_i - y_i)^2,$$
where n represents the number of samples in the training set, Yi is the predicted value of the ith sample in the training set, and yi is the actual value of the ith sample.
Then, the gradient is calculated using the backpropagation algorithm, and the weights and biases of the network are adjusted based on the positions updated by the PSO algorithm. This process iterates continuously until a stopping condition is met, such as reaching the maximum number of iterations or satisfying a certain performance metric.
The convergence rate of the BP neural network algorithm is mainly influenced by the initial weights and biases of the network, which are arbitrarily set and highly random. Even in the subsequent process of weight and bias updates, the network may become stuck in local optima and fail to reach the global optimum. To address this issue, this study adopts a PSO-based method to optimize the BP neural network model.
When optimizing the BP neural network with PSO, the first step is to initialize the particle swarm: randomly generate a certain number of particles, with each representing a possible solution of the BP neural network, i.e., the weights and biases of the neural network. The second step is to calculate the fitness: for each particle, compute its fitness corresponding to the neural network, i.e., the error of the neural network on the training set. The third step is to update the velocity and position: based on the current position and velocity of the particle, as well as the influence of its individual best position and global best position, move the particle towards a better direction in the search space. The fourth step is to iterate: repeat steps 2 and 3 until the predetermined number of iterations is reached or a stopping condition is met. The fifth step is to obtain the optimal solution: after a certain number of iterations, one or more optimal solutions will emerge in the particle swarm, representing the values of the weights and biases of the neural network.
These optimal solutions can be used to update the parameters of the BP neural network, thereby improving the fitting effect of the neural network on the training data [19].
Combining these two algorithms can overcome their corresponding limitations by effectively searching for the global optimal solution using the particle swarm algorithm and optimizing the connection weights and biases of the neural network, thereby improving the overall search efficiency [20]. The local search capability of the BP neural network algorithm can make the search process more refined. Therefore, combining these two algorithms can allow for a simultaneous search for the optimal solution in both global and local scopes, thus improving the performance of the neural network.

3.2. Establishment of PSO-BP

The output layer, hidden layers, and input layer of PSO-BP remain consistent with the neural network. The total number of nodes in the particle swarm is equal to the number of parameters to be optimized, which, in this model, are the weights and biases of the BP neural network.
$$N = I \times H + H \times E + E,$$
where N is the total number of parameters to be optimized, I is the number of nodes in the input layer, E is the number of nodes in the output layer, and H is the number of hidden-layer nodes. The learning factor represents the update speed of the particles; after testing, it is set to 2 so that the particles balance local and global search in the search space. Considering computer performance and optimization accuracy, the population size is finally set to 5, with the maximum and minimum velocities set to 1 and −1, respectively, and the maximum and minimum position boundaries set to 2 and −2, respectively. The population is updated for 50 iterations. The iteration error curve of the PSO-BP neural network is shown in Figure 4. The dataset used for the BP neural network is also used for the PSO-BP neural network.
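Combining the five steps of Section 3.1 with the settings above (5 particles, 50 iterations, learning factors of 2, velocity limits of ±1, position limits of ±2) gives the simplified PSO-BP sketch below. Each particle encodes all weights and biases of the 3-4-6-1 network; the training arrays are placeholders, the inertia weight is assumed, and in the full scheme the best particle would then seed the BP gradient training rather than replace it.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [3, 4, 6, 1]                                           # topology from Section 2.2
n_par = sum(i * j + j for i, j in zip(sizes[:-1], sizes[1:]))  # all weights and biases

X_train = rng.uniform(-1, 1, (110, 3))   # placeholders for the 110 training sets
y_train = rng.uniform(-1, 1, (110, 1))

def unpack(theta):
    """Rebuild layer weights and biases from one particle's position vector."""
    Ws, bs, k = [], [], 0
    for i, j in zip(sizes[:-1], sizes[1:]):
        Ws.append(theta[k:k + i * j].reshape(i, j)); k += i * j
        bs.append(theta[k:k + j]); k += j
    return Ws, bs

def fitness(theta):
    """Step 2: training-set mean squared error of the network encoded by theta."""
    Ws, bs = unpack(theta)
    a = X_train
    for idx, (Wi, bi) in enumerate(zip(Ws, bs)):
        z = a @ Wi + bi
        a = z if idx == len(Ws) - 1 else np.tanh(z)
    return float(np.mean((a - y_train) ** 2))

# Step 1: initialize 5 particles within the stated position/velocity limits.
n_particles, iters, w, c1, c2 = 5, 50, 0.8, 2.0, 2.0           # inertia weight assumed
X = rng.uniform(-2, 2, (n_particles, n_par))
V = rng.uniform(-1, 1, (n_particles, n_par))
P, P_fit = X.copy(), np.array([fitness(x) for x in X])
g = P[P_fit.argmin()].copy()

for _ in range(iters):                                         # Steps 3-4: update and iterate
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = np.clip(w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X), -1, 1)
    X = np.clip(X + V, -2, 2)
    for i, x in enumerate(X):
        f = fitness(x)
        if f < P_fit[i]:
            P[i], P_fit[i] = x.copy(), f
    g = P[P_fit.argmin()].copy()                               # Step 5: current best solution

print("best training MSE found by PSO:", P_fit.min())
```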

3.3. Training and Prediction of the PSO-BP Model

A scatter plot of the prediction results obtained through training and testing is presented below (Figure 5 and Figure 6):
A comparison of the training results of the two models shows that the neural network model refined by the particle swarm optimization algorithm outperforms the unrefined BP neural network model on both the training and testing datasets.

4. Engineering Case Verification

4.1. Overview of the Project

The East Ring Road Bridge is located in Luoyang City, Henan Province, with a total length of 1444 m; the main bridge is 320 m long, and the approach bridge is 1124 m long. The main span of the main bridge adopts a 160 m bearing steel truss tied arch, with a beam height of 2.6 m, a calculated span of 156.8 m, and a rise-to-span ratio of 1/4.618; the side span adopts a 240 m cast-in-place pre-stressed concrete box girder, with a beam height of 2.2 m; and the approach bridge adopts a 40 m span cast-in-place pre-stressed concrete box girder, with a beam height of 2.2 m.
The full width of the main bridge is 47 m, the lane widths are 3.75 m and 3.5 m, and two-way six lanes are implemented in the current phase. The transverse arrangement of the bridge deck is 4.25 m (sidewalk and tower area) + 6.5 m (non-motorized lane) + 0.5 m (side strip) + 12 m (motorway) + 0.5 m (middle strip) + 12 m (motorway) + 0.5 m (side strip) + 6.5 m (non-motorized lane) + 4.25 m (sidewalk and tower area). The approach bridge is 25.5 m wide, and space is reserved on the main bridge for a future ramp connection. The approach-bridge lane widths are planned to be 3.75 m and 3.5 m, and the current transverse layout of the bridge deck is 2 m (sidewalk) + 2.25 m (non-motorized lane) + 8.25 m (motor vehicle lane) + 0.5 m (middle strip) + 8.25 m (motor vehicle lane) + 2.25 m (non-motorized lane) + 2 m (sidewalk).
An elevation view of the main bridge is shown in Figure 7.
The cables adopt a low-stress, corrosion-protected cable body with a spacing of 6 m; there are 4 × 25 = 100 cables in total. The cable wires are epoxy-coated parallel wire strands, which have good anti-corrosion performance and bond well with the cold-cast anchors, and the elastic modulus is 1.95 × 10^11 Pa.
In the measurement experiment, the cable length, line density, outer diameter, and elastic modulus were obtained from construction drawings. The frequency of the cable was measured using a DH5906W Wi-Fi cable tension test instrument combined with a DHDAS dynamic signal acquisition and analysis system. At the same time, a hydraulic jack was used to measure the accurate cable tension of the cable. The Wi-Fi cable tension test instrument is shown in Figure 8.
During the measurements, the instrument was fixed to each of the 100 cables in order to measure their frequencies at different tension stages under different working conditions. Figure 9 shows the vibration frequency of a cable being measured and the cable tension being measured using the hydraulic jack.
After measurement, detailed cable data were obtained, as shown in Table 1 and Table A1.

4.2. Prediction of Cable Tension Using the Trained Models

The tension in the suspension cables of the bridge was predicted using the trained BP neural network model. The length, linear density, and frequency of the cables were taken as the input values, and the cable tension was predicted as the output value. The obtained cable tension values are as shown in Table 2.
The predicted cable tensions were compared with the jack reading, the percentage error was calculated, and the precision of the model’s predictions was analyzed.
The percentage errors of the BP neural network model’s predictions are depicted in Figure 10.
Through an analysis, it was found that the maximum error was 38% and that 31 groups of data had errors greater than 10%, accounting for 22% of the total data. The model was evaluated using the mean absolute percentage error (MAPE), and the calculation formula is shown in Equation (7):
$$\mathrm{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{\hat{y}_t - y_t}{y_t} \right|.$$
Here, $\hat{y}_t$ and $y_t$ represent the predicted value and the actual value of the cable tension, respectively.
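A direct transcription of Equation (7) is shown below, applied to the first four rows of Table 2 purely as an illustration; the 7.93% and 2.78% values quoted in this section refer to the full prediction sets, not to this small subset.

```python
import numpy as np

def mape(y_pred, y_true):
    """Mean absolute percentage error, Equation (7)."""
    return 100.0 / len(y_true) * np.sum(np.abs((y_pred - y_true) / y_true))

jack   = np.array([422.54, 393.98, 414.70, 393.69])  # kN, jack readings from Table 2
pso_bp = np.array([426.52, 410.51, 384.41, 429.64])  # kN, PSO-BP predictions from Table 2
print(f"MAPE over these four cables: {mape(pso_bp, jack):.2f}%")
```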
Through calculations, it was found that the average absolute percentage error for all prediction sets was 7.93%, indicating that the BP model has a relatively large predictive error and that there is room for improvement. Similarly, the predicted error percentage of the model optimized by the particle swarm algorithm for data prediction is shown in Figure 11.
Through an analysis, it was found that the maximum prediction error was 9.82%; no data points had errors exceeding 10%, and the data points with errors exceeding 5% accounted for 16.2% of all data. The calculated MAPE was 2.78%, indicating that the PSO-BP neural network model has good predictive performance and is superior to the BP neural network prediction model.
To examine the fitting effectiveness, the results obtained from the two models were compared with the results obtained from Equation (1) and the actual jack readings. Additionally, Figure 12, Figure 13 and Figure 14 were constructed.
In the above figures, it can be observed that the neural network model optimized by particle swarm optimization outperforms the unoptimized model, and its accuracy far exceeds that of the results obtained from the string vibration theory equation. This indicates that the PSO-BP neural network prediction model has high engineering application value.

4.3. Comparison between the Literature Cable Tension Calculation Methods and Neural Network Prediction Methods

Many scholars both domestically and internationally have proposed their own optimized formulas for calculating cable tension based on their research. In this study, cable tensions were calculated using the formulas proposed by ZUI et al. [21], Chen and Dong [8], Ren and Chen [9], Shao et al. [10], Wang, J. [11], and others, and the results were analyzed. The formulas are as follows:
$$T = 4 m (f_1 l)^2 \left[ 1 - 2.20\,\frac{C}{f_1} - 0.550 \left( \frac{C}{f_1} \right)^2 \right]$$
$$T = \begin{cases} 4 m (f l)^2 \left[ 0.8232 - 10.4375\,(C/f)^2 \right], & 0 \le \xi \le 9 \\ 4 m (f l)^2 \left[ 0.8881 - 12.7931\,(C/f)^2 \right], & 9 < \xi \le 20 \\ 4 m (f l)^2 \left( 1 + \dfrac{1}{4.314}\,\dfrac{C}{f} \right)^{-2}, & 20 < \xi \end{cases}$$
$$T = \begin{cases} 3.432\, m l^2 f^2 - 45.191\,\dfrac{EI}{l^2}, & 0 \le \xi < 18 \\ m \left( 2 l f - \dfrac{2.363}{l} \sqrt{\dfrac{EI}{m}} \right)^2, & 18 \le \xi \le 210 \\ 4 m l^2 f^2, & 210 < \xi \end{cases}$$
$$T = \begin{cases} (3.3 + 0.01\,\eta)\, m (f l)^2 - (42 - 0.46\,\eta)\,\dfrac{EI}{l^2}, & \eta \le 70 \\ 4 m (f l)^2 - \pi^2\,\dfrac{EI}{l^2}, & \eta > 70 \end{cases}$$
$$T = \begin{cases} 3.312\, m (f l)^2 - 42\,\dfrac{EI}{l^2}, & \xi \le 7.1 \\ 4 m l^2 f^2 - 2.3 f \sqrt{\dfrac{EI\,m}{l^4}} - 0.575 \sqrt{\dfrac{EI\,m}{l^4}}, & \xi > 7.1 \end{cases}$$
Here, $\xi = l\sqrt{T/(EI)}$, $\eta = f^2 m l^4/(EI)$, and $C = \sqrt{EI/(m l^4)}$, where $EI$ represents the flexural rigidity of the cable, and the remaining symbols are consistent with those detailed above.
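For readers who wish to reproduce this comparison, the snippet below evaluates the Zui et al. [21] formula as reconstructed in the first of the equations above. The bending stiffness EI is a purely illustrative value, since the paper does not tabulate the cables' flexural rigidity, and the remaining parameters are taken from the first row of Table 1; as EI approaches zero the result reduces to the taut-string Equation (1).

```python
import math

def tension_zui(m, l, f1, EI):
    """Zui et al. formula as reconstructed above, with C = sqrt(EI / (m * l**4))."""
    C = math.sqrt(EI / (m * l**4))
    return 4 * m * (f1 * l) ** 2 * (1 - 2.20 * (C / f1) - 0.550 * (C / f1) ** 2)

# Cable from Table 1: L = 7.78 m, m = 28 kg/m, f1 = 12.158 Hz; EI is illustrative only.
print(tension_zui(m=28.0, l=7.78, f1=12.158, EI=1.0e4) / 1000, "kN")  # somewhat below the taut-string 1002 kN
```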
After substituting the parameters of the measured cables into the above formulas and combining them with the neural network prediction results, the absolute average percentage error between the calculated and actual cable tensions was calculated. The result is shown in Figure 15.
In Figure 15, it can be observed that the results predicted by the neural network model have a higher accuracy than the results calculated using the formulas mentioned in the literature.

5. Discussion

In the measurement and prediction of bridge cable tension in actual engineering, the ideas and methods of this paper make it possible to predict the tension of the full set of cables from data measured on a subset of cables, without measuring every cable. This not only saves labor and time costs but also yields more accurate data, improves the efficiency of bridge inspection work, and aids regular bridge safety inspections.
However, the factors that affect cable tension in real engineering go far beyond the cable length, line density, and frequency used as inputs in this study, which is a limitation of this work. In addition, there is room for optimization in the algorithm of this scheme.
In subsequent studies, the authors will focus on changes in cable sag due to the self-weight of the cable and on the influence of lateral cable vibration induced by external factors such as wind load on the prediction of cable tension. For high-dimensional data, the relevance vector machine (RVM) has better generalization capability and can better handle highly correlated features [22]. The dendritic neural model (DNM) is better able to handle uncertainty in data and may, in this way, improve the memory function of the model [23]. The algorithms used here can also be improved and optimized, for example through distributed parallel cooperative co-evolution particle swarm optimization (DPCCPSO) [24], in which the inertia weights and learning factors are adjusted during the evolutionary process, or through deep learning, which eliminates the need for manual feature engineering [25]. The measured data can also be converted into images and analyzed with a window-based convolutional neural network (CNN), an integrated recurrent neural network (RNN), and an autoencoder (AutoE), where pixels are used as inputs to analyze the visual appearance of the cable [26].
We will continue to refine the methodology proposed in this study in order to more accurately predict the cable tension under more complex cable structures or more complex systems.

6. Conclusions

Using measured data, this study trained neural network models and optimized them using intelligent algorithms, and the following conclusions were drawn:
  • Due to the influence of numerous parameters on cable tension, this study employed a machine learning approach, using cable length, line density, and frequency as the input units and using cable tension as the output unit to construct BP neural network models and PSO-BP neural network models.
  • After utilizing 149 sets of real bridge data to train the two established neural networks, the neural network model optimized through intelligent algorithms showed a significant improvement in the mean absolute percentage error (MAPE) compared to the conventional BP neural network. The reduced prediction errors provide a reliable model basis for studying the cable tensions of hangers and cables.
  • This study innovatively applied the PSO-BP neural network model to the field of cable tension prediction. Compared with traditional formula-based methods for calculating cable tension, it demonstrated good accuracy and is more practical for engineering applications.

Author Contributions

Conceptualization, W.H.; methodology, W.H. and H.Z.; software, W.H.; validation, W.H. and H.Z.; formal analysis, W.H.; investigation, W.H. and H.Z.; resources, H.Z.; data curation, W.H.; writing—original draft preparation, W.H.; writing—review and editing, W.H. and H.Z.; visualization, W.H.; supervision, W.H. and H.Z.; project administration, H.Z.; funding acquisition, NO. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All relevant data are presented in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Detailed data for 149 groups of cables.
Cable Length (m)    Cable Linear Density (kg/m)    Frequency (Hz)    Jack Readings (kN)    Cable Length (m)    Cable Linear Density (kg/m)    Frequency (Hz)    Jack Readings (kN)
7.782812.012422.5427.8324.22.49348.31
7.782811.23393.9827.8324.23.369728.92
7.78289.814414.7027.8324.22.539371.47
7.782812.158393.6927.8324.23.516753.74
7.7824.212.939377.4627.8320.52.686328.57
7.7824.211.816341.2427.8320.53.369613.00
7.7824.212.012364.6027.8320.53.223599.55
12.17286.25426.9827.8320.52.441283.41
12.17286.689489.2327.8320.53.369585.89
12.17286.152371.9727.8320.52.49320.13
12.17286.543486.4427.8320.53.125609.21
12.17286.006317.3329.0224.22.441395.86
12.17286.982498.7929.0224.23.418787.11
12.17286.006327.0229.0224.22.2667354.36
12.17286.934482.5929.0224.23.174763.81
12.1724.26.841275.2429.0224.23.32755.48
12.1724.26.689408.5729.0224.22.344367.02
12.1724.25.713288.1029.0224.23.271758.19
12.1724.26.787359.0329.0220.52.734350.79
12.1724.26.641426.8629.0220.52.295323.70
15.9524.25.42504.8229.0220.53.174608.45
15.9520.54.59266.3529.0224.22.441321.76
15.9520.55.127417.4629.0220.53.369635.22
15.9520.54.297297.0029.0220.52.295296.76
15.9520.55.176421.5829.0220.53.174608.13
15.9520.54.395265.6229.0220.52.295293.44
15.9520.54.443284.5529.0220.53.223613.65
15.9520.55.127431.3129.8724.22.295395.86
19.2124.23.662315.8329.8724.23.223822.68
19.2124.24.443555.9229.8724.22.148385.18
19.2124.23.32349.9529.8724.22.344379.29
19.2124.24.199556.8829.8724.23.32830.71
19.2124.23.564308.4829.8724.22.344415.92
19.2124.24.492551.8929.8724.23.271775.97
19.2124.24.492553.7129.8720.52.539359.68
19.2120.53.418253.0229.8720.52.1301.45
19.2120.54.199439.6829.8720.53.076630.69
19.2120.53.32288.1029.8720.52.344274.52
19.2120.54.102448.2729.8720.53.271679.30
19.2120.53.613256.7229.8720.51.953320.13
19.2120.54.492461.3429.8720.53.174618.10
19.2120.53.662275.6529.8724.22.979777.02
19.2120.54.443431.3130.3824.22.49480.34
22.0024.24.15622.7130.3824.23.516911.60
22.0024.23.271331.4630.3824.23.516911.60
22.0024.24.053598.1630.3824.22.686543.67
22.0020.53.223293.0230.3824.23.564931.11
22.0020.54.102479.6730.3824.22.539481.08
22.0020.53.076323.7030.3824.23.4671003.31
22.0020.53.662474.9730.3824.22.49482.59
24.3324.22.881370.4430.3824.23.32971.55
24.3324.23.809675.8130.3820.52.686381.90
24.3324.22.686304.7930.3820.53.613772.99
24.3320.53.32350.7930.3820.53.662781.88
24.3320.53.76528.5630.3820.52.588412.68
24.3320.52.8332.5930.3820.53.467799.77
24.3320.52.881301.2130.3820.52.49390.17
24.3320.53.711532.5130.3820.53.516834.99
24.3320.53.027320.1330.3820.52.49409.07
24.3320.53.613520.2630.3820.53.223787.10
26.2724.23.516724.1830.5524.22.783564.81
26.2724.22.588343.8930.5524.23.8091111.68
26.2724.23.613728.9230.5524.24.4431650.00
26.2724.22.539344.8030.5524.22.734567.05
26.2720.52.832324.1330.5524.23.7111144.90
26.2720.53.564581.8930.5524.24.4431704.98
26.2720.53.369590.6530.5520.52.832417.46
26.2720.52.49270.0730.5520.53.711901.87
26.2720.53.564594.7930.5520.54.4431372.95
26.2720.52.393262.3130.5520.54.3951370.00
26.2720.53.418595.8630.5520.53.809965.00
27.8324.22.49404.7530.5520.54.3461414.18
27.8324.23.418755.9930.5524.22.734567.05
27.8324.22.539371.9730.5524.23.7111144.90
27.8324.23.32741.79

Appendix B

This appendix presents the derivation principle and the mathematical implementation of the BP neural network.

Appendix B.1. Network Output Calculation

For a three-layer neural network (input layer, hidden layer, output layer), let the components of an input sample be $x_i$, let the number of neurons in the hidden layer be q, and let the number of neurons in the output layer be L. The output of the k-th output neuron is then
$$y_k = f\left( \sum_{j} w_{jk}\, g_j \right),$$
where wjk is the connection weight of the hidden layer to the output layer.
Here, $g_j$ is the output of the j-th neuron in the hidden layer:
$$g_j = f\left( \sum_{i} v_{ij}\, x_i \right),$$
and vij is the connection weight of the input layer to the hidden layer, and f is the activation function (the TanH function used in this method).

Appendix B.2. Error Calculation

For supervised learning, define the error function:
$$E = \frac{1}{2} \sum_{k} (d_k - y_k)^2,$$
where dk is the desired output.

Appendix B.3. Back Propagation of Error

After the network output is computed by forward propagation, the error between the output and the desired output is propagated backward through the network, and the sensitivity (gradient) of the final error with respect to each weight is computed, thus determining how each connection weight should be updated so that the error can be continuously reduced.
In order to minimize E, it is necessary to calculate the partial derivative of E with respect to each weight and to update the weights through the gradient descent algorithm.
For $w_{jk}$:
$$\frac{\partial E}{\partial w_{jk}} = (y_k - d_k)\, f'\!\left( \sum_{j} w_{jk}\, g_j \right) g_j,$$
and for $v_{ij}$:
$$\frac{\partial E}{\partial v_{ij}} = \sum_{k} (y_k - d_k)\, f'\!\left( \sum_{j} w_{jk}\, g_j \right) w_{jk}\, f'\!\left( \sum_{i} v_{ij}\, x_i \right) x_i.$$

Appendix B.4. Weight Update

Weight updating is the step of the back-propagation algorithm that adjusts the connection weights of the network so that the network output gradually approximates the desired output, thereby "learning" the underlying mapping laws.
$$w_{jk}^{\text{new}} = w_{jk}^{\text{old}} - \eta \cdot \frac{\partial E}{\partial w_{jk}},$$
$$v_{ij}^{\text{new}} = v_{ij}^{\text{old}} - \eta \cdot \frac{\partial E}{\partial v_{ij}},$$
where η is the learning rate, which is 0.01 in this scheme.
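A quick way to verify derivations like the one above is to compare the analytic gradients with finite differences. The sketch below does this for a tiny 2-3-1 network with a tanh hidden layer and a linear output; the network size, random data, and absence of bias terms are illustrative simplifications, not the configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(3)
x, d = rng.random(2), np.array([0.7])       # one input sample and its desired output
v = rng.standard_normal((2, 3)) * 0.5       # input -> hidden weights v_ij
w = rng.standard_normal((3, 1)) * 0.5       # hidden -> output weights w_jk

def error(v, w):
    g = np.tanh(x @ v)                      # hidden outputs g_j
    y = g @ w                               # linear (purelin) output y_k
    return 0.5 * np.sum((d - y) ** 2)       # E = 1/2 * sum_k (d_k - y_k)^2

# Analytic gradients via the chain rule, as in the derivation above.
g = np.tanh(x @ v)
y = g @ w
delta_out = y - d                           # dE/dy_k
grad_w = np.outer(g, delta_out)             # dE/dw_jk
delta_hid = (delta_out @ w.T) * (1 - g**2)  # error propagated back through tanh
grad_v = np.outer(x, delta_hid)             # dE/dv_ij

# Finite-difference checks for one weight of each layer.
eps = 1e-6
w_p = w.copy(); w_p[0, 0] += eps
v_p = v.copy(); v_p[0, 0] += eps
print(grad_w[0, 0], (error(v, w_p) - error(v, w)) / eps)  # should agree closely
print(grad_v[0, 0], (error(v_p, w) - error(v, w)) / eps)  # should agree closely
```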

References

  1. Ghasemi, M.; Zhang, C.; Khorshidi, H.; Zhu, L.; Hsiao, P.-C. Seismic Upgrading of Existing RC Frames with Displacement-restraint Cable Bracing. Eng. Struct. 2023, 282, 115764. [Google Scholar] [CrossRef]
  2. Zhang, W.; Zhang, S.; Wei, J.; Huang, Y. Flexural Behavior of SFRC-NC Composite Beams: An Experimental and Numerical Analytical Study. Structures 2024, 60, 105823. [Google Scholar] [CrossRef]
  3. Wei, J.; Ying, H.; Yang, Y.; Zhang, W.; Yuan, H.; Zhou, J. Seismic Performance of Concrete-filled Steel Tubular Composite Columns with Ultra High Performance Concrete Plates. Eng. Struct. 2023, 278, 115500. [Google Scholar] [CrossRef]
  4. Franz Dischinger. Available online: https://en.wikipedia.org/wiki/Franz_Dischinger#cite_note-5 (accessed on 22 April 2024).
  5. Abdel-Ghaffar, A.M.; Scanlan, R.H. Ambient Vibration Studies of Golden Gate Bridge: I. Suspended Structure. J. Eng. Mech. 1985, 111, 483–499. [Google Scholar] [CrossRef]
  6. Zhao, D.; Li, P.; Liang, X.; Li, X. Study on Uncertainty of Cable Force Measurement of Cable-stayed Bridge. In Proceedings of the 2022 Industrial Architecture Academic Exchange Conference, Hangzhou, China, 15–17 April 2022. [Google Scholar] [CrossRef]
  7. Irvine, H.M.; Caughey, T.K. The Linear Theory of Free Vibration of a Suspended Cable. Proc. R. Soc. Lond. A 1974, 341, 299–315. [Google Scholar] [CrossRef]
  8. Chen, H.; Dong, J. Practical Formulae of Vibration Method for Suspender Tension Measure on Half-through and Through Arch Bridge. Chin. Highw. J. 2007, 20, 66–70. [Google Scholar] [CrossRef]
  9. Ren, W.; Chen, G. Practical Formulas to Determine Cable Tension by Using Cable Fundamental Frequency. J. Civ. Eng. 2005, 20, 26–31. [Google Scholar] [CrossRef]
  10. Shao, X.; Li, G.; Li, L. Hod Vibration Analysis and Tension Measurement. Chin. Foreign Highw. 2004, 24, 29–31. [Google Scholar] [CrossRef]
  11. Wang, J. Vibration Method Measurement for Cable Tension of Arch Bridge. Master’s Thesis, Harbin Institute of Technology, Harbin, China, 2012. [Google Scholar]
  12. Yi, D.; Liu, X. Study on the Cable Tension Test of Steel Box Basket Arch Bridge Based on Frequency Method. Foreign Highw. 2021, 41, 154–158. [Google Scholar] [CrossRef]
  13. Wu, Y.; Wang, X.; Fan, Y.; Shi, J.; Luo, C.; Wang, X. A Study on the Ultimate Span of a Concrete-Filled Steel Tube Arch Bridge. Buildings 2024, 14, 896. [Google Scholar] [CrossRef]
  14. Xiao, L. Identification of Cable Force Based on Artificial Neural Network. Master’s Thesis, Harbin Institute of Technology, Harbin, China, 2019. [Google Scholar]
  15. Gai, T.; Zeng, S. Research on Neural Network Generalization of Cable Force Vibration Measurement. Eng. Sci. Technol. 2021, 53, 118–127. [Google Scholar] [CrossRef]
  16. Yuan, B.; Cheng, G.; Zheng, L. BP Neural Network Fundamentals. Digit. Commun. World 2018, 1, 28–29. [Google Scholar] [CrossRef]
  17. Ding, H.; Wang, Z.; Guo, Y. Multi-objective Optimization of Fiber Laser Cutting Based on Generalized Regression Neural Network and Non-dominated Sorting Genetic Algorithm. Infrared Phys. Technol. 2020, 108, 103337. [Google Scholar] [CrossRef]
  18. Kim, B.H.; Park, T. Estimation of Cable Tension Force Using the Frequency-based System Identification Method. J. Sound Vib. 2007, 304, 660–676. [Google Scholar] [CrossRef]
  19. Yao, L. Improved BP Algorithm Based on Particle Swarm Optimization for Nonlinear Equation PID Parameter Tuning. Electron. Mass 2010, 1, 4–5. [Google Scholar]
  20. Liu, X.; Wu, Y.; Zhou, Y. Axial Compression Prediction and GUI Design for CCFST Column Using Machine Learning and Shapley Additive Explanation. Buildings 2022, 12, 698. [Google Scholar] [CrossRef]
  21. Zui, H.; Shinke, T.; Namita, Y. Practical Formulas for Estimation of Cable Tension by Vibration Method. J. Struct. Eng. 1996, 122, 6. [Google Scholar] [CrossRef]
  22. Wang, W.; Sun, Y.; Li, K.; Wang, J.; He, C.; Sun, D. Fully Bayesian Analysis of the Relevance Vector Machine Classification for Imbalanced Data Problem. CAAI Trans. Intell. Technol. 2022, 8, 192–205. [Google Scholar] [CrossRef]
  23. Luo, X.; Wen, X.; Li, Y.; Li, Q. Pruning Method for Dendritic Neuron Model Based on Dendrite Layer Significance Constraints. CAAI Trans. Intell. Technol. 2023, 8, 208–318. [Google Scholar] [CrossRef]
  24. Cao, B.; Gu, Y.; Lv, Z.; Yang, S.; Zhao, J.; Li, Y. RFID Reader Anticollision Based on Distributed Parallel Particle Swarm Optimization. IEEE Internet Things J. 2021, 8, 3099–3107. [Google Scholar] [CrossRef]
  25. Putri, R.K.; Athoillah, M. Detection of Facial Mask Using Deep Learning Classification Algorithm. J. Data Sci. Intell. Syst. 2023, 2, 58–63. [Google Scholar] [CrossRef]
  26. Isiaka, F. Performance Metrics of an Intrusion Detection System Through Window-Based Deep Learning Models. J. Data Sci. Intell. Syst. 2023, 1–7. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the neural network model.
Figure 2. Comparison of BP training set prediction results.
Figure 3. Comparison of BP test set prediction results.
Figure 4. Variation plot of the model iteration error.
Figure 5. Comparison of the prediction results of the PSO-BP training set.
Figure 6. Comparison of the prediction results of the PSO-BP test set.
Figure 7. Elevation view of the main bridge.
Figure 8. DH5906W Wi-Fi cable tension test instrument.
Figure 9. Measurement at project site: (a) frequency measurements; (b) cable tension measurement.
Figure 10. Errors in BP neural network prediction.
Figure 11. Errors in PSO-BP neural network prediction.
Figure 12. Linear fitting graph of BP predicted values and actual values.
Figure 13. Linear fitting graph of PSO-BP predicted values and actual values.
Figure 14. Linear fitting graph of equation predicted values and actual values.
Figure 15. MAPE for cable tension calculation method.
Table 1. Detailed cable data.
Cable Length (m)    Cable Linear Density (kg/m)    Frequency (Hz)    Jack Readings (kN)
7.78     28      12.158    393.69
7.78     24.2    12.939    377.46
12.17    28      6.689     489.23
12.17    28      6.152     371.97
15.95    24.2    5.42      504.82
15.95    20.5    4.59      266.35
19.21    24.2    4.492     553.71
19.21    20.5    3.418     253.02
22.00    24.2    4.053     598.16
22.00    20.5    3.223     293.02
24.33    24.2    2.686     304.79
24.33    20.5    3.32      350.79
26.27    24.2    2.539     344.80
26.27    20.5    2.832     324.13
27.83    24.2    2.49      404.75
27.83    24.2    3.418     755.99
29.02    24.2    2.441     395.86
29.02    24.2    3.418     787.11
29.87    24.2    2.295     395.86
29.87    24.2    3.223     822.68
30.38    24.2    2.49      480.34
30.38    24.2    3.516     911.60
30.55    24.2    4.443     1650.00
30.55    24.2    2.734     567.05
Table 2. Actual and predicted values of cable tension.
Jack Reading (kN)    PSO-BP Predicted Values (kN)    BP Predicted Values (kN)    Formula Calculation (kN)
422.54    426.52    402.85    978.16
393.98    410.51    400.60    854.94
414.70    384.41    390.17    652.93
393.69    429.64    403.19    1002.08
377.46    377.47    346.56    980.93
341.24    337.23    347.00    818.04
364.60    363.90    346.91    845.41
426.98    430.85    412.07    647.98
489.23    489.48    488.24    742.20
371.97    394.85    379.90    627.82
486.44    481.68    474.81    710.15
482.59    491.09    491.10    797.57
275.24    271.87    335.32    670.96
408.57    416.61    331.56    641.47
288.10    288.17    305.93    467.93
359.03    364.05    333.94    660.41
426.86    418.65    330.45    632.30
504.82    508.52    511.05    723.43
266.35    279.75    319.53    439.50
417.46    402.11    411.74    548.36
343.89    354.84    346.52    447.43
728.92    749.85    736.98    872.03
344.80    347.96    335.67    430.65
324.13    326.67    367.52    453.86
581.89    585.52    605.37    718.80
590.65    586.98    539.20    642.30
270.07    269.46    284.67    350.86
409.07    397.98    392.69    469.23
787.10    775.40    691.62    786.16
564.81    592.32    608.99    699.72
1111.6    1135.4    1124.0    1310.75
1645.2    1644.7    506.95    3276.27
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

