Article

UAV 3D Modeling and Application Based on Railroad Bridge Inspection

School of Civil Engineering, Central South University, Changsha 410075, China
* Author to whom correspondence should be addressed.
Buildings 2024, 14(1), 26; https://doi.org/10.3390/buildings14010026
Submission received: 9 October 2023 / Revised: 30 November 2023 / Accepted: 6 December 2023 / Published: 21 December 2023

Abstract

Unmanned aerial vehicle (UAV) remote sensing technology is vigorously driving the development of digital cities. For large, protruding, and structurally complex subjects such as steel truss railway bridges, the commonly used oblique photography and cross-circular photography techniques can produce models with blurred, missing, or low-accuracy fine textures. This paper therefore proposes a real-scene three-dimensional modeling method that combines oblique photography with inclined photography and compares it with oblique photography and cross-circular photography alone. Experimental results demonstrate that the model generated by combining oblique and inclined photography exhibits clearer textures, more complete lines, and higher accuracy, meeting the control-point accuracy requirements of 1:500 topographic mapping. The method provides useful support for inspecting defects such as coating corrosion of steel members and loss of high-strength bolts in steel truss railway arch bridges.

1. Introduction

Railway bridges are structures built to carry railway lines across obstacles such as rivers and valleys, as well as to form grade-separated crossings with other railways or roads. China's high-speed railways operate under closed management, and no personnel may enter the line during operation, which makes inspection of bridges along the line difficult: railway bridge inspections [1,2,3] can only be carried out at night, so detecting bridge defects is laborious and a comprehensive analysis of the overall condition of a bridge is hard to achieve. To address these limitations, UAV three-dimensional modeling technology is commonly introduced to assist bridge inspections. To meet the diverse needs of real-scene three-dimensional modeling, UAV flight planning encompasses various approaches, including vertical photography, oblique photography, and circular photography.
Unmanned aerial vehicles are widely utilized across various domains, such as agriculture [4,5], surveying [6,7], environmental and safety applications [8], due to their convenience, flexibility, relatively low costs, and ability to acquire high-resolution images through automated means. This has facilitated collaborative development among different fields. In the evolution of UAV-based modeling, numerous scholars have conducted investigations into aspects including model accuracy [9,10,11] and texture effects [12,13].
Wang et al. [14] focused on a primary school as their research area, utilizing oblique photography from four different levels of unmanned aerial vehicles at varying flight altitudes, and comparing the performance of different types of drones in three-dimensional reconstruction. Li et al. [15] introduced the innovative 3D reconstruction method, Edge3D, which restores building edges in the form of 3D lines. They enhanced line segment matching accuracy through geometric constraints and progressive filtering techniques, thereby reducing technological costs and enabling precise building modeling. Che et al. [16] proposed a method based on triangular mesh cutting to restore the edge lines of 3D building models captured through UAV oblique photography, aiming to enhance the realism of buildings and improve model rendering quality. Chen et al. [17] combined deep learning technology with unmanned aerial vehicles to facilitate maintenance in various construction projects involving roads, bridges, and buildings. Through automatic image classification and detection, they effectively identified changed areas in construction sites, contributing to construction management support within the development of intelligent construction technologies.
The route planning approach for UAV real-scene three-dimensional modeling typically employs oblique photography, using different angles to plan UAV flight paths for image acquisition from varying perspectives and thus producing rich imagery and high-resolution textures. However, for prominent bridge structures that require detailed modeling [18], oblique photography still has limitations. These manifest in fixed gimbal pitch angles, flight altitudes, lateral and longitudinal overlap ratios, and insufficient tie points, resulting in regions of the generated model with blurred or even missing textures. To address this issue, some researchers have proposed cross-circular photography [19] for refined modeling of protruding bridge structures. The surrounding flight photographs only the structure itself, reducing redundancy in image acquisition. However, for railway bridges where ground control points (GCP) cannot be deployed on the bridge, the three-dimensional error of models created through cross-circular methods tends to be higher.
Currently, both domestically and internationally, there is a wide variety of steel bridges. In comparison to conventional concrete beams, steel bridges possess advantages such as superior span capabilities, high strength, and shorter construction periods. However, issues such as apparent corrosion, stress corrosion, and corrosion fatigue can lead to surface defects, reduced lifespan, and even loss of operational capacity in these large steel bridge components. To mitigate these concerns, protective measures such as corrosion-resistant coatings are often employed for the preservation of steel bridges.
For steel truss arch bridges, UAV oblique photography cannot guarantee the texture quality of the structure, and cross-circular photography cannot guarantee the accuracy at ground image control points. This paper addresses the challenges of 3D modeling for prominent structures such as railway bridges by proposing a flight route planning strategy that combines oblique photography with near-range oblique photography (inclined photography) [20]. It offers a processing solution both for the oblique imagery flown parallel to the bridge deck at a greater distance and for the near-range inclined imagery flown perpendicular to the bridge deck. A comparative analysis is conducted of the model accuracy, detailed textures, and overall model quality obtained with oblique photography combined with inclined photography, oblique photography alone, and cross-circular photography. The proposed method solves the problem of blurred or missing textures on large protruding structures such as arch bridges while also ensuring model accuracy. Finally, the application and prospects of UAV modeling results in railway bridge inspection are discussed.

2. Principles of UAV Three-Dimensional Modeling

In recent years, with the maturation of UAV remote sensing technology, the technological advantages of unmanned aerial vehicles have been increasingly highlighted [21].
The process of UAV three-dimensional modeling involves importing captured UAV imagery into modeling software and generating models and data through post-processing within the software (Figure 1). The fundamental principle is to first use the UAV's own positioning system to obtain position and orientation system (POS) data for the imagery, which serve as the initial exterior orientation elements of the images. The imagery and POS data are then imported into the modeling software, where algorithms such as the scale-invariant feature transform (SIFT), patch-based multi-view stereo (PMVS), and structure from motion (SfM) are employed to establish a dense 3D point cloud of the target through aerial triangulation. Next, ground control points are incorporated to correct model accuracy discrepancies caused by systematic errors through bundle adjustment, thereby restoring the true 3D geometry of the object. Finally, through multi-image dense matching, a triangulated irregular network (TIN) and solid 3D model are constructed, and optimal image textures are selected for texture mapping, completing the real-scene three-dimensional model.
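As an illustration of the feature extraction and tie-point matching that underlies this step, the following minimal sketch uses OpenCV's SIFT implementation and a brute-force matcher with Lowe's ratio test. It is not the Context Capture pipeline used in this study, and the image file names are placeholders.

```python
# Minimal illustration of feature extraction and tie-point matching between two
# overlapping UAV images (not the Context Capture implementation used in the paper).
import cv2

img1 = cv2.imread("uav_image_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("uav_image_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep only distinctive matches,
# which serve as candidate tie points for the subsequent SfM/bundle adjustment.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points between the two images")
```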

3. Workflow of UAV Three-Dimensional Modeling Technology

3.1. Overview of the Experiment

The chosen subject of this experiment was the San'an Yongjiang Extra-Large Bridge (a three-span continuous steel truss arch bridge with spans of 132 m + 276 m + 132 m), located in the southeastern outskirts of Nanning City, Guangxi Province, China, near the San'an Horticultural Park. The central geographic coordinates of the experimental area were approximately 108.432° E longitude and 22.788° N latitude, with elevations ranging between approximately 70 m and 170 m. The experiment was conducted on 6–7 November 2022 under clear weather conditions, high visibility, and favorable satellite signal strength [22].
The DJI Phantom 4 RTK quadcopter, developed by DJI Innovations (Shenzhen, China), was selected as the aerial photography platform. The Phantom 4 RTK is equipped with a 1-inch, 20-megapixel CMOS sensor camera featuring an 84° field of view, an aperture range of f/2.8–f/11, and an autofocus lens. It offers a flight endurance of up to 30 min and a hovering accuracy of ±0.1 m in both the vertical and horizontal directions.
Control points were established using Real-Time Kinematic (RTK) measurements, with the Zhonghaida V5 mobile station selected as the equipment. The static positioning accuracy of the Zhonghaida V5 is ±(2.5 + 0.5 × 10⁻⁶ · D) mm in plane and ±(5 + 0.5 × 10⁻⁶ · D) mm in elevation, where D is the distance between the measured points. Field data collection was based on the 2000 National Geodetic Coordinate System (CGCS2000), with a central meridian of 108° and geodetic heights.
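As a quick arithmetic check of the nominal specification above, the sketch below evaluates the stated accuracy formulas for an example baseline length; the 1 km baseline and the function names are illustrative, not taken from the survey.

```python
# Nominal Zhonghaida V5 static accuracy: ±(2.5 + 0.5 × 10⁻⁶ · D) mm horizontal and
# ±(5 + 0.5 × 10⁻⁶ · D) mm vertical, with D the baseline length in millimetres.
def plane_accuracy_mm(baseline_m: float) -> float:
    return 2.5 + 0.5e-6 * (baseline_m * 1000.0)

def height_accuracy_mm(baseline_m: float) -> float:
    return 5.0 + 0.5e-6 * (baseline_m * 1000.0)

D = 1000.0  # example baseline of 1 km (assumed for illustration)
print(plane_accuracy_mm(D), height_accuracy_mm(D))  # -> 3.0 mm and 5.5 mm
```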

3.2. Arrangement of Ground Control Points

Ground control points, also referred to as image control points, are control points strategically placed within the aerial imaging area to correct accumulated errors and geospatial discrepancies in the aerial images. The quantity, distribution, and accuracy of these image control points significantly affect the accuracy of the generated three-dimensional model. In this experiment, a total of nine image control points were uniformly arranged at prominent and clearly identifiable locations within the imaging area, as depicted in Figure 2.

3.3. Image Acquisition and Flight Path Planning

To validate the improvement in bridge model quality brought by the introduction of inclined route planning, three sets of experiments were conducted with different UAV aerial photography methods: oblique photography, cross-circular photography, and the combination of oblique and inclined photography proposed in this study. In oblique photography, the UAV flies an "S-shaped" route and photographs the structure comprehensively from five directions (four oblique views plus the nadir); in cross-circular photography, the UAV flies circular routes to capture detailed aerial imagery of the object at the circle center; in inclined photography, the UAV flies routes parallel to the building facade to photograph the facade and obtain imagery of the sides of the structure.
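To make the three route types concrete, the sketch below generates waypoint lists for a circular orbit and for a facade-parallel strip (the inclined pattern) in a local metric coordinate frame. The function names and the numeric parameters are illustrative assumptions, not the mission settings actually flown.

```python
import numpy as np

def circular_waypoints(center_xy, radius_m, altitude_m, n_points=36):
    """Waypoints for an orbit around a target, as in cross-circular photography."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = center_xy[0] + radius_m * np.cos(angles)
    ys = center_xy[1] + radius_m * np.sin(angles)
    return [(x, y, altitude_m) for x, y in zip(xs, ys)]

def facade_waypoints(start_xy, length_m, heights_m, offset_m, spacing_m):
    """Waypoints for strips parallel to a bridge facade at several heights,
    offset_m metres from the facade (the inclined photography pattern)."""
    stations = np.arange(0.0, length_m + 1e-6, spacing_m)
    return [(start_xy[0] + s, start_xy[1] + offset_m, h)
            for h in heights_m for s in stations]

orbit = circular_waypoints(center_xy=(0.0, 0.0), radius_m=100.0, altitude_m=120.0)
strip = facade_waypoints(start_xy=(0.0, 0.0), length_m=276.0,
                         heights_m=[30.0, 60.0, 90.0], offset_m=25.0, spacing_m=15.0)
print(len(orbit), len(strip))  # number of waypoints in each route
```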

3.3.1. Parameter Configuration for Oblique Photography

Oblique photography technology overcomes the limitations of traditional aerial surveying, which only captures images from vertical angles. It provides a more realistic and comprehensive representation of object appearance, texture, and positional data. This greatly facilitates fieldwork collaboration in surveying, inspection, and related domains. The oblique photography parameters for this experiment were set as depicted in Table 1.

3.3.2. Parameters for Cross-Circular Photography

Compared to oblique photography, the cross-circular photography approach adopts a multi-angle and omni-directional shooting method tailored for detailed modeling of individual structures. This method offers advantages in capturing more intricate texture details of buildings and enhancing the efficiency of model creation. In this experiment, different circular photography parameters were set for both the side span and the main span of the railway steel truss arch bridge. Cross-circular photography parameters for this experiment were set as depicted in Table 2.

3.3.3. Parameters for Inclined Photography Combined with Oblique Photography

Inclined photography is conducted by defining a flight plane from three points and flying the UAV along routes within that plane. It is applicable to scenarios such as slopes and building facades, enabling comprehensive and detailed data collection of a facade and improving the texture accuracy of real-scene 3D modeling. For railway steel truss arch bridges, combining inclined photography with oblique photography supplements the side texture information of the bridge while maintaining model accuracy. Because of the overall height and width of the bridge, the principles and parameter settings of inclined photography must be analyzed in advance to achieve appropriate image overlap and object distance. A schematic of inclined photography is shown in Figure 3, and the relationships between the parameters in the figure are given in Equations (1)–(3).
$$ d = \frac{f \cdot \mathrm{GSD}}{a} \tag{1} $$
$$ X = \frac{A_2B_1}{A_1B_2} = \frac{2d\tan(\alpha/2) - h_1}{2d\tan(\alpha/2) + h_1} \tag{2} $$
$$ Y = \frac{C_2D_1}{C_1D_2} = \frac{2d\tan(\beta/2) - h_2}{2d\tan(\beta/2) + h_2} \tag{3} $$
where d is the distance from the UAV to the side of the bridge (m); f is the lens focal length (mm); GSD is the ground sampling distance, representing spatial resolution; a is the pixel size (mm); h1 is the vertical separation between adjacent images in the lateral direction (m); h2 is the horizontal separation between adjacent images in the longitudinal direction (m); α is the camera's vertical field of view (FOV) (°); β is the camera's horizontal FOV (°); and A2B1 and C2D1 are the lateral and longitudinal overlap regions, respectively. The lateral and longitudinal overlap rates of the imagery are denoted X and Y, represented in the figure by the ratios of A2B1 to A1B2 and of C2D1 to C1D2.
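Equations (1)–(3) can be implemented directly. The sketch below is a minimal illustration that computes the object distance d for a target GSD and inverts Equations (2) and (3) to obtain the image spacings h1 and h2 for a given overlap rate. The focal length (8.8 mm), pixel size (about 0.0024 mm), and the vertical/horizontal FOV values are assumptions consistent with the 1-inch, 20-megapixel, 84° FOV camera described in Section 3.1; they are not stated in the paper.

```python
import math

def object_distance_m(focal_mm: float, gsd_m: float, pixel_mm: float) -> float:
    """Equation (1): d = f * GSD / a (f and a in mm, GSD and d in metres)."""
    return focal_mm * gsd_m / pixel_mm

def spacing_for_overlap(d_m: float, fov_deg: float, overlap: float) -> float:
    """Invert Equations (2)/(3): spacing h that yields the requested overlap rate."""
    t = math.tan(math.radians(fov_deg) / 2.0)
    return 2.0 * d_m * t * (1.0 - overlap) / (1.0 + overlap)

# Assumed camera constants for the Phantom 4 RTK (illustrative, not from the paper):
F_MM, PIXEL_MM = 8.8, 0.0024          # focal length and pixel size
VFOV_DEG, HFOV_DEG = 53.0, 73.0       # rough vertical/horizontal split of the 84° diagonal FOV

d = object_distance_m(F_MM, gsd_m=0.0075, pixel_mm=PIXEL_MM)   # target GSD of 7.5 mm
h1 = spacing_for_overlap(d, VFOV_DEG, overlap=0.80)
h2 = spacing_for_overlap(d, HFOV_DEG, overlap=0.80)
print(round(d, 1), round(h1, 1), round(h2, 1))
```

Under these assumptions, a target GSD of 7.5 mm corresponds to an object distance of roughly 27 m and spacings of about 3 m (lateral) and 4.5 m (longitudinal) at 80% overlap, which is of the same order as the 20–25 m bridge offsets listed in Table 3.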
In this experiment, the same approach was applied to the railway steel truss arch bridge, where different inclined photography parameters were set for both the side span and the main span. The inclined photography parameters for this experiment were set as depicted in Table 3.

3.4. Aerial Triangulation

A wide range of UAV modeling software is now available, each package with its own strengths in model accuracy and the quality of three-dimensional mesh construction. In this experiment, the image processing software Context Capture 20.0 (formerly known as Smart3D), developed by Bentley Systems, was employed for analysis and processing. The images from the three flight schemes were imported into Context Capture for aerial triangulation.
For oblique photography and cross-circular photography, because the captured images are similar in flight altitude, adjacent spacing, acquisition time, and grayscale values, all images were placed in a single image set for aerial triangulation. This allowed the optical attribute parameters, such as principal point and distortion, as well as the position information of all images, to be successfully calculated through the aerial triangulation computation.
For the combined oblique and inclined photography flight method, oblique photography uses flight paths parallel to the bridge deck while inclined photography uses flight paths perpendicular to it, so the two differ significantly in capture angle, image-to-bridge distance, inter-image spacing, and grayscale values. With the traditional aerial triangulation process, the software struggles to identify and match all of the image information, and many images fail aerial triangulation; processing the oblique and inclined images together is therefore difficult, and this paper improves on the traditional processing method so that the combined approach can be applied. Specifically, the inclined images are divided into two groups covering the upstream and downstream faces of the bridge. Using high-precision GPS and the UAV's onboard RTK positioning module, multi-view matching and dense matching are carried out separately for the upstream and downstream faces, yielding different optical attribute parameters after iterative least-squares adjustment and distortion correction. The oblique, upstream inclined, and downstream inclined image groups are then assigned to three distinct image sets; the optical attribute parameters are imported for each set, and aerial triangulation is performed for the corresponding images.
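Conceptually, the improvement amounts to partitioning the photographs by flight pattern before triangulation. The sketch below shows one way such a partition could be expressed from per-image flight metadata; the field names, thresholds, and bridge-axis heading are hypothetical, and in practice the grouping is performed through the Context Capture interface rather than in code.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    name: str
    gimbal_pitch_deg: float  # e.g. about -60 for the oblique block, near 0 for facade shots
    heading_deg: float       # flight heading recorded with the image

def classify(photo: Photo, bridge_axis_deg: float = 90.0) -> str:
    """Assign an image to one of the three image sets used for aerial triangulation.
    The -30° pitch threshold and the bridge-axis heading are illustrative assumptions."""
    if photo.gimbal_pitch_deg <= -30.0:
        return "oblique"                   # oblique block flown over the whole site
    # Near-horizontal camera: facade (inclined) imagery, split by the side being faced.
    facing = (photo.heading_deg - bridge_axis_deg) % 360.0
    return "upstream_facade" if facing < 180.0 else "downstream_facade"

photos = [Photo("DJI_0001.JPG", -60.0, 10.0),
          Photo("DJI_0450.JPG", -5.0, 95.0),
          Photo("DJI_0900.JPG", -5.0, 275.0)]
groups: dict[str, list[str]] = {}
for p in photos:
    groups.setdefault(classify(p), []).append(p.name)
print(groups)  # three image sets, each later given its own optical parameters
```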
After the initial aerial triangulation, the majority of image parameter information can be successfully computed. For those images that remain unresolved, manual tie points are added, and iterative aerial triangulation is performed until the accuracy requirements are met. Following this, the process proceeds to three-dimensional reconstruction.

4. Model Quality Assessment and Application

4.1. Texture Evaluation and Analysis of the Models

In order to examine the impact of three distinct aerial photogrammetry techniques on the visual quality of realistic 3D models of railway steel truss bridges, an evaluation and analysis of the modeling outcomes was conducted. This assessment involved comparing the overall shape and completeness of the models, as well as the texture fidelity of their local details, to gauge the efficacy of the three modeling approaches.
The overall effect is illustrated in Figure 4. The models reconstructed using cross-circular photography and oblique combined with inclined photography exhibit relatively complete overall results. In the case of oblique photography, however, the camera's proximity to the bridge's apex results in insufficient matched tie points, ultimately producing voids and significant texture loss in the model. This demonstrates that the introduction of inclined photography greatly improves the completeness of the oblique photography model.
The three approaches are compared in the order oblique photography, cross-circular photography, and oblique combined with inclined photography, analyzing how well the fine textures of the bridge are modeled (Figure 5). A large-span steel truss arch bridge is a structural system intermediate between a beam and an arch, in which a large number of truss members carry the axial forces that govern the stability of the structure, so clear imagery of the various truss joints is particularly important. Comparing the detailed model textures of the three methods shows that the introduction of inclined photography makes the truss model clearer: the oblique combined with inclined photography model has richer texture than the other two, with sunlight, shadows, and the corrosion of the steel trusses all rendered well. The alignment of the bridge's internal trusses is also more complete, without voids or deformation.
In the detailed model produced by cross-circular photography, the bridge's side-span truss is relatively intact, but the corrosion on the side surfaces of the steel truss, while discernible, is somewhat blurred; gusset plates and high-strength bolts are not well resolved, and the alignment of the bridge's internal trusses is not ideal.
The oblique photography model renders the corrosion on the side surfaces of the bridge similarly to cross-circular photography; gusset plates and high-strength bolts are likewise poorly resolved, and texture is missing in the internal trusses of the bridge. For complex structures such as the internal framework of a steel truss bridge, oblique photography alone is therefore not ideal.
By incorporating inclined flight paths for close-range photography of the bridge, the model can be better supplemented with detailed textures and its overall completeness enhanced.

4.2. Model Accuracy Evaluation and Analysis

After aerial triangulation was completed, the overall accuracy of the three approaches was assessed by comparing the errors of the control points in the X, Y, and Z directions. In addition, the precision of the bridge model produced by each approach was evaluated against checkpoints on the bridge side measured with a total station.

4.2.1. Control Point Accuracy Assessment

From the error plots (Figure 6), it can be observed that the error distributions at the same control points are similar across the three schemes, but the control point errors of the cross-circular flight scheme are noticeably larger, while those of the oblique photography scheme are smaller. The error curve of the scheme combining oblique and inclined photography falls between the other two.
To analyze and compare the dispersion between the measured and predicted coordinates of the ground control points, the root mean square error (RMSE) is introduced to quantify the errors of the ground control points in the horizontal and vertical directions, as shown in Table 4.
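For reference, a common convention for these quantities, which the paper does not spell out explicitly, is (with Δxᵢ, Δyᵢ, Δzᵢ the coordinate residuals of the i-th of n ground control points):

$$ \mathrm{RMSE}_{\mathrm{H}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\Delta x_i^{2} + \Delta y_i^{2}\right)}, \qquad \mathrm{RMSE}_{\mathrm{V}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\Delta z_i^{2}} $$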
Comparative analysis of the RMSE of the ground control points in the horizontal and vertical directions shows that the horizontal RMSE is consistent with the error plots: with the introduction of inclined imagery, the horizontal accuracy of the ground control points decreases slightly, but it remains higher than that of cross-circular photography. In the vertical direction, the combined oblique and inclined photography scheme is slightly more accurate than the original oblique photography scheme and achieves the best vertical error among the three approaches. The control point accuracies show that all three schemes meet the control-point accuracy requirements for 1:500 topographic mapping; for large protruding structures such as bridges, however, the improvement in vertical accuracy brought by inclined photography is particularly helpful for inspection.

4.2.2. Checkpoint Accuracy Assessment

After the control point accuracy had been verified, the accuracy of the bridge model itself was analyzed and compared by introducing checkpoints on the bridge side. Taking advantage of the bridge's symmetry, 20 checkpoints were placed on the downstream right side of the bridge, as shown in Figure 7, and their positions were measured with a total station. The bridge-side checkpoint coordinates collected by the total station were then imported into Context Capture for point-by-point fitting. The horizontal and vertical errors of the 20 checkpoints were plotted as box plots, as shown in Figure 8, for analysis and comparison.
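Box plots of this kind can be produced with a few lines of NumPy and Matplotlib; the residuals below are synthetic placeholders generated only to illustrate the plotting step, since the actual checkpoint residuals come from the Context Capture fit.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic checkpoint residuals (metres) for illustration only; in the study the
# residuals come from fitting the 20 total-station checkpoints in Context Capture.
rng = np.random.default_rng(seed=1)
schemes = ["Oblique", "Cross-circular", "Oblique + inclined"]
spreads = [0.020, 0.025, 0.012]                      # assumed error spreads per scheme
horiz_errors = [np.abs(rng.normal(0.0, s, 20)) for s in spreads]

plt.figure(figsize=(6, 3))
plt.boxplot(horiz_errors)
plt.xticks([1, 2, 3], schemes)
plt.ylabel("Horizontal error (m)")
plt.tight_layout()
plt.show()
```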
The box plots of the horizontal and vertical errors of the bridge-side checkpoints obtained with the three approaches show that the scheme combining inclined photography with oblique photography significantly improves the accuracy of the bridge side: it clearly outperforms the cross-circular photography model and also offers some improvement in horizontal accuracy.

4.3. Application of Realistic 3D Models

In this experiment, the steel structure coating and corrosion condition of the railway steel truss arch bridge were examined using the UAV-based realistic 3D model (Figure 9). The imagery shows that the bridge members have experienced varying degrees of corrosion, with some components severely corroded. The inspection of corrosion on high-strength bolts was also conducted using the realistic 3D model, as shown in Figure 9. The images clearly reveal that significant corrosion has affected several heavily loaded high-strength bolts, posing a potential threat to both the structural integrity of the bridge and the safety of traffic passing beneath it. The absence of bolts on the gusset plates of the steel truss arch bridge can also be distinctly discerned. The high-strength bolts on the bridge are roughly 3 cm in diameter, so the combined oblique and inclined photography 3D modeling method resolves features of at least 3 cm; the model itself indicates a resolution of up to 7.5 mm per pixel.
In conclusion, this study employs a combination of UAV oblique photography and inclined photography to carry out real-scene 3D modeling of a railway steel truss arch bridge. The approach enables bridge structural issues to be examined, allowing clear identification of corrosion and detachment of steel structure coatings as well as the condition of high-strength bolts. For long-span railway bridges, UAV 3D modeling requires less time and lower cost than conventional inspection, and for large inspection workloads it provides valuable support for bridge defect detection. However, UAV-based inspection also has limitations; its precision is lower than that of instruments such as total stations and 3D laser scanners. To address this, point clouds from 3D scanners can be integrated to enhance the precision and texture of the photogrammetric point cloud, thereby improving the overall model accuracy.

5. Conclusions

This study investigated the impact of different aerial route modes, including oblique photography, cross-circular photography, and the integration of oblique photography and inclined photography, flown by a multi-rotor single-lens UAV, on the texture and accuracy of real-scene three-dimensional models of bridge structures. Furthermore, the study proposed applying the developed method to the inspection of railway steel truss arch bridges. The results are as follows:
(1)
In terms of model texture effects, the integration of oblique photography and inclined photography exhibited the highest model completeness, showcasing smoother overall linearity and more distinct detail feedback.
(2)
Regarding model accuracy, the integration of oblique photography and inclined photography demonstrated the highest model accuracy on the bridge’s side, with horizontal accuracy slightly lower than that of oblique photography. Nonetheless, it effectively met the accuracy requirements for 1:500 topographic map control points.
(3)
In terms of model application, the incorporation of inclined photography significantly enhanced the texture accuracy of the real-world 3D bridge model, proving to be effective for the detection of structural issues such as steel corrosion and missing bolts in bridge components.

Author Contributions

Conceptualization, Z.T.; methodology, Z.T.; software, Z.T.; validation, Y.P.; resources, J.L.; writing—original draft preparation, Z.T.; writing—review and editing, Z.T., Y.P., J.L. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

The author sincerely thanks Peng Yipu for his invaluable assistance in establishing the model for this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aliyari, M.; Ashrafi, B.; Ayele, Y.Z. Hazards identification and risk assessment for UAV-assisted bridge inspections. Struct. Infrastruct. Eng. 2022, 18, 412–428. [Google Scholar] [CrossRef]
  2. Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’Andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2018, 14, 124–136. [Google Scholar] [CrossRef]
  3. Wang, H.H. Bridge inspection and disease cause analysis. Build. Mater. Decor. 2017, 27, 234–235. [Google Scholar]
  4. Aslan, M.F.; Durdu, A.; Sabanci, K.; Ropelewska, E.; Gueltekin, S.S. A Comprehensive Survey of the Recent Studies with UAV for Precision Agriculture in Open Fields and Greenhouses. Appl. Sci. 2022, 12, 1047. [Google Scholar] [CrossRef]
  5. Yang, G.J.; Liu, J.G.; Zhao, C.J.; Li, Z.H.; Huang, Y.B.; Yu, H.Y.; Xu, B.; Yang, X.D.; Zhu, D.M.; Zhang, X.Y.; et al. Unmanned Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives. Front. Plant Sci. 2017, 8, 26. [Google Scholar] [CrossRef]
  6. Lai, H.Y.; Liu, L.P.; Liu, X.; Zhang, Y.M.; Xuan, X.H. Unmanned aerial vehicle oblique photography-based superposed fold analysis of outcrops in the Xuhuai region, North China. Geol. J. 2021, 56, 2212–2222. [Google Scholar] [CrossRef]
  7. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system. Autom. Constr. 2014, 41, 1–14. [Google Scholar] [CrossRef]
  8. Singla, A.; Padakandla, S.; Bhatnagar, S. Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge. IEEE Trans. Intell. Transp. Syst. 2021, 22, 107–118. [Google Scholar] [CrossRef]
  9. Feng, L.; Li, B. Construction of high-precision erosion gully terrain model using UAV tilted imagery and GCP. J. Agric. Eng. 2018, 34, 88–95. [Google Scholar]
  10. Xiang, H.L.; Li, B.X. Three-dimensional modeling and accuracy assessment of single lens UAV inclined photogrammetry. Mapp. Bull. 2022, S02, 237–240. [Google Scholar] [CrossRef]
  11. Zhang, H.Y.; Dong, C.L.; Wang, J.G.; Sun, S.J.; Wang, H.Y.; Wang, H.H. Practice and analysis of UAV tilt photography 3D modeling based on Context Capture. Mapp. Bull. 2019, S1, 266–269. [Google Scholar] [CrossRef]
  12. Bi, R.; Gan, S.; Yuan, X.P.; Li, R.B.; Gao, S. Analysis of UAV route planning and 3D modeling in different terrain environments. Mapp. Bull. 2022, 4, 83–89+129. [Google Scholar] [CrossRef]
  13. Liu, C.; Zeng, J.T.; Zhang, S.H.; Zhou, Y. UAV single-camera real-time 3D modeling for monolithic shaped buildings. J. Tongji Univ. (Nat. Sci. Ed.) 2018, 46, 550–556+564. [Google Scholar]
  14. Wang, D.; Shu, H. Accuracy Analysis of Three-Dimensional Modeling of a Multi-Level UAV without Control Points. Buildings 2022, 12, 592. [Google Scholar] [CrossRef]
  15. Li, L.; Chen, J.; Su, X.; Nawaz, A. Advanced-Technological UAVs-Based Enhanced Reconstruction of Edges for Building Models. Buildings 2022, 12, 1248. [Google Scholar] [CrossRef]
  16. Che, D.F.; He, K.; Qiu, K.H.; Liu, Y.N.; Ma, B.D.; Liu, Q. Edge Restoration of a 3D Building Model Based on Oblique Photography. Appl. Sci. 2022, 12, 12911. [Google Scholar] [CrossRef]
  17. Chen, Y.; Zhu, Z.; Lin, Z.; Zhou, Y. Building Surface Crack Detection Using Deep Learning Technology. Buildings 2023, 13, 1814. [Google Scholar] [CrossRef]
  18. Sun, B.Y.; Wei, L.H.; Zhou, X.; Qin, Y.C.; Zhang, S.K. Three-dimensional modeling of an abrupt large height difference site based on a consumer-grade drone. Mapp. Bull. 2021, 7, 111–116. [Google Scholar] [CrossRef]
  19. Wang, K.Q.; Fang, J.; Wang, Z.L.; Guo, H.B.; Yang, Z.S. Realistic three-dimensional modeling of city-level complex buildings under cross-surrounding routes. Mapp. Bull. 2022, 9, 80–85. [Google Scholar] [CrossRef]
  20. Chen, C.F.; He, K.Y.; Yu, G.Y.; Mao, F.S.; Xue, X.K.; Li, F. Structural surface identification of high and steep slopes based on UAV close-up photography. J. Hunan Univ. (Nat. Sci. Ed.) 2022, 49, 145–154. [Google Scholar] [CrossRef]
  21. Wu, T.W.; Yu, J.Y.; Chen, R.P.; Yen, B.F. Research Progress of UAV Inclined Photogrammetry Technology and Its Engineering Applications. J. Hunan Univ. (Nat. Sci. Ed.) 2018, 45, 167–172. [Google Scholar] [CrossRef]
  22. Peng, Y.P.; Cheng, Y.; Han, Z.; Wang, P.L.; Wu, Z.X. Research on factors affecting the quality of 3D modeling of engineering structures based on multi-rotor UAV aerial survey. J. Railw. Sci. Eng. 2019, 16, 2969–2976. [Google Scholar] [CrossRef]
Figure 1. UAV Modeling Workflow.
Figure 2. Ground Control Point Placement Diagram.
Figure 3. Inclined Photography Flight Path Design.
Figure 4. Overall Effect of the Realistic 3D Model.
Figure 5. Detailed Effects of a Realistic 3D Model.
Figure 6. GCP Error. (a) X Direction. (b) Y Direction. (c) Horizontal Direction. (d) Z Direction.
Figure 7. Distribution of Model Checkpoints.
Figure 8. Checkpoint Error. (a) Horizontal Error. (b) Vertical Error.
Figure 9. Condition check of local bridge components.
Table 1. Main Parameters of Oblique Photography.
Main Parameter                    Value
Lateral Overlap/%                 80
Longitudinal Overlap/%            80
Oblique Lateral Overlap/%         80
Oblique Longitudinal Overlap/%    80
Gimbal Pitch Angle/°              −60
Flight Altitude/m                 120
Main Route Angle/°                0
Table 2. Main Parameters of Cross-Circular Photography.
Main Parameter          Main Span    Side Span
Flight Altitude/m       120          100
Circular Radius/m       100          80
Gimbal Pitch Angle/°    −45          −45
Table 3. Main Parameters for the Inclined Photography Flight Paths.
Main Parameter                          Main Span    Side Span
Flight Overhead Extension Distance/m    50           60
Distance to Bridge/m                    25           20
Lateral Overlap/%                       80           80
Longitudinal Overlap/%                  80           80
Table 4. Root Mean Square Errors of Ground Control Points.
Approach                                                   RMSE, Horizontal (m)    RMSE, Vertical (m)
Oblique Photography                                        0.0102                  0.0089
Cross-Circular Photography                                 0.0202                  0.0152
Oblique Photography Combined with Inclined Photography     0.0155                  0.0081