1. Introduction
Three-dimensional (3D) laser scanning has emerged as a powerful technology for rapidly capturing the as-is conditions of real-world objects and scenes [1]. The dense 3D point clouds obtained by laser scanners enable detailed analyses and measurements of scanned structures, making the technology highly suitable for structural health monitoring applications [2]. Compared to traditional monitoring techniques using strain gauges or total stations, which provide only sparse measurements, laser scanning can provide a comprehensive assessment of the full-field deformation of a structure. However, raw point clouds contain millions of 3D points and require further processing to extract actionable information. Therefore, one of the key challenges is extracting salient structural components that serve as accurate and reliable deformation markers. In this study, beam bottom flanges are the salient structural components to be extracted.
The extraction of beam bottom flanges is especially challenging. The point cloud of a cross-section of a steel beam encountered in this study is shown in Figure 1. The beam top flange (in red) and the beam bottom flange (in green) are similar in dimension and shape, so it is difficult to define a geometric feature that distinguishes between them; manual extraction must therefore be considered. Even though the beam web and beam bottom flange look separated in Figure 1 due to data occlusion, the space between the beam web (in blue) and the beam bottom flange is merely 0.064 m, which is minimal compared to the dimensions of the whole steel frame structure. Manual separation of the beam webs and bottom flanges therefore requires precise observation and control for each individual beam; in other words, it cannot simply be conducted on multiple beams in batch. The steel frame structure investigated in this study consists of many individual beams, and repeatedly conducting manual extraction for such a large number of beams is very time-consuming. In summary, manually separating the beam webs and beam bottom flanges is both challenging and laborious.
In many previous studies (e.g., [3,4,5,6,7]), the point clouds of structural components were manually extracted. However, such manual extraction is laborious when repetitively applied to a large number of structural components.
Many algorithms have also been devised to extract representations of structural components from point clouds of various structures, such as stones from masonry walls [8]; piers and slabs from bridges [9]; beam lines from steel buildings [10]; struts, connection plates, and chords from steel structures [11]; building facades from masonry buildings [12]; ceilings, walls, and floors from buildings [13,14,15,16,17]; decks from steel girder bridges [18]; and stones from stone columns [19]. These algorithms typically involve downsampling, clutter removal, and the extraction of geometric primitives, and these general procedures are followed in this study. However, a specific algorithm must be defined for extracting specific structural components from a structure, according to its geometric characteristics; the algorithm differs in each of the aforementioned studies. Therefore, those methods are only applicable to the specific structures for which they were originally designed. Moreover, these methods merely extract instances of structural components. None of them further breaks a structural element down into distinctive parts (i.e., beam webs and beam bottom flanges), as required in this study. Hence, an algorithm to extract the beam bottom flanges from a steel frame structure must be newly proposed here.
Recent advancements in deep learning for point cloud segmentation have enabled the extraction of certain structural components [20,21,22,23,24,25]. Again, these methods do not apply to the extraction of beam bottom flanges because they are trained to extract other kinds of structural components. Among them, the work of Lee, Rashidi, Talei, and Kong [25] is most relevant to this study. They used a deep neural network to semantically segment a light steel framing system consisting of C-beams into studs, top tracks, bottom tracks, noggins, and bracings. Their classification is not specific enough to extract the bottom flanges of steel beams, as required in this study. Deep learning models could potentially be adapted via transfer learning for the extraction of beam bottom flanges from steel frame structures; however, such adaptation requires abundant labelled data of beam bottom flanges and other parts of steel frame structures as training data. In our study, data from only one steel frame structure are acquired, which cannot sufficiently serve as training data. More generally, the application of deep learning segmentation methods is limited by the scarcity of labelled training data for specific structural elements [25]. In summary, existing deep learning methods are developed for other scenarios and are not directly applicable to the extraction of beam bottom flanges from a steel frame structure.
To avoid the necessity of manual extraction, this paper proposes an algorithm-driven approach that specifically targets the extraction of the bottom flanges of steel beams from a complex steel framework for measuring vertical deformations. The algorithm incorporates an originally designed point feature, namely the ‘local difference in z-axis’, to separate beam bottom flanges and beam webs. The proposed method significantly improves the efficiency of beam bottom flange extraction compared to manual extraction and makes monitoring steel frame deformation using laser scanning affordable.
The method is demonstrated using point clouds captured at two stages: before and after the unloading of the temporary supporting lattice columns. Initially, the RANdom SAmple Consensus (RANSAC) [26] algorithm is used to perform coarse extraction of the planes representing data points at the approximate level of the bottom flanges of the steel beams. Euclidean clustering [27] is then applied both globally and locally to eliminate noise. Finally, filtering based on point normals and local differences in the z-axis is employed to accurately separate the bottom flanges and webs of the steel beams. The accuracy of the extraction is assessed by comparing the algorithm-driven results with manually extracted data. Deformations are calculated by measuring the distances between corresponding bottom flanges.
The structure of the paper is as follows: Section 2 covers the site conditions and data acquisition; Section 3 details the proposed method; Section 4 compares the bottom flanges extracted using the proposed method with those obtained through manual extraction; Section 5 presents the deformation calculation results; Section 6 discusses the implications and limitations of the research and suggests future research directions; finally, Section 7 concludes the paper.
2. Site Condition and Data Collection
The measured object is a level of a steel frame structure under construction, initially supported by fourteen lattice columns, as shown in Figure 2a. These temporary supporting lattice columns are labelled, and their locations are marked with lowercase letters in yellow squares in Figure 2b. The columns were progressively unloaded, leading to expected downward vertical deformations of the beams following the start of the unloading process. The steel frame structure was also supported vertically by concrete-filled steel tubes, marked with numbers 1 to 6 in black circles in Figure 2b, and by the main reinforced concrete structure surrounding it. However, a detailed structural analysis is beyond the scope of this research.
In this study, point clouds of the site were acquired using a Leica P40 laser scanner [28]. The 3D laser scanner was positioned beneath the steel frame structure, as shown in Figure 2c. Three terrestrial laser scanning stations, labelled A, B, and C in red squares in Figure 2b, were used. Each station collected approximately 5 billion points with a scanning resolution of 2.8 mm dot-spacing at a distance of 10 m.
This data acquisition process was repeated twice: once before and once after the start of unloading two temporary supporting columns. By the time of the second data acquisition, columns h and m (see Figure 2b) had been unloaded, while the others remained loaded.
Point clouds collected from multiple stations were registered using reference targets. Circular black-and-white targets, 6 inches in diameter, were printed on A4 paper and adhered to the concrete-filled steel tube columns with structural adhesive, as shown in Figure 2d. These tubes were selected for their minimal deformation. A total of six concrete-filled steel tube columns were used, marked 1 to 6 in black circles as seen in Figure 2b. Each tube had two targets attached, resulting in twelve targets overall. These targets were strategically placed to ensure good spatial distribution both horizontally and vertically, covering the entire steel frame structure when projected onto a horizontal plane. According to Fan et al. [29], registration error increases with distance from the centre of mass of the reference targets. Therefore, we endeavoured to distribute the reference targets evenly around the observed steel frame structure. In this case, the centre of mass of the reference targets is approximately in the middle of the steel frame structure, thus minimising registration error.
Initial coarse registration of the multi-station point clouds was performed using the Leica Cyclone 9.1.3 software [30], based on the twelve target points from each station. Fine registration was then conducted using the concrete-filled steel tube columns as reference, applying the Iterative Closest Point (ICP) method [31]. Additionally, the point cloud of the primary reinforced concrete beams, as indicated in Figure 2d, was used to verify the registration accuracy due to its minimal deformation.
3. Method for Extracting Beam Bottom Flanges
3.1. Overview
With over 15 billion points collected at each epoch, sub-sampling was necessary to manage the data volume before applying the proposed method. The downsampling tool in CloudCompare 2.9.1 [32] was used to reduce the point count from 15 billion to 121 million.
The workflow of the method is illustrated in Figure 3. The proposed method involves four main steps. First, RANSAC is applied to the entire point cloud to coarsely extract the steel frame structure through plane fitting. This process is iterated until a plane fitting the steel frame structure is identified. However, the point cloud extracted by this plane still contains many clutter points, as RANSAC includes all points that fit within a specified threshold. Second, Euclidean clustering is applied to segregate the point cloud within the fitted plane into clusters representing the steel frame structure and clusters of clutter, removing the latter. This clustering is done at two scales: globally on the initial RANSAC planes and locally on subsets to preserve detailed features. Third, the bottom flanges and webs of the beams are distinguished based on their orientation and vertical position, finalising the extraction of the bottom flanges. Finally, the extracted bottom flanges are compared between the two epochs to determine the vertical deformations of the beams.
3.2. Coarse Extraction of Steel Frame Structure Using RANSAC
The point cloud from the first epoch, shown in Figure 4, represents a multistorey structure with elevations ranging from −3.81 m to 11.46 m. Given RANSAC’s effectiveness with noisy datasets [33], it was chosen for plane extraction to isolate the storey containing the steel frame structure. Key parameters for the algorithm need to be set before application.
First, a distance threshold, d, must be determined. This threshold defines the maximum allowable distance from a point to the fitted plane; points exceeding this distance are considered outliers and removed. The total thickness of the point cloud extracted by RANSAC plane-fitting is therefore 2d. The rationale for the determination of the value of d is twofold: (a) the thickness of the extracted point cloud must be sufficient to incorporate all beam bottom flanges; and (b) the thickness of the extracted point cloud must be limited, to avoid incorporating unnecessary points. Therefore, the spatial distribution of the beam bottom flanges is examined in Figure 5. Three parts of the beam bottom flanges are segmented out in Figure 5a, and their side views are zoomed in on in Figure 5b. It can be seen that the beam bottom flanges are not all coplanar: the vertical distance from the green parts to the red parts is approximately 0.25 m. Therefore, it is necessary to set 2d ≥ 0.25 m, i.e., d ≥ 0.125 m. Moreover, due to the random sampling strategy in the RANSAC process, there should be a certain amount of tolerance for uncertainties in d. Therefore, d = 0.2 m is set.
Second, the maximum number of iterations needs to be specified. This parameter affects the robustness and accuracy of the extraction. While increasing the number of iterations improves robustness, it also lengthens processing time. In this study, the maximum number of iterations was set to 1000.
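These two parameters map directly onto the RANSAC loop. The following minimal NumPy sketch illustrates plane extraction under the settings above (an illustrative simplification, not the implementation used in this study; in practice, a library routine such as Open3D's plane segmentation performs this step):

```python
import numpy as np

def ransac_plane(points, dist_threshold=0.2, num_iterations=1000, rng=None):
    """Fit a plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, offset, inlier_mask) for the plane n.p + offset = 0
    that gathers the most points within dist_threshold.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(num_iterations):
        # Hypothesise a plane from three randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:  # degenerate (collinear) sample, skip
            continue
        n /= norm
        offset = -n @ p0
        # Score the hypothesis: count points within the distance threshold.
        inliers = np.abs(points @ n + offset) <= dist_threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, offset)
    return best_model[0], best_model[1], best_inliers
```

Iterative rejection of irrelevant planes, as described next, simply removes the inliers of an unwanted plane and reruns the fit on the remaining points.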
RANSAC is applied iteratively to reject irrelevant planes and extract the one containing the steel frame structure. In this instance, it took three iterations to extract the steel frame successfully. The detailed process is described in the remainder of Section 3.2.
The first plane extracted, shown in Figure 6a, does not include the steel frame structure and is therefore discarded. After removing this plane, RANSAC is applied again to the remaining point cloud, resulting in a second plane, depicted in Figure 6b. This plane partially captures the steel frame but also includes significant clutter. A closer view of this plane, shown in Figure 7, reveals that it contains parts of the top flanges (in green), webs (in blue), and safety nets (in red), which are not of interest. Consequently, this plane is also removed from the dataset. Following the removal of this unnecessary data, RANSAC is applied a third time to the remaining points, as illustrated in Figure 8. This iteration successfully extracts the plane containing the bottom flanges and webs of the beams. To optimize the results, the distance threshold d was varied and the outcomes compared.
The plane extracted with a distance threshold of d = 0.05 m is shown in Figure 8a. It reveals missing sections of the beams and some discontinuities; some beams are missed entirely, as indicated by the white arrow in Figure 8a. Figure 8b displays the plane with d = 0.5 m, which captures a continuous majority of the beams but is excessively thick, incorporating more safety nets, as indicated by the white arrow in Figure 8b. The plane segmented with d = 0.2 m, shown in Figure 8c, provides the most accurate result. It effectively maintains the steel frame’s shape while minimizing clutter and beam webs, making it ideal for further deformation calculations.
3.3. Removal of Clutters Using Euclidean Clustering
Despite the careful selection of the distance threshold for the RANSAC algorithm, some clutter points from safety nets remain, as indicated by the red rectangle in Figure 9. To further enhance segmentation quality, additional refinements are necessary. In the proposed method, Euclidean clustering is applied after RANSAC plane extraction to remove clutter and refine the extraction of the steel structure. This combination leverages the strengths of both algorithms for robust and accurate feature extraction from the point cloud data.
Clustering is a fundamental method for point cloud segmentation, dividing a point cloud into groups based on similar characteristics. Euclidean clustering is effective for segmenting point clouds based on the Euclidean distance between points [34].
The quality of Euclidean clustering segmentation depends on the distance threshold dc, which controls the segmentation granularity. A large dc can preserve the continuity of the steel frame structure but might under-segment, merging points of structural components and clutter into one cluster. Conversely, a small dc can finely separate the clutter from the steel frame structure but might over-segment, splitting a single structural component into multiple parts. Therefore, both global clustering and local clustering are conducted to take advantage of both large and small values of dc. In global clustering, a large dc is used to partially separate the steel frame structure from the clutter while preserving the whole steel frame structure. In local clustering, a small dc is adopted to finely separate the steel frame structure from the clutter. The advantage of local Euclidean clustering is the ability to set a smaller distance threshold, achieving more accurate segmentation without affecting other parts of the structure. Additionally, local clustering reduces computation time by involving only a small portion (approximately 1.5%) of the original point cloud at a time. The values of dc for global and local clustering are decided empirically.
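Euclidean clustering itself can be sketched as KD-tree region growing; the sketch below (a simplified stand-in for the library implementation) makes the role of the distance threshold explicit:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, dist_threshold):
    """Label points so that any two points connected by a chain of
    neighbours closer than dist_threshold share the same cluster."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        # Grow a new cluster from this unlabelled seed point.
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], dist_threshold):
                if labels[j] == -1:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```

With a large threshold the steel frame stays in one cluster but clutter may merge into it; with a small one, clutter separates but the frame fragments, hence the two-scale global/local strategy.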
Figure 10a presents the global clustering result with a distance threshold dc = 1 m, where some internal clutter points are not effectively segmented out. To address this, the threshold is decreased to dc = 0.05 m, resulting in over-segmentation, where the entire steel frame structure is divided into several parts, as shown in Figure 10b. Therefore, the segmentation result with dc = 1 m is retained, but further refinement is needed to eliminate clutter between the steel beams.
In total, nine clusters are generated. The cluster with the greatest number of points, representing the largest volume of the beam structure, is preserved. All points in the remaining eight clusters are removed to improve segmentation accuracy. The point cloud after removing the irrelevant clusters is shown in Figure 11.
To apply local clustering, the data points are divided into smaller regions based on their x and y coordinates. Two preprocessing steps are necessary to better separate the point cloud into regions. First, the point cloud is rotated until its bottom side aligns with the x-axis. Since most primary and secondary beams are orthogonal to each other, adjusting the orientation of the entire point cloud helps evenly distribute features across regions, as shown in Figure 12. Second, the x and y values of the point cloud are redefined, setting both lower bounds to 0. This adjustment simplifies data processing and arrangement. The box slice tool in CloudCompare 2.9.1 is used to ensure the region sizes are appropriate before processing; this tool automatically divides the point cloud into small pieces of equal dimensions. The window size is set based on an x-to-y ratio of approximately 80:50, with dimensions of 8 m on the x-axis and 5 m on the y-axis. Consequently, the point cloud is sliced into 70 local regions, excluding those with zero points.
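The slicing into equal local regions can be expressed as integer binning of the shifted x–y coordinates (a sketch of the slicing logic only; the study used the CloudCompare box slice tool):

```python
import numpy as np

def tile_indices(points, dx=8.0, dy=5.0):
    """Assign each point of an (N, 3) cloud an (ix, iy) tile index after
    shifting the x and y lower bounds to 0, mirroring the 8 m x 5 m slices."""
    xy = points[:, :2] - points[:, :2].min(axis=0)
    return np.floor(xy / np.array([dx, dy])).astype(int)
```

Points sharing the same (ix, iy) pair belong to the same local region, which can then be clustered independently.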
Figure 13a highlights one representative local region containing the highest volume of safety net points before local clustering. The Euclidean clustering algorithm is applied to separate the point cloud in this region. The result, shown in Figure 13b, indicates the successful separation of the beam and safety net points with a distance threshold of 0.05 m, confirming that the dimensions of 5 m × 8 m for each local region are suitable. When the Euclidean clustering algorithm is applied to all local regions, most of the safety net points are effectively eliminated. The point cloud of the representative slice after removing the safety net points is shown in Figure 13c.
3.4. Separation of the Bottom Flanges and Webs of Beams
For accurate deformation calculation, the point cloud must be refined to include only the bottom flanges of the beams. Despite the processes described in Section 3.2 and Section 3.3, some points from the webs of beams may remain. To achieve a precise separation of the web and bottom flange points, filtering based on point normals and an originally defined point feature is employed.
Intuitively, the normals of web points should be approximately horizontal, while the normals of bottom flange points should be approximately vertical. For each point pi, a normal ni = (ni,x, ni,y, ni,z) is estimated and normalised to unit length. Under this condition, the majority of web points, i.e., those with |ni,x| or |ni,y| greater than a threshold, are filtered out, as shown in Figure 14.
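A sketch of this filtering step follows, with normals estimated by local PCA (the eigenvector of the smallest eigenvalue of the neighbourhood covariance); the threshold value 0.3 and the neighbourhood size k = 16 are illustrative assumptions, not the values used in the study:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Unit normals via PCA over each point's k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        w, v = np.linalg.eigh(cov)
        normals[i] = v[:, 0]  # eigenvector of the smallest eigenvalue
    return normals

def drop_web_points(points, normals, t=0.3):
    """Keep points whose normals are near-vertical (bottom flanges);
    drop points with a large horizontal normal component (webs)."""
    keep = (np.abs(normals[:, 0]) < t) & (np.abs(normals[:, 1]) < t)
    return points[keep]
```

Because the eigenvector sign is ambiguous, only the absolute values of the normal components are compared against the threshold.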
Due to uncertainties in point normal estimation, some web points may remain after this initial filtering. Therefore, an additional point feature, namely the local difference in the z-axis, Δzi, is defined to further filter out the remaining web points. Note that, for this step, the point clouds are voxel-downsampled using a voxel size of 0.05 m to reduce the computational cost.
It is assumed that, in a local neighbourhood, the number of remaining web points is small compared to the number of bottom flange points, and that the z coordinate of a web point i should be greater than that of the majority of other points in its neighbourhood by a certain threshold ε. Based on this assumption, the point feature ‘local difference in z-axis’, Δzi, is defined as follows:

Δzi = zi − mi,

where zi is the z-axis coordinate of the point pi and mi is the median z-axis coordinate of all points in the neighbourhood of pi.
As presented in Figure 15, a cylindrical neighbourhood is constructed for each point pi to calculate Δzi. The axis of the cylinder is vertical and passes through pi. The radius of the cylinder is r, and the height of the cylinder is h. By further filtering out all points with Δzi > ε, the remaining web points are successfully removed. The threshold ε, the radius r, and the height h are empirically decided to be 0.05 m, 0.1 m, and 1 m, respectively.
In the example shown in Figure 1, there is a vertical space of 0.064 m between the beam webs and beam bottom flanges. Therefore, ε should be smaller than 0.064 m. The value of ε is further determined by assessing the cumulative distribution of Δz, as shown in Figure 16. By setting ε = 0.05 m, 99% of the points can be preserved while filtering out the remaining web points.
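The feature can be sketched as follows, using a 2D KD-tree for the cylinder's circular footprint and a z-window for its height (the cylinder is assumed to be centred on the point; the parameter values follow the text):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_dz(points, radius=0.1, height=1.0):
    """'Local difference in z-axis': z of each point minus the median z of
    the points inside a vertical cylinder centred on it."""
    tree2d = cKDTree(points[:, :2])  # circular footprint in the x-y plane
    dz = np.empty(len(points))
    for i, p in enumerate(points):
        nbrs = tree2d.query_ball_point(p[:2], radius)
        z = points[nbrs, 2]
        z = z[np.abs(z - p[2]) <= height / 2]  # restrict to cylinder height
        dz[i] = p[2] - np.median(z)
    return dz
```

Points with a local difference greater than ε = 0.05 m are then discarded as residual web points.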
Finally, Statistical Outlier Removal [27] is applied to remove any remaining clutter points.
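For reference, the centroid-per-voxel downsampling used in this section can be sketched as follows (CloudCompare and Open3D provide equivalent built-in tools; this is an illustrative reimplementation):

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Replace all points falling in the same cubic voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()  # guard against inverse-shape differences across NumPy versions
    counts = np.bincount(inv).astype(float)
    out = np.empty((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```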
3.5. Experimental Setting
The process described in Section 3 involves many parameters; their values are summarised in Table 1.
4. Comparison of the Proposed Method and Manual Extraction
In this study, manual extraction results are essential to evaluate the quality of the proposed algorithm’s extraction results. The CloudCompare 2.9.1 software is used to manually extract data representing the bottom flanges of beams. This is found to be a time-consuming process.
Due to the large size and complex shape of the original point cloud, directly locating the bottom flange is challenging. Therefore, a rough segmentation is performed first. This point cloud includes the entire beam structure (top flange, bottom flange, and web) and a significant volume of the safety nets directly contacting the beams. Since some safety nets are fully enclosed by many beams, manually eliminating points corresponding to these safety nets is difficult. Moreover, beam webs and beam bottom flanges are spatially close and are difficult to separate. Through careful observation and manipulation, it took approximately four hours to manually extract the beam bottom flanges. During the manual extraction process, the separation of the beam webs and beam bottom flanges took the most time at three hours.
Since the point cloud from the algorithm-driven extraction was voxel-downsampled with a voxel size of 0.05 m in the procedure described in Section 3.4, the manually extracted point cloud is also voxel-downsampled with a voxel size of 0.05 m to enable a fair comparison.
Figure 17a,b compares the beam bottom flanges extracted using the proposed method with those obtained through manual extraction. Based on qualitative observation, the bottom flanges extracted by our method closely match those extracted manually.
A quantitative comparison is also conducted between the manually extracted point cloud and that derived using our algorithm. The point cloud extracted by the proposed method is denoted P_alg, containing 134,413 points, and the manually extracted point cloud is denoted P_man, containing 135,053 points. The total numbers of points in P_alg and P_man are represented by |P_alg| and |P_man|, respectively. P_man is assumed to be the ground truth and is compared with P_alg to determine how many points in P_alg are correctly extracted by the proposed method.
For any point p in P_alg, if there exists a point q in P_man such that the distance between p and q is no greater than a tolerance dtol, then p is deemed correctly extracted. Since both P_alg and P_man are voxel-downsampled with a voxel size of 0.05 m, it is fair to set dtol = 0.05 m. Using Python code based on SciPy [35], it is determined that 120,079 points are correctly extracted by the proposed method.
The number of correctly extracted points is termed True Positives (TP). False Positives (FP) are the points incorrectly extracted by the proposed method, and False Negatives (FN) are the points missed by the proposed method. FP and FN can be determined as follows:

FP = |P_alg| − TP,
FN = |P_man| − TP.

Following the convention of previous studies on point cloud segmentation, three metrics, i.e., precision, recall, and F1 score, are adopted to quantify the accuracy of the algorithm-driven extraction. Precision is the ratio of correctly extracted points to the total points extracted by the proposed method; recall is the ratio of correctly extracted points to the total ground truth points; and the F1 score is the harmonic mean of precision and recall, balancing the two metrics. The three metrics can be calculated as follows:

precision = TP / (TP + FP),
recall = TP / (TP + FN),
F1 = 2 × precision × recall / (precision + recall).
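The correctness test and the three metrics can be sketched with a SciPy KD-tree (variable names are illustrative; the point counts quoted in the text come from the study's own data):

```python
import numpy as np
from scipy.spatial import cKDTree

def extraction_metrics(extracted, ground_truth, tol=0.05):
    """Precision, recall, and F1 of an extracted cloud against a
    ground-truth cloud; a point counts as correct if the ground truth
    contains a point within tol of it."""
    dists, _ = cKDTree(ground_truth).query(extracted)
    tp = int((dists <= tol).sum())
    fp = len(extracted) - tp       # FP = |P_alg| - TP
    fn = len(ground_truth) - tp    # FN = |P_man| - TP
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```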
The results are shown in Table 3. All three metrics show that the bottom flanges extracted by our method closely match those extracted manually.
The processing time and memory usage of the proposed method are also recorded to demonstrate its effectiveness compared to manual extraction. For the record, the proposed method was run on a desktop with an Intel i5-11400F CPU and 16.0 GB RAM. The procedures described in Section 3.2 and Section 3.3 require human observation and manipulation, so their time usage cannot be accurately recorded. The separation of beam bottom flanges and beam webs is fully automated by the algorithm coded in Python, utilising the Open3D and NumPy libraries. The time and memory usage of each sub-step are recorded and presented in Table 4. The number of points in the input point cloud is also presented in Table 4; time and memory usage are expected to increase with the amount of input data. Peak memory usage throughout the process, for an input point cloud initially containing 11,047,383 points, is merely 69.22 MB, which is easily manageable on modern desktops and laptops. The total run time is 746.49 s, i.e., less than 13 min. In contrast, it took approximately 180 min (a figure that varies with the operator’s skills) to manually separate the beam bottom flanges and beam webs in the CloudCompare 2.9.1 software. As a result, the proposed method took only 7% of the time required for manual extraction to separate the beam bottom flanges and webs.
5. Deformation Calculation
The procedures outlined in Section 3 were also applied to the point cloud data from the second scan to obtain the bottom flange point clouds of the steel beams. The distance between the processed point clouds from the two scans represents the deformation of the beams between the two epochs. The distance was calculated using the Cloud-to-Cloud (C2C) method with least-squares planes as local models [36].
The principle of the deformation calculation is illustrated in Figure 18. The point cloud of the beam bottom flanges acquired in the second scan is treated as the query point cloud, while the point cloud acquired in the first scan is treated as the reference point cloud. The query and reference point clouds are represented by rectangles and circles, respectively. The two dashed lines represent the assumed true positions of the beam bottom flanges at the two epochs. For a point q in the query point cloud, the closest point r in the reference point cloud is identified; q and r are represented by a solid rectangle and a solid circle, respectively. A least-squares plane is fitted to r and its six nearest neighbours in the reference point cloud; the fitted plane is represented by the solid line. Finally, the distance d from q to the fitted plane is regarded as the distance from q to the reference point cloud. It can be seen that d is close to the assumed true distance.
However, the main limitation of this distance calculation method is that the distance is overestimated where occlusion occurs in the reference point cloud. Such an example is also shown in Figure 18. For a point q′ (represented by a dashed rectangle) in the query point cloud, the corresponding part of the reference cloud is occluded. The closest point to q′ in the reference cloud is r′, represented by a dashed circle. The fitted plane for q′ is the same as the fitted plane for q, because r and r′ share the same neighbourhood. In this case, the estimated distance d′ is significantly overestimated compared to the assumed true distance. Therefore, deformation results at occlusions in the reference point cloud (the point cloud of the 2nd scan) should be disregarded.
As shown in Figure 19, the maximum observed deformation of the steel beams is 100 mm, with an average deformation of 9 mm, excluding unrealistic deformation values caused by occlusion in the point cloud of the 2nd scan. The deformation map indicates that the greatest deformation occurs at column h (refer to Figure 2b), which was unloaded, with deformation increasing towards this area. Although column m (see Figure 2b) was also unloaded, the steel frame structure above it is near other vertical supports, including loaded temporary lattice columns, concrete-filled steel tube columns, and the main reinforced concrete structure. Consequently, minimal deformation is observed at column m.
The steel frame structure investigated in this study is a cantilever steel frame structure for roof coverings. According to the standard for design of steel structures [37], the upper limit of allowable deformation of such a structure is a prescribed fraction of the cantilever span l of the whole steel frame structure, as indicated by the red dashed line in Figure 20. In this case, the resulting upper limit of allowable deformation is 280 mm. Therefore, the observed deformation is within the allowable range.
6. Discussion
At present, the use of 3D scanning technology for monitoring structural deformation is still developing. This study successfully extracts beam bottom flanges from a steel frame and measures the deformation of the structure. The proposed method can be adapted for similar steel frame structures with minor adjustments to the parameters summarised in Table 1. In this study, only I-beams are investigated; their bottom flanges and webs are separated by their difference in normal orientation. Aside from I-beams, C-beams, L-beams, T-beams, box-beams, and round beams are commonly used to build steel frame structures. If the cross-section of the beams has orthogonal sides, these sides can be separated by their difference in normal orientations. Therefore, the proposed method is also applicable to steel frame structures consisting of I-beams, C-beams, L-beams, T-beams, and box-beams, but not to those consisting of round beams. It is likely not suitable for other structures with different geometric patterns, as noted in the previous research discussed in Section 1. Future improvements should focus on broadening the applicability of the proposed method to other civil structures.
While our method eliminates the need for manual extraction of beam bottom flanges, there are areas for improvement. Specifically, the RANSAC plane-fitting and clustering process still relies on human observation to determine the correct plane and clusters, requiring manual verification of results. Future work should focus on developing a fully automatic algorithm capable of recognising data points of interest based on unique geometric features.
The threshold values are decided empirically; this requires a manual assessment of the geometry of the steel frame structure and the characteristics of the acquired point cloud. In the future, automatically adaptive thresholding should be considered.
It is suggested that deep learning point cloud segmentation models [38,39,40,41] be utilised to extract the beam bottom flanges. Deep learning models usually operate in an end-to-end manner, which avoids manual judgement during the process. Moreover, the parameters of deep learning models are determined during the training process, which alleviates the necessity of manual parameter adjustment. However, deep learning models can only be realised after appropriate training data have been prepared. Our method can be used to label the training data for these models.
Overall, the accuracy of the proposed method is similar to that of manual extraction, but the proposed method is far more efficient. Thus, the proposed method accelerates the extraction of beam bottom flange point clouds, which ultimately makes monitoring steel frame deformation using laser scanning more affordable.
7. Conclusions
This study introduces an algorithm-driven method for extracting 3D point clouds of beam bottom flanges from a complex steel frame structure. It uses RANSAC plane-fitting to coarsely extract the level of the steel frame structure and clustering algorithms to remove clutter. The method then innovatively distinguishes between the webs and bottom flanges of beams based on their orientation and vertical position.
Assuming the manually extracted beam bottom flanges as ground truth, the proposed method achieved an accuracy of 0.89. For the same objective of separating beam bottom flanges and beam webs, the proposed method took only about 7% of the time required for manual extraction. The deformation estimated from the point cloud extracted by the proposed method agrees with the site conditions: the maximum deformation of 100 mm is observed at one of the positions where a temporary support was unloaded.
The proposed method has two main limitations. First, it still requires manual judgement during the RANSAC plane-fitting and clustering processes. Second, all three steps, namely RANSAC plane-fitting, clustering, and the separation of beam bottom flanges and webs, require manual adjustment of parameter values.
To address these limitations, future studies should focus on developing fully automatic methods, with automated recognition of the data of interest and automated adjustment of parameter values. End-to-end deep learning point cloud segmentation models can potentially meet these requirements. However, deep learning models can only be utilised after appropriate training data are prepared; our method can be used to label such training data, contributing to the development of more advanced methods.
The proposed method efficiently extracts point clouds from the bottom flange of beams, making the monitoring of steel frame deformation using laser scanning more affordable. Given the common use of steel frame structures, which often require deformation monitoring, the proposed method offers an effective and valuable solution for this frequently encountered task.