Article

Parameter-Free Half-Spaces Based 3D Building Reconstruction Using Ground and Segmented Building Points from Airborne LiDAR Data with 2D Outlines

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4430; https://doi.org/10.3390/rs13214430
Submission received: 29 September 2021 / Revised: 29 October 2021 / Accepted: 2 November 2021 / Published: 3 November 2021
(This article belongs to the Special Issue Remote Sensing Based Building Extraction II)

Abstract

This paper aims to automatically reconstruct 3D building models on a large scale using a new approach based on half-spaces, while making no assumptions about the building layout and keeping the number of input parameters to a minimum. The proposed algorithm is performed in two stages. First, the airborne LiDAR data and buildings’ outlines are preprocessed to generate the buildings’ base models and the corresponding half-spaces. In the second stage, the half-spaces are analysed and used for shaping the final 3D building model using 3D Boolean operations. In the experiments, the proposed algorithm was applied on a large scale, and its performance was inspected at the city level and at the single-building level. Accurate reconstruction of buildings with various layouts was demonstrated, and limitations were identified for large-scale applications. Finally, the proposed algorithm was validated on an ISPRS benchmark dataset, where an RMSE of 1.31 m and a completeness of 98.9% were obtained.

1. Introduction

As a part of the digitalisation of cities, 3D building models are essential and provide more accurate spatial and environmental analyses for various applications [1,2,3]. The availability of building models on a large scale is usually very limited, as for most buildings the corresponding 3D models do not exist due to their age, or are not in the public domain. Manual modelling of existing buildings is not practical on a large scale, as it is very time consuming and costly. Hence, research is oriented towards solutions that reconstruct buildings automatically [4]. The common approach to automatic building reconstruction is to use point clouds [5], which can be obtained by various remote sensing technologies (e.g., stereo imaging, laser scanning). One of the more widely used technologies for this purpose is LiDAR (Light Detection and Ranging), an active remote sensing technology in which laser light is used to acquire the geometry of the observed surface. It is usually mounted on an aircraft to scan large geographic areas. The quality of the point cloud obtained during such acquisition directly affects the accuracy of the reconstructed building models and other spatial analyses [6]. With the fast development of remote sensing technologies and, consequently, the larger availability of the resulting data, the reconstruction of 3D building models from point clouds has gained popularity in recent years [5,7,8]. In general, building reconstruction algorithms based on point clouds can be data-driven, model-driven, or hybrid-driven [5]. An additional type of algorithm, which employs machine learning for building reconstruction [9,10,11], is also becoming popular, accelerated by the availability of training datasets for this purpose, such as the dataset presented by Wichmann et al. [12].
Data-driven algorithms are bottom-up based, where basic geometric shapes (i.e., planes, cylinders, cones, etc.) are detected in the point cloud first. The final building shape is then obtained by establishing a topological structure through the analysis of the dependence of adjacent basic geometric shapes. Data-driven algorithms can be further divided by the type of faces that are considered. Most algorithms consider flat faces only [13,14,15,16,17,18], while some maintain curved faces as well [19,20]. There are various approaches to the analysis of basic geometric shapes, such as by the adjacency matrix [16], region growing [17], adjacency graph [21], or by the direct processing of planar faces and their neighbours for estimating the common edges [14]. Data-driven algorithms are sensitive to under- or over-segmentation, which can cause problems for the analysis of basic geometric shapes. When the extracted basic geometric shapes are incomplete or noisy, which is a common occurrence for low-quality input data in complex scenes, the reconstructed building model can be of poor quality. Vosselman and Dijkman [22] used building outlines and LiDAR data for building reconstruction, where the building outlines are partitioned into segments, while considering intersection and height jump lines. The segments are later merged back together, until each face corresponds to one segment, to obtain the 3D model. Li et al. [23] presented a new framework based on a TIN (Triangulated Irregular Network) and label maps to create building models automatically from LiDAR data. The TIN-based roof primitive detection supports varying point density, and the label maps are processed by a graph-cut to provide a good representation of the roof faces. Wang et al. [24] developed a new methodology for building reconstruction based on structural and closed constraints. A surface optimisation scheme is adopted to enforce consistency between the polygonal surfaces of the building and the geometric structures. Zhang et al.
[25] perform the reconstruction of building models using unclassified LiDAR data, where the points are classified in the first two steps. In the third, final step, the building models are generated on the basis of the 2D topology of the roof facets and the estimated dominant directions. Shan et al. [26] introduced a framework for building model reconstruction where point cloud segmentation and building reconstruction are described as a minimisation problem of the corresponding energy functions. The initial segmentation is optimised by a global energy function that takes into account the distances of the LiDAR points to the planes, spatial smoothness, and the number of planes. After segmentation, the reconstruction is performed by partitioning the building into volumetric cells, followed by the determination of the building surfaces and their topology. The building models are obtained with a global energy function, which is minimised using the min-cut theorem. Tarsha Kurdi et al. [27] developed a methodology that is performed in two steps. In the first step, 2D building outlines are generated automatically on the basis of a neighbourhood matrix, while detecting the inner roof plane boundaries as well. In the second step, the 3D building models are generated, where, after fitting and refining the roof planes, the roof plane boundaries are transformed to 3D through the analysis of the relationships between neighbouring planes. Later, they [28] improved the building outline modelling by filtering the point cloud on the basis of a Z-coordinate histogram.
A reverse, top-down approach is typical for model-driven algorithms, where the building shapes are estimated by the parametric fitting of shape candidates to the input point cloud. The result of the fitting is a building model with a roof defined by the parameters of the roof-shape candidate that is best-fitted to the point cloud. This type of algorithm is usually faster than data-driven ones, easier to implement, and can be used for datasets with a low point cloud density (1.2 pts/m² [29]). However, the main restriction of such an approach is the limited collection of possible candidates, which does not cover all possible roof shapes. If a building's roof shape differs partly or entirely from every existing candidate in the library, the reconstruction will either fail or generate a model with large errors. Model-driven algorithms differ mainly in the manner in which candidates are fitted to the point cloud. Poullis et al. [30] fitted geometric shapes while considering constraints found in architecture, Huang et al. [31] used statistical analysis, and Henn et al. [29] employed a modified RANSAC algorithm for this purpose.
The components of both approaches are combined in hybrid-driven algorithms in order to reduce the weaknesses of each. There are two main types of such reconstruction [5]. The first divides buildings into smaller parts, based on the edges of the buildings’ outlines, jump edges and roof ridges. Smaller parts can be fitted to the candidate roof shapes with a higher success rate [5]. The building parts are then combined into a whole building by 3D Boolean operations as a part of CSG (Constructive Solid Geometry) [32,33,34]. Kada and Wichmann [34] estimate building shapes with half-spaces, where the model is divided into smaller convex parts, which are then combined. A concave shape can be obtained through a correct division, which is only considered for a limited number of building layouts. The second type of reconstruction is based on the RTG (Roof Topology Graph), which is established over the basic geometric shapes and can contain additional information about the edges. Verma et al. [35] establish an RTG over segmented planar roof faces. They determine the building’s geometry by searching the RTG for subgraphs that exist in the roof candidate database. More advanced RTG approaches were introduced by Xiong et al. [36,37], where it was demonstrated that the topology of any roof can be represented using a graph with minimal cycles in addition to nodes and edges [36]. Later, they improved the algorithm with a graph edit dictionary, which was used to reduce typical errors in the RTG [37].
A new half-spaces based algorithm for building reconstruction from point clouds is introduced in this paper. In contrast to the related algorithms, which divide the buildings’ 2D outlines into smaller parts and then process them while taking only convex shapes into account, the proposed algorithm performs the reconstruction without division, while also considering the concave parts of the building’s roof. Additionally, no assumptions about the building layout are made, which allows the processing of buildings on a large scale. This is achieved in two stages, where the input data is processed first to obtain the definition of each building’s base model and the corresponding half-spaces. The second stage generates the building shape by performing 3D Boolean operations over the analysed half-spaces.
The remainder of the paper is organised as follows. Section 2 describes both stages of the proposed algorithm in detail. Section 3 presents the results over a large geographic area, Section 4 provides the complementary discussion, and Section 5 concludes this work.

2. Methodology

The proposed algorithm for building reconstruction is performed in two stages, as shown in Figure 1. In the first stage, the input data is preprocessed, where the buildings’ 2D outlines and the classified airborne LiDAR point cloud are used to obtain the base building models and the corresponding definitions of half-spaces through segmentation. These serve as the input to the second stage, where the half-spaces are classified and processed by 3D Boolean operations to obtain the final building shape, with both the convex and concave parts of the final shape considered. The model of each building is bounded by floor, exterior wall and roof faces. Only roofs without height jumps are considered; a height jump is an edge of a roof face whose height differs from that of a neighbouring roof face. The following subsections describe both stages in detail.

2.1. Data Preprocessing

The input data to the proposed algorithm consist of 2D building outlines and an airborne LiDAR point cloud. Both types of data are assumed to be georeferenced in the same coordinate system and, therefore, aligned. Only the points of the LiDAR data that are classified as ground or building are considered in this work. In case the input LiDAR point cloud classification is missing or of poor quality, various classification methods are available (for an overview, see [38]). As there are many possible classes of LiDAR points, a method that focuses on ground and building points [39] should be used for this purpose. Only building points located within the 2D building outline are considered for reconstructing the roof of the corresponding building model.

2.1.1. Generating Base Models

The base model of each building is used in the second stage for estimating its final roof shape. It is bounded by the floor face, the top bounding face and the exterior wall faces that are obtained from the 2D building outline. The floor face g is determined by placing the polygon of the corresponding 2D building outline at the height of the lowest LiDAR ground point in the direct proximity of the building. The top bounding face ḡ is determined in the same way as g, except that it is placed above the highest LiDAR building point, so that it does not limit the final building model. Each exterior wall face s_i is defined by the points that lie on faces g and ḡ, as shown in Figure 2. The base model M is given as a watertight model, bounded by the set of faces {g, ḡ, s_1, s_2, …, s_n}, where n is the number of sides of the building outline. An example of a base model generated from a 2D building outline is illustrated in Figure 2.
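The base model construction described above can be sketched in a few lines (a minimal Python sketch, assuming NumPy arrays as input; the function name `base_model` and the top-face margin are illustrative, not from the paper):

```python
import numpy as np

def base_model(outline_xy, ground_z_nearby, building_z, margin=1.0):
    """Base model M bounded by a floor face, a top bounding face and n walls.

    outline_xy      : (n, 2) array of 2D building outline vertices.
    ground_z_nearby : z values of LiDAR ground points near the building.
    building_z     : z values of LiDAR building points inside the outline.
    The top face is lifted by `margin` (an assumed value) above the highest
    building point so it does not limit the final model.
    """
    z_floor = float(np.min(ground_z_nearby))    # lowest nearby ground point
    z_top = float(np.max(building_z)) + margin  # above the highest roof point
    walls = []
    n = len(outline_xy)
    for i in range(n):
        a = outline_xy[i]
        b = outline_xy[(i + 1) % n]
        # each wall face s_i spans between the floor and top bounding faces
        walls.append([(float(a[0]), float(a[1]), z_floor),
                      (float(b[0]), float(b[1]), z_floor),
                      (float(b[0]), float(b[1]), z_top),
                      (float(a[0]), float(a[1]), z_top)])
    return z_floor, z_top, walls
```

Here the walls are returned as 3D quads; a full implementation would assemble them with the two horizontal faces into a watertight mesh.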

2.1.2. Half-Spaces’ Definition

A half-space is the core element for shaping the roof in this work. It is obtained from the LiDAR point cloud, which is first segmented into sets of LiDAR points that each describe an individual planar roof face. For the segmentation of planar faces, various segmentation methods are applicable. Considering the type of input LiDAR point cloud, segmentation methods that take the LiDAR point cloud density into account [40,41,42,43] are more suitable for this purpose. The reason for this is the variable density of the input point cloud, which occurs due to differences in laser scanner technology and in the height of the aerial vehicle that performs the point cloud acquisition. Segmentation that takes variable point cloud density into account should, therefore, be used for the best results. An overview of segmentation methods is out of the scope of this paper; for more information, see [44]. An example segmentation of a LiDAR point cloud that belongs to the same building as the base model from Figure 2 is shown in Figure 3.
The roof of each building will be shaped by a set of half-spaces H = {H_i}. Half-space H_i is defined by the corresponding set of LiDAR points S_i and a plane P_i that is calculated from S_i. H_i is given as x·P_i.a + y·P_i.b + z·P_i.c + P_i.d > 0. The definition of each half-space is obtained from the segmented LiDAR point cloud, where the sets of segmented LiDAR points S_i that describe the individual roof faces are taken. The plane P_i that is best-fitted to the corresponding set of LiDAR points is determined as P_i = [a, b, c, d] = LSqFit(S_i), where P_i.c > 0 and LSqFit is a function that fits a plane to a set of LiDAR points by least-squares fitting [45].
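The LSqFit step can be approximated with a standard least-squares plane fit via SVD (a sketch, assuming NumPy; the helper names are illustrative). The orientation constraint P_i.c > 0 from the text is enforced by flipping the normal:

```python
import numpy as np

def lsq_fit_plane(points):
    """Least-squares plane fit P_i = [a, b, c, d] for a point set S_i.

    Uses SVD: the plane normal is the right singular vector belonging to
    the smallest singular value of the centred point matrix. The normal is
    oriented so that c > 0, as required in the text."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[2] < 0:            # enforce P_i.c > 0
        normal = -normal
    d = -float(normal @ centroid)
    return float(normal[0]), float(normal[1]), float(normal[2]), d

def in_half_space(p, plane):
    """Open half-space H_i: a*x + b*y + c*z + d > 0."""
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d > 0
```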

2.2. Shaping 3D Building Models by 3D Boolean Operations

During this stage, base models are shaped by 3D Boolean operations based on half-spaces to obtain the final 3D building models. The half-spaces of the previous stage are classified as obstructed or unobstructed for further analysis and shaping by slicing the base models.

Classification of Half-Spaces

Each half-space H_i whose corresponding set of LiDAR points S_i is at least partly contained within the base model M is classified as unobstructed or obstructed in this step. A half-space is unobstructed if it does not contain any LiDAR points of other half-spaces:

H_i ∈ H_O, if ∃ p ∈ S_j : p ∈ H_i, j ≠ i; H_i ∈ H_U, otherwise, (1)

where H_O ⊆ H is the set of obstructed half-spaces and H_U ⊆ H the set of unobstructed half-spaces. As only planar roof faces are considered, small variations in the faces’ shapes can be neglected, and in Equation (1) the number of LiDAR points is reduced from S_j to C_j ⊆ S_j. The set C_j only contains the points located on the convex hull of the LiDAR points from S_j projected perpendicularly onto the plane P_j. In case H_i contains a point from another half-space, it is classified as obstructed, which means that H_i is obstructed by a different half-space on a concave part of the roof. As a result, it cannot be used directly for shaping the building roof and needs to be analysed further.
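The classification of Equation (1) can be sketched as follows (an illustrative Python sketch; for brevity it tests all points of S_j rather than the convex-hull subset C_j described above):

```python
def in_half_space(p, plane):
    # open half-space: a*x + b*y + c*z + d > 0
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d > 0

def classify_half_spaces(planes, segments):
    """Split half-space indices into unobstructed (H_U) and obstructed (H_O).

    H_i is obstructed if a LiDAR point of any other segment S_j, j != i,
    lies strictly inside H_i; otherwise it is unobstructed."""
    unobstructed, obstructed = [], []
    for i, plane in enumerate(planes):
        blocked = any(
            in_half_space(p, plane)
            for j, seg in enumerate(segments) if j != i
            for p in seg
        )
        (obstructed if blocked else unobstructed).append(i)
    return unobstructed, obstructed
```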

2.3. Performing 3D Boolean Operations with Half-Spaces

Unobstructed half-spaces describe the convex shape of the building’s roof, as they are not obstructed by any other half-spaces. Therefore, they can be used directly for shaping the building model. This is performed by subtracting all the corresponding open half-spaces from the building’s base model M:

∀ H_i ∈ H_U : M = M \ H_i. (2)
Figure 4 shows an example of shaping the model with unobstructed half-spaces.
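The subtraction of Equation (2) can be illustrated on a point-sampled stand-in for the solid model (a sketch only; the actual algorithm performs the subtraction with 3D Boolean CSG operations on the base model's geometry):

```python
def in_half_space(p, plane):
    # open half-space: a*x + b*y + c*z + d > 0
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d > 0

def subtract_unobstructed(model_samples, planes, unobstructed):
    """M = M \\ H_i for every unobstructed H_i, applied here to sample
    points standing in for the solid model: a sample survives only if it
    lies inside none of the unobstructed half-spaces."""
    return [p for p in model_samples
            if not any(in_half_space(p, planes[i]) for i in unobstructed)]
```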
Each obstructed half-space may not be directly useful for shaping the model. For example, the planes corresponding to a pair of half-spaces might be parallel at different heights, or oriented in such a way that they intersect outside of the building model. Therefore, obstructed half-spaces are processed by determining the smallest slice s_k, if it exists, for each half-space H_k ∈ H_O. A slice s_k is determined by the intersection between H_k and all half-spaces H_l ∈ H_O, k ≠ l, that are visible from H_k. Half-spaces H_k and H_l are visible if a line between a point from C_k and a point from C_l exists that does not intersect any other roof face, and if ∃ p ∈ C_k and ∃ p′ ∈ C_l such that:

(p′ − p) · P_k.n̂ > 0 and (p − p′) · P_l.n̂ > 0, (3)
where P_k.n̂ and P_l.n̂ are the normalised normal vectors of the corresponding planes. A slice is valid if the half-spaces that determine it intersect within the model. In case a slice is not valid, it is processed as empty. The final model M is obtained by subtracting all slices:

∀ H_k ∈ H_O : M = M \ s_k. (4)
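The slice subtraction of Equation (4) can be sketched in the same point-sampled setting (illustrative only; the visibility test of Equation (3) is assumed to have been evaluated beforehand and is passed in as a mapping):

```python
def in_half_space(p, plane):
    # open half-space: a*x + b*y + c*z + d > 0
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d > 0

def subtract_slices(model_samples, planes, obstructed, visible):
    """M = M \\ s_k for each obstructed H_k. The slice s_k is the
    intersection of H_k with every half-space visible from it, so a sample
    point is cut when it lies inside H_k and inside all of them.
    `visible` maps each obstructed index k to the indices l visible
    from H_k; an empty entry means no valid slice exists for H_k."""
    kept = []
    for p in model_samples:
        cut = False
        for k in obstructed:
            vis = visible.get(k, [])
            if vis and in_half_space(p, planes[k]) and all(
                    in_half_space(p, planes[l]) for l in vis):
                cut = True
                break
        if not cut:
            kept.append(p)
    return kept
```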
Figure 5 illustrates shaping the building with obstructed half-spaces for the model from Figure 4b. The final building model is shown in Figure 5e.

3. Results

To analyse the proposed algorithm’s large-scale performance, it was applied to airborne LiDAR data and 2D buildings’ outlines. The input data cover a 6.153 km² geographic area of the city of Maribor, Slovenia (bounding box: 46°33′8.54301″N 15°37′29.23113″E, 46°34′13.15573″N 15°39′52.97941″E). The provided LiDAR data’s average density is 11.3 pts/m², and the considered 14,227,123 building points and 31,420,386 ground points are shown in Figure 6. The buildings’ outlines were obtained from a public spatial database maintained by The Surveying and Mapping Authority of the Republic of Slovenia. There were 5254 buildings’ outlines contained entirely within the bounding box of the area.
First, the building points for each corresponding building’s outline were segmented. For this, a graph-based segmentation [41] that takes the point cloud density and the local curvature of faces into account was selected, as building roofs are not always completely flat. The segmentation establishes the initial topology over the point cloud as an undirected graph using the k-nearest neighbour approach, and is controlled by the local curvature (t_θ) and distance (t_d) thresholds. The building points from the input LiDAR point cloud for each corresponding building’s outline were segmented using the settings given in Table 1. A minimum of 50 points was required for each segment (t_CC = 50), which was set to avoid too-small roof faces. The resulting segmentation of the building points from Figure 6 is shown in Figure 7.
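The initial k-nearest-neighbour topology used by such a segmentation can be sketched as follows (a naive O(n²) illustration, not the implementation of [41]; the curvature- and distance-thresholded growing step is not shown):

```python
import numpy as np

def knn_graph(points, k=8):
    """Undirected k-nearest-neighbour graph over a point cloud, returned
    as a set of index pairs (i, j) with i < j. Distances are computed
    naively for clarity; a k-d tree would be used in practice."""
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)   # a point is not its own neighbour
    edges = set()
    for i in range(len(pts)):
        for j in np.argsort(dist[i])[:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges
```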
The building models were generated next. A building model is deemed geometrically valid if the building’s floor face remains intact, which, as such, does not necessarily imply an accurately reconstructed model. In some cases, especially with height jumps, the obtained building models were invalid. To achieve a higher validity rate, the number of obstructed half-spaces was limited to 10 for cases with height jumps; for buildings exceeding this limit, only unobstructed half-spaces were considered for shaping the model. An example of handling these cases is shown in Figure 8.
The reconstruction produced a geometrically valid model for 4817 of the 5254 building outlines, i.e., in 92% of cases. The resulting buildings’ reconstruction is shown in Figure 9.
In practice, many buildings’ roofs contain height jumps and, as they are not considered by the proposed method, several geometrically invalid models are to be expected.
Next, we examined various cases of buildings from the reconstructed city model with roofs of different types without height jumps, which are shown in Figure 10.
Figure 10a shows a relatively simple case that is described by the eight half-spaces of a single building, of which five are obstructed. The next case, from Figure 10b, shows an example of two terraced buildings, where the same roof faces are shared between the buildings. In such cases, where there are no geometric features available to determine the border between two buildings, it is crucial to have the buildings’ outlines already available. The building shown in Figure 10c has a roof with four ridges at different heights or orientations, and a horizontal part of the roof, which were incorporated into the model correctly from 17 half-spaces. The following case, shown in Figure 10d, has a flat face at the highest point of the building model. The building is described by 17 segments of points, where gaps can be observed in the shapes of some of them. The gaps occur due to small objects located on the roof that are either too small to be segmented or curved. The final case, Figure 10e, illustrates the reconstruction of a building with a complex roof that is described by 30 half-spaces, of which only one (the top horizontal half-space) is unobstructed. At least 14 slices were performed to obtain the correct shape, including slices determined by four half-spaces. The cases presented in Figure 10 demonstrate the ability of the proposed algorithm to reconstruct buildings of various layouts.

Validation

The proposed methodology was validated with the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark dataset, specifically the third area of the data captured over Vaihingen, Germany [46,47]. The building outlines and LiDAR data, where the building points were classified manually and the density is 4 pts/m², were used from the benchmark as input data. The output building models of the proposed algorithm were compared by height difference with the ground truth data from the benchmark, as shown in Figure 11. The ground truth data are presented as building roof faces, given as 3D polylines. The height difference was estimated over a regular grid with a 10 cm resolution. Statistics for the comparison are given in Table 2, where the RMSE (Root Mean Square Error) was estimated over the height differences, completeness specifies the proportion of the building area covered by geometrically valid building models, and e_0.5 designates the proportion of the building area where the height difference is lower than 0.5 m.
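The reported statistics can be computed from a height-difference grid as follows (a minimal sketch of the evaluation metrics; the assumption here is that NaN in the predicted grid marks cells not covered by a valid reconstructed model):

```python
import numpy as np

def height_error_stats(pred_z, truth_z, threshold=0.5):
    """RMSE, completeness and e_0.5 over a regular height grid.

    pred_z and truth_z are same-shaped 2D arrays of heights; NaN in
    pred_z marks grid cells with no valid reconstruction."""
    pred_z = np.asarray(pred_z, dtype=float)
    truth_z = np.asarray(truth_z, dtype=float)
    valid = ~np.isnan(pred_z)
    diff = pred_z[valid] - truth_z[valid]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    completeness = float(valid.mean())            # share of covered cells
    e_05 = float((np.abs(diff) < threshold).mean())
    return rmse, completeness, e_05
```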
The majority of the difference manifested over areas where height jumps are present. Some difference can also be observed over parts of roofs with small elements (e.g., chimneys or small dormers), or over parts of the dataset where the point cloud is too sparse.

4. Discussion

The main challenges of applying the proposed algorithm to large-scale datasets include the presence of building roofs with height jumps and the availability and quality of the input data. Building roofs with height jumps are a common occurrence on a large scale and, as the proposed method does not consider them, they had to be taken into account to obtain as complete a city model as possible. For cases with height jumps, shaping the building model with obstructed half-spaces caused many building models to become invalid, as the subtraction of some slices affected the floor face of the building model. This was particularly apparent for buildings with many small obstructed half-spaces, where a half-space may be oriented towards a part of the roof where only unobstructed half-spaces are present. In such cases, a slice can cut through a part of the building model that would otherwise remain intact. For this reason, the number of obstructed half-spaces was limited, to keep the validity rate over 90%.
The availability of the input data represents an important drawback that should be taken into account. Building outlines could, to an extent, be generated from the LiDAR data [28]; however, some drop in accuracy cannot be avoided. On the other hand, LiDAR data is essential and may not be available at a selected location. In such cases, the point cloud data could be obtained using an aerial or UAV (Unmanned Aerial Vehicle) photogrammetric survey. Even though the proposed algorithm was developed with aerial LiDAR data in mind, it could be applied to any 3D point cloud. Another aspect of the input data to consider is quality, which includes the density and accuracy of the point cloud data; these largely depend on the laser scanner quality. Publicly available LiDAR data is often of low density, which means that smaller roof faces are much harder to detect. The density required for correct reconstruction depends on the minimum size of the roof faces that are to be included in the generated building models. In theory, at least 3 segmented points are needed on a roof face to determine the corresponding plane. However, due to variations in measurement accuracy that affect the plane orientation, and because the shape of the roof face is important for further analysis as well, a higher point count is beneficial. In our experience, the minimal point cloud density, using the selected segmentation algorithm for faces larger than 10 m², was 1.5 pts/m². Building outline datasets are often acquired manually, which means that human error can be present as well. Another option is to use cartographic outline datasets from public databases, as was done in this work. However, it should be noted that existing outline datasets can be misaligned, inconsistent, or out of date. Some buildings might have been demolished, changed or replaced, and the outlines of new buildings could be missing. Any discrepancies in the input data propagate directly into the building models.
The comparison with the Vaihingen dataset has shown that, apart from the acknowledged lack of height jump processing, the proposed algorithm performs satisfactorily. This is confirmed by the large proportion of the building area where the height error was under 0.5 m. When comparing the results with related work [47] in terms of RMSE, the proposed method yields a higher value, which is largely attributed to height jumps, where the error accumulates substantially over large surfaces. In addition, the completeness of nearly 100% was higher than reported in related work, as a result of reliable building reconstruction and the use of building outlines as input; however, this also means that the RMSE of the proposed algorithm accumulates over sparse parts of the dataset. Building outlines provided additional completeness over the sparse parts of the dataset, where related work, in contrast to the presented algorithm, did not generate the building model over the entire building outline due to the lack of data.
Moreover, as shown in Figure 10, a LiDAR point cloud can, in practice, contain relatively large empty spaces between scan lines. It is important to keep this in mind when choosing the segmentation method and the appropriate parameters. Apart from the segmentation of the point cloud, no additional parameters are required for controlling the reconstruction. This simplifies the use of the proposed algorithm significantly, especially as a segmented point cloud could also be provided directly as input to the algorithm.

5. Conclusions

This work presents a novel algorithm, based on half-spaces, for 3D building reconstruction, which processes airborne LiDAR data and buildings’ outlines to generate building models. It is performed in two stages, where the input data is preprocessed first to obtain the buildings’ base models and the corresponding half-spaces. In the final stage, the 3D building models are finalised by shaping their roofs using 3D Boolean operations over the analysed half-spaces.
In the experiments, the presented method has shown promising reconstruction performance for the considered type of buildings. As, in practice, there are many building roofs with height jumps on a large scale, some constraints were required to obtain a more complete 3D city model. For a more accurate reconstruction, height jumps should be considered and incorporated in future work, which could be explored by splitting the buildings’ outlines. Another possible improvement could be the integration of support for curved faces, where special attention should be given to classification and to limiting the reach of a curved face when using 3D Boolean operations to shape the building model. Moreover, as the segmentation performance greatly affects the algorithm’s output, the impact of various segmentation algorithms with different parameter settings could be investigated as well.
The proposed algorithm’s large-scale applicability is highly beneficial for urban simulations, or as a component of various urban analytical processes, especially those that consider environmental impact, which is a growing global concern. Apart from the segmentation part, for which any appropriate segmentation algorithm can be used, the proposed algorithm is parameter-free, which greatly simplifies its use and enhances its adoption potential.

Author Contributions

Conceptualization, M.B.; methodology, M.B. and N.L.; software, M.B. and N.L.; investigation, M.B.; formal analysis, M.B., N.L. and B.Ž.; data curation, M.B.; writing—original draft preparation, M.B.; writing—review and editing, M.B., N.L. and B.Ž.; visualization, M.B. and N.L.; supervision, N.L.; project administration, B.Ž.; funding acquisition, B.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the Slovenian Research Agency (Research Funding No. P2-0041 and Research Project No. L7-2633).

Acknowledgments

Thanks to the Slovenian Environment Agency and the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) for providing LiDAR data. Moreover, the authors thank The Surveying and Mapping Authority of the Republic of Slovenia for buildings’ data.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Workflow of the proposed algorithm.
Figure 2. For a building outline (a), a base model is generated (b), where the visible faces and points are marked.
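As Figure 2 suggests, the base model is a prism obtained by extruding the 2D building outline between the ground height and the highest building point. A hedged sketch of such an extrusion follows; the polygon-soup representation and the function name are our own, not the paper's:

```python
def base_model(outline, z_ground, z_top):
    """Extrude a 2D building outline (list of (x, y) vertices in order)
    into a prism between z_ground and z_top. Returns the vertex list and
    the faces: bottom polygon, top polygon, and one wall quad per edge.
    Illustrative sketch only."""
    n = len(outline)
    bottom = [(x, y, z_ground) for x, y in outline]
    top = [(x, y, z_top) for x, y in outline]
    verts = bottom + top
    faces = [list(range(n)),            # bottom face
             list(range(n, 2 * n))]     # top face
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])   # wall quad for edge (i, j)
    return verts, faces

verts, faces = base_model([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)], 0.0, 3.0)
```

For a rectangular outline this yields 8 vertices and 6 faces, i.e., a box-shaped base model ready to be carved by half-spaces.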
Figure 3. Segmentation of a LiDAR point cloud, where the points on top (a) are coloured by height, and the points on the bottom (b) by the roof face to which they correspond. The point cloud belongs to the same building as the base model in Figure 2.
Figure 4. Illustration of shaping the building model with unobstructed half-spaces: (a) shows the unobstructed half-spaces, which are then subtracted (b) from the base model of Figure 2b.
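The subtraction in Figure 4 can be understood through point-membership classification: a point belongs to the shaped model if it lies inside the base prism and below every roof half-space. The sketch below illustrates this with an axis-aligned prism; the actual algorithm performs 3D Boolean operations on meshes, so this is a conceptual stand-in, not the paper's method:

```python
import numpy as np

def inside_model(p, lo, hi, roof_planes):
    """True if p is inside the base prism (axis-aligned box [lo, hi])
    minus all roof half-spaces {p : n.p + d > 0}. CSG point-membership
    classification, used here only to illustrate the subtraction."""
    in_box = bool(np.all(p >= lo) and np.all(p <= hi))
    below_roofs = all(np.dot(n, p) + d <= 0 for n, d in roof_planes)
    return in_box and below_roofs

# Unit-box base model; one pitched roof plane z = x + 0.2.
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
roof = [(np.array([-1.0, 0.0, 1.0]), -0.2)]   # points above the plane are removed
```

A point well under the roof plane is kept, while a point inside the original prism but above the plane is cut away, exactly the effect of subtracting the unobstructed half-space.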
Figure 5. Illustration of shaping the model with obstructed half-spaces, where (a) shows the input model from the previous step, and (b) illustrates a slice that will be subtracted from the model. The slice is determined by three obstructed half-spaces that determine the smallest possible slice for one of the half-spaces. The result of the subtraction of the first slice is shown in (c). This is followed by the next slice (d), after which the final building model is obtained (e).
Figure 6. Input LiDAR point cloud.
Figure 7. Segmented building points from Figure 6.
Figure 8. Illustration of shaping a large building with height jumps and too many obstructed half-spaces: from the input point cloud (a), segments of points (b) were obtained, and only unobstructed half-spaces were considered for the reconstruction of the building (c).
Figure 9. Reconstructed building models for the input point cloud from Figure 6.
Figure 10. Illustrations of shaping several buildings of various layouts with roofs without height jumps: for each building (left), segments of points (middle) were obtained from the input point cloud and used for the reconstruction (right).
Figure 11. Height difference comparison of the output of the proposed algorithm with the ground truth data of the ISPRS benchmark.
Table 1. Parameters used for segmentation of the point cloud.

Parameter    Value
k            20
t_θ          2.0
t_CC         50
t_d [m]      2
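Table 1's parameters drive the smoothness-constrained segmentation in the spirit of Rabbani et al., where, on our reading, k is the neighbourhood size, t_θ the normal-angle threshold, t_CC the minimum segment size, and t_d a neighbour-distance threshold. A compact sketch of such region growing with precomputed neighbours and normals (the paper's exact parameter semantics may differ):

```python
import numpy as np
from collections import deque

def region_grow(points, normals, neighbors, t_theta_deg=2.0, t_cc=50, t_d=2.0):
    """Smoothness-constrained region growing: grow a segment while a
    neighbour lies within t_d and its normal deviates from the current
    point's normal by less than t_theta_deg; keep only segments with at
    least t_cc points. neighbors[i] holds the indices of the k nearest
    neighbours of point i (precomputed, e.g. with a k-d tree)."""
    cos_t = np.cos(np.radians(t_theta_deg))
    labels = np.full(len(points), -1)
    seg = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = seg
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                if (labels[j] == -1
                        and np.linalg.norm(points[i] - points[j]) <= t_d
                        and abs(np.dot(normals[i], normals[j])) >= cos_t):
                    labels[j] = seg
                    members.append(j)
                    queue.append(j)
        if len(members) < t_cc:
            labels[np.asarray(members)] = -2   # discard undersized segments
        else:
            seg += 1
    return labels

# Toy input: four collinear points on a horizontal plane, chained neighbours.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
nrm = np.tile([0.0, 0.0, 1.0], (4, 1))
nbrs = [[1], [0, 2], [1, 3], [2]]
labels = region_grow(pts, nrm, nbrs, t_cc=2)
```

With identical normals all four points merge into a single segment; raising t_cc above the segment size would discard it instead.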
Table 2. Validation of the obtained results from the comparison with ISPRS benchmark data.

Metric              Value
RMSE [m]            1.31
Completeness [%]    98.9
e0.5 [%]            79.4
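The metrics in Table 2 can be reproduced conceptually from rasterised height grids: RMSE over cells where both the model and the ground truth provide a height, and completeness as the fraction of ground-truth cells covered by the model. The sketch below mirrors that spirit, not the ISPRS benchmark's exact evaluation protocol:

```python
import numpy as np

def validate(dsm_model, dsm_truth):
    """RMSE [m] and completeness [%] between a rasterised model height
    grid and a ground-truth grid; NaN marks cells without a height."""
    truth = ~np.isnan(dsm_truth)
    both = truth & ~np.isnan(dsm_model)
    diff = dsm_model[both] - dsm_truth[both]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    completeness = 100.0 * np.count_nonzero(both) / np.count_nonzero(truth)
    return rmse, completeness

# Toy 2x2 grids: one cell is missing from the model, one differs by 1 m.
model = np.array([[1.0, 2.0], [np.nan, 4.0]])
truth = np.array([[1.0, 2.0], [3.0, 5.0]])
rmse, comp = validate(model, truth)
```

On the toy grids above, three of four ground-truth cells are covered (75% completeness) and the RMSE over those cells is sqrt(1/3) m.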
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
