Article

Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

1
School of Information Engineering, Chang’an University, Xi’an 710064, China
2
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
3
Xi’an Research Institute of Surveying and Mapping, Xi’an 710054, China
4
Shandong Provincial Institute of Land Surveying and Mapping, Jinan 250013, China
5
China Railway SIYUAN Survey & Design Group Co. Ltd., Wuhan 430063, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2016, 5(12), 231; https://doi.org/10.3390/ijgi5120231
Submission received: 31 July 2016 / Revised: 24 November 2016 / Accepted: 26 November 2016 / Published: 5 December 2016
(This article belongs to the Special Issue Mathematical Morphology in Geoinformatics)

Abstract

Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometric modeling of the urban environment, with many applications such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, and urban mission planning. This paper proposes a building façade piece extraction and simplification algorithm based on morphological filtering of point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm that uses high-accuracy orientation parameters from the position and orientation system (POS) of the MLS to convert large volumes of point cloud data into a raster image. Second, it proposes a feature extraction approach based on morphological filtering of the point cloud projection that obtains building façade features in image space. Third, it designs an inverse transformation of the point cloud projection to convert building façade features from image space to 3D space. A façade plane detection algorithm constrained by the building façade features is implemented to reconstruct façade pieces for street view services. Building façade extraction experiments with large volumes of MLS point cloud data show that the proposed approach is suitable for extracting various types of building façades. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction, and 0.55 m in the vertical direction, which is at the same level as the spatial resolution (0.5 m) of the point cloud.

1. Introduction

3D scanning is gaining popularity as a novel surveying and mapping technique and is gradually becoming the main means of obtaining and updating spatial geographic data [1,2]. Mobile laser scanning (MLS) systems are used to collect high-density point clouds in urban areas to obtain roadside data. MLS systems use adequate scanning angles for measuring vertical features along the road, such as trees, buildings, and poles. This approach is fast, highly precise, and low-cost, and the data it collects along city roads are highly current, which has made MLS a useful 3D data source for collecting and updating city geoinformation [3,4,5]. The data sampling method of land-based vehicle-borne 3D laser scanning systems is well suited to collecting city geographic information; thus, these systems have rapidly become the primary means of city information collection and have been extensively applied in 3D city reconstruction, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. [5,6]. MLS point clouds, however, are massive, and the individual points are independent, carrying no structure or form information. As such, point cloud data cannot be directly used for 3D modeling. The building façade structure is crucial for 3D building modeling, automatic texture mapping, information annotation, and street view services. The fast and accurate extraction of building façade pieces is challenging for 3D reconstruction from point cloud data because of varying point densities, different viewing positions, scan geometry, and occluding objects [7,8].
In May 2007, Google first launched Google Street View (GSV) with 360° panoramic images [9], which provides a street view put together by collecting pictures of all buildings and roadsides while driving along every street in the world. A simplified 3D mesh is generated from mobile laser scanner data to model the facades with both the point cloud and panoramic images. Then, building facade information (Figure 1) is obtained as background space information, whereas the continuous panoramic image data of the street is used for display. GSV can generate spatial 3D effects by detecting the background space data based on the mouse moving position of the user and the changing direction, angle, and shape information of the mouse detecting plane.
GSV provides online street view services to public users worldwide. One can navigate forward and backward, upward and downward, and zoom in and out, as shown in Figure 1 (Internet application of GSV). Once the user clicks a facade piece, the interface jumps to face the nearest image view of that facade piece. GSV data can be traced back to the street view collection vehicle with panoramic imaging and 3D laser scanning. In addition, points of interest (POIs) can be tagged on the street view image supported by the façade pieces. The street view collection vehicle captures tremendous amounts of LIDAR data that cannot be released on the Internet. Extracting building facade pieces from 3D LIDAR data is the foundation for releasing measurable, positionable panoramic images that support view jumping on the Internet [9,10,11].
Effectively extracting building facade structures from enormous point cloud data for 3D digital city automatic modeling and street view Internet application is difficult [9,10,11]. Many researchers worldwide have extensively studied this problem. For airborne 3D LIDAR data, original point cloud data are first classified, and building boundaries are subsequently confirmed based on the top surface data of the building. A popular issue in the study of LIDAR point cloud processing is the extraction of building roofs, boundaries and footprints [12,13,14,15]. The detection of building outlines from airborne laser scanning (ALS) or aerial imagery is hampered by the fact that the building outlines in these datasets are defined by the roof extension and not by the building walls as in cadastral maps [16].
3D point cloud data from MLS provide much more detailed information about building facades, which is helpful for façade extraction and 3D building modeling. On the other hand, large data volumes, redundancy, and occlusion limit the efficiency and degree of automation of object extraction from MLS point clouds. Point cloud data are difficult to separate into ground and non-ground data on the basis of elevation features for extracting the boundaries of non-ground buildings [17,18,19]. Wang Jan et al. proposed a method for extracting building height from the building top [20]. Huang Lei et al. proposed a "horizontal point reference frame" feature extraction algorithm based on horizontal point information in scanning data, addressing problems such as rejected noise points, immense data volume, and slow processing [21]. However, the prerequisites of this approach are complex, and its requirements on the original data are high. Li performed threshold segmentation of point clouds using dense projection to extract the building geometry boundary [22]. This method is suitable for independent buildings but not for the abundant buildings beside the road scanned by vehicle-borne LIDAR, because they are affected by multiple targets. Lu et al. proposed a building gridding extraction method based on the scanning dense projections of vehicle-borne point cloud data, which contain dense information, and on the feature differences of various city objects. In this method, a 3D dispersed point cloud was converted into a feature image through point cloud projection to extract 3D features [23]. However, this point cloud projection is mainly a top-view projection, which does not take full advantage of the side-view scanning features of vehicle-borne LIDAR; thus, 3D features cannot be effectively extracted. Both Yang et al. [24] and Martin et al. [19] presented novel methods for the automated footprint extraction of building facades from mobile LIDAR point clouds. These approaches mainly focus on the footprints of building façades. Aijazi et al. [14] presented a novel method that automatically detects different window shapes in 3D LIDAR point clouds obtained from MLS in the urban environment. Window-level building façades have a larger data volume, which affects the efficiency of street view services.
Mathematical morphology can quantitatively describe content in terms of shape and size for geometric object extraction, such as roads, curbs, building façades, etc. [25,26,27,28]. Hernández and Marcotegui [27] presented a morphological segmentation of building façade images based on texture information. Rodríguez-Cuenca et al. [28] proposed a robust approach to segment facades from 3D MLS point cloud projection images based on a segmentation algorithm that uses morphological operations to determine the location of street boundaries for extracting straight and curved roads from MLS datasets. Serna et al. [29,30] proposed a robust approach to segment facades from 3D MLS point clouds using morphological attribute-based operators. The processing is implemented on the elevation image for visualization and evaluation purposes. Morphological processing can be an effective approach for point cloud segmentation, extraction, and classification [27,28,29,30,31,32,33,34].
This paper aims to develop a methodology for identifying and extracting building façade pieces from MLS data for street view and 3D modeling applications. A point cloud projection algorithm is presented that uses high-accuracy orientation parameters from the position and orientation system (POS) of the MLS to convert large volumes of point cloud data into a raster image. A feature extraction and simplification approach based on morphological filtering of point cloud projection images is then presented, which obtains the building facade features in image space. Thereafter, an inverse transformation of the point cloud projection is designed to convert the building facade features from image space to 3D space. A facade plane detection algorithm constrained by the building facade features is implemented to reconstruct precise building façade pieces. Experimental results show that various kinds of building facades can be rapidly and effectively extracted by the method, and the geometric precision is at the same level as the spatial resolution of the MLS point cloud.

2. Methodology

2.1. Technical Framework of the Proposed Approach

MLS mainly obtains point cloud data with facade information from the side view. Topological relationships exist among the high-precision POS data, city roads, and the buildings along the road, so the point cloud data of buildings can serve as the source data for city facade extraction. A building facade extraction approach is proposed based on morphological filtering of the projection image of the vehicle-borne LIDAR point cloud. Vehicle-borne LIDAR point cloud data and the corresponding high-precision POS data were used to convert the 3D LIDAR point cloud into raster images. Morphological filtering was then applied to the point cloud raster image for building facade extraction. Based on the transformation relationship between the point cloud raster image and 3D space, the building facade features in image space were transformed back into the 3D point cloud space. Thereafter, the 3D building facade structure was reconstructed with different heights by constraining the facade features and detecting planes in the 3D point cloud space. The flow of the algorithm is shown in Figure 2.

2.2. Point Cloud Projection Based on POS

The amount of vehicle-borne 3D LIDAR point cloud data is enormous, and directly extracting features in the 3D point cloud space would require complex 3D spatial computation. Therefore, in view of the requirements of feature analysis, the 3D LIDAR point cloud can be projected into 2D images [35,36], and image feature analysis algorithms can be used for feature extraction from the LIDAR point cloud. The vehicle-borne LIDAR scanning system possesses high-precision POS data, and the building facades beside the road are mainly aligned with the road direction. The POS data represent the trajectory along which the vehicle-borne system sampled. Therefore, the trajectory data obtained from the POS can serve as the reference, and the LIDAR point cloud data can be projected onto projection planes parallel to the road or heading direction to generate point cloud projection images of the sides of the road.
The point cloud facade projection principles based on POS are shown in Figure 3, where points A, B, and C are the three points of POS trajectory. In projecting the point cloud data around the AB segment, point A is selected as the origin of the coordinate, straight line AB as axis X, and the vertical upward direction as axis Z. Axis Y can be determined by right-hand principle, and plane XZ is the projection image plane. Similarly, the BC segment projection plane can be constructed by using the preceding rules.
In Figure 3, points $p_1, p_2, p_3, p_4$ are four points in the LIDAR point cloud data, and $L_1, L_2, L_3, L_4$ are the normal vectors from the four points to the projection plane. The vehicle-borne 3D LIDAR point cloud facade projection algorithm is as follows:
Step 1: Calculate the normal vector $L_i$ from every LIDAR point to the nearest projection plane.
Step 2: Determine whether each LIDAR point belongs to the left or right facade. When the direction of $L_i \times \vec{AB}$ is upward, the point belongs to the left side of the road; when it is downward, the point belongs to the right side.
Step 3: Projection calculation. Take the foot of the perpendicular from each LIDAR point $p_i$ onto the nearest projection plane as its corresponding image point. The grey value of the LIDAR point is calculated by Equation (1):

$$f(x_i, y_i) = \frac{|L_i|}{|L|_{\max}} \times 255 \quad (1)$$

where $|L_i|$ is the length of the normal vector $L_i$, and $|L|_{\max}$ is the maximum normal-vector length over all points corresponding to the POS line.
Step 4: Divide the projection plane into grids based on the spatial resolution of the point cloud. If more than one point projects into a grid cell, use the maximum grey value as the grey value of that cell.
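As an illustrative sketch of Steps 1–4 (the function name is ours, and a locally horizontal trajectory segment is assumed so that the vertical axis is perpendicular to AB), the projection and rasterization might look like:

```python
import numpy as np

def facade_projection(points, A, B, grid=0.5):
    """Project LIDAR points onto the vertical plane through POS segment AB
    and rasterize into a grey-value image per Equation (1)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    x_axis = (B - A) / np.linalg.norm(B - A)   # along the trajectory (axis X)
    z_axis = np.array([0.0, 0.0, 1.0])         # vertical up (axis Z)
    y_axis = np.cross(z_axis, x_axis)          # plane normal by right-hand rule

    rel = np.asarray(points, float) - A
    u = rel @ x_axis                           # in-plane column coordinate
    v = rel @ z_axis                           # in-plane row coordinate
    dist = np.abs(rel @ y_axis)                # |L_i|, distance to the plane

    grey = dist / dist.max() * 255             # Equation (1)

    cols = np.floor(u / grid).astype(int)
    rows = np.floor(v / grid).astype(int)
    img = np.zeros((rows.max() + 1, cols.max() + 1))
    for r, c, g in zip(rows, cols, grey):
        if r >= 0 and c >= 0:
            img[r, c] = max(img[r, c], g)      # keep max grey per cell (Step 4)
    return img
```

A real implementation would additionally split points between the left and right facades by the sign of the cross product in Step 2; the sketch rasterizes a single side.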
Figure 4 shows the facade projection images of both sides of the road. These images are obtained from the vehicle-borne LIDAR point cloud data of a street block based on POS data. Figure 4a is the top view projection image of the street block, and Figure 4b is the facade projection result.
When the turning angle between two neighboring projection planes in the POS trajectory is not large, the two projection images can be mosaicked into one image to avoid splitting a building at the corner across two images.

2.3. Façade Extraction from Point Cloud Projection Image after Morphological Filtering

Building façade features are constructed from outlines and structures. Morphological image-processing algorithms can be utilized in point cloud projection images. By using morphological open and close computing, basic shapes of building boundaries in projection images may be retained, and irrelevant structures may then be deleted. The following features may be used to describe the façade point cloud projection images of buildings:
Relationship between each pixel's gray value and the distance of the LIDAR point to the projection plane: the larger the gray value, the farther the point is from the projection plane.
As the chosen projection planes cover a group of continuous façades of different buildings, an abrupt change may appear at the connection position of two façades. This manifests in the images as missing pixel information in one or more columns.
The algorithm flow of façade feature extraction with point cloud projection images based on morphological filtering is shown in Figure 5.
The concrete algorithm flow is as follows:
Step 1: Noise filtering of the projection image. In the point cloud projection image, the pixel gray value represents the projection distance. Since the gray-value distribution is known, the average value $\bar{f}$ and the standard deviation $\sigma$ can be calculated. Pixels whose difference from the average value exceeds three times the standard deviation are treated as noise, as in Equation (2):

$$f(x, y) = 255 \quad \text{if} \quad |f(x, y) - \bar{f}| > 3\sigma \quad (2)$$

where $f(x, y)$ is the gray value at image position $(x, y)$, $\bar{f}$ is the average gray value, $N$ is the number of points, and the standard deviation $\sigma$ is calculated as in Equation (3):

$$\sigma = \sqrt{\sum_{x=0}^{Row-1} \sum_{y=0}^{Col-1} \left(f(x, y) - \bar{f}\right)^{2} / (N - 1)} \quad (3)$$
where Row is the row number of projection image and Col is the column number of projection image.
Step 2: Binarization of the projection image. Set all pixels whose gray value is not equal to 0 to 255 to binarize the image. Morphological filtering of a binary image is faster than of the original gray image, and the binary image simplifies the subsequent processing. Figure 6 and Figure 7 show the original point cloud projection image and the image after binarization segmentation, respectively.
Figure 6 and Figure 7 illustrate that the features of the building façade become apparent and beneficial for further façade extraction through image processing after transforming the 3D point cloud into a projection image.
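Steps 1 and 2 can be sketched as follows; restricting the statistics to occupied pixels and dropping flagged pixels before binarization are our interpretation of Equations (2) and (3), not necessarily the authors' exact convention:

```python
import numpy as np

def filter_and_binarize(img):
    """Steps 1-2 sketch: flag occupied pixels whose grey value deviates from
    the mean by more than three standard deviations (Equations (2)-(3)),
    drop them as noise, then binarize the remaining pixels to 255."""
    img = np.asarray(img, float)
    occupied = img > 0                      # statistics over occupied pixels (an assumption)
    vals = img[occupied]
    f_bar = vals.mean()
    sigma = vals.std(ddof=1)                # (N - 1) divisor, as in Equation (3)
    noisy = occupied & (np.abs(img - f_bar) > 3 * sigma)
    return np.where(occupied & ~noisy, 255, 0).astype(np.uint8)
```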
Step 3: Morphological filtering. According to the projection image traits, a 5 × 5 all-ones rectangular structuring element $b$ is used to perform dilation twice, followed by erosion twice. The operation $f \circ b$ is defined as:

$$f \circ b = (((f \oplus b) \oplus b) \ominus b) \ominus b \quad (4)$$

where $\oplus$ denotes dilation and $\ominus$ denotes erosion.
The purpose of using a large structuring element and performing the dilation first is to compensate for the gray values lost at façade connection positions. Figure 8 illustrates the effective compensation of the missing gray positions.
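A minimal pure-NumPy sketch of Equation (4) (the helper names are hypothetical; a library such as SciPy or OpenCV would normally provide these operators):

```python
import numpy as np

def dilate(f, k=5):
    """Binary dilation with a k x k all-ones structuring element."""
    p = k // 2
    pad = np.pad(f, p)
    out = np.zeros_like(f)
    rows, cols = f.shape
    for dr in range(k):
        for dc in range(k):
            out |= pad[dr:dr + rows, dc:dc + cols]   # union over all shifts
    return out

def erode(f, k=5):
    """Binary erosion via the complement duality (border treated as foreground)."""
    return ~dilate(~f, k)

def morph_filter(binary):
    """Equation (4) sketch: dilate twice, then erode twice, with a 5 x 5
    element, bridging the missing columns at facade connection positions."""
    f = np.asarray(binary).astype(bool)
    for _ in range(2):
        f = dilate(f)
    for _ in range(2):
        f = erode(f)
    return f    # boolean mask of the closed facade region
```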
Step 4: Boundary line extraction. Use the Laplacian operator $\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$ to extract the boundary; the results are shown in Figure 9.
Step 5: Segmentation line calculation. After the dilation and erosion calculations of point cloud projection image, buildings that were not connected originally may be joined as a whole. Therefore, building segmentation lines must be extracted. Extracted building boundaries can then be segmented into several sub-polygons that can better represent real buildings.
Calculate the sum of the absolute values of the discrete gradient approximations, $t(n)$, for each pixel column $n$ sequentially, as in Equation (5):

$$t(n) = \sum_{i=1}^{H-1} \left| 2f(n, i) - f(n-1, i) - f(n+1, i) \right|, \quad 0 < n < W \quad (5)$$

where $H$ is the height of the projection image and $W$ is its width.
Because of projection deformation and the scanning resolution, building boundaries that should be vertical in space may be projected onto several neighboring columns rather than a single column. A window of $L$ columns is therefore used when counting the gradient of Equation (5), where $L$ is calculated as in Equation (6):

$$L = \frac{dF}{dGrid} \quad (6)$$

where $dF$ is the projection deformation (we take the spatial resolution as the maximum projection deformation, which was less than 0.5 m in our experiments) and $dGrid$ is the grid size of the point cloud projection.
In each window of $L$ columns, the largest gradient $t_{\max}^{l}$ is taken as the possible position of a segmentation line:

$$t_{\max}^{l} = \max\{t(j-L), \ldots, t(j+L)\}, \quad j-L < l < j+L, \; L < j < W-L \quad (7)$$

where $l$ represents the column with the largest gradient in the window $[j-L, j+L]$. Then $t(n)$ is reset as in Equation (8):

$$t(n) = \begin{cases} \sum_{j=n-L}^{n+L} t(j) & \text{if } n = l \\ 0 & \text{if } n \neq l \end{cases} \quad (8)$$

where $t(n)$ is the reset gradient within the projection deformation window $L$.
Building width is generally larger than 5 m. Therefore, a new window of $S$ columns is used to determine the segmentation lines, as in Equation (9):

$$S = \frac{hW}{dGrid} \quad (9)$$

where $hW$ is the building width, taken as 5 m in this paper. The maximum gradient $t_{\max}^{s}$ in window $S$ is then considered the possible position of a segmentation line:

$$t_{\max}^{s} = \max\{t(j-S), \ldots, t(j+S)\}, \quad j-S < s < j+S, \; S < j < W-S \quad (10)$$
where $s$ represents the possible position of the segmentation line. All other candidates in the window are suppressed, as in Equation (11):

$$t(n) = 0 \quad \text{if } t(n) \neq t_{\max}^{s}, \; n \in (s-S, s+S) \quad (11)$$
According to house structure traits, the vertical borders of chimneys and windows near the left and right boundaries may also be taken as segmentation lines. Therefore, the existence of a second local maximum (noted as column $n'$) inside the $(s-S, s+S)$ interval must be investigated. If it exists, this intrinsic maximum in the interval is taken as a non-segmentation line, as in Equation (12):

$$t(n') = 0 \quad \text{if } t(n') < t_{\max}^{s}, \; n' \in (s-S, s+S) \quad (12)$$
Then, take each column with $t(n) \neq 0$ as a segmentation line, as shown in Figure 10.
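A simplified sketch of the segmentation-line search in Step 5 (the window handling condenses Equations (5)–(12) into local-maximum suppression within L followed by a minimum spacing of S; all names and defaults are ours):

```python
import numpy as np

def column_gradients(img):
    """Equation (5): per-column sum of absolute horizontal second differences."""
    H, W = img.shape
    t = np.zeros(W)
    for n in range(1, W - 1):
        t[n] = np.abs(2 * img[:, n] - img[:, n - 1] - img[:, n + 1]).sum()
    return t

def segmentation_lines(img, dF=0.5, dGrid=0.5, hW=5.0):
    """Keep one gradient peak per deformation window L (Equations (6)-(8)),
    then enforce a minimum building width S between lines (Equations (9)-(11))."""
    t = column_gradients(np.asarray(img, float))
    L = max(int(dF / dGrid), 1)                  # Equation (6)
    S = max(int(hW / dGrid), 1)                  # Equation (9)
    kept = np.zeros_like(t)
    for n in range(L, len(t) - L):
        win = t[n - L:n + L + 1]
        if t[n] > 0 and t[n] == win.max():
            kept[n] = win.sum()                  # accumulate window mass, Equation (8)
    lines = []
    for n in np.argsort(kept)[::-1]:             # strongest candidates first
        if kept[n] > 0 and all(abs(n - m) >= S for m in lines):
            lines.append(int(n))
    return sorted(lines)
```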
Step 6: Sub-polygon segmentation. According to the segmentation lines from Step 5, divide the extracted boundary graph in Step 3 into independent sub-polygons of the buildings, as shown in Figure 10.
Step 7: Simplify the sub-polygons. The building façades in 3D space can be obtained through inverse projection of the sub-polygons from Step 6. In this way, the shapes of the facades closely match the real buildings. However, these facades are relatively complex, which results in a tremendous computing load. In addition, complex facades reduce the efficiency of the spatial queries performed on façade pieces when online end users browse the street view. To simplify each polygon, we stretch its two segmentation lines to the top height of the polygon and then connect the stretched segmentation lines, converting the irregular polygon into a rectangle. The simplification results are shown in Figure 11.
As shown in Figure 12, all polygons have been simplified into rectangles. The amount of data is reduced while plane-detection efficiency is enhanced. Therefore, for each building boundary, only three values need to be recorded: the left and right corner positions and the rectangle height.
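Under the rectangle simplification above, a sub-polygon reduces to three values; a minimal sketch (vertex coordinates in the projection plane, function name hypothetical):

```python
def simplify_to_rectangle(polygon):
    """Step 7 sketch: collapse an irregular facade sub-polygon to the
    rectangle spanned by its two segmentation lines and its top height,
    so only the left/right corner positions and the height are stored."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return min(xs), max(xs), max(ys)   # left corner, right corner, height
```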

2.4. Building Façade Pieces Reconstruction in 3D Point Cloud Space

The 2D building façade features determined in the projection space can be used as constraints to detect and reconstruct planes in the 3D point cloud space. 3D building façade structures can be precisely reconstructed on this basis through the following steps:
Step 1: Bounding rectangle calculation of the building façade. The bounding rectangle simplified from the 2D building sub-polygons extracted in the point cloud projection image space is used as a constraint. The inverse projection transformation is conducted according to the point cloud projection parameters to obtain the bounding rectangle of the building façade object corresponding to the plane-feature rectangle.
In Figure 13, the green rectangle is the bounding rectangle obtained by applying the inverse projection transformation to the feature rectangle in the projection plane. The blue points are the point cloud data inside the bounding rectangle, which are used to fit the space plane in this district.
Step 2: The point cloud data within the building object's bounding rectangle are extracted as the calculation data for space plane fitting.
Step 3: Use an iterative robust plane fitting method to delete outliers in the point cloud data and obtain stable fitting results.
Define the plane of the building façade as in Equation (13) [26]:

$$ax + by + cz = d \quad (13)$$

where $a$, $b$, $c$, and $d$ are the plane parameters, and $P(x_i, y_i, z_i)$ is the coordinate of any point in the cloud. Thus,

$$(ax_i + by_i + cz_i - d)^2 = d_i^2 \quad (14)$$
The best-fitting plane is resolved based on the least-squares principle. Each candidate plane with parameters $a$, $b$, $c$ satisfying $a^2 + b^2 + c^2 = 1$ is used to calculate the residuals $d_i^2$. The parameters with the minimum total residual $\sum d_i^2$ define the best plane, which is where the building façade is located. The least-squares solution is an iterative process; the iteration stops when the difference of $\sum d_i^2$ between two adjacent iterations is less than 0.1 m.
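A sketch of the constrained least-squares fit. The SVD formulation and the 3-sigma outlier trimming are standard substitutes for the iterative robust fit described above, not necessarily the authors' exact implementation:

```python
import numpy as np

def fit_facade_plane(points, tol=0.1, max_iter=20):
    """Step 3 sketch: least-squares plane ax + by + cz = d with the
    constraint a^2 + b^2 + c^2 = 1 (Equations (13)-(14)), iterating with
    outlier rejection until the residual change falls below tol."""
    pts = np.asarray(points, float)
    prev = None
    for _ in range(max_iter):
        centroid = pts.mean(axis=0)
        # the right singular vector of the smallest singular value of the
        # centered cloud is the constrained least-squares plane normal
        _, _, vt = np.linalg.svd(pts - centroid)
        a, b, c = normal = vt[-1]
        d = normal @ centroid
        res = pts @ normal - d                 # signed distances d_i
        rms = np.sqrt((res ** 2).mean())
        if prev is not None and abs(prev - rms) < tol:
            break                              # residual change below threshold
        prev = rms
        keep = np.abs(res) <= 3 * res.std() + 1e-9   # 3-sigma trimming (an assumption)
        if keep.sum() >= 3:
            pts = pts[keep]
    return (a, b, c), d
```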
The plane constructed with the red boundary in Figure 13 is the fitting result of the building façade being in the position.

3. Experiments and Results

3.1. Datasets

Experimental data were obtained from a vehicle-borne LIDAR 3D scanning system supported by the Wuhan University 985 platform. This MLS comprises three scanners (Sick® LMS 511, Düsseldorf, Germany), one panorama camera (Ladybug® 3, Richmond, BC, Canada), two binocular stereo vision CCD cameras, and a POS device (Leador® LDA03, Wuhan, China), as shown in Figure 14a.
The experimental district is a 3 km² business street in Wuhan, a typical urban area with buildings of different shapes and heights. The road length is about 1.8 km. Figure 14b shows all the point cloud data in this district. The dataset includes not only city buildings and infrastructure, but also non-fixed mobile objects, such as moving cars and pedestrians. The average point spacing of the LIDAR data is approximately 0.5 m within a distance of 30 m.

3.2. Result Analysis

The building façades beside the road were extracted using the proposed method. Figure 15 and Figure 16 respectively show the 3D building façade extraction results beside the road and the overlay image of extracted building façade and point cloud.
Figure 16 shows the building façade structures extracted in the point cloud projection image space by the proposed method. As shown in Figure 16, the structures overlay strictly with the original point cloud. The façades extracted from the buildings along the whole district road were examined and found to satisfy the requirements of street view browsing with façade pieces.
Figure 17 shows the registration result of a 3D building façade structure and the panoramic image with georeferenced matching, which demonstrates the validity of the method.
The simple 3D model of the building can be rapidly constructed by the proposed approach. According to the point cloud, georeferenced panoramic images and the extracted building 3D façade, the 3D models were inversely calculated in the panoramic image space, and then the texture images were tailored directly from panoramic images. The texture images were then mapped with the building 3D façades as shown in Figure 18.
To analyze the structure extraction precision of the building façades, 15 feature points, including building top and bottom, were evenly selected from the original LIDAR point cloud data. The 3D coordinates and corresponding point coordinates of the building facades, which were automatically extracted by the proposed method, were compared for the precision analysis. The error statistic results are shown in Table 1.
Table 1 shows that the extraction precision of the building facade structures obtained by the proposed method is about 0.7 m in plane, including the point selection error of the feature points on the building facade structure. Considering that the distance between a building and the road is usually more than 30 m, the point cloud resolution is approximately 0.5 m. This accuracy is similar to the precision of interactive manual facade structure feature extraction from the 3D LIDAR point cloud. The façade extraction error is about 2 pixels, both in the point cloud and in the street view image, which meets the requirements of the street view service.

4. Discussion

Building facades have many applications in the 3D modeling of the smart city, and 3D modeling has different levels of detail for different applications. Although the proposed approach is a building facade extraction method, its aim is to use these facade pieces for online street view services. This requires that the building facade pieces match the panoramic images with a good visual effect. Thus, morphological dilation and erosion processing were adopted to transform the complex building facades into facade pieces, which reduces the data volume for online street view services. The complexity of the facades is determined by the morphological processing. The proposed approach targets building structure modeling; further study should address the morphological processing of windows, doors, and platforms to obtain higher accuracy and more detailed information. The structuring elements and the number of morphological processing iterations affect the results of facade extraction in both accuracy and detail [37,38].
Another important issue is the density of the point cloud from MLS. Although morphological processing affects the accuracy and detail of the façade extraction, density is a crucial factor. The proposed building facade extraction approach is intended for street view applications; as such, the position precision of the façade pieces is adequate for scene jumping, POI tagging, and street view browsing. We implemented the approach on MLS point cloud data with a resolution of 0.3 m. The density of the point cloud would need to be improved to the centimeter level for more detailed building structure modeling; however, doing so would reduce the speed of the data processing [39,40].

5. Conclusions

A 3D LIDAR point cloud segmentation and side-view projection method based on high-precision POS data was proposed in this paper. The point cloud projection transformation effectively enhanced the building façade features. On this basis, the side-view projection image features and the building façade traits in the projection image were analyzed. A building façade extraction method based on morphological filtering was proposed, and a precise reconstruction of 3D building façades was ultimately realized through the feature transformation from 2D polygons to 3D polygons in space. This method is feasible for most city buildings, demands less computational workload than full processing of the whole point cloud, and requires only driving along roads and recording data. The 3D façade pieces of buildings are far smaller in volume than the 3D LIDAR point cloud data, so simple 3D models can be rapidly reconstructed. The method can also be employed in the release and application of Internet street view images. Automation of building façade extraction in complex road sections (e.g., crossroads) should be explored further in the future.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Nos. 41271452 and 41271431). This research is also supported by the Key Laboratory for National Geographic Census and Monitoring, National Administration of Surveying, Mapping and Geoinformation, and the Key Technologies R&D Program of China (Grant No. 2015BAK03B04).

Author Contributions

Yan Li wrote the main program and most of the paper. Qingwu Hu conceived the study and designed the experiments. Meng Wu and Jianming Liu conceived and designed the experiments. Xuan Wu performed the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Majid, Z.; Chong, A.K.; Ahmad, A.; Setan, H.; Samsudin, A.R. Photogrammetry and 3D laser scanning as spatial data capture techniques for a national craniofacial database. Photogramm. Rec. 2005, 20, 48–68. [Google Scholar] [CrossRef]
  2. Wang, R. 3D building modeling using images and LIDAR: A review. Int. J. Image Data Fusion 2013, 4, 273–292. [Google Scholar] [CrossRef]
  3. Elseberg, J.; Borrmann, D.; Nüchter, A. Algorithmic solutions for computing precise maximum likelihood 3D point clouds from mobile laser scanning platforms. Remote Sens. 2013, 5, 5871–5906. [Google Scholar] [CrossRef]
  4. Poreba, M.; Goulette, F. Line segment-based approach for accuracy assessment of MLS point clouds in urban areas. In Proceedings of the 8th International Symposium on Mobile Mapping Technology, Tainan, Taiwan, 1–3 May 2013; pp. 660–665.
  5. Aijazi, A.K.; Checchin, P.; Trassoudaine, L. Automatic detection and feature estimation of windows in 3D urban point clouds exploiting façade symmetry and temporal correspondences. Int. J. Remote Sens. 2014, 35, 7726–7748. [Google Scholar] [CrossRef]
  6. Bing, L.V.; Zhong, R.F.; Wang, J.N. Vehicle-borne mobile laser scanner products: A review. Geomat. Spat. Inf. Technol. 2012, 35, 184–187. [Google Scholar]
  7. Rutzinger, M.; Höfle, B.; Oude Elberink, S.; Vosselman, G. Feasibility of facade footprint extraction from mobile laser scanning data. Photogramm. Fernerkund. Geoinf. 2011, 6952, 97–107. [Google Scholar] [CrossRef]
  8. Rutzinger, M.; Pratihast, A.K.; Elberink, S.J.O.; Vosselman, G. Tree modelling from mobile laser scanning data-sets. Photogramm. Rec. 2011, 26, 361–372. [Google Scholar] [CrossRef]
  9. Anguelov, D.; Dulong, C.; Filip, D.; Christian, F.; Stéphane, L.; Richard, L. Google street view: Capturing the world at street level. Computer 2010, 43, 32–38. [Google Scholar] [CrossRef]
  10. Hara, K.; Le, V.; Froehlich, J. Combining crowdsourcing and Google street view to identify street-level accessibility problem. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 631–640.
  11. Torii, A.; Havlena, M.; Pajdla, T. From Google Street View to 3D city models. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 2188–2195.
  12. Sampath, A.; Shan, J. Segmentation and reconstruction of polyhedral building roofs from aerial LIDAR point clouds. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1554–1567. [Google Scholar] [CrossRef]
  13. Wang, X.; Li, P. Extraction of earthquake-induced collapsed buildings using very high-resolution imagery and airborne LIDAR data. Int. J. Remote Sens. 2015, 36, 2163–2183. [Google Scholar] [CrossRef]
  14. Gilani, S.A.N.; Awrangjeb, M.; Lu, G. An automatic building extraction and regularization technique using LiDAR point cloud data and orthoimage. Remote Sens. 2016, 8, 258. [Google Scholar] [CrossRef]
  15. Hui, Z.Y.; Hu, Y.J.; Xu, P. Automatic Extraction of Building Footprints from LIDAR Using Image Based Methods Geo-Informatics in Resource Management and Sustainable Ecosystem; Springer: Berlin/Heidelberg, Germany, 2015; pp. 79–86. [Google Scholar]
  16. Sun, S.; Salvaggio, C. Complex building roof detection and strict description from LIDAR data and orthorectified aerial imagery. In Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 5466–5469.
  17. Arachchige, N.H.; Perera, S.N.; Maas, H.G. Automatic processing of mobile laser scanner point clouds for building facade detection. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B5, 187–192. [Google Scholar]
  18. Rodríguez-Cuenca, B.; García-Cortés, S.; Ordóñez, C.; Alonso, M.C. An approach to detect and delineate street curbs from MLS 3D point cloud data. Autom. Construct. 2015, 51, 103–112. [Google Scholar] [CrossRef]
  19. Yang, B.; Dong, Z.; Zhao, G.; Dai, W. Hierarchical extraction of urban objects from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2015, 99, 45–57. [Google Scholar] [CrossRef]
  20. Wang, J.; Jin, F.X.; Lu, H.Y.; Lin, Z.M. Extraction of Building Façade Information based on Vehicular Laser Scanner. J. Shandong Univ. Sci. Technol. 2004, 23, 8–11. [Google Scholar]
  21. Huang, L.; Lu, X.S.; Chen, C.F. Extraction of building’s facade information from laser scanning data. Sci. Surv. Map. 2006, 31, 141–142. [Google Scholar]
  22. Li, B.J.; Li, Q.Q.; Shi, W.Z.; Wu, F.F. Feature extraction and modeling of urban building from vehicle-borne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 934–939. [Google Scholar]
  23. Lu, X.S.; Huang, L. Grid method on building information using laser scanner data. Geomat. Inf. Wuhan Univ. 2007, 32, 852–855. [Google Scholar]
  24. Yang, B.; Wei, Z.; Li, Q.; Li, J. Semi-automated building facade footprint extraction from mobile LIDAR point clouds. IEEE Geosci. Remote Sens. Lett. 2013, 10, 766–770. [Google Scholar] [CrossRef]
  25. Singh, S.; Grewal, S.K. Role of mathematical morphology in digital image processing: A review. Int. J. Sci. Eng. Res. 2014, 2, 1–3. [Google Scholar]
  26. Wu, H.; Li, N.; Liu, C.; Shi, B. Airborne LIDAR data segmentation based on 3D mathematical morphology. J. Remote Sens. 2011, 6, 1189–1201. [Google Scholar]
  27. Hernández, J.; Marcotegui, B. Morphological segmentation of building façade images. In Proceedings of the IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 4029–4032.
  28. Rodríguez-Cuenca, B.; García-Cortés, S.; Ordóñez, C.; Alonso, M. Morphological operations to extract urban curbs in 3D MLS point clouds. ISPRS Int. J. Geo-Inf. 2016, 5, 93. [Google Scholar] [CrossRef]
  29. Serna, A.; Marcotegui, B.; Hernández, J. Segmentation of façades from urban 3D point clouds using geometrical and morphological attribute-based operators. ISPRS Int. J. Geo-Inf. 2016, 5, 6. [Google Scholar] [CrossRef]
  30. Serna, A.; Marcotegui, B. Urban accessibility diagnosis from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 84, 23–32. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, K.; Chen, S.C.; Whitman, D.; Shyu, M.L. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 872–882. [Google Scholar] [CrossRef]
  32. Hui, Z.; Hu, Y.; Yevenyo, Y.; Yu, X. An improved morphological algorithm for filtering airborne LIDAR point cloud based on multi-level kriging interpolation. Remote Sens. 2016, 8, 1–16. [Google Scholar] [CrossRef]
  33. Mukherjee, K.; Banerjee, T.; Roychowdhury, P.; Yamane, T. Terrestrial LIDAR survey and morphological analysis to identify infiltration properties in the Tamala Limestone, western Australia. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4871–4881. [Google Scholar]
  34. Cao, Z.; Wu, Y. A Novel Multi-scale 3D Area morphological filtering method for airborne LIDAR building extraction. Int. J. Smart Home 2016, 10, 267–276. [Google Scholar]
  35. Shi, W.Z.; Li, B.; Li, Q. A method for segmentation of range image captured by vehicle-borne laser scanning based on the density of projected points. Acta Geodaetica Cartogr. Sin. 2005, 34, 96–100. [Google Scholar]
  36. Guan, Y.; Chen, X.; Shi, G. A robust method for fitting a plane to point clouds. J. Tongji Univ. (Nat. Sci.) 2008, 36, 981–984. [Google Scholar]
  37. Parape, C.D.; Premachandra, C.; Tamura, M. Optimization of structure elements for morphological hit-or-miss transform for building extraction from VHR airborne imagery in natural hazard areas. Int. J. Mach. Learn. Cybern. 2015, 6, 641–650. [Google Scholar] [CrossRef]
  38. Wu, X.L.; Yang, L.L.; Liang, F.; Cui, S.G. Petiole segmentation method based on multi-structure elements morphology. Appl. Mech. Mater. 2015, 734, 581–585. [Google Scholar] [CrossRef]
  39. Mehdisouzani, C.; Digne, J.; Audfray, N.; Lartigue, C.; Morel, J.M. Feature extraction from high-density point clouds: Toward automation of an intelligent 3D contactless digitizing strategy. Comput. Aided Des. Appl. 2010, 7, 863–874. [Google Scholar]
  40. Hackel, T.; Wegner, D.; Schindler, K. Fast segmentation of 3D point cloud with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. 2016, III-3, 177–184. [Google Scholar]
Figure 1. Google Street View (GSV) with building facade pieces.
Figure 2. Algorithm flow.
Figure 3. Point cloud facade projection based on position and orientation system (POS) trajectory.
Figure 4. Facade projection results based on POS trajectory. (a) Point cloud projection image from top view; (b) Facade projection image.
Figure 5. Façade feature extraction algorithm with point cloud projection image.
Figure 6. Original point cloud projection image.
Figure 7. Binary image of point cloud projection image.
Figure 8. Projection image after dilation and erosion processing.
Figure 9. Boundary extraction result of the building.
Figure 10. Result of segmentation line calculation.
Figure 11. Sub-polygon segmentation result of building.
Figure 12. Result of sub-polygon simplification.
Figure 13. Building façade from plane detection.
Figure 14. Vehicle-borne LIDAR scanning system and obtained point cloud data. (a) Vehicle-borne 3D LIDAR scanning system; (b) Experiment dataset.
Figure 15. Spatial façade of the building beside a pedestrian street in the experiment district.
Figure 16. Overlaid image of extracted building façade and point cloud.
Figure 17. Overlaid result of street view image and façade pieces.
Figure 18. Rapid creation of a simple building model based on street view image and LIDAR data.
Table 1. Precision statistics of building façade pieces extraction. Columns: building feature points in the original point cloud (x, y, H), extracted building façade feature points (x, y, H), and coordinate differences (dx, dy, dH); all values in meters.

No. | x (original) | y (original) | H (original) | x (extracted) | y (extracted) | H (extracted) |    dx |    dy |    dH
 1  |  538,450.77  |  76,201.90   |    41.44     |  538,450.77   |  76,201.98    |    41.40      |  0.00 | −0.08 |  0.04
 2  |  538,375.36  |  76,239.66   |    38.98     |  538,375.96   |  76,239.59    |    39.50      | −0.60 |  0.06 | −0.52
 3  |  538,388.44  |  76,230.80   |    36.80     |  538,389.59   |  76,232.22    |    37.70      | −1.16 | −1.42 | −0.89
 4  |  538,262.76  |  76,292.79   |    38.09     |  538,262.20   |  76,291.99    |    37.70      |  0.56 |  0.80 |  0.40
 5  |  538,293.03  |  76,270.01   |    19.96     |  538,292.44   |  76,269.58    |    19.50      |  0.59 |  0.43 |  0.46
 6  |  538,287.53  |  76,281.12   |    19.73     |  538,286.35   |  76,279.94    |    18.70      |  1.18 |  1.18 |  1.03
 7  |  538,312.67  |  76,260.06   |    24.27     |  538,312.45   |  76,260.25    |    24.46      |  0.22 | −0.19 | −0.19
 8  |  538,322.84  |  76,263.37   |    30.52     |  538,321.84   |  76,263.37    |    30.80      |  1.00 |  0.00 | −0.28
 9  |  538,331.24  |  76,259.51   |    37.79     |  538,331.28   |  76,259.48    |    37.77      | −0.04 |  0.03 |  0.02
10  |  538,329.95  |  76,257.24   |    37.65     |  538,329.91   |  76,257.23    |    37.90      |  0.04 |  0.01 | −0.25
11  |  538,444.49  |  76,204.95   |    33.39     |  538,445.53   |  76,204.93    |    33.80      | −1.04 |  0.02 | −0.41
12  |  538,474.52  |  76,221.89   |    18.64     |  538,474.39   |  76,221.81    |    17.77      |  0.13 |  0.08 |  0.88
13  |  538,329.68  |  76,256.80   |    37.86     |  538,329.91   |  76,257.23    |    37.70      | −0.24 | −0.43 |  0.16
14  |  538,241.81  |  76,303.07   |    21.22     |  538,241.71   |  76,303.10    |    20.40      |  0.10 | −0.04 |  0.83
15  |  538,241.26  |  76,301.94   |    38.51     |  538,241.71   |  76,303.10    |    38.30      | −0.45 | −1.17 |  0.21
Average (of absolute values)                                                                    |  0.49 |  0.40 |  0.44
Standard deviation                                                                              |  0.66 |  0.64 |  0.55

Share and Cite

Li, Y.; Hu, Q.; Wu, M.; Liu, J.; Wu, X. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services. ISPRS Int. J. Geo-Inf. 2016, 5, 231. https://doi.org/10.3390/ijgi5120231

