Article

A New Approach for Realistic 3D Reconstruction of Planar Surfaces from Laser Scanning Data and Imagery Collected Onboard Modern Low-Cost Aerial Mapping Systems

1 Department of Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
2 Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(3), 212; https://doi.org/10.3390/rs9030212
Submission received: 30 August 2016 / Revised: 17 January 2017 / Accepted: 20 February 2017 / Published: 25 February 2017
(This article belongs to the Special Issue Multi-Sensor and Multi-Data Integration in Remote Sensing)

Abstract:
Over the past few years, accurate 3D surface reconstruction using remotely-sensed data has been recognized as a prerequisite for different mapping, modelling, and monitoring applications. To fulfill the needs of these applications, the necessary data are generally collected using various digital imaging systems. Among them, laser scanners have been acknowledged as a fast, accurate, and flexible technology for the acquisition of high-density 3D spatial data. Despite the rapid data acquisition these systems offer, the 3D data they collect do not provide semantic information about the nature of the scanned surfaces. Hence, reliable processing techniques are employed to extract the information required for 3D surface reconstruction. Moreover, the information extracted from laser scanning data cannot be effectively utilized due to the lack of descriptive details. In order to provide a more realistic and accurate perception of scenes scanned by laser scanning systems, a new approach for the 3D reconstruction of planar surfaces is introduced in this paper. This approach aims to improve the interpretability of the planar surfaces extracted from laser scanning data using spectral information from overlapping imagery collected onboard modern low-cost aerial mapping systems, which are widely adopted nowadays. In this approach, the planar surfaces scanned by laser scanning systems are initially extracted through a novel segmentation procedure and then textured using the acquired overlapping imagery. The implemented texturing technique, which intends to overcome the computational inefficiency of previously-developed 3D reconstruction techniques, is performed in three steps. In the first step, the visibility of the extracted planar surfaces within the collected images is investigated and a list of appropriate images for texturing each surface is established. Successively, an occlusion detection procedure is carried out to identify the occluded parts of these surfaces in the field of view of the captured images. In the second step, the visible/non-occluded parts of the planar surfaces are decomposed into segments that will be textured using individual images. Finally, a rendering procedure is carried out to texture these parts using the available images. Experimental results from overlapping laser scanning data and imagery collected onboard aerial mapping systems verify the feasibility of the proposed approach for efficient realistic 3D surface reconstruction.


1. Introduction

In recent years, accurate 3D surface reconstruction has been recognized as a key requirement of different mapping and monitoring applications such as urban planning [1], environmental monitoring [2], infrastructure monitoring [3], cultural heritage documentation [4], indoor localization [5], and disaster management [6]. Considering the requirements of these applications, the data required for 3D surface reconstruction are usually acquired using different passive and active digital imaging systems mounted on static or mobile airborne and terrestrial platforms. Among these systems, laser scanners have been recognized as the leading technology for the rapid collection of high-density 3D data due to their capability for fast and accurate data acquisition, flexibility, and accessibility in different atmospheric conditions [7]. Despite their proven potential for 3D data acquisition, the data collected by these systems cannot be solely utilized for accurate 3D modeling applications [8]. The acquired laser scanning data should be processed to extract and reconstruct individual scanned surfaces. Moreover, the surfaces reconstructed through laser scanning data processing cannot be effectively interpreted due to the lack of descriptive details. On the other hand, imagery collected onboard multi-platform photogrammetric systems provides rich descriptive information regarding the scanned surfaces, which facilitates the interpretation of those surfaces. In order to take advantage of the complementary characteristics of these data sources, efficient data integration techniques are needed to accurately incorporate the two data sources, reconstruct the scanned surfaces, and generate realistic 3D views of those surfaces.
Traditionally, the imagery required for 3D reconstruction applications was acquired using high-end photogrammetric systems. However, the utilization of these systems for emerging/lower-budget 3D mapping applications is not cost-effective due to their high initialization costs and the need for expert users. Hence, significant attempts have recently been made to develop and operate lower-cost mapping systems, which are affordable and applicable tools for collecting the data required for diverse 3D reconstruction applications. Progressive hardware advancements (e.g., the development of low-cost high-resolution digital cameras and multi-camera systems) and the emergence of low-cost mapping platforms (Unmanned Aerial Vehicles (UAVs) and mobile robots) have also facilitated these developments and access to the data required for 3D surface reconstruction. Although these newly-developed mapping systems have been widely acknowledged by different user sectors due to their cost-saving benefits and ability to provide descriptive details of the scanned surfaces, they have not been widely adopted for 3D reconstruction applications due to concerns about using consumer-grade sensors onboard unstable platforms [9]. The application of these low-cost sensors onboard newly-developed mapping platforms presents several processing challenges and deteriorates the quality and accuracy of 3D surface reconstruction [10].
In order to tackle these challenges, 3D surface reconstruction techniques need to be developed that exploit the complementary characteristics of laser scanning data and imagery collected onboard modern mapping systems. Hence, different research activities have been conducted over the past few years to introduce novel 3D surface modelling techniques using laser scanning-derived positional information and image-based descriptive details [11,12,13,14,15,16,17,18,19,20]. The first step for exploiting the synergistic properties of these data sources is to successfully register them relative to a common reference frame [21,22,23]. In the second step, the descriptive details and positional surface information from both datasets are linked together through a texturing procedure. Traditionally, the integration of laser scanning data and images has been carried out through point-by-point projection of the dense laser point clouds onto the images [11,13,14,15]. Due to the large volume of scanned points, texturing a complete laser scanning dataset using this approach is computationally inefficient. Moreover, the laser-scanned surfaces might not be completely represented due to possible occlusions and point density variations. Hence, different surface-based texturing approaches have been introduced in recent years to overcome the limitations of point-based surface modelling. These approaches are initiated by structuring the laser scanning point cloud into continuous surfaces using surface modelling techniques (e.g., fitting smooth surfaces [24], fitting basic geometric shapes [25], or triangulation/meshing [26]). The vertices of the established surfaces are then projected onto the images using the collinearity equations [27,28,29]. Finally, these surfaces are textured using their projections onto the images to generate a photo-realistic representation of those surfaces [12,30,31,32,33,34,35]. Similar to point-based texturing approaches, these techniques are also computationally inefficient due to the vertex-by-vertex projection of the reconstructed mesh surfaces onto the overlapping images.
The other drawback of point-by-point projection, in both point-based and surface-based texturing approaches, is the occlusion problem. This problem occurs when two laser points, in the vicinity of sudden elevation changes, are projected onto the same image location. To identify and resolve instances of this problem, different point-based visibility analysis techniques have been adopted [36]. These visibility analysis approaches can be categorized into distance-based, angle-based, and polygon-based methods. In distance-based methods (e.g., the Z-buffer method), the occluding and occluded points are determined by comparing the distances between the competing points and the perspective center of the given image [36,37,38]. The point closest to the perspective center is visible in the given image, while the other point is recognized as occluded. Although distance-based approaches are easily implemented, they cannot accurately and efficiently identify instances of the occlusion problem in complex scenes [39]. In angle-based methods, the occluded points are detected based on the effect of relief displacement in perspective imagery [29]. These approaches assume that abrupt surface changes usually take place along radial directions from the image-space nadir point [40]. Hence, these approaches determine the occluded points by sequentially checking the off-nadir angles of the lines that connect individual laser scanning points to the perspective center of the image. Instances of the occlusion problem are identified where the investigated angles decrease while proceeding away from the nadir point along a radial direction. The drawback of these approaches is that they are only applicable to frame imagery and cannot be applied to images captured by line cameras due to the existence of multiple exposure locations [41].
In contrast to distance-based and angle-based methods, polygon-based methods are implemented based on the detection of occluded regions [42]. In these approaches, the occluded areas are detected using the polygonal surfaces generated from a Digital Building Model (DBM). The polygons that are closer to the perspective center are considered visible, while the other polygons are deemed occluded areas. Although this method is fast and accurate, it demands the availability of a DBM. Moreover, it can only be applied to the detection of occluded surfaces in airborne laser scanning data [43]. In the past few years, different types of polygon-based approaches have also been proposed for the detection of occluded surfaces derived from terrestrial laser data within overlapping images (the depth sorting algorithm [44] and the Binary Space Partitioning (BSP) algorithm [45]). Since these approaches compare the surfaces in object space, they provide much more accurate results in complex scenes. However, the implemented algorithms are still computationally intensive.
To avoid the problems associated with the aforementioned techniques, a new approach for the photo-realistic reconstruction of 3D planar surfaces, which are the most common features especially in urban areas, is introduced in this paper. The main motivation behind such an alternative technique is to realistically reconstruct the 3D planar surfaces scanned by laser scanners and low-cost digital cameras onboard modern aerial mapping systems (e.g., UAVs, which are low-cost mapping platforms of interest for different traditional and emerging applications). This approach tries to avoid the challenges of image-based 3D surface reconstruction techniques (i.e., dense matching techniques). These challenges usually originate from the use of low-cost cameras onboard modern mapping systems and include the limited quality of the utilized digital cameras, the instability of system calibration parameters, and the nature of the collected imagery (e.g., tilted/oblique imagery with irregular overlap/sidelap characteristics). This approach also aims at reducing the volume of computations required for 3D surface reconstruction and at effectively handling images captured onboard modern low-cost aerial mapping systems. Furthermore, the introduced 3D surface reconstruction approach presents a novel visibility analysis technique to identify the occluded surfaces within the available imagery. Figure 1 shows the outline of the proposed method for 3D scene reconstruction using datasets collected onboard modern mapping systems.
This paper starts with the introduction of the utilized procedure for the extraction of planar surfaces from laser scanning data. In the next section, the proposed approach for texturing of the extracted planar surfaces using nadir and oblique imagery acquired onboard modern photogrammetric systems is described. The presented visibility analysis technique for the identification of the occluded surfaces within those images is also explained in this section. The performance of the introduced approach for photo-realistic reconstruction of 3D planar surfaces is then assessed through experiments using real airborne and UAV-borne laser scanning data and imagery. Finally, concluding remarks and recommendations for future research are provided.

2. Laser Scanning Data Segmentation and Planar Feature Extraction

The first step in the proposed technique for the 3D reconstruction of planar surfaces is to segment and extract individual planar surfaces from the laser scanning point cloud. Over the past few decades, different methodologies have been proposed for segmentation and feature extraction from unstructured laser point clouds. However, the majority of these techniques do not take the internal characteristics of laser scanning data (i.e., local point density variations and the noise level in the data) into account [8]. In order to overcome this limitation, an adaptive approach for the segmentation and extraction of planar surfaces, proposed by [46], is employed in this research. This approach considers local point density variations and random errors within the datasets and is applicable to multi-platform laser scanning data. This segmentation procedure is implemented in three successive steps: (1) point cloud characterization and planar feature detection; (2) segmentation attribute computation; and (3) clustering of the estimated attributes and planar feature segmentation. In the following subsections, these steps are explained in detail. One should note that although this approach is equally applicable to both airborne and terrestrial laser scanning datasets, this paper and the implemented experiments mainly focus on datasets acquired by aerial mapping systems.

2.1. Point Cloud Characterization and Planar Features Detection

The implemented segmentation procedure starts with the characterization of the acquired laser scanning point cloud. The objective of this characterization procedure is to evaluate and quantify the internal characteristics of the point cloud, which will be considered for adaptive point cloud processing. In the first step of this characterization procedure, the systematic biases and random noise in the point cloud are quantified and investigated [47]. So far, different data-driven [47,48,49] and system-driven [47,50] approaches have been presented for the quantification and elimination of these errors. However, since the raw laser scanning system measurements (i.e., Position and Orientation System (POS) information, ranges, and scanner encoder angles) are not always accessible for the provided laser scanning datasets, the quantification of systematic and random errors is performed using data-driven methods in this research [47,48,49]. In these approaches, the errors in laser scanning data are evaluated by analyzing the compatibility between conjugate features in overlapping point clouds.
In the second step, the laser scanning points are classified into primitive features based on a Principal Component Analysis (PCA) of their local 3D neighborhoods [51], and potential planar neighborhoods are identified (Figure 2a). Once the primitive planar features are detected through the PCA procedure, appropriate representation models are selected to parametrically describe these features. These representation models are initially selected based on the components of the normal vectors to the planes and the perpendicular distances from an arbitrary origin to the planes. Once the best representation models for the local planar neighborhoods are chosen, an iterative plane fitting procedure is carried out for the individual planar neighborhoods to precisely estimate their describing parameters [52]. This procedure aims at minimizing the sum of squared normal distances between the points in a local planar neighborhood and the best-fitted plane to that neighborhood (Figure 2b). To eliminate the impact of outliers (gray points in Figure 2b) during the iterative plane fitting procedure, the points within a local planar neighborhood are assigned weights that are inversely proportional to their normal distances from the best-fitted plane in the previous iteration.
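As a rough illustration of the two operations just described, the sketch below performs a PCA-based planarity test on a local neighborhood and an iterative plane fit with weights inversely proportional to the normal distances from the previous iteration. This is a minimal sketch, not the paper's implementation; the eigenvalue-ratio threshold, iteration count, and array layout are assumptions made here for illustration only.

```python
import numpy as np

def is_planar(neighborhood, ratio_threshold=0.05):
    """PCA planarity test: for a planar patch, the smallest eigenvalue of the
    3x3 covariance matrix is small relative to the sum of all three."""
    centered = neighborhood - neighborhood.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))   # ascending order
    return eigvals[0] / eigvals.sum() < ratio_threshold

def fit_plane_weighted(points, iterations=10):
    """Iterative plane fit (plane: n.x + d = 0) with weights inversely
    proportional to each point's normal distance in the previous iteration."""
    weights = np.ones(len(points))
    for _ in range(iterations):
        centroid = np.average(points, axis=0, weights=weights)
        diffs = points - centroid
        cov = (weights[:, None, None] *
               np.einsum('ni,nj->nij', diffs, diffs)).sum(axis=0) / weights.sum()
        _, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]              # eigenvector of the smallest eigenvalue
        d = -normal.dot(centroid)
        dist = np.abs(points @ normal + d)  # normal distances to the current plane
        weights = 1.0 / (dist + 1e-6)       # down-weight likely outliers
    return normal, d
```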
Finally, this point cloud characterization procedure is concluded by estimating the local point density variations along the locally classified and represented planar neighborhoods. In this research, these variations are quantified using cylindrical buffers established based on a novel approach proposed by [53], while considering the 3D relationship among the points belonging to a local planar neighborhood and the noise level in the point cloud. For individual points belonging to planar neighborhoods, the diameter of the cylindrical buffer defined for local point density estimation is specified by the distance between the query point and its furthest neighboring point within that neighborhood. The orientation of this buffer is aligned along the normal to the represented planar neighborhood and its height is defined based on the noise level in the data (Figure 3). The successive steps for the segmentation of planar features are performed while considering the internal characteristics of the laser scanning points estimated and quantified during this characterization procedure.

2.2. Segmentation Attributes Computation

The utilized procedure for the segmentation of planar surfaces then proceeds with the definition and estimation of characteristic attributes describing those surfaces. In this approach, the coordinates of the orthogonal projections of an arbitrary origin onto the best-fitted planes to the locally-classified planar neighborhoods are defined as segmentation attributes [46]. The coordinates of these attribute points are computed based on the precisely-estimated parameters representing these neighborhoods. The benefit of such an attribute definition is that it provides unique and homogeneous parameters for laser scanning points belonging to individual planar surfaces, regardless of their position and orientation in the spatial domain. Figure 4 shows the attribute points defined for the best-fitted planes to three different locally-classified planar neighborhoods.
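Concretely, for a plane n·x + d = 0 with unit normal n, the attribute point is simply the foot of the perpendicular from the origin onto the plane, i.e., −d·n. A minimal sketch, assuming the plane parameters come from a fitting step such as the one above:

```python
import numpy as np

def attribute_point(normal, d):
    """Orthogonal projection of the origin onto the plane n.x + d = 0,
    assuming a unit normal. Points on the same planar surface yield (nearly)
    identical attribute points, regardless of where they lie on that surface."""
    return -d * np.asarray(normal)
```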

2.3. Clustering of the Estimated Attributes and Segmentation of Planar Surfaces

In the final step of this segmentation procedure, a clustering technique is implemented to detect accumulated peaks of attribute points representing individual planar surfaces. Traditionally, the clustering of segmentation attributes was performed based on a tessellated accumulator array that keeps track of the frequency of the estimated attributes in individual cells. In such a discretization procedure, the segmentation outcome is highly sensitive to the selected cell size. Moreover, these techniques suffer from large storage requirements and computational inefficiency when dealing with large point clouds. In order to tackle these drawbacks, a new clustering technique is introduced and utilized in this segmentation procedure [46]. This clustering procedure starts by organizing the attribute points in a kd-tree structure and is followed by a meaningful definition of the extent of clusters in the parameter domain. The appropriate extent of individual clusters is estimated while considering the acceptable deviations among the attributes associated with coplanar points (acceptable normal distance, Δd, and angular deviation, Δα, among coplanar points), as shown in Figure 5. These thresholds are determined based on the noise level in the point cloud and prior knowledge about separable neighboring planes.
Once the cluster extent associated with each attribute point is estimated, a two-step peak detection approach is implemented for the identification of clusters of attributes in the parameter domain [46]. In this approach, an octree space partitioning procedure is initially implemented to detect approximate peak locations in the parameter domain [54]. This space division technique is sequentially carried out in the octants with the highest attribute counts and continued until the extent of the last octant becomes equal to or less than the previously-estimated cluster extent for coplanar points (Figure 6a). All the points within the last octant (i.e., the approximate peak location) are then checked for the number of neighboring attribute points within the established cluster extent (Figure 6b). The neighborhood including the highest number of adjacent attribute points is finally detected as the precise peak location, and the laser scanning points whose associated attributes are clustered within that neighborhood are segmented as a single planar surface. The attribute points within the first detected peak are then excluded from the parameter domain and the search for the next peaks is continued until the number of attribute points in the latest-identified peak is less than the minimum number of points required for a reliable planar surface definition. This technique optimizes the computational efficiency of clustering attributes in the parameter domain by avoiding unnecessary neighborhood definitions. However, it cannot differentiate between attribute points belonging to coplanar but spatially-disconnected planar surfaces. Hence, a neighborhood analysis through the boundaries of the segmented planar surfaces is implemented to separate disjoint coplanar surfaces in the spatial domain.
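The peak-detection loop can be illustrated with the simplified sketch below, which uses only a kd-tree neighbor count; the coarse-to-fine octree search and the spatial-connectivity analysis described above are omitted, and the cluster radius and minimum segment size are assumed values chosen for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_planar_segments(attributes, cluster_radius=0.2, min_points=50):
    """Greedy peak detection in the attribute (parameter) domain.
    `attributes` is an (N, 3) array of attribute points; returns a list of
    index arrays, one per detected planar segment."""
    remaining = np.arange(len(attributes))
    segments = []
    while len(remaining) >= min_points:
        tree = cKDTree(attributes[remaining])
        neighbor_lists = tree.query_ball_point(attributes[remaining], r=cluster_radius)
        counts = np.array([len(nb) for nb in neighbor_lists])
        peak = int(counts.argmax())
        if counts[peak] < min_points:
            break
        members = remaining[neighbor_lists[peak]]      # attributes clustered around the peak
        segments.append(members)
        remaining = np.setdiff1d(remaining, members)   # remove the peak and keep searching
    return segments
```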

3. Texturing of Extracted Planar Surfaces from Laser Scanning Data

The proposed 3D surface reconstruction procedure continues with texturing the planar surfaces extracted from laser scanning data using imagery collected onboard aerial mapping systems. This texturing procedure, which is an extension of the research work presented in [55], aims to integrate image-based descriptive details and laser scanning-based positional information and to provide an accurate and realistic perception of the scanned planar surfaces. As a prerequisite for this procedure, the acquired laser scanning datasets and overlapping imagery are first registered relative to a common reference frame to ensure the accurate integration of these two data sources. The proposed approach for texturing the extracted planar surfaces is then implemented in four steps. In the first step, the visibility of the extracted planar surfaces in the available imagery is investigated using a novel visibility analysis technique. In the second step, an occlusion detection procedure is performed to identify the parts of the segmented planar surfaces that are occluding/being occluded by other surfaces in the field of view of the provided images. In the third step, the extracted planar surfaces are decomposed based on their visibility within the available images. Finally, a rendering procedure is implemented to texture and visualize these planar surfaces on the screen. A detailed explanation of these steps is provided in the following subsections.

3.1. Visibility Check

In this subsection, a novel visibility analysis approach is introduced to identify the fully/partially visible parts of the extracted planar surfaces in the available images. The main objective of this analysis is to investigate whether the footprint of a given image on the infinite plane defined by the intended planar surface overlaps that surface fully or partially. This visibility check is initiated by investigating the suitability of the captured images for texturing individual segmented planar surfaces. For a segmented planar surface, this appropriateness is inspected by considering the angle between the normal to that surface and the optical axes of the acquired images. The images whose optical axes make an acute angle (between 0° and 25°) with the surface normal are considered appropriate for texturing an extracted planar surface (Figure 7).
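This first filtering step can be sketched as a simple angle test (the 25° threshold comes from the text; the handling of the normal's sign convention via abs() is an assumption made here):

```python
import numpy as np

def candidate_images(surface_normal, optical_axes, max_angle_deg=25.0):
    """Return the indices of images whose optical axis makes an angle of at
    most `max_angle_deg` with the surface normal."""
    n = surface_normal / np.linalg.norm(surface_normal)
    candidates = []
    for i, axis in enumerate(optical_axes):
        c = axis / np.linalg.norm(axis)
        # abs() keeps the test independent of the normal's sign convention
        angle = np.degrees(np.arccos(np.clip(abs(n.dot(c)), 0.0, 1.0)))
        if angle <= max_angle_deg:
            candidates.append(i)
    return candidates
```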
In the second step of this visibility analysis procedure, the appropriate images for texturing a planar surface are individually projected onto the infinite plane enclosing that surface using the collinearity equations, while enforcing the planar surface's mathematical equation. Finally, in the third step, the overlap area between the projected image footprints and the intended planar surface is checked, using the Weiler-Atherton algorithm [56], to determine whether that planar surface is fully or partially visible within those images. A segmented planar surface is fully visible in an image if it is completely enclosed by the projected image footprint (Figure 8a). On the other hand, that surface is partially visible in a given image if more than a predefined percentage of its area overlaps the projected image footprint (Figure 8b).
Once the visibility check for all segmented planar surfaces within the acquired images is accomplished, a list of the appropriate images for texturing those surfaces is established. In such a list, the images whose optical axes make smaller angles with the normal to a planar surface and that have been captured closer to that surface are considered the most qualified candidates for the texturing procedure, since they provide the best sampling distance along the infinite plane encompassing that surface.
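A minimal sketch of the footprint projection and overlap test described in this subsection is given below. It intersects the rays through the image corners with the surface's infinite plane and uses Shapely's generic polygon intersection in place of a dedicated Weiler-Atherton implementation; the corner rays, the partial-visibility percentage, and the in-plane basis construction are assumptions for illustration, and both polygons are expected in the same in-plane 2D coordinates.

```python
import numpy as np
from shapely.geometry import Polygon

def plane_basis(normal):
    """Two orthonormal in-plane axes (u, v) for a plane with the given normal."""
    n = normal / np.linalg.norm(normal)
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:                 # normal is (almost) vertical
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    return u, np.cross(n, u)

def project_footprint(normal, rho, persp_center, corner_rays):
    """Intersect the rays through the image corners with the infinite plane
    n.x = rho and return the footprint as a 2D (in-plane) polygon."""
    n = normal / np.linalg.norm(normal)
    u, v = plane_basis(n)
    pts = []
    for ray in corner_rays:                      # object-space directions through the image corners
        ray = np.asarray(ray)
        t = (rho - n.dot(persp_center)) / n.dot(ray)
        p = persp_center + t * ray               # ray/plane intersection point
        pts.append((u.dot(p), v.dot(p)))
    return Polygon(pts)

def visibility(surface_poly, footprint_poly, partial_ratio=0.5):
    """Classify the surface as fully visible, partially visible, or not visible."""
    overlap = surface_poly.intersection(footprint_poly).area
    if np.isclose(overlap, surface_poly.area):
        return "fully visible"
    return "partially visible" if overlap / surface_poly.area > partial_ratio else "not visible"
```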

3.2. Occlusion Detection

In the previous step, the visibility of the segmented planar surfaces within the acquired images was investigated using the proposed visibility analysis technique. Although this approach can precisely determine the visible parts of planar surfaces using a polygon-clipping algorithm, it does not consider and exclude the occlusions caused by other planar surfaces. Hence, an occlusion detection procedure for the identification of the occluded parts of the segmented planar surfaces needs to be implemented. Traditionally, this occlusion detection was performed for individual points within the visible part/parts of a planar surface in the field of view of a given image. In such a case, a point is considered occluded if the line connecting the perspective center of that image to the query point intersects any other planar surface which is closer to the perspective center. The drawback of this occlusion detection procedure is its computational inefficiency when dealing with the massive number of points aggregated in the visible part/parts of planar surfaces. In order to avoid this shortcoming, a novel procedure for occlusion detection is presented in this subsection [55]. In this procedure, the occluded part/parts of a visible planar surface is/are specified based on a visibility analysis of its inner and outer boundary points (a simplified sketch of this boundary-point test is given at the end of this subsection). This visibility analysis starts by defining the line segments connecting the perspective center of a given image and those boundary points. The established line segments are then intersected with the visible parts of all planar surfaces within the field of view of that image. The boundary points whose associated line segments do not intersect other visible planar surfaces are visible within the field of view of the investigated image (Figure 9a). On the other hand, for the boundary points whose respective line segments intersect one or multiple visible planar surfaces (Figure 9b), two different situations might occur:
  • The boundary point (as well as its respective planar surface) is occluded by the intersecting planar surface if the latter is closer to the perspective center than the former (green plane in Figure 10).
  • The boundary point (as well as its respective planar surface) is occluding the intersecting planar surface if the former is closer to the perspective center than the latter (yellow plane in Figure 10).
In the first situation, where the intended planar surface is occluded, the boundary points of the occluding surface are projected onto that surface’s infinite plane to specify its invisible part within the given image. The projected occluding surface is then intersected by the intended surface and the overlap area between these surfaces is omitted from the intended planar surface. The remaining part of the intended planar surface is visible within the given image. Figure 11 shows how the occluded part of the intended planar surface is excluded. In the second situation, where the intended planar surface is occluding another surface, the boundary points of the intended surface are projected onto the occluded surface. If all the boundary points of the intended planar surface are projected inside the occluded surface, the invisible part of the occluded surface is determined and omitted by intersecting the projected surface and the occluded planar surface (Figure 12a). On the other hand, for a planar surface that is not completely projected inside the occluded surface, the situation will be handled when the occluded surface is investigated as the intended planar surface (Figure 12b).
In summary, to identify visible parts of a planar surface in an image, one should sequentially trace the points along the image footprint intersection with that surface, its visible boundary points, and the internal points occluded by the boundary points of the other surfaces. The outcome of this procedure, for a given planar surface, will be the segment/segments that is/are visible in different images. In optimal situations (where the planar surface has not been occluded by the other surfaces), the union of these sub-surfaces should add up to the entire area of the intended surface.
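The boundary-point test referred to above can be sketched as a segment/plane intersection followed by a point-in-surface check. This is a simplified illustration only; `point_in_surface` is a hypothetical helper (e.g., an in-plane point-in-polygon test) and is not part of the paper's implementation.

```python
import numpy as np

def segment_plane_intersection(p0, p1, normal, rho):
    """Parameter t in (0, 1) where the segment p0->p1 crosses the plane
    n.x = rho, or None if the segment does not cross it."""
    denom = normal.dot(p1 - p0)
    if abs(denom) < 1e-12:
        return None
    t = (rho - normal.dot(p0)) / denom
    return t if 0.0 < t < 1.0 else None

def boundary_point_occluded(persp_center, boundary_pt, other_surfaces, point_in_surface):
    """A boundary point is occluded if the segment joining it to the perspective
    center passes through another visible surface lying between the two.
    `other_surfaces` yields (normal, rho, polygon) triples; `point_in_surface`
    is a hypothetical helper testing whether an on-plane 3D point lies inside
    the surface polygon."""
    for normal, rho, polygon in other_surfaces:
        t = segment_plane_intersection(persp_center, boundary_pt, normal, rho)
        if t is None:
            continue
        hit = persp_center + t * (boundary_pt - persp_center)
        if point_in_surface(hit, polygon):
            return True
    return False
```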

3.3. Planar Surface Decomposition

Once the visible part/parts of the segmented planar surfaces within the available images are determined using the introduced visibility analysis and occlusion detection procedures, the segments of these parts that will be optimally textured using individual images need to be identified. Hence, a new procedure for decomposing the visible parts of planar surfaces into the segments that will be textured using individual images is introduced in this subsection [55]. In order to accurately texture the visible parts of planar surfaces within the acquired images, three different scenarios are considered in this subsection:
  • In the first scenario, the intended planar surface is entirely visible in one or multiple images, which are appropriate for texturing that surface. For a planar surface, which is completely visible in a single image, the rendering procedure will be carried out using that image. However, for the planar surface, which is fully visible in more than one image, the rendering procedure is performed using the image which has the best sampling distance along that surface. This image (either nadir or oblique) is selected as the one which is within an acceptable distance from the surface’s centroid and its optical axis makes the smallest angle with the surface’s normal. In order to identify the best candidate image satisfying the above-mentioned conditions, a cost function is defined as in Equation (1):
    $$\mathrm{Cost}_{\mathrm{Candidate\ Image}} = w_d \left( \frac{d_{max} - d}{d_{max} + d} \right) + w_{ang} \left( \vec{c} \cdot \vec{n} \right) \quad (1)$$
    In this cost function, d is the distance between the perspective center of a candidate image and the centroid of the intended surface, dmax is the maximum allowable distance between the perspective center of an appropriate image and the centroid of the intended surface, c is the candidate image's optical axis, n is the normal vector to the intended planar surface, and wd and wang are the weight parameters which determine the contributions of the distance and angle components to the selection of the best candidate image. This cost function is maximized for the image which has the closest distance to the intended surface and the smallest angle between its optical axis and the normal to that surface. The respective image is used for rendering the intended surface. Once the most appropriate image for the texturing procedure is selected, the boundary points of the intended planar surface are projected onto the chosen image and the part of the image within the projected boundary is rendered onto the given surface (Figure 13). A brief code sketch covering Equations (1)–(3) is given after this list.
  • In the second scenario, the intended surface is partially visible within multiple images. The rendering of such a surface is carried out by decomposing it into the segments which are visible in single or multiple appropriate images. In order to delineate these segments, the visible/non-occluded parts of the intended surface within the individual appropriate images (Figure 14) are first intersected together using the Weiler-Atherton algorithm [56]. The outcome of this polygon intersection procedure will be the segments which are visible in a single image (e.g., segment11, segment22, and segment33 in Figure 15) or in multiple images (e.g., segment12 and segment23 in Figure 15). Figure 15 shows the segments of the intended planar surface that are visible in single or multiple images.
    For the segments which are visible in a single image (segment11, segment22, and segment33), the rendering procedure is carried out in the same way as in the previous scenario. However, for the segments which are visible in multiple images, the rendering procedure is carried out using all the appropriate images including those segments. Accordingly, the boundary points of such a segment are first projected onto all the appropriate images enclosing that segment. The projected boundary onto the best candidate image for texturing that segment is selected as the master texture, and the spectral information from the other enclosing images is incorporated into the identified master texture. In order to accurately integrate the spectral information from multiple images, the correspondence between the conjugate pixels within the projected boundaries onto the relevant images is established using a 2D projective transformation (Equation (2)). The coefficients of this 2D projective transformation are derived from the image coordinates of the corresponding boundary points projected onto the images through a least-squares adjustment procedure.
    $$x_l = \frac{a_1 x_r + b_1 y_r + c_1}{a_3 x_r + b_3 y_r + 1}, \qquad y_l = \frac{a_2 x_r + b_2 y_r + c_2}{a_3 x_r + b_3 y_r + 1} \quad (2)$$
    The assigned color to a pixel within the master texture is ultimately derived by averaging the colors of its conjugate pixels within all images enclosing the intended segment as in Equation (3). This segment will ultimately be textured using the modified master texture. Figure 16 shows the rendering procedure for a segment which is visible in two images (segment12).
    $$RGB_{\mathrm{Pixel}_i,\ \mathrm{Master\ texture}} = \frac{\sum_{k=1}^{N} RGB_{\mathrm{Pixel}_i,\ \mathrm{image}\ k}}{N} \quad (3)$$
    where N is the number of appropriate images in which the segment is visible.
  • In the third scenario, which occurs in special cases, a planar surface is partially occluded by other planar surface/surfaces in the field of view of the best candidate image for texturing that surface. In such a case, the rendering procedure for the intended surface is performed after excluding the occluded area from that surface. However, some parts of the occluded/excluded area might be visible in other images. Hence, the intersection between the visible surfaces within multiple images is carried out to determine the parts of the occluded area within a surface that can be textured using the other images. Figure 17 shows the intersection between two surfaces which are partially visible in two images, where one of them has an occluded area inside. Figure 18 shows the segments of these surfaces which are visible in single or multiple images. As seen in Figure 18, a part of the occluded area within visible surface in image 1 is covered by the visible surface in image 2 (segment222). Therefore, segment11 will be projected onto and textured using image 1, segment221 and segment222 will be projected onto and textured using image 2, and segment12 will be projected onto and textured using both images.
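To make the above concrete, the sketch below illustrates Equations (1)–(3): selecting the best candidate image with the cost function, mapping pixel coordinates between images with the 2D projective transformation, and averaging the co-registered textures. This is a minimal illustration rather than the paper's implementation; the weight values, the coefficient ordering, the abs() on the dot product, and the assumption that the textures are already resampled to a common grid are all choices made here for brevity.

```python
import numpy as np

def image_cost(persp_center, optical_axis, surface_centroid, surface_normal,
               d_max, w_d=0.5, w_ang=0.5):
    """Cost of Equation (1); the best candidate image maximizes this value."""
    d = np.linalg.norm(np.asarray(persp_center) - np.asarray(surface_centroid))
    c = optical_axis / np.linalg.norm(optical_axis)
    n = surface_normal / np.linalg.norm(surface_normal)
    return w_d * (d_max - d) / (d_max + d) + w_ang * abs(c.dot(n))

def apply_projective(coeffs, xr, yr):
    """2D projective transformation of Equation (2), mapping (xr, yr) in one
    image to (xl, yl) in another; coeffs = (a1, b1, c1, a2, b2, c2, a3, b3)."""
    a1, b1, c1, a2, b2, c2, a3, b3 = coeffs
    denom = a3 * xr + b3 * yr + 1.0
    return (a1 * xr + b1 * yr + c1) / denom, (a2 * xr + b2 * yr + c2) / denom

def blend_master_texture(textures):
    """Equation (3): average the textures of a segment visible in several
    images (all textures already resampled to the master-texture grid)."""
    return np.mean(np.stack(textures, axis=0), axis=0)
```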

3.4. Rendering and Visualization

Once the extracted planar surfaces have been decomposed into visible segments within the acquired images and the associated textures for rendering the individual segments have been delineated using the approach presented in the previous subsection, a texture mapping procedure is implemented to apply these textures onto the decomposed planar surfaces and visualize them on a 2D screen [57]. This texture mapping procedure is composed of the transformations between three spaces (Figure 19):
  • Texture space (2D image space),
  • 3D object space, and
  • 2D screen space.
More specifically, in the texture mapping procedure, the identified textures are first mapped onto the extracted planar surfaces in 3D object space and then mapped onto the destination image (on the screen) using a projective transformation.
The transformation between the 3D object space and the 2D texture (image) space has already been established using the collinearity equations in the previous section. Furthermore, the transformation between the 3D object space and the 2D screen space is performed using the Open Graphics Library (OpenGL) interface [58]. This library, which is a cross-language, multi-platform Application Programming Interface (API), provides several functions that control the rendering of 3D objects via computer hardware accelerators. To keep the rendering procedure computationally efficient, OpenGL only renders convex, solid planar surfaces onto the 2D screen space (Figure 20a). However, in reality, we might also deal with concave polygons (Figure 20b), polygons with holes (Figure 20c), and complex polygons (Figure 20d) within the segmented planar surfaces. In order to handle the projection of these surfaces onto a 2D screen, they should be tessellated into simple convex polygons. Therefore, a tessellation procedure is performed to divide concave, hollow, and complex polygons into easier-to-render convex polygons (triangles).
Since the correspondence between the texture space and the object space has already been established through the texturing procedure, the vertices of the derived triangles (in the object space) are projected onto the textures assigned to the surfaces to identify the parts of the textures that belong to those triangles. The clipped textures are finally rendered onto the triangles and visualized in the 2D screen space. A detailed explanation of the aforementioned steps is provided in the following subsections.

3.4.1. Tessellation

As mentioned before, the utilized rendering interface (OpenGL) cannot depict concave, hollow, or complex planar surfaces. Therefore, a tessellation procedure should be performed to subdivide these surfaces into simple convex polygons and render them efficiently. To this end, the Delaunay triangulation algorithm [59,60] is implemented to tessellate the different types of planar surfaces (concave, hollow, and/or complex convex surfaces) derived through laser scanning data segmentation into triangular surfaces while preserving their geometric details. This triangulation procedure starts with larger coarse triangles and gradually adds points to the triangulated mesh. After each point is added, the generated triangles are checked to ensure that they satisfy the Delaunay triangulation criteria:
  • No other vertex lies within the interior of any of the circumcircles of the triangles constructed by three nearby vertices in the planar surface (Figure 21).
  • The minimum interior angle is maximized. Therefore, triangles are generated as equiangular as possible and long, thin triangles are avoided.
This procedure provides a unique set of simple polygons (triangles) for each planar surface that can be easily and efficiently rendered using OpenGL API routines.
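A minimal sketch of this tessellation step, using SciPy's Delaunay triangulation on the in-plane 2D coordinates of a surface's points, is given below; the constrained handling of holes that the full procedure requires is not shown, and the in-plane basis construction is an assumption made here.

```python
import numpy as np
from scipy.spatial import Delaunay

def tessellate_planar_surface(points_3d, normal):
    """Project the surface points onto their plane and triangulate them.
    Returns the triangles as rows of vertex indices into `points_3d`."""
    n = normal / np.linalg.norm(normal)
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:      # normal is (almost) vertical
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    pts_2d = np.column_stack((points_3d @ u, points_3d @ v))
    tri = Delaunay(pts_2d)            # empty-circumcircle criterion
    return tri.simplices
```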

3.4.2. Texture Mapping

Once the segmented planar regions have been tessellated, the texture mapping procedure is implemented to apply the 2D texture images to the laser scanning-derived planar surfaces in object space and visualize them in the 2D screen space. The textured planar surfaces can also be individually investigated for object extraction and interpretation purposes. The texture mapping procedure is implemented in four successive steps:
  • Specification of the texture,
  • Assignment of texture coordinates to the triangulated polygon vertices,
  • Specification of filtering method, and
  • Drawing the surfaces on the screen using geometric coordinates and texture images.
In the first step, the textures corresponding to individual triangular surfaces are read from the processor memory and assigned a specific ID. In the second step, the image coordinates of these textures are arranged in a specific order to ensure proper mapping of the texture image onto the triangulated polygon (Figure 22). The object coordinates of the triangle vertices determine where a particular vertex is rendered on the screen, and the image coordinates specify which pixel in the texture image is assigned to that vertex.
The identified textures and the objects to be textured (on the screen) are rarely the same size in pixels. Therefore, in the third step of this texture mapping procedure, filtering methods are employed to determine how each pixel in the texture image should be expanded or shrunk to match a screen pixel (Figure 23). In this case, the color information for each pixel on the screen is sampled or interpolated based on the utilized texture image. If the resolution of the texture image is lower than the screen resolution, the color information for each pixel in the filtered texture image is sampled using the color information from the nearest neighboring pixel in the original texture image. However, if the resolution of the texture image is higher than the screen resolution, the color assigned to each pixel in the filtered texture is derived by linear interpolation of its neighboring pixels in the original texture image. The OpenGL API supports alternative types of filters for the texture mapping procedure. The filters that provide better results need greater computational power from the GPU and may reduce the visualization frame rate. Consequently, the appropriate filter type is chosen while considering the balance between the desired result and the capability of the target platform. In our case, since the resolution of the utilized images is higher than the screen resolution, the bilinear interpolation technique introduced in [61] is applied. Finally, the filtered textures are mapped onto the transformed surfaces on the screen using OpenGL routines and visualized to provide a three-dimensional perception of the two-dimensional textured model. The rendered scenes (textured planar surfaces) can be utilized for object extraction, processing, and interpretation activities.
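The four steps described in this subsection map onto a handful of OpenGL calls. The fragment below is a schematic PyOpenGL sketch for a single textured triangle; context creation, image loading, and the projection setup are omitted and assumed to exist elsewhere, and GL_LINEAR is used as a stand-in for the bilinear filtering discussed above.

```python
from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D, glTexParameteri,
                       glEnable, glBegin, glEnd, glTexCoord2f, glVertex3f,
                       GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE, GL_LINEAR,
                       GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_TRIANGLES)

def draw_textured_triangle(image, width, height, tex_coords, vertices):
    """Steps 1-4: specify the texture, assign texture coordinates to the
    triangle vertices, choose a (bilinear) filter, and draw the triangle."""
    tex_id = glGenTextures(1)                                         # step 1: texture ID
    glBindTexture(GL_TEXTURE_2D, tex_id)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, image)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)  # step 3: linear filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glEnable(GL_TEXTURE_2D)
    glBegin(GL_TRIANGLES)                                             # step 4: draw on the screen
    for (s, t), (x, y, z) in zip(tex_coords, vertices):
        glTexCoord2f(s, t)                                            # step 2: texture coordinate per vertex
        glVertex3f(x, y, z)
    glEnd()
```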

4. Experimental Results

In this paper, we introduced a new approach for realistic 3D surface reconstruction using laser scanning data and imagery collected onboard modern photogrammetric systems. This approach tries to overcome the computational inefficiency of traditional point-based 3D surface reconstruction approaches by implementing a surface-based texturing procedure, introducing novel visibility analysis and occlusion detection techniques, and conducting an adaptive texture mapping procedure. In order to evaluate the performance of the proposed approach for the realistic 3D reconstruction of planar surfaces, three different datasets were chosen and utilized in this section.
The first dataset comprises a laser scanning point cloud collected using a Riegl LMS-Q1560 airborne scanner and overlapping imagery acquired using a Canon PowerShot S110 camera mounted on a 3DR X8+ multicopter drone at an average flying height of 85 m over an educational complex in Calgary, AB, Canada. Table 1 summarizes the specifications of the utilized laser scanning data and imagery as presented by the data providers. Figure 24a,b show an overview of the area of interest as seen from Google Earth and the provided laser scanning point cloud over this complex (displayed in different colors according to height), and Figure 24c shows two of the images captured over this complex.
The second dataset includes a laser scanning point cloud collected using a Leica ALS50 airborne laser scanner and overlapping near-nadir imagery acquired using a Rollei ACI Pro (P65+) camera mounted on an integrated airborne mapping system at an average flying height of 400 m over an urban area in Burnaby, BC, Canada. Table 2 summarizes the specifications of the utilized laser scanning data and imagery as presented by the data providers. Figure 25a–c show an overview of the area of interest as seen from Google Earth, the provided laser scanning point cloud (displayed in different colors according to height), and some of the images acquired over this area.
The third dataset comprises a laser scanning point cloud collected by an Optech ALTM 3100 airborne laser scanner and overlapping oblique and nadir imagery acquired using a GoPro Hero 3+ camera mounted on a DJI Phantom 2 UAV at an average flying height of 26 m over a complex building located in Calgary, AB, Canada. Table 3 summarizes the specifications of the utilized laser scanning data and imagery as presented by the data providers. Figure 26a–c show an overview of the building of interest as seen from Google Earth, the provided laser scanning point cloud (displayed in different colors according to height), and some of the images acquired over this building.
The performance analysis of the proposed 3D surface reconstruction procedure is implemented in four successive steps. In the first step, the quality of the planar surfaces extracted from the provided laser scanning datasets is evaluated using a novel quality control procedure. The second step of this procedure is devoted to the quality assessment of the realistic 3D reconstruction of the extracted planar surfaces using the provided imagery. Afterwards, in the third step of this performance evaluation, the quality of the proposed 3D surface modeling procedure is analyzed using the quality control technique presented in [62]. Finally, the computational efficiency of the introduced 3D surface reconstruction technique is assessed in comparison with a traditional point-based surface reconstruction approach. In the following subsections, these experiments are presented in detail.

4.1. Quality Control of the Extracted Planar Surfaces from Laser Scanning Data

To investigate the performance of the implemented approach for the segmentation and extraction of planar surfaces, experiments using the provided laser scanning data are conducted. These point clouds include a variety of planar surfaces such as building rooftops, building facades, and road surfaces. The provided point clouds are initially processed using the presented segmentation approach to extract individual planar surfaces while considering thresholds which have been selected based on general knowledge about the nature of the scanned area and the utilized scanner. Figure 27a–c show planar surface segmentation/extraction outcome for these laser scanning data.
The performance of the implemented planar surface extraction approach is then quantitatively evaluated using a novel quality control procedure proposed in [63]. This quality control approach keeps track of the different problems affecting segmentation and planar surface extraction from point clouds (i.e., unincorporated points, over-segmentation, under-segmentation, and invading/invaded surfaces), identifies the frequency of their occurrence, and recommends solutions for resolving these problems without the need for reference data and regardless of the utilized processing procedure. Table 4 summarizes the achieved quality control measures for the planar surfaces extracted from these laser scanning point clouds. The identified segmentation problems are then resolved using the solutions proposed in this quality control procedure. Figure 28a–c show the planar surfaces extracted from the provided laser scanning datasets after the implementation of the utilized quality control procedure. The qualitative analysis of the final outcome, through visual inspection of the achieved results and re-evaluation of the modified planar surfaces using the same quality control procedure, verifies that this technique is capable of effectively identifying and resolving segmentation/surface extraction issues.
The re-evaluation of the modified planar surfaces can be carried out through analysis of the achieved roughness factors for the extracted surfaces. The surface roughness factors, which are preliminary tools for the investigation of the quality of the extracted planar surfaces, are defined as the deviation of the segmented points within those surfaces from the best-fitted planes to those points. This quality measure is estimated as in Equation (4).
$$\mathrm{Surface\ Roughness\ Factor}_j = RMSE_{nd_i} = \sqrt{\frac{\sum_{i=1}^{n} nd_i^2}{n}} \quad (4)$$
where $nd_i$ is the normal distance between point i (which belongs to planar surface j) and the best-fitted plane through all of surface j's points, and n is the total number of points aggregated in the segmented planar surface j. In order to perform this evaluation and ensure that surfaces with the intended accuracy have been extracted, the accumulation of extracted surfaces within the provided laser scanning datasets with respect to their estimated surface roughness factors is investigated (Figure 29a–c). The analysis of the estimated surface roughness factors for the surfaces extracted from all three provided laser scanning point clouds verifies that they meet the required accuracy standards for the 3D surface reconstruction procedure, especially after performing the quality control procedure on the laser scanning data segmentation outcome. The required accuracy standards differ among applications; however, for the precise surveying and 3D modelling applications intended here, they are at the centimeter level.
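Equation (4) is simply the RMSE of the normal distances; a minimal sketch, assuming the plane parameters of a segmented surface are given in the form n·x + d = 0 with a unit normal:

```python
import numpy as np

def surface_roughness(points, normal, d):
    """Equation (4): RMSE of the normal distances between the segmented points
    and the best-fitted plane n.x + d = 0 (unit normal assumed)."""
    nd = points @ normal + d
    return np.sqrt(np.mean(nd ** 2))
```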

4.2. Evaluation of Realistic 3D Reconstruction of the Extracted Planar Surfaces Using Acquired Imagery

In this subsection, the feasibility of the proposed approach for the realistic 3D reconstruction of the planar surfaces extracted from laser scanning data is investigated through experiments using the imagery collected onboard UAV-borne and airborne photogrammetric systems. As mentioned in the previous section, these experiments are conducted while considering the visibility of the extracted planar surfaces within the provided nadir and oblique imagery. To provide a more complete visualization of the scanned scenes, the laser scanning points which have not been aggregated into planar surfaces are individually projected onto their enclosing images and colored using the spectral information of their nearest neighboring pixels, after a visibility analysis using the Z-buffer approach. Figure 30, Figure 31 and Figure 32 show different views of the realistically-reconstructed 3D planar surfaces and the colored non-segmented points of the datasets of interest, while considering probable occlusion problems (the occluded areas are visualized in black).
Qualitative evaluation of the derived texturing results through visual inspection of Figure 30, Figure 31 and Figure 32 verifies the feasibility of the proposed approach for realistic 3D surface reconstruction using laser scanning data and overlapping images collected onboard various photogrammetric systems, while considering the visibility of those surfaces within the images.

4.3. Quality Control of 3D Surface Reconstruction Outcome

The third set of conducted experiments aims to evaluate the quality of the 3D surfaces reconstructed from laser scanning data and overlapping imagery collected onboard photogrammetric systems. This quality control procedure is conducted in two steps, as proposed in [62]. The first step of this procedure, which is devoted to the qualitative evaluation of the reconstructed 3D surfaces and the identification of occluded surfaces within the captured images, has already been conducted during the implementation of the proposed 3D surface reconstruction technique (occluded surfaces are represented as black surfaces in the provided reconstruction results (Figure 30, Figure 31 and Figure 32)). The second step of this quality control procedure is implemented with respect to control surfaces extracted from higher quality data sources (point clouds acquired using more accurate laser scanners or derived through image dense matching techniques).
In order to carry out such an evaluation, a few corresponding 3D surfaces are first identified among the surfaces extracted from both investigated data sources using a 3D surface matching technique [64]. Once the correspondence between the extracted surfaces from both data sources is established, the optimal rotation, translation, and scale parameters between the two point clouds are estimated. The estimated transformation parameters facilitate the identification of the other corresponding surfaces between the two data sources. The corresponding surfaces are then compared according to their estimated surface parameters (the components of the normal vectors to the extracted surfaces). The deviation of the extracted surfaces' normals from the normals to their corresponding control surfaces is then computed and utilized as a measure for the quality assessment of the reconstructed surfaces. Figure 33 shows how the quality control of the reconstructed surfaces with respect to the control surfaces is performed.
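The per-surface quality measure used here, the angular deviation between the normal of a reconstructed surface and the normal of its matched control surface, can be computed as in the short sketch below (assuming the surface matching and the transformation into a common frame have already been applied):

```python
import numpy as np

def normal_deviation_deg(n_reconstructed, n_control):
    """Angle (in degrees) between the normal of a reconstructed surface and
    the normal of its matched control surface."""
    a = n_reconstructed / np.linalg.norm(n_reconstructed)
    b = n_control / np.linalg.norm(n_control)
    return np.degrees(np.arccos(np.clip(abs(a.dot(b)), 0.0, 1.0)))
```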
The quality assessment of the surfaces reconstructed from the provided laser scanning and imagery datasets is then carried out by comparison with control surfaces extracted from overlapping higher-quality laser scanning or image-based point clouds. Figure 34 provides pie charts representing the distribution of the reconstructed surfaces according to their deviations from the control datasets. To explain these charts in more detail, the discrepancy angle between the normal vectors of all the reconstructed surfaces and the normal vectors of their matched surfaces within the control data is estimated and classified. The percentage of the surfaces accumulated in each class, with respect to all the reconstructed surfaces, is then calculated and represented on the provided pie charts. The evaluation of the achieved results shows that for the surfaces reconstructed from the first two datasets, which were compared to control surfaces extracted from higher-quality laser scanning point clouds, the deviations are smaller than those of the surfaces reconstructed from the third dataset, where the control surfaces were extracted from a higher-density image-based point cloud. Furthermore, the analysis of the reconstructed surfaces' deviations from their control surfaces shows that the proposed surface reconstruction approach provides realistic 3D models with acceptable accuracy (in our case, below 15° deviation from the control surfaces for more than 75% of the reconstructed surfaces).

4.4. Evaluation of the Computational Efficiency of the Proposed 3D Scene Reconstruction Technique

The performance evaluation of the proposed technique for 3D surface reconstruction using overlapping laser scanning data and imagery is concluded with a comparative analysis of its computational efficiency relative to traditional point-based and image-based scene reconstruction techniques. In order to perform this comparative analysis, a laser scanning and imagery dataset collected over a complex building is utilized. The laser scanning point cloud has been collected using a Leica ALS50 airborne laser scanner and the overlapping imagery has been acquired using a Leica ADS100 digital imaging system (Figure 35). The computational efficiency of the proposed surface-based 3D reconstruction approach, in comparison with a traditional point-based reconstruction technique, is then investigated by comparing the processing time required for 3D reconstruction using these two techniques. Figure 36 shows the reconstructed 3D surfaces using the two techniques, and Table 5 lists the number of points projected onto the images and the processing time required for realistic 3D surface reconstruction using these two approaches as well as image-based surface reconstruction (dense matching). The comparison of the tabulated processing times shows that the proposed surface-based technique greatly improves the efficiency of the realistic 3D surface reconstruction procedure. In addition, the visual inspection of the reconstructed surfaces using both approaches (Figure 36a,b) verifies that the proposed surface-based technique provides a more homogeneous 3D representation of the scanned surfaces.
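The source of this speed-up can be illustrated with a back-of-the-envelope sketch: the point-based approach projects every laser point onto the candidate images, whereas the surface-based approach projects only the boundary points of the extracted planar surfaces. The counts below follow Table 5 and are used purely for illustration.

```python
total_laser_points = 45_370   # all laser points projected by the point-based approach (Table 5)
boundary_points = 3_449       # boundary points projected by the surface-based approach (Table 5)

reduction = 100.0 * (1.0 - boundary_points / total_laser_points)
print(f"Projection workload reduced by about {reduction:.1f}%")  # ~92.4%
```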

5. Conclusions and Recommendations for Future Research Work

Over the past few decades, accurate 3D surface reconstruction has been established as a prerequisite for a variety of mapping, modelling, and monitoring applications. The data required for realistic 3D reconstruction of surfaces satisfying the needs of these applications are usually provided by high-end photogrammetric and alternative active digital imaging systems. Despite their proven feasibility for collecting the required data for 3D scene reconstruction, these systems cannot be widely used due to their high initial costs and the high level of expertise needed for their operation. To overcome these limitations, considerable research effort has been devoted to providing lower-cost passive and active digital imaging systems. However, the application of these newly-developed consumer-grade digital imaging systems introduces challenges which negatively affect the quality of the final reconstructed 3D models. Therefore, novel 3D scene reconstruction techniques need to be developed which address these challenges while preserving the quality of the final reconstructed 3D surfaces in a scene.
Hence, a new surface-based technique for 3D scene reconstruction using laser scanning data and imagery captured by modern photogrammetric systems was introduced in this paper. The proposed technique is implemented in four steps. In the first step, the laser scanning point cloud is processed using a novel segmentation approach to extract the individually scanned planar surfaces. In the second step, the visibility of the extracted planar surfaces within the acquired images is investigated in two stages: a new visibility analysis procedure is initially conducted to identify the visible parts of the extracted surfaces within individual images, and a novel occlusion detection approach is consecutively implemented to check whether the visible part(s) of a surface occlude, or are occluded by, other extracted surfaces within the field of view of individual images. In the third step, a surface decomposition procedure is performed to determine which part(s) of each planar surface can best be textured using individual images. An efficient rendering technique is finally used to apply the identified textures within the images to the extracted surfaces in 3D space and to visualize them on a 2D screen.
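A structural sketch of this four-step procedure is given below. Every function name is a hypothetical placeholder standing in for the corresponding procedure described in the paper, and the stub bodies are intentionally left empty; the sketch only conveys the order and data flow of the steps, not the authors' implementation.

```python
def segment_planar_surfaces(point_cloud): ...              # Step 1: adaptive segmentation
def find_visible_images(surface, images): ...              # Step 2a: visibility analysis
def remove_occluded_parts(surface, surfaces, views): ...   # Step 2b: occlusion detection
def decompose_surface(surface, views): ...                 # Step 3: image-wise decomposition
def render_parts(parts, images): ...                       # Step 4: texture rendering

def reconstruct_scene(point_cloud, images):
    surfaces = segment_planar_surfaces(point_cloud)
    scene = []
    for surface in surfaces:
        views = find_visible_images(surface, images)
        views = remove_occluded_parts(surface, surfaces, views)
        parts = decompose_surface(surface, views)
        scene.append(render_parts(parts, images))
    return scene
```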
The feasibility and computational efficiency of the proposed approach were then verified through experiments conducted using real laser scanning data and imagery captured by both high-end and modern low-cost photogrammetric systems. The quality evaluation of the surfaces extracted from the provided laser scanning data was initially conducted using a novel quality control procedure to identify problems affecting the quality of planar feature segmentation and to provide solutions for resolving these issues. The qualitative and quantitative analysis of the reconstructed surfaces was then performed through visual inspection of the realistically-reconstructed 3D surfaces and by comparing their quality with control surfaces extracted from higher-quality image-based or laser scanning point clouds. The outcome of these experiments verified that the proposed approach delivers a more homogeneous representation of the scanned surfaces with acceptable accuracy. The final experiment was carried out to investigate the computational efficiency of the proposed surface-based 3D scene reconstruction approach in comparison with traditional point-based scene reconstruction techniques using a laser scanning point cloud and imagery. The comparative analysis of the processing times achieved by both techniques proved that the proposed surface-based 3D scene reconstruction technique overcomes the computational inefficiency of the previously-developed point-based reconstruction techniques. This improvement in computational efficiency is mainly achieved by checking the visibility of only the boundary points of the extracted surfaces within the acquired images, rather than all of their points.
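As a minimal, self-contained sketch of this boundary-point check, the code below projects a few 3D boundary points into an idealized, distortion-free frame camera via the collinearity condition and tests whether they fall inside the image format. The camera parameters and point coordinates are assumptions for illustration only, not the exact formulation used in the paper.

```python
import numpy as np

def boundary_points_in_image(points_xyz, cam_xyz, R, f, half_w, half_h):
    """Project 3D boundary points into an ideal frame camera and flag those
    inside the image format (f, half_w, half_h in the same units, e.g., mm)."""
    d = (np.asarray(points_xyz, dtype=float) - cam_xyz) @ R.T  # offsets in the image frame
    x = -f * d[:, 0] / d[:, 2]                                 # collinearity equations
    y = -f * d[:, 1] / d[:, 2]
    return (np.abs(x) <= half_w) & (np.abs(y) <= half_h)

# Hypothetical nadir exposure 500 m above the ground, 100 mm focal length,
# 120 mm x 120 mm image format; two roof-boundary points are tested.
corners = np.array([[50.0, 20.0, 10.0], [800.0, 20.0, 10.0]])
print(boundary_points_in_image(corners, cam_xyz=np.array([0.0, 0.0, 500.0]),
                               R=np.eye(3), f=100.0, half_w=60.0, half_h=60.0))
# -> [ True False ]: the second point falls outside the image format
```

In this scheme, a surface is considered fully visible in an image only when all of its boundary points pass the test; partially visible surfaces proceed to the occlusion detection and decomposition steps.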
In conclusion, the contributions of the proposed realistic 3D surface reconstruction technique can be summarized as follows:
  • Implementation of a surface-based 3D reconstruction procedure, as opposed to previously-developed point-based techniques,
  • Expansion of the occlusion/visibility analysis approaches to handle surface-based texturing procedure,
  • Rendering of the extracted surfaces using the images in which they are fully or partially visible, and
  • Enhancement of the interpretability of the segmented planar surfaces.
Future research work will concentrate on extending the proposed approach to the 3D reconstruction of non-planar surfaces (e.g., linear/cylindrical features, cones, spheres) in order to provide a complete view of the scanned scenes. It will also focus on the development of boundary regularization techniques for the extracted planar surfaces to generate more visually appealing 3D reconstruction outcomes. In addition, the textures assigned to the extracted planar surfaces can be processed using image processing techniques (e.g., image segmentation) to identify and quantify characteristics of individual planar surfaces (e.g., possible deteriorations or cracks that are not properly represented in the laser scanning point cloud).

Acknowledgments

The authors gratefully acknowledge the Natural Sciences and Engineering Research Council of Canada (Strategic Grants Program) and the Canada Research Chairs Program for the financial support of this research work.

Author Contributions

Z. Lari and A. Habib conceived the idea and designed the proposed methodology. Z. Lari implemented the methodology and conducted experiments under N. El-Sheimy’s supervision. The manuscript was written by Z. Lari, reviewed, and revised by N. El-Sheimy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Poullis, C.; You, S. 3D reconstruction of urban areas. In Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Hangzhou, China, 16–19 May 2011; pp. 33–40.
  2. Campos, R.; Garcia, R.; Alliez, P.; Yvinec, M. A surface reconstruction method for in-detail underwater 3D optical mapping. Int. J. Rob. Res. 2015, 34, 64–89. [Google Scholar] [CrossRef]
  3. Kwak, E.; Detchev, I.; Habib, A.; El-badry, M.; Hughes, C. Precise photogrammetric reconstruction using model-based image fitting for 3D beam deformation monitoring. J. Surv. Eng. 2013, 139, 143–155. [Google Scholar] [CrossRef]
  4. Remondino, F. Heritage recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  5. Zou, Y.; Chen, W.; Wu, X.; Liu, Z. Indoor localization and 3D scene reconstruction for mobile robots using the Microsoft Kinect sensor. In Proceedings of the IEEE 10th International Conference on Industrial Informatics, Beijing, China, 25–27 July 2012; pp. 1182–1187.
  6. Kemec, S.; Sebnem Duzgun, H.; Zlatanova, S. A conceptual framework for 3D visualization to support urban disaster management. In Proceedings of the Joint Symposium of ICA WG on CEWaCM and JBGIS Gi4DM, Prague, Czech Republic, 19–22 January 2009; pp. 268–278.
  7. Arefi, H. From LIDAR Point Clouds to 3D Building Models. Thesis, University of Tehran, Tehran, Iran, 2009. [Google Scholar]
  8. Vosselman, G.; Maas, H.-G. Airborne and Terrestrial Laser Scanning; Whittles Publishing: Dunbeath, Scotland, 2010. [Google Scholar]
  9. Lari, Z.; El-Sheimy, N. System considerations and challenges in 3d mapping and modelling using low-cost UAV images. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W3, 343–348. [Google Scholar] [CrossRef]
  10. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  11. Lamond, B.; Watson, G. Hybrid rendering - a new integration of photogrammetry and laser scanning for image based rendering. In Proceedings of the Theory and Practice of Computer Graphics, Bournemouth, UK, 8–10 June 2004; pp. 179–186.
  12. Alshawabkeh, Y.; Haala, N. Automatic multi-image photo-texturing of complex 3D scenes. In Proceedings of the CIPA 2005 XX International Symposium, Torino, Italy, 26 September–1 October 2005.
  13. El-Omari, S.; Moselhi, O. Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Autom. Constr. 2008, 18, 1–9. [Google Scholar] [CrossRef]
  14. Koch, M.; Kaehler, M. Combining 3D laser-scanning and close-range Photogrammetry: An approach to exploit the strength of both methods. In Proceedings of the 37th International Conference on Computer Applications to Archaeology, Williamsburg, VA, USA, 22–26 March 2009; pp. 1–7.
  15. Kim, C.; Habib, A. Object-based integration of photogrammetric and LiDAR data for automated generation of complex polyhedral building models. Sensors 2009, 9, 5679–5701. [Google Scholar] [CrossRef] [PubMed]
  16. Nex, F.; Remondino, F.; Rinaudo, F. Integration of range and image data for building reconstruction. In Proceedings of the Videometrics, Range Imaging, and Applications XI 2011, Munich, Germany, 23 May 2011; Volume 8085, pp. 80850A:1–80850A:14.
  17. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Seguí, A.E.; Haddad, N.; Akasheh, T. Integration of Laser Scanning and Imagery for Photorealistic 3D Architectural Documentation. In Laser Scanning, Theory and Applications; Wang, C.-C., Ed.; InTech: Rijeka, Croatia, 2011; pp. 413–430. [Google Scholar]
  18. Moussa, W. Integration of Digital Photogrammetry and Terrestrial Laser Scanning for Cultural Heritage Data Recording; University of Stuttgart: Stuttgart, Germany, 2014. [Google Scholar]
  19. Kwak, E.; Habib, A. Automatic representation and reconstruction of DBM from LiDAR data using Recursive Minimum Bounding Rectangle. ISPRS J. Photogramm. Remote Sens. 2014, 93, 171–191. [Google Scholar] [CrossRef]
  20. Lichtenauer, J.F.; Sirmacek, B. A semi-automatic procedure for texturing of laser scanning point clouds with Google streetview images. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3-W3, 109–114. [Google Scholar] [CrossRef]
  21. Habib, A.; Ghanma, M.; Mitishita, E.A. Co-registration of photogrammetric and LiDAR data: Methodology and case study. Braz. J. Cartogr. 2004, 56, 1–13. [Google Scholar]
  22. Chen, L.; Teo, T.; Shao, Y.; Lai, Y.; Rau, J. Fusion of LiDAR data and optical imagery for building modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35 (Pt. B4), 732–737. [Google Scholar]
  23. Habib, A.; Schenk, T. New approach for matching surfaces from laser scanners and optical sensors. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1999, 32, 55–61. [Google Scholar]
  24. Vosselman, G.; Gorte, B.G.H.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 8-W2, 33–38. [Google Scholar]
  25. Guelch, E. Advanced matching techniques for high precision surface and terrain models. In Proceedings of the Photogrammetric Week 2009, Heidelberg, Germany, 7–11 September 2009; pp. 303–315.
  26. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the Symposium on Geometry Processing 2006, Sardinia, Italy, 26–28 June 2006; pp. 61–70.
  27. Kraus, K. Photogrammetry, Volume 1, Fundamentals and Standard Processes; Dummler Verlag: Bonn, Germany, 1993. [Google Scholar]
  28. Novak, K. Rectification of digital imagery. Photogramm. Eng. Remote Sens. 1992, 58, 339–344. [Google Scholar]
  29. Habib, A.; Kim, E.M.; Kim, C. New methodologies for true orthophoto generation. Photogramm. Eng. Remote Sens. 2007, 73, 25–36. [Google Scholar] [CrossRef]
  30. El-Hakim, S.F.; Remondino, F.; Voltolini, F. Integrating Techniques for Detail and Photo-Realistic 3D Modelling of Castles. GIM Int. 2003, 22, 21–25. [Google Scholar]
  31. Wang, L.; Kang, S.B.; Szeliski, R.; Sham, H.Y. Optimal texture map generation using multiple views. In Proceedings of the Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 347–354.
  32. Bannai, N.; Fisher, R.B.; Agathos, A. Multiple color texture map fusion for 3D models. Pattern Recognit. Lett. 2007, 28, 748–758. [Google Scholar] [CrossRef]
  33. Xu, L.; Li, E.; Li, J.; Chen, Y.; Zhang, Y. A general texture mapping framework for image-based 3D modeling. In Proceedings of the 17th IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 26–29 September 2010; pp. 2713–2716.
  34. Zhu, L.; Hyyppä, J.; Kukko, A.; Kaartinen, H.; Chen, R. Photorealistic building reconstruction from mobile laser scanning data. Remote Sens. 2011, 3, 1406–1426. [Google Scholar] [CrossRef]
  35. Chen, Z.; Zhou, J.; Chen, Y.; Wang, G. 3D texture mapping in multi-view reconstruction. In Advances in Visual Computing; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2012; pp. 359–371. [Google Scholar]
  36. Ahmar, F.; Jansa, J.; Ries, C. The generation of true orthophotos using a 3D building model in conjunction with a conventional DTM. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1998, 32(Pt. 4), 16–22. [Google Scholar]
  37. Watkins, G.S. A Real-Time Visible Surface Algorithm; Computer Science, University of Utah: Salt Lake City, UT, USA, 1970. [Google Scholar]
  38. Laine, S.; Karras, T. Two methods for fast ray-cast ambient occlusion. In Proceedings of the Eurographics Symposium on Rendering, Saarbrücken, Germany, 28–30 June 2010. [CrossRef]
  39. Heckbert, P.; Garland, M. Multiresolution modeling for fast rendering. In Proceedings of Graphics Interface ’94, Banff, AB, Canada, 18–20 May 1994; pp. 43–50.
  40. Mikhail, E.M.; Bethel, J.; McGlone, J.C. Introduction to Modern Photogrammetry; Wiley and Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
  41. Oliveira, H.C.; Galo, M. Occlusion detection by height gradient for true orthophoto generation using LiDAR data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1-W1, 275–280. [Google Scholar] [CrossRef]
  42. Kuzmin, Y.P.; Korytnik, S.A.; Long, O. Polygon-based true orthophoto generation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, XXXV, 529–531. [Google Scholar]
  43. Chen, L.-C.; Teo, T.-A.; Wen, J.-Y.; Rau, J.-Y. Occlusion-compensated true orthorectification for high-resolution satellite images. Photogramm. Rec. 2007, 22, 39–52. [Google Scholar] [CrossRef]
  44. Tran, A.T.; Harada, K. Depth-aided tracking multiple objects under occlusion. J. Signal Inf. Process. 2013, 4, 299–307. [Google Scholar] [CrossRef]
  45. Kovalčík, V.; Sochor, J. Fast rendering of complex dynamic scenes. 2006, 81–87. [Google Scholar]
  46. Lari, Z.; Habib, A. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2014, 93, 192–212. [Google Scholar] [CrossRef]
  47. Bang, K.I. Alternative methodologies for LiDAR system calibration. Ph.D. Dissertation, University of Calgary, Calgary, AB, Canada, 2010. [Google Scholar]
  48. Habib, A.; Bang, K.I.; Kersting, A.P.; Lee, D.-C. Error budget of LiDAR systems and quality control of the derived data. Photogramm. Eng. Remote Sens. 2009, 75, 1093–1108. [Google Scholar] [CrossRef]
  49. Habib, A.F.; Kersting, A.P.; Bang, K.-I.; Zhai, R.; Al-Durgham, M. A strip adjustment procedure to mitigate the impact of inaccurate mounting parameters in parallel lidar strips. Photogramm. Rec. 2009, 24, 171–195. [Google Scholar] [CrossRef]
  50. Habib, A.; Bang, K.-I.; Kersting, A.P.; Chow, J. Alternative methodologies for LiDAR system calibration. Remote Sens. 2010, 2, 874–907. [Google Scholar] [CrossRef]
  51. Pauly, M.; Gross, M.; Kobbelt, L.P. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization (VIS); IEEE Computer Society: Washington, DC, USA, 2002; pp. 163–170. [Google Scholar]
  52. Filin, S.; Pfeifer, N. Neighborhood systems for airborne laser data. Photogramm. Eng. Remote Sens. 2005, 71, 743–755. [Google Scholar] [CrossRef]
  53. Lari, Z.; Habib, A. New approaches for estimating the local point density and its impact on LiDAR data segmentation. Photogramm. Eng. Remote Sens. 2013, 79, 195–207. [Google Scholar] [CrossRef]
  54. Samet, H. An overview of quadtrees, octrees, and related hierarchical data structures. In Theoretical Foundations of Computer Graphics and CAD; Earnshaw, D.R.A., Ed.; NATO ASI Series; Springer: Berlin, Germany, 1988; pp. 51–68. [Google Scholar]
  55. Lari, Z.; Habib, A. A new approach for segmentation-based texturing of laser scanning data. ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 115–121. [Google Scholar] [CrossRef]
  56. Weiler, K.; Atherton, P. Hidden surface removal using polygon area sorting. In Proceedings of the 4th Annual Conference on Computer Graphics and Interactive Techniques, San Jose, CA, USA, 20–22 July 1977; ACM: New York, NY, USA, 1977; pp. 214–222. [Google Scholar]
  57. Lari, Z. Adaptive Processing of Laser Scanning Data and Texturing of the Segmentation Outcome. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2014. [Google Scholar]
  58. Shreiner, D. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1, 7th ed.; Addison-Wesley Professional: Indianapolis, IN, USA, 2009. [Google Scholar]
  59. Bowyer, A. Computing Dirichlet tessellations. Comput. J. 1981, 24, 162–166. [Google Scholar] [CrossRef]
  60. Watson, D.F. Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. Comput. J. 1981, 24, 167–172. [Google Scholar] [CrossRef]
  61. Subramanian, S.H. A novel algorithm to optimize image scaling in OpenCV: Emulation of floating point arithmetic in bilinear interpolation. Int. J. Comput. Electr. Eng. 2012, 260–263. [Google Scholar] [CrossRef]
  62. Lari, Z.; El-Sheimy, N. Quality analysis of 3D surface reconstruction using multi-platform photogrammetric systems. ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B3, 57–64. [Google Scholar] [CrossRef]
  63. Lari, Z.; Al-Durgham, K.; Habib, A. A novel quality control procedure for the evaluation of laser scanning data segmentation. ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 1, 207–210. [Google Scholar] [CrossRef]
  64. Habib, A.F.; Cheng, R.W.T. Surface matching strategy for quality control of LIDAR data. In Innovations in 3D Geo Information Systems; Abdul-Rahman, D.A., Zlatanova, D.S., Coors, D.V., Eds.; Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2006; pp. 67–83. [Google Scholar]
Figure 1. The outline of the proposed 3D scene reconstruction approach.
Figure 2. Established local neighborhood for (a) the detection and (b) precise representation of planar features (through iterative plane fitting procedure).
Figure 3. Established adaptive cylindrical buffer for local point density estimation in a planar neighborhood.
Figure 4. Defined attribute points for three locally-classified planar neighborhoods.
Figure 5. 2D representation of acceptable spatial and angular deviations within a given planar surface.
Figure 6. 2D illustration of clustering procedure in parameter domain: (a) approximate peak detection using octree space partitioning strategy, and (b) precise peak detection.
Figure 7. Investigation of the visibility of a planar surface in an image.
Figure 8. Representation of (a) a fully visible and (b) a partially visible planar surface in an image.
Figure 9. Occlusion investigation of a boundary point of the intended planar surface within a given image: (a) visible boundary point; (b) occluded boundary point.
Figure 10. The surfaces occluding (green surface) and being occluded (yellow surface) by a boundary point of the intended planar surface.
Figure 11. Identification and omission of the occluded part of the intended planar surface—occluding by another planar surface.
Figure 12. The occlusions caused by the intended planar surface: (a) all the boundary points of the intended surface projected inside the occluded surface and (b) a section of the boundary points projected inside the occluded surface.
Figure 13. Rendering of a fully-visible planar surface within (a) a single appropriate image or (b) multiple appropriate images.
Figure 14. A partially-visible planar surface within multiple overlapping images.
Figure 15. Visible segments in (a) multiple images (segment12 and segment23) and (b) in a single image (segment11, segment22, and segment33).
Figure 16. Rendering a segment visible in two images (segment12).
Figure 17. The intersection between two visible surfaces within two images where one of them includes an occluded area.
Figure 18. Visible segments of two surfaces in single or multiple images.
Figure 19. Transformations between texture, object, and screen spaces.
Figure 20. Different types of planar surfaces: (a) convex, (b) concave, (c) hollow, and (d) complex surfaces.
Figure 21. Delaunay triangulation of (a) a concave and (b) a hollow planar surface.
Figure 22. Correspondence between object space and texture space.
Figure 23. Filtering methods for adapting a texture image to the screen resolution.
Figure 24. (a) An overview of the educational complex in Calgary (Canada) from Google Earth; (b) laser scanning point cloud colored according to height; and (c) two of the acquired images over the same complex.
Figure 25. (a) An overview of the area of interest (in Burnaby, BC, Canada) from Google Earth; (b) laser scanning point cloud colored according to height; and (c) three of the captured images over the same area.
Figure 26. (a) An overview of the building of interest (located in Calgary, AB, Canada) from Google Earth; (b) laser scanning point cloud colored according to height; and (c) three of the captured images over the same building.
Figure 27. Planar surface extraction results for the (a) first; (b) second; and (c) third laser scanning datasets before quality control procedure.
Figure 28. Planar surface extraction results for the (a) first; (b) second; and (c) third laser scanning datasets after quality control procedure.
Figure 29. Analysis of the quality of the extracted surfaces from the (a) first; (b) second; and (c) third provided laser scanning point clouds after quality control of the segmentation/planar surface extraction outcome.
Figure 30. Realistic 3D surface reconstruction outcome for the first dataset: (a) and (b) two different views of the textured planar surfaces and individually colored non-segmented points.
Figure 31. Realistic 3D surface reconstruction outcome for the second dataset: a single view of the textured planar surfaces and individually colored non-segmented points.
Figure 32. Realistic 3D surface reconstruction outcome for the third dataset: (a) and (b) two different views of the textured planar regions and individually colored non-segmented points.
Figure 33. Quality control of an extracted planar surface with respect to its corresponding control planar surface.
Figure 34. Pie charts representing the distribution of the reconstructed surfaces with respect to their deviations from corresponding surfaces extracted from overlapping control data sources for the (a) first dataset; (b) second dataset; and (c) third dataset.
Figure 35. (a) Laser scanning point cloud colored according to height, and (b) a part of a captured image over the complex building of interest.
Figure 36. Realistic 3D scene reconstruction results for the provided dataset using (a) a traditional point-based and (b) the proposed surface-based scene reconstruction techniques.
Table 1. A summary of the laser scanning point cloud and imagery characteristics utilized in the first experimental dataset.

Dataset | Laser Scanning Data | Photogrammetric Data
System | Riegl LMS-Q1560 | Canon PowerShot S110
Date acquired | 2006 | 2015
Number of overlapping scans/images | 5 | 18
Average point density (laser point cloud)/ground sampling distance (image) | 2 pts/m² | 2.5 cm
Planimetric accuracy | 68 cm | 2.5 cm
Vertical accuracy | 8 cm | 15 cm
Table 2. A summary of the laser scanning point cloud and imagery characteristics utilized in the second experimental dataset.

Dataset | Laser Scanning Data | Photogrammetric Data
System | Leica ALS50 | Rollei ACI Pro (P65+)
Date acquired | 2008 | 2011
Number of overlapping scans/images | 6 | 36
Average point density (laser point cloud)/ground sampling distance (image) | 4 pts/m² | 5 cm
Planimetric accuracy | 41 cm | 5 cm
Vertical accuracy | 10 cm | 14 cm
Table 3. A summary of the laser scanning point cloud and imagery characteristics utilized in the third experimental dataset.

Dataset | Laser Scanning Data | Photogrammetric Data
System | Optech ALTM 3100 | GoPro HERO3+ Black Edition
Date acquired | 2013 | 2015
Number of overlapping scans/images | 28 | 28
Average point density (laser point cloud)/ground sampling distance (image) | 50 pts/m² | 1.7 cm
Planimetric accuracy | 14 cm | 1.7 cm
Vertical accuracy | 6 cm | 15 cm
Table 4. Derived quality control measures for the planar feature segmentation results from airborne laser scanning datasets.

QC Measures | First Dataset | Second Dataset | Third Dataset
Unincorporated points | 15% | 18% | 22%
Over-segmentation | 12% | 14% | 21%
Under-segmentation | 0% | 0.3% | 3%
Invading/invaded surface segments | 1% | 0% | 2%
Table 5. Required processing time for 3D scene reconstruction for the provided dataset using point-based, surface-based, and image-based techniques.

3D Scene Reconstruction Approach | Total Number of Points | Number of Projected Points | Processing Time
Point-based | 45,370 | 45,370 | 5 min
Surface-based | | 3449 | 69 s
Image-based (dense matching) | 584,320 | NA | 12 min
