Article

A Post-Scan Point Cloud Colorization Method for Cultural Heritage Documentation

1 School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
2 Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, Sun Yat-sen University, Guangzhou 510275, China
3 Department of Geography, College of Science, Swansea University, Swansea SA2 8PP, UK
4 China Regional Coordinated Development and Rural Construction Institute, Sun Yat-sen University, Guangzhou 510275, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2021, 10(11), 737; https://doi.org/10.3390/ijgi10110737
Submission received: 2 August 2021 / Revised: 16 October 2021 / Accepted: 23 October 2021 / Published: 29 October 2021
(This article belongs to the Special Issue Cultural Heritage Mapping and Observation)

Abstract

The 3D laser scanning technique is important for cultural heritage documentation. The laser itself normally does not carry any color information, so an embedded camera system is usually required to colorize the point cloud. However, when the embedded camera system fails to perform properly under external interferences, a post-scan colorization method is desired to improve the visual quality of the point cloud. This paper presents a simple but efficient point cloud colorization method based on a point-to-pixel orthogonal projection, under the assumption that the orthogonal and perspective projections produce similar effects for a planar feature as long as the target-to-camera distance is relatively short (within several meters). This assumption was verified by a simulation experiment, whose results show a colorization error of only approximately 5% at a target-to-camera distance of 3 m. The method was further verified with two real datasets collected for cultural heritage documentation. The results show that the visual quality of the point clouds of two large historical buildings was greatly improved after applying the proposed method.

1. Introduction

Three-dimensional (3D) laser scanning, also commonly known as terrestrial light detection and ranging (LiDAR), is nowadays an important technology for cultural heritage documentation [1]. A laser scanner can collect millions of points in a few minutes to digitize the outlines of most antique structures. The scanner comprises a laser component that emits and receives laser signals, which, combined with measurements from digital angle encoders, generates a cloud of 3D coordinates (a point cloud) of a target. The mode of laser emission and reception can be broadly classified as pulse-based or phase-based. For tasks that require higher accuracy in the 3D reconstruction of cultural heritage (usually to meet international standards), a phase-based scanner is preferred. Nevertheless, neither mode carries any color-related information or color model (e.g., the red, green and blue model), but only a returned-laser intensity. Therefore, many terrestrial scanners are embedded with a digital camera, which provides a color for every single point in the point cloud. This is one of the most common colorization methods adopted by current commercial laser scanners [2]. Accurate colorization of point clouds is important because color greatly increases their information content and therefore makes them more effective in visual communication [3]. As colors are important components of culture and architecture, point cloud colorization is significant for cultural heritage documentation when laser scanners are used for such purposes [4,5].
The applications of point clouds can often be extended when color and other radiometric information are available. For example, Saglam et al. utilized the color information of point clouds to enhance a point cloud segmentation algorithm [6]. Choi et al. incorporated the color of point clouds with their geometry to register point clouds collected from different stations [7]. Ling et al. considered distances, incidence angles and target color for accurate evaluation of a scanner [8]. On a larger scale, Chai et al. proposed a fusion-based building segmentation method that simultaneously considered the color of images and the geometry of point clouds to improve segmentation accuracy [9]. Besides the color information of a digital camera, the radiometric information provided by a multispectral camera can also be integrated with point clouds to enhance applications. For example, Keskin et al. used a depth enhancement method to improve the radiometric resolution of airborne scanning data by co-registering color images to the point clouds [10]. Awrangjeb et al. proposed a roof extraction method using point clouds and multispectral information [11].
The basic principle of the typical point cloud colorization method is that the transformation parameters (six degrees of freedom, DoF) between the coordinate systems of the scanner and the camera are estimated via a calibration process [12]. A common target field has to be set up to estimate the six transformation parameters with the least-squares method before the actual target is scanned. The point cloud is then transformed into the image plane to acquire the colors using these parameters, as illustrated by Figure 1. The transformation is governed by the collinearity equation (Equation (1)).
$$
\begin{bmatrix} x - x_p \\ y - y_p \\ -c \end{bmatrix} = \frac{1}{\lambda}\, M \begin{bmatrix} X - X_c \\ Y - Y_c \\ Z - Z_c \end{bmatrix} \tag{1}
$$
where (X,Y,Z) are the coordinates of a point from the point cloud, (x,y) are the coordinates in the image space of the camera, and (xp,yp) is the principal point. c is the principal distance, and λ is a scale factor depending on the object distance. For the transformation, M is the rotation matrix and (Xc,Yc,Zc) is the camera position defining the translation.
After the transformation parameters are estimated, Equation (1) can be reorganized as Equation (2) so that the scale factor is eliminated [13]. Equation (2) is used to project every point of the point cloud onto the two-dimensional (2D) image plane of the camera, so that each point falls onto a pixel with a color. In other words, every point acquires a color after the projection (a perspective projection). The camera captures images of the scene immediately before or after scanning. The image capturing and scanning are usually performed in sequence within a single scanning process so that the point-to-image projection can be performed correctly. This process can be defined as “in situ” point cloud colorization. The in situ colorization process can be interrupted by certain conditions, such as limited lighting at the scene. For example, limited lighting during scanning of the indoor structure of historical buildings (Figure 2) is not uncommon, as the lighting systems of such buildings are usually not well developed because people usually do not work or live inside them. Other color-impairing conditions include shadowing, occlusion, overexposure, etc.
$$
\begin{cases}
x = x_p - c\,\dfrac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} \\[2ex]
y = y_p - c\,\dfrac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)}
\end{cases} \tag{2}
$$
where mij is the element in row i and column j of M, which can be expressed in terms of the camera orientation angles (Ω, Φ and Κ) as follows.
$$
M = R_3(\mathrm{K})\, R_2(\Phi)\, R_1(\Omega) =
\begin{pmatrix}
m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33}
\end{pmatrix} \tag{3}
$$
where R1, R2, and R3 are the 3D rotation matrices about the X, Y and Z axes, respectively.
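To make the in situ projection concrete, the following Python sketch implements Equations (1)–(3): it builds the rotation matrix from the camera orientation angles and maps an object point to image coordinates with the collinearity equations. The function and variable names are illustrative and not part of the original workflow.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """M = R3(K) R2(Phi) R1(Omega), angles in radians (Equation (3))."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R1 = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])   # rotation about X
    R2 = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # rotation about Y
    R3 = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])   # rotation about Z
    return R3 @ R2 @ R1

def project_to_image(point, cam_pos, M, xp, yp, c):
    """Collinearity equations (Equation (2)): object point -> image coordinates (x, y)."""
    d = M @ (np.asarray(point, float) - np.asarray(cam_pos, float))
    x = xp - c * d[0] / d[2]
    y = yp - c * d[1] / d[2]
    return x, y
```

In the in situ colorization, every scanned point is pushed through such a projection and assigned the color of the pixel it falls on.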
As the in situ colorization process can often be hindered by certain conditions in much the same way as photography, an efficient post-scan colorization method should be developed and applied to the point clouds. Nevertheless, there are still few post-scan colorization works published in the literature. Crombez et al. used a visual servoing approach to register images with point clouds; this method is accurate but requires the interior and exterior orientation parameters, which might not always be available to users [14]. Pepe et al. installed target points on the object for registration between a scanner and an external camera based on the bundle adjustment principle [15]. Even though the method produced highly accurate results, it required the placement of target points, which would reduce the exposure of the original target to be surveyed.
Cao and Nagao developed a point cloud colorization method based on generative adversarial networks (GANs) to colorize a set of 10,000 point clouds within 16 categories [16]. This method has a very high accuracy but relies on intensive training of the network with many sample point clouds. It is not site-dependent and therefore may not be suitable for cultural heritage documentation unless extensive modifications are made. Arshad and Beksi modified the traditional GAN with graph convolutions constructed from leaf output layers and branches [17]. This method also requires many training samples and therefore may not be adaptable to the point cloud of a unique historical site. Gaiani et al. made use of photogrammetric principles to correct the color of point clouds obtained from laser scanning, including automatic color balance, exposure equalization, etc. [18]. Julin et al. proposed a new colorization evaluation method to quantify the color sensing capability of four different brands of laser scanners [2]. Their work focused on performance evaluation rather than the development of a usable colorization algorithm.
The colorization of point clouds greatly supports the development of virtual reality (VR) for historical sites. For example, Edler et al. reviewed the potential of immersive VR environments for a multifaceted, redeveloped post-industrial area in Germany, with an emphasis on the use of game engine-based VR technologies [19]. Buyuksalih et al. used a Riegl VZ-400 scanner to capture point clouds that were input into the Unity game engine to create an immersive and interactive visualization of the Istanbul Catalca Incegiz caves in Turkey for a VR system [20]. For the potential development of a VR system, Lezzerini et al. mapped the facades of the medieval church of St. Nicholas in Pisa, Italy, with laser scanning and a geographical information system (GIS) [21].
In this paper, we propose a simple but efficient method to perform post-scan colorization of planar features using images acquired independently from the colorless scans obtained in dark rooms or rooms with vast occlusions inside historical buildings. For the proposed method, a single image is enough to colorize a single planar feature, as it is assumed that the perspective projection produces a similar effect to the orthogonal projection for a planar feature when the image capturing distance is relatively short (within several meters). The proposed method aims at providing a relatively coarse colorization, as no calibration process is involved. Nevertheless, as demonstrated by the results, our method delivers good visualization results and is easy to implement, as the user does not require any information about the camera (e.g., the interior orientation parameters) but only a single image to colorize a single planar feature.

2. Method

The proposed method is a semi-automatic approach for post-scan point cloud colorization of large planar features such as internal walls in a historical site. The workflow of the proposed method is shown in Figure 3. The method first invokes the random sample consensus (RANSAC) algorithm to separate individual planar features [22]. This process involves some visual inspection and manual editing to ensure that the planar features are segmented correctly. Then, the normal vectors and areas of each planar feature are estimated to identify their attributes (walls, floors, ceilings, doors, windows or others). Once the planar features are identified, images collected at the same place under improved photographic conditions, or at different places with similar planar features (e.g., another similar wall with a better lighting condition), are used to colorize the point cloud by an orthogonal point-to-pixel projection, under the assumption that the effects of orthogonal and perspective projections are similar for planar features when the target-to-camera distance is relatively short (within only several meters). The distributions of normal vectors of selected planar features (or parts of them, such as a windowsill) are considered to enhance the intensity variation of the assigned colors via a color intensity adjustment process.

2.1. Planar Feature Extraction

The planar feature extraction consists of three main point cloud processing steps: (1) ground filtering; (2) noise removal; (3) RANSAC plane fitting. Firstly, the ground, which usually has the highest (or nearly the highest) point density, is separated from the entire point cloud via the cloth simulation filtering (CSF) method [23]. The CSF is almost parameter-free, so it can be readily applied. Secondly, the adopted noise removal method is a simple radius-based method based on the k-nearest neighbors [24]. A point is considered “noisy” and removed if it does not possess a certain number of neighboring points within a spherical space defined by a radius and a center point (the point's own position). Finally, the non-ground point cloud is input to a RANSAC algorithm to separate individual planar features (Figure 4), and the normal vector of each feature is estimated using Principal Component Analysis (PCA) to define its orientation and position [25]. Visual inspection and manual editing are required to guarantee the overall quality of the separated planes.
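A minimal sketch of the noise removal, RANSAC plane segmentation and PCA normal estimation steps is given below, assuming the Open3D library and that the CSF ground filtering has already been applied; the file name and thresholds are illustrative values, not those used in the paper.

```python
import numpy as np
import open3d as o3d

# Hypothetical non-ground point cloud exported after CSF ground filtering.
pcd = o3d.io.read_point_cloud("non_ground_points.ply")

# Radius-based noise removal: drop points with too few neighbours within a sphere.
pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.05)

# Iteratively extract planar features with RANSAC plane fitting.
planes, remaining = [], pcd
for _ in range(10):                                   # at most 10 planes (illustrative)
    _, inliers = remaining.segment_plane(distance_threshold=0.02,
                                         ransac_n=3, num_iterations=1000)
    if len(inliers) < 500:                            # stop when the planes become too small
        break
    plane = remaining.select_by_index(inliers)
    remaining = remaining.select_by_index(inliers, invert=True)

    # PCA: the normal is the eigenvector of the smallest eigenvalue of the covariance.
    pts = np.asarray(plane.points)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    eigval, eigvec = np.linalg.eigh(cov)
    normal = eigvec[:, 0]
    planes.append((plane, normal))
```

The extracted planes would still be inspected and edited manually, as described above, before colorization.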

2.2. Point-to-Pixel Projection

Each extracted planar feature is used to generate a minimum bounding rectangle so that its size can be estimated to compute a scaling factor [26], s, which is used to scale up the image coordinates to object coordinates associated with a pixel (Equation (4)). The normal vector of the planar feature is computed so that the central axis can be aligned orthogonally to the image plane. Then, each point of the point cloud is projected orthogonally onto the pixel whose object coordinates are closest to the point (searched by the k-nearest neighbor algorithm) to acquire a color (RGB value), as illustrated in Figure 5.
$$
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = s \begin{bmatrix} i \\ j \\ 0 \end{bmatrix} \tag{4}
$$
where (i,j) are the image coordinates, and (X′,Y′,Z′) are the object coordinates associated with a pixel.
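The following Python sketch illustrates the point-to-pixel orthogonal projection of this section and Equation (4) for a single planar feature. It assumes the photo has already been cropped and oriented to cover the feature; the in-plane frame, the axis-aligned bounding rectangle and the variable names are simplifying assumptions rather than the paper's exact implementation.

```python
import numpy as np

def colorize_plane(points, image):
    """Assign each 3D point of a planar feature the color of its orthogonally nearest pixel."""
    h, w, _ = image.shape

    # Local 2D frame on the plane: the two largest principal directions from an SVD.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T                  # in-plane coordinates (normal component dropped)

    # A bounding rectangle (axis-aligned here for brevity) gives the scale factor s
    # that maps pixel indices to object units, as in Equation (4).
    umin, vmin = uv.min(axis=0)
    umax, vmax = uv.max(axis=0)
    s = max((umax - umin) / w, (vmax - vmin) / h)

    # Orthogonal projection: each point takes the color of the pixel nearest to its
    # scaled in-plane position.
    cols = np.clip(((uv[:, 0] - umin) / s).astype(int), 0, w - 1)
    rows = np.clip(((uv[:, 1] - vmin) / s).astype(int), 0, h - 1)
    return image[rows, cols]                  # one RGB triple per point
```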

2.3. Color Intensity Adjustment (CIA)

Some walls inevitably contain windowsills or other similar structures. During image capturing, the color intensity is usually weaker at such sill-like structures because less light can reach them. As a result, the color intensity of the points lying on a sill-like structure needs to be adjusted during the point cloud colorization. As illustrated in Figure 6, points on the sills possess normal vectors (blue and orange) that are orthogonal (or approximately orthogonal) to the normal vectors of the main surface of the walls (red). The reduction of color intensity is governed by the intensity adjustment factor (c), which is a function of the angle between the two sets of abovementioned normal vectors:
$$
c = 1 - \frac{\langle \mathbf{a}, \mathbf{b} \rangle}{\alpha} \tag{5}
$$
where a is the normal vector of the main surface of the wall; b is the normal vector of the sill-like structure; ⟨a, b⟩ is the angle between a and b (in radians); and α is the angular factor (in radians) that determines the extent of the RGB intensity reduction. In practice, α can be set to 3.49 rad (~200°), based on empirical experience.
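A brief Python sketch of the color intensity adjustment (Equation (5)) follows; the value α = 3.49 rad follows the paper, while the example normal vectors and RGB values are illustrative only.

```python
import numpy as np

def intensity_factor(a, b, alpha=3.49):
    """c = 1 - angle(a, b) / alpha, where a and b are the two normal vectors (Equation (5))."""
    cos_angle = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    angle = np.arccos(cos_angle)              # angle between the normals, in radians
    return 1.0 - angle / alpha

# Example: a sill normal roughly orthogonal to the wall normal is darkened to ~55%.
wall_normal = np.array([0.0, 1.0, 0.0])
sill_normal = np.array([1.0, 0.0, 0.0])
c = intensity_factor(wall_normal, sill_normal)        # approximately 0.55
adjusted_rgb = np.clip(c * np.array([200, 180, 160]), 0, 255).astype(np.uint8)
```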

3. Experiment

3.1. Simulated Dataset

A point cloud of a planar feature (a facia board of Sun Yat-sen University) captured by a Trimble SX 10 scanner, with colors observed by its embedded camera (Figure 7a), was used for a simulation experiment. The point cloud was first translated by its centroid and rotated so that its principal direction is parallel to the X-axis. The point cloud was assumed to be a real object, and an image of the point cloud was simulated using the collinearity equation with the realistic parameters listed in Table 1. The IOP values were obtained from the default settings of a photogrammetric software package (e.g., ContextCapture). Then, the image was used to colorize the point cloud (with the new colors overlaying the old colors) using the proposed method to verify its practicality. The root-mean-square error (RMSEcolor) of the colorization can be computed as:
$$
RMSE_{color} = \sqrt{\frac{\sum_{i=1}^{n} \left(\delta RGB_i\right)^2}{n}} \tag{6}
$$
where, for each point i, the combined error of the red, green and blue channels is
$$
\delta RGB_i = \sqrt{\frac{(\delta R)^2 + (\delta G)^2 + (\delta B)^2}{3}} \tag{7}
$$
δR, δG and δB are the differences in the red, green and blue intensities (0–255) between the color estimated by the proposed method and the original color of the point cloud, respectively. The RMSE and the standard deviation are used because they quantify the average colorization errors. The Y position of the camera was changed from 1 to 15 m with an interval of 2 m to simulate the shift of the camera position along the Y-axis and to examine the impact of increasing image capturing distances on the results.
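As a reference, the colorization error metric of Equations (6) and (7) can be computed with a few lines of Python; the array names are illustrative.

```python
import numpy as np

def rmse_color(original_rgb, colorized_rgb):
    """original_rgb, colorized_rgb: (n, 3) arrays of 0-255 intensities, one row per point."""
    diff = original_rgb.astype(float) - colorized_rgb.astype(float)
    delta_rgb = np.sqrt((diff ** 2).sum(axis=1) / 3.0)   # Equation (7), one value per point
    return np.sqrt((delta_rgb ** 2).mean())              # Equation (6)
```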

3.2. Real Dataset

The proposed method was applied to two real datasets collected using a Trimble SX 10 scanner and a Nikon D5600 camera (Figure 7b). The first dataset was captured on 7 November 2020 at the Zou Lu House (Figure 8a), the home of the late Chinese educator Prof. Zou Lu (1885–1954), the first president of Sun Yat-sen University. The Zou Lu House is located in Chayang, Meizhou, Guangdong Province, China. The house was built in the early Qing dynasty (1644–1911) in the Hakka style. It was listed as a conserved cultural heritage site by Guangdong Province in 2005. The second dataset was captured on 10 December 2020, using the same set of equipment, at the Wesley House (Figure 8b), located on Sun Yat-sen University's south campus in the Haizhu district of Guangzhou, China. The Wesley House was built in 1924 in an eclectic style, supported by the Wesleyan Methodist Missionary Society of England. Similar to the Zou Lu House, the Wesley House is officially listed as a conserved cultural heritage site by Guangdong Province. The two sites were selected as study cases because the colors of parts of their point clouds are deteriorated (appearing almost completely black) due to missing light sources in rooms without windows or with serious occlusions. The details of the datasets are tabulated in Table 2.
The variation of color across a point cloud can be quantified by computing the standard deviation, σ, of the color of the point cloud for the planar feature, j:
$$
\sigma_j = \sqrt{\frac{\sum_{i=1}^{n} (f_i - u)^2}{n}} \tag{8}
$$
where f_i is the color of point i, u is the mean color, and n is the number of points in the point cloud. The higher the contrast of the color, the higher σ is.
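A short Python sketch of Equation (8) is given below. It assumes the per-point color f is taken as a gray value (the mean of the RGB channels); the exact definition of f behind the unitless values in Table 3 is not stated in the paper, so this is only one plausible reading.

```python
import numpy as np

def color_std(colors):
    """colors: (n, 3) RGB array of one planar feature; returns the population std deviation."""
    gray = colors.mean(axis=1)                 # one gray value per point (assumed definition of f)
    return np.sqrt(((gray - gray.mean()) ** 2).mean())
```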

4. Results

4.1. Results for the Simulated Data

The RMSE of the colorization (RMSEcolor) is plotted against the image capturing (T-C, target-to-camera) distance in Figure 9. It can be seen that the RMSEcolor increases as the T-C distance increases. For a T-C distance of less than 3 m, the RMSEcolor is only 15, i.e., approximately 5.86% of the full RGB intensity range. This is acceptable for many applications, as a change of 15 out of 256 intensity levels does not cause a significant difference in visual perception.
On the other hand, Figure 10 compares the colors of the original point cloud and the colorized point cloud of the facia board at different T-C distances. It can be seen that the colorized point cloud appears almost the same as the original point cloud (object) at a T-C distance of 3 m (comparing Figure 10a,b). This is consistent with the low RMSEcolor of ~15 shown in Figure 9, suggesting that the proposed method is practically correct under the assumption that the effects of orthogonal and perspective projections are similar for a planar feature as long as the T-C distance is within several meters. It can also be seen in Figure 10 that the colorized point clouds are erroneous at T-C distances of 9 and 15 m, where the RMSEcolor of both simulations is higher than 45, i.e., approximately 18% of the full intensity range.

4.2. Results for the Real Data

Figure 11 shows the original registered point cloud of the Zou Lu House and the same point cloud after the proposed colorization method was applied. In the original point cloud, approximately 30% of the planar features were scanned in darkness owing to the underdeveloped lighting system, so the point clouds of those planar features are colorless (or black). Images acquired during daytime were used to colorize those planar features using the proposed method, and it can be seen that the visual quality of the point cloud has been greatly improved. Two walls are shown in Figure 12 and Figure 13. It can be seen that the wall, windows and door can be colorized properly using the proposed method.
The Wesley House is currently used as an office for a research center (the China Regional Coordinated Development and Rural Construction Institute of Sun Yat-sen University) and is occupied by more than 3 faculty members and 20 graduate students. Therefore, the Wesley House was busy while the scans were collected, and many obstacles, shadows and noisy points were found, as seen in Figure 14a. After applying the proposed colorization method, these issues were mostly remedied, as can be seen in Figure 14b. Figure 15 and Figure 16 show two zoomed-in details of the Wesley House; the colorized point clouds produce apparently better visual effects. In Figure 15, there are some artifacts caused by the open windows (at the right window, the glass deflected the laser) and by some mossy portions. After applying the proposed method, the artifacts were removed, as images of brick walls were used to colorize the point clouds; the images were taken of a nearby brick wall. In Figure 16, it can be seen that the contrast of the floor color was significantly improved after applying the proposed method.
To quantify the change in the color of the point cloud after applying the proposed method, a histogram analysis was performed and the standard deviation of the color of the point cloud was computed using Equation (8). The standard deviations of the color of four selected planar features from the two datasets are tabulated in Table 3. A larger standard deviation implies a higher variety of the color of the point cloud, and the point clouds are usually more visually realistic, in normal circumstances, when the color contrast is higher. As shown in the table, the proposed CIA algorithm further increases the standard deviations of the color, so that the variation of color across the features is increased.
It can be seen from Figure 17 and Figure 18 that the gray values of the point clouds shift toward the white region and the contrast increases after the proposed colorization method is applied. As a result, the CIA is shown to be an essential process for colorizing the sill-like components of walls to gain a higher color contrast. On the one hand, it can be seen from Figure 17 that the distribution of the gray values becomes closer to a normal distribution, and the visual quality of the outcome becomes more realistic; this is much better than assigning a homogeneous color (e.g., white) to the wall, as illustrated in Figure 17. On the other hand, it can be seen from Figure 18 that the distribution of the gray values also becomes closer to a normal distribution, the fluctuation of the gray values is significantly reduced after applying the proposed method, and the gray values span a wider range in the histogram. These changes significantly enhance the visual quality of the wall.
The point clouds of the sill-like components often suffer from the edge effect (whose severity depends on the beamwidth of the laser) [27] and from field-of-view issues, so the quality of the color of the point cloud also deteriorates there. This is why the intensity of the color should be adjusted to deliver more realistic results. It is worth noting that the proposed CIA algorithm works by transferring the color of the image to the point clouds with fairly high accuracy to enhance the visual quality and comprehensiveness of the point clouds. Therefore, the final outcomes depend on the quality of the input images. The input images can be captured at different times, so the method offers high flexibility of use: as long as the images have high contrast and visual quality, they can be used to colorize the point cloud.

5. Discussion

The proposed method consists of a straightforward approach to project the color of independently captured images onto the point clouds. As shown in Section 4, the proposed method could complete the colorization, turning colorless or color-faded point clouds into colorful and realistic ones. Compared to other colorization methods based on machine learning [16,17], the proposed method does not require the collection of training datasets; it only requires a single image captured under a better condition (e.g., with extra lighting) or of a similar object/scene. Based on our results obtained from a simulation in which an actual image was projected onto the point cloud of an object, it is confirmed that the perspective nature of the imaging causes only a slight displacement of color (with only about 5% colorization error at a camera-to-target distance of 3 m). Normally, the image is captured within 2 m of the target, so the colorization error can be further reduced. In addition, the proposed method was applied to two point clouds of historical buildings, and promising visual effects were achieved.

6. Conclusions

In this paper, we proposed a simple but efficient point cloud colorization method to remedy the destroyed or lost colors of the point clouds of planar features after the scans are collected and registered, as long as some images of the features can be collected independently. The method does not require any calibration (between the scanner and the camera) or any information about the camera (e.g., the interior and exterior orientation parameters) to colorize large planar features such as the walls of buildings. The method is based on a point-to-pixel orthogonal projection, which assumes that the effects of orthogonal and perspective projections are close to each other for planar features when the target-to-camera distance is relatively short (within several meters). This assumption was verified by a simulation experiment, and the results suggest that the proposed method normally causes approximately 5% colorization error at a target-to-camera distance of 3 m, so the point clouds colorized by the proposed method appear almost the same as point clouds with the actual colors. The method was also verified with two real datasets collected in Guangdong, China for cultural heritage purposes. Our results indicate that the visual quality of the point clouds of two large historical buildings was greatly improved after the proposed method was applied.

Author Contributions

Conceptualization, Ting On Chan, Hang Xiao, Lixin Liu, Wei Lang and Tingting Chen; methodology, Ting On Chan and Hang Xiao; software, Hang Xiao, Ting On Chan and Ming Ho Li; validation, Hang Xiao, Lixin Liu, Wei Lang and Tingting Chen; formal analysis, Ting On Chan, Hang Xiao, Ming Ho Li and Lixin Liu; investigation, Yeran Sun and Wei Lang; resources, Ting On Chan and Yeran Sun; data curation, Hang Xiao and Ming Ho Li; writing—original draft preparation, Ting On Chan and Lixin Liu; writing—review and editing, Ting On Chan, Lixin Liu, Yeran Sun, Wei Lang and Tingting Chen; visualization, Hang Xiao and Ming Ho Li; supervision, Ting On Chan, Lixin Liu, Wei Lang and Tingting Chen; project administration, Lixin Liu, Yeran Sun, Wei Lang and Tingting Chen; funding acquisition, Lixin Liu, Wei Lang and Tingting Chen. All authors have read and agreed to the published version of the manuscript.

Funding

This work was primarily supported by the Natural Science Foundation of Guangdong Province, China (grant number 2018A0303130087), and the Science and Technology Program of Guangzhou, China (202102080287).

Acknowledgments

Special thanks to Xun Li, the dean of the China Regional Coordinated Development and Rural Construction Institute of the Sun Yat-sen University, for providing guidance and venues.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. English Heritage. 3D Laser Scanning for Heritage (Second Edition): Advice and Guidance to Users on Laser Scanning in Archaeology and Architecture. 2011. Available online: https://docplayer.net/1211136-3d-laser-scanning-for-heritage-second-edition-advice-and-guidance-to-users-on-laser-scanning-in-archaeology-and-architecture.html (accessed on 28 July 2021).
  2. Julin, A.; Kurkela, M.; Rantanen, T.; Virtanen, J.P.; Maksimainen, M.; Kukko, A.; Kaartinen, H.; Vaaja, M.T.; Hyyppä, J.; Hyyppä, H. Evaluating the quality of TLS point cloud colorization. Remote Sens. 2020, 12, 2748. [Google Scholar] [CrossRef]
  3. MacDonald, L.W. Using color effectively in computer graphics. IEEE Comput. Graph. Appl. 1999, 19, 20–35. [Google Scholar]
  4. Franceschi, E.; Letardi, P.; Luciano, G. Colour measurements on patinas and coating system for outdoor bronze monuments. J. Cult. Herit. 2006, 7, 166–170. [Google Scholar] [CrossRef]
  5. Lorenza, A.; Vittoria, B.; Rossella, C.; Domenico, V. Computer-aided monitoring of buildings of historical importance based on color. J. Cult. Herit. 2006, 7, 85–91. [Google Scholar]
  6. Saglam, A.; Baykan, N.A. A new color distance measure formulated from the cooperation of the Euclidean and the vector angular differences for lidar point cloud segmentation. Int. J. Eng. Sci. 2021, 6, 117–124. [Google Scholar]
  7. Choi, O.; Park, M.G.; Hwang, Y. Iterative k-closest point algorithms for colored point cloud registration. Sensors 2020, 20, 5331. [Google Scholar] [CrossRef]
  8. Ling, X. Research on building measurement accuracy verification based on terrestrial 3D laser Scanner. In Proceedings of the 2020 Asia Conference on Geological Research and Environmental Technology, Kamakura, Japan, 10–11 October 2021; Volume 632, p. 052086. [Google Scholar]
  9. Chai, D. A probabilistic framework for building extraction from airborne color image and DSM. IEEE J.-Stars 2016, 10, 948–959. [Google Scholar] [CrossRef]
  10. Keskin, G.; Gross, W.; Middelmann, W. Color-guided enhancement of airborne laser scanning data. IGARSS 2017, 4, 2617–2620. [Google Scholar]
  11. Awrangjeb, M.; Zhang, C.; Fraser, C.S. Automatic extraction of building roofs using LIDAR data and multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2013, 83, 1–18. [Google Scholar] [CrossRef] [Green Version]
  12. Habib, A.F.; Kersting, J.; McCaffrey, T.M.; Jarvis, A.M. Integration of LIDAR and airborne imagery for realistic visualization of 3D urban environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 617–623. [Google Scholar]
  13. Förstner, W.; Wrobel, B. Mathematical concepts. In Manual of Photogrammetry, 5th ed.; McGlone, J.C., Mikhail, E.M., Bethel, J., Mullen, R., Eds.; American Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2004; pp. 15–180. [Google Scholar]
  14. Crombez, N.; Caron, G.; Mouaddib, E. 3D point cloud model colorization by dense registration of dense images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 123–130. [Google Scholar] [CrossRef] [Green Version]
  15. Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C. 3D Point cloud model color adjustment by combining terrestrial laser scanner and close range photogrammetry datasets. Int. J. Comput. Inf. Eng. 2016, 10, 1942–1948. [Google Scholar]
  16. Cao, X.; Nagao, K. Point Cloud Colorization Based on Densely Annotated 3D Shape Dataset; Springer: Cham, Switzerland, 2019; pp. 436–446. [Google Scholar]
  17. Arshad, M.S.; Beksi, W. A Progressive conditional generative adversarial network for generating dense and colored 3D point clouds. In Proceedings of the 2020 International Conference on 3D Vision, Fukuoka, Japan, 25–28 November 2020; Volume 5, pp. 712–722. [Google Scholar]
  18. Gaiani, M.; Apollonio, F.I.; Ballabeni, A.; Remondino, F. Securing color fidelity in 3D architectural heritage scenarios. Sensors 2017, 17, 2437. [Google Scholar] [CrossRef] [Green Version]
  19. Edler, D.; Keil, J.; Wiedenlübbert, T.; Sossna, M.; Kuhne, O.; Dickmann, F. Immersive VR experience of redeveloped post-industrial sites: The example of “Zeche Holland” in Bochum-Wattenscheid. KN J. Cartogr. Geogr. Inf. 2019, 69, 267–284. [Google Scholar] [CrossRef] [Green Version]
  20. Büyüksalih, G.; Kan, T.; Özkan, G.E.; Meric, M.; Isin, L.; Kersten, T.P. Preserving the knowledge of the past through virtual visits: From 3D laser scanning to virtual reality visualization at the Istanbul Çatalca İnceğiz Caves. PFG 2020, 88, 133–146. [Google Scholar] [CrossRef]
  21. Lezzerini, M.; Antonelli, F.; Columbu, S.; Gadducci, R.; Marradi, A.; Miriello, D.; Parodi, L.; Secchiari, L.; Lazzeri, A. Cultural heritage documentation and conservation: Three-dimensional (3D) laser scanning and geographical information system (GIS) techniques for thematic mapping of facade stonework of St. Nicholas Church (Pisa, Italy). Int. J. Archit. Herit. 2016, 10, 9–19. [Google Scholar] [CrossRef]
  22. Qian, X.; Ye, C. NCC-RANSAC: A fast plane extraction method for 3D range data segmentation. IEEE Trans. Cybern. 2014, 44, 2771–2783. [Google Scholar] [CrossRef]
  23. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  24. Friedman, J.H.; Bentley, J.L.; Finkel, R.A. An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Softw. 1977, 3, 209–226. [Google Scholar] [CrossRef]
  25. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3D LiDAR point cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 37, 97–102. [Google Scholar] [CrossRef] [Green Version]
  26. Chelishchev, P.; Sørby, K. Estimation of minimum volume of bounding box for geometrical metrology. Int. J. Metrol. Qual. Eng. 2020, 11, 9. [Google Scholar] [CrossRef]
  27. Klapa, P.; Mitka, B. Edge effect and its impact upon the accuracy of 2D and 3D modelling using laser scanning. Geomat. Landmanag. Landsc. 2017, 1, 25–33. [Google Scholar] [CrossRef]
Figure 1. Spatial relationship between the scanner space and the image space.
Figure 2. Scanning being performed inside a dark room of a historical building.
Figure 3. Workflow of the proposed method.
Figure 4. Planar features segmented from the original point cloud using the RANSAC plane fitting algorithm: (a) original point cloud of a historical building; (b) the segmented planar features.
Figure 5. The proposed point-to-pixel projection.
Figure 6. Normal vectors of the wall surfaces and sill-like structure.
Figure 7. Survey equipment for experiment: (a) Trimble SX 10 scanner; (b) Nikon D5600 camera.
Figure 8. Historical sites for digital documentation in Guangdong, China: (a) Zou Lu House; (b) Wesley House.
Figure 9. RMSEcolor (calculated by using Equation (6)) versus the T-C distances.
Figure 10. Original and the colorized point cloud (captured at different T-C distances under the simulation) of the facia board with some Chinese characters (meaning: Sun Yat-sen University): (a) the original point cloud (object); (b) colorized point cloud with the T-C distance of 3 m under the simulation; (c) colorized point cloud with the T-C distance of 9 m under the simulation; (d) colorized point cloud with the T-C distance of 15 m under the simulation; (e) the captured gateway; (f) the zoom-in image of the target.
Figure 11. Entire point clouds of the Zou Lu House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 12. Point cloud of a wall with a window and door of the Zou Lu House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 13. Point cloud of a wall with a window of the Zou Lu House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 14. Entire point clouds of the Wesley House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 15. Point cloud of a brick wall with two windows of the Wesley House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 16. Point cloud of a corner of the Wesley House: (a) the original registered point cloud; (b) registered point cloud after the proposed colorization method is applied.
Figure 17. Color histogram of the wall extracted from the Zou Lu House (corresponding to Figure 13).
Figure 18. Color histogram of the wall extracted from the Wesley House (corresponding to Figure 15).
Table 1. Parameters and their values used for the simulation of images.
| Param. | μ (m) | xp (m) | yp (m) | c (m) | Xc (m) | Yc (m) | Zc (m) | Ω (°) | Φ (°) | Κ (°) |
| Value | 1.37 × 10^−5 | 10^−5 | 10^−5 | 4 × 10^−3 | 0 | 1, 3, 5, 7, 9, 11, 13, 15 | 0 | 90 | 0 | 0 |
Table 2. Details of the two real datasets captured for the experiment.
| Site | Area (m²) | Number of Scans | Number of Images |
| Zou Lu House | 1352 | 21 | 389 |
| Wesley House | 406 | 32 | 582 |
Table 3. Standard deviation of the color of the point cloud.
| Datasets | Planar Feature | Original Point Cloud (Unitless) | Colorized Point Cloud without CIA (Unitless) | Colorized Point Cloud with CIA (Unitless) |
| Zou Lu House | 1 | 235 | 371 | 626 |
| Zou Lu House | 2 | 293 | 407 | 583 |
| Wesley House | 3 | 1034 | 1201 | 1395 |
| Wesley House | 4 | 311 | 427 | 548 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
