Article

RGB-D Camera for 3D Laser Point Cloud Hole Repair in Mine Access Shaft Roadway

1 Faculty of Land and Resources Engineering, Kunming University of Science and Technology, Kunming 650093, China
2 Surveying & Mapping Technology and Application Research Center on Plateau Mountains of Yunnan Higher Education, Kunming University of Science and Technology, Kunming 650093, China
3 Kunming Prospecting Design Institute of China Nonferrous Metals Industry Co., Ltd., Kunming 650051, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8910; https://doi.org/10.3390/app12178910
Submission received: 29 July 2022 / Revised: 2 September 2022 / Accepted: 3 September 2022 / Published: 5 September 2022

Abstract

With the rapid development of the geographic information service industry, point cloud data are widely used in fields such as architecture, planning, cultural relics protection, and mining engineering. Although there are many approaches to collecting point clouds, holes appear in the data when a 3D laser scanner cannot scan completely within the narrow space of a mine access shaft. This paper therefore uses an RGB-D camera to collect supplementary data and reconstruct the holes in the point cloud. We used a 3D laser scanner and an RGB-D depth camera to collect 3D point cloud data of the access shaft roadway. Measuring the distances between feature points gave a maximum error of 2.617 cm and a minimum error of 0.031 cm, which satisfies the visualization repair of the parts missing from the 3D laser scanner data. We then used the FPFH + ICP, ISS + ICP, and SVD + ICP algorithms and the 3D-NDT algorithm to register and fuse the processed point cloud with the original point cloud and finally repaired the holes. The results show that the ISS + ICP registration algorithm produced the most matching points and the lowest RMSE value, 13.8524 mm. In addition, in the closed and narrow roadway, the RGB-D camera was light and easy to operate, and the point cloud data it acquired had relatively high precision. The repaired three-dimensional point cloud of the access shaft roadway fits well and meets the repair requirements.

1. Introduction

With the continuous development of computer graphics and vision, advanced sensor technology enables us to use low-cost RGB-D (RGB depth) cameras to obtain 3D scene information [1]. The RGB-D camera also offers advantages in obtaining geospatial information: it can capture the color and depth image of the same scene in real time. The depth image is also called a range image [2]; it stores, as each pixel value, the distance (depth) from the image collector to the corresponding point in three-dimensional space, and it can therefore directly reflect the geometric elements of the scene. Laser technology is likewise one of the essential tools for the development of an intelligent society. Owing to its three advantages of monochromaticity, homogeneity, and beam parallelism, laser technology is widely used in various industries, such as additive manufacturing (AM). Khorasani et al. [3] developed a hybrid experimental and computational model to estimate laser absorption in Laser-Based Powder Bed Fusion (LB-PBF) of IN718. Wang et al. [4] explored a structural optimization method to achieve the lightweight design of an aviation control stick part manufactured by laser powder bed fusion (LPBF) additive manufacturing. Linares et al. [5] proposed a methodology to control the fatigue life of 17-4PH stainless steel by selecting the most relevant manufacturing parameters, i.e., laser power, laser travel speed, hatch spacing, and laser defocusing. Ettaieb et al. [6] used an analytical thermal model to modulate the laser power upstream of manufacturing.
The rapid development of society is inseparable from the large-scale development of mineral resources. China’s non-metallic and metal underground mines are widely distributed and large in volume. It is necessary to obtain detailed three-dimensional spatial information of a mining area quickly, especially of the underground space of the mine, to facilitate risk assessment [7]. Traditional surveying and mapping methods struggle to collect adequate data for underground mine engineering, especially in the access shaft roadway, where the integrity and visualization of the data are poor. As a non-destructive observation technology, three-dimensional laser scanning can quickly and accurately obtain the three-dimensional position of the surface topography. The advantages of the 3D laser scanner in obtaining the 3D shape and deformation of underground mines have attracted attention to its application in underground engineering [8,9]. For example, handheld 3D laser scanners based on SLAM technology have been used for 3D data acquisition in underground mines [10]. In addition, backpack and handheld 3D laser scanners have been used to model indoor spaces [11]. At present, RGB-D cameras can accurately obtain three-dimensional spatial information of indoor scenes. Zhou et al. [12,13] used depth cameras to reconstruct dense three-dimensional indoor scenes and optimize them globally. Due to the influence of the scanning environment and the scanner’s own structure, the point cloud dataset collected by a 3D laser scanner can be incomplete. Many researchers have conducted extensive research on point cloud data repair and achieved notable results [14,15,16]. However, there is currently little research on repairing 3D laser scanning point cloud data with RGB-D cameras, even though the indoor 3D reconstruction setting of the RGB-D camera is similar to the underground mine roadway environment.
Therefore, we used an RGB-D camera to obtain point cloud data to repair the missing part of the 3D laser scanner point cloud of the underground mine access shaft. We adapted the RGB-D camera for 3D data acquisition in the mine access shaft roadway, which allowed the RGB-D camera to move from traditional indoor 3D point cloud model construction to underground engineering 3D spatial point cloud model construction, making the use of the RGB-D camera more extensive. For the construction of the 3D model of the underground mine, this paper provides a method of data acquisition modeling and laser point cloud repair using the RGB-D camera, which has certain significance to the development of mine digitization.

2. Principle of 3D Reconstruction of RGB-D Camera

The three-dimensional reconstruction technology of the RGB-D camera mainly includes the acquisition of RGB and depth images, image data preprocessing, and point cloud registration and fusion; it can transform a real scene into a mathematical model that conforms to the logical expression of a computer. The commonly used RGB-D cameras mainly include the structured-light [17] depth camera (Figure 1a), which projects light with feature points onto objects with smooth, featureless surfaces and extracts the depth information of objects from the three-dimensional information encoded in the light source; the time-of-flight (TOF) [18,19] depth camera (Figure 1b), which obtains the distance by measuring the interval between the transmitted signal and the received signal, given the known propagation speed of the signal; and the binocular stereo [20] depth camera (Figure 1c), a passive depth perception method that imitates the principles of human vision.
The RGB-D camera can capture color images and depth images. As shown in Figure 2, RGB images are obtained by varying and superimposing the three color channels of red (R), green (G), and blue (B) to produce a wide variety of colors. RGB is the color system representing these three channels; it includes almost all the colors perceived by human vision and is one of the most widely used color systems at present. In 3D computer graphics, a depth image is an image or image channel that contains information about the distance from the viewpoint to the surfaces of scene objects. A depth image is similar to a grayscale image, except that each pixel value is the actual distance from the sensor to the object. In Figure 2b, the blue and red parts represent different ranging depths, increasing from blue to red. Usually, RGB images and depth images are aligned and thus have a one-to-one correspondence between pixel points. The color image provides the (x, y) coordinate values in the image pixel coordinate system, and the depth image provides the distance from the camera to each pixel in the scene, that is, the z coordinate value in the camera coordinate system.

3. Data Acquisition

3.1. Overview of the Study Area

The study area of this paper is located in the cement limestone mining area of Zhedong Town, Zhenyuan County, Pu’er City, Yunnan Province, China (as shown in Figure 3). Zhedong Town lies on both sides of the Zhegan River, upstream of the Amo River at the southwestern foot of Ailao Mountain, and has rich underground mineral resources. Its industrial system is centered on the coal industry, and cement production is also one of the regional pillar industries.
The limestone in this study area is extracted by open-pit mining. The limestone is transported by transfer chutes to the crushing chamber and then by belt conveyor to the factory for treatment. In the process of mining, the chute can become stuck when limestone is placed in it, which causes the rock wall of the underground mine to crack during blasting, resulting in potential safety hazards in the mine tunnel. Therefore, it is urgent to conduct an overall safety evaluation of the underground mine. The total length of the underground tunnel is 384 m, with a cross-section of 2.4 m × 3.6 m (2.8 m in the straight wall section and 0.8 m in the vaulted section) and an overall slope of 0.514°. The tunnel is connected to a crushing chamber at the end and a shaft at the top, with an overall depth of 109 m. The average cross-section of the shaft is 1.2 m × 1.7 m (1.2 m in the straight wall section and 0.5 m in the vaulted section).
However, due to the narrow and steep access shaft roadway of the underground mine, the 3D laser scanner cannot completely collect the 3D data in the tunnel, so miniaturized equipment is needed for the areas where acquisition is incomplete. RGB-D cameras can carry out indoor 3D reconstruction with reliable accuracy. In view of this, this paper combines the abilities of the RGB-D camera and 3D laser scanning technology to obtain complete 3D point cloud data of the access shaft roadway, which can provide powerful data visualization support for the safety evaluation of the underground mine.

3.2. Data Acquisition of Ground 3D Laser Scanner

Ground three-dimensional laser scanning technology records the three-dimensional geometric information of a measured target surface through joint calculation of the distance between the laser emission point and the target together with the position and attitude of the laser transmitter, thereby obtaining a three-dimensional point cloud of the measured target [21]. Compared with traditional measurement methods, this technology realizes the transformation from traditional single-point measurement to area measurement. It is characterized by high precision, automation, and strong environmental adaptability. Recently, it has been widely used in engineering construction, mine engineering, digital protection of cultural heritage, etc. [22].
In this paper, the Maptek I-Site 820ER 3D laser scanner was used to collect 3D point cloud data. The flow chart of point cloud data acquisition is shown in Figure 4. Without the interference of metal minerals and magnetic fields, the maximum observation range is 750 m, the minimum range is 1 m, the scanning range is 125° vertically and 360° horizontally, and the repeated ranging accuracy is ±6 mm. A high-resolution panoramic camera is also available to obtain point cloud texture information. This powerful and comprehensive performance enables complex surveying and mapping tasks, such as underground mining engineering.
After the field survey of the underground mine, control points were arranged along the roadway by means of a total station branch traverse, and the 3D coordinates of six target points were collected (in the CGCS2000 coordinate system) to facilitate coordinate conversion and accuracy comparison of the 3D point cloud data. The data acquisition is shown in Figure 5 (in Figure 5a, the 3D laser scanner could not be completely erected due to the narrow shaft). Although we collected data at 39 stations with the 3D laser scanner, only the missing part of the point cloud of the underground access shaft roadway was extracted for the 3D point cloud repair experiment.

3.3. Data Acquisition of RGB-D Camera

There are various RGB-D cameras; the Realsense depth camera [23] is highly integrated and low cost and can be applied in both indoor and outdoor scenarios. Its ideal measurement range is 0.6 m to 6 m, and its depth error is less than 2% at a range of 4 m. Therefore, the Realsense D455 depth camera was used in this study; it measures depth by classical binocular stereo vision assisted by an infrared projector. The data acquisition flow chart is shown in Figure 6.
Before data acquisition, the depth camera needed to be calibrated to ensure the accuracy of the collected data. The Realsense D455 has a laser-welded steel cage designed to help maintain calibration and performance over its lifetime. The Intel RealSense Viewer software provides an automatic camera calibration function. Unlike traditional camera calibration, it does not require a target reference, grid pattern, or complex rig; we simply kept the depth camera stationary and stable and selected the software’s self-calibration function to complete self-calibration.
After the depth cameras were calibrated, the data connection cable was connected to the computer and the handheld depth camera was used for data acquisition. The Rosbag dataset was recorded for the access shaft roadway through the Intel Realsense SDK depth camera control software, and finally, an offline dataset package was obtained, as shown in Figure 7. Each frame of the 2D color image and synchronous depth image of the recorded access shaft roadway can be obtained in the data package, and the unprocessed color 3D point cloud can also be obtained directly.

4. Data Processing

4.1. Data Processing of Ground 3D Laser Scanning

The 3D laser scanning equipment could not be completely erected in the complex field environment of our study area, and the narrow space of the shaft roadway limited the scanner’s field of view. This caused noise, small-scale data loss, and data redundancy in the obtained three-dimensional point cloud of the shaft roadway. Therefore, the originally collected point cloud data needed preprocessing in the form of denoising and registration.

4.1.1. Point Cloud Denoising

Due to the influence of the complex environment, equipment accuracy, and other factors in the scanning process, some noise points and outliers far away from the measured object occurred in the original 3D point cloud. These noise points would affect the results of point cloud registration and 3D model reconstruction. Based on their distribution, the noise points were filtered with corresponding filters to improve the quality of the 3D point cloud and reduce data redundancy.
We used the minimum interval filter to eliminate outliers of the original three-dimensional point cloud and carried out point cloud denoising through Maptek I-Site Studio point cloud processing software. The processing results are shown in Figure 8.
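The idea behind such distance-based filtering can be sketched in a few lines of NumPy. This is a generic k-nearest-neighbour outlier filter, not the exact minimum interval filter implemented in Maptek I-Site Studio; the neighbourhood size and threshold below are illustrative assumptions:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the cloud-wide mean."""
    # Full pairwise distances (fine for small demo clouds; use a k-d tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# Dense synthetic cluster plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(200, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])
clean = remove_outliers(cloud)
print(clean.shape)   # the isolated point is removed
```

Production filters differ mainly in how the neighbourhood statistic and threshold are defined, but they follow this same pattern of scoring each point against its local neighbourhood.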

4.1.2. Point Cloud Registration

The 3D laser scanner could not obtain the complete 3D point cloud information in one scan due to its limited scanning perspective and the complex environment. It was necessary to splice the point cloud data of different stations into the same coordinate system [24]; this is the process of point cloud registration. Depending on the classification standard, point cloud registration can be divided into two categories: feature-based registration and featureless registration [25].
Due to the steep and narrow slope of the underground shaft, it was dangerous to set up the total station and difficult to lay out targets; thus, we selected target-free point cloud registration. The most basic target-free registration algorithm is the Iterative Closest Point (ICP) algorithm [26]. Its processing flow is to sample the original point cloud data, determine the point set for the initial matching, filter and screen the corresponding point pairs, and solve the transformation matrix. Maptek I-Site Studio software was used to register the point cloud data of each scanning station of the underground shaft roadway based on the ICP algorithm. The results are shown in Figure 9. After registration (Figure 9b), it is evident that part of the 3D laser point cloud is missing.
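The ICP flow described above can be sketched with NumPy alone. This is a minimal point-to-point variant on synthetic data; real pipelines add sampling, correspondence rejection, and k-d tree search, and the toy cloud and perturbation below are illustrative:

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: nearest-neighbour pairing + closed-form
    (Kabsch/SVD) rigid update, iterated to convergence."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Pair each source point with its closest target point.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        matched = dst[d.argmin(axis=1)]
        # 2. Closed-form rigid update from the current pairs.
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a small known rotation + translation on a toy cloud.
rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, size=(100, 3))
theta = np.deg2rad(5)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
source = (target - 0.05) @ R_true.T        # mildly misaligned copy
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est
print(np.abs(aligned - target).max())      # small residual after convergence
```

Because the initial misalignment is small, nearest-neighbour pairing is mostly correct from the start; with a poor initial pose, ICP needs the coarse registration step discussed in Section 5.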

4.2. Data Processing of the RGB-D Camera

Key frames were extracted from the Rosbag offline dataset recorded by the Realsense depth camera through the Intel Realsense SDK, yielding the color image and depth image of each key frame. From the basic information of the color and depth images, the point cloud sensor data, and the intrinsic and extrinsic parameters of the depth camera, the coordinate values of any pixel can be calculated in the world coordinate system.
The pixel coordinates (x, y) from the color image, together with the camera intrinsic parameters, yield the (X, Y) coordinates in the camera coordinate system, and the depth image directly provides the Z coordinate in the camera coordinate system, which enables us to obtain the point cloud data, as shown in Figure 10. The conversion between a camera-frame point P and its pixel coordinates Pxy is as follows:
$$ Z P_{xy} = Z \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = K P $$

$$ X = \frac{Z(x - c_x)}{f_x}, \qquad Y = \frac{Z(y - c_y)}{f_y}, \qquad Z = d $$
where: x and y are the pixel coordinate values; X, Y, and Z are the camera coordinate values; K is the camera internal parameter matrix; fx, fy, cx, and cy are the camera internal parameters; and d is the depth value of each pixel measured by the depth camera.
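These relations can be checked numerically. The intrinsic parameters below are hypothetical values in the style of a colour stream, not the calibrated D455 intrinsics:

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy: focal lengths; cx, cy: principal point).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def back_project(x, y, d):
    """Map a pixel (x, y) with measured depth d to camera coordinates (X, Y, Z)."""
    Z = d
    X = Z * (x - cx) / fx
    Y = Z * (y - cy) / fy
    return np.array([X, Y, Z])

def project(P):
    """Inverse mapping: camera-frame point back to pixel coordinates."""
    X, Y, Z = P
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

P = back_project(400.0, 300.0, 2.0)
print(P)             # camera-frame point 2 m in front of the sensor
print(project(P))    # round-trips to the original pixel (400, 300)
```

Applying `back_project` to every pixel of an aligned colour/depth pair produces exactly the coloured point cloud described in the text.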
After obtaining the key frame point cloud model of each shaft roadway, it was necessary to carry out point cloud registration on the point cloud data of different key frames. The most classic and common registration method is the ICP algorithm. The point cloud data of the RGB-D camera key frame was registered through the ICP algorithm based on Python. The registration results are shown in Figure 11a,b. The Ply data format of the registered depth camera point cloud data was converted to the general point cloud data format of Las.

4.3. Comparative Analysis of Two Kinds of Data

The accuracy of point cloud data usually refers to the accuracy of the scanning instrument, the accuracy of the point cloud coordinates, the relative error between different collected datasets, etc. The ranging accuracy of the 3D laser scanner is ±6 mm. When the ranging depth of the Realsense D455 depth camera is less than 4 m, its ranging error is less than 2%, so the relative accuracy of the point cloud can meet the needs of 3D modeling. Taking the 3D laser scanning point cloud as the reference, the feature points of the shaft roadway and the vertical sections of the roadway at the same positions were compared with the depth camera point cloud collected and processed as described above.
We collected the coordinate values of the same feature points in the two types of processed point cloud data, calculated the three-dimensional distances from the coordinate values of pairs of feature points, and measured the relative distance across the roadway vertical section from the different point clouds at the same position in the shaft. The smaller the difference in three-dimensional distance, the higher the accuracy, and vice versa. The coordinates of the three-dimensional feature points are shown in Figure 12. By measuring the distances between feature points, the maximum error was 2.617 cm and the minimum error was 0.031 cm, which meets the requirements for visual repair of the missing part of the 3D laser scanner data.

5. Repair of Point Cloud

The point cloud data collected by the three-dimensional laser scanner had large missing regions, which affected the three-dimensional point cloud visualization of the underground shaft roadway, as shown in Figure 13. The missing parts of the point cloud were collected and repaired with the RGB-D camera to ensure that the overall point cloud of the access shaft roadway was intuitive and true.
Generally, the quality of point cloud repair is reflected in the shape, position, and degree of fit. The main steps of repair are coarse registration and fine registration of the two types of point clouds, followed by repair of the missing parts. In our study, the FPFH [27] + ICP, ISS [28] + ICP, and SVD [29] + ICP algorithms and the 3D-NDT algorithm [30] were used to register and fuse the RGB-D depth camera point cloud with the 3D laser scanning point cloud for further visual repair.

5.1. Point Cloud Repair with the FPFH + ICP Algorithm

The fast point feature histogram (FPFH) is an improved algorithm based on the point feature histogram (PFH) [31] that refines the original algorithm and improves its computational efficiency. The algorithm defines a fixed neighborhood around each point in the point cloud, calculates geometric feature values over that neighborhood, and builds a histogram of the feature values to obtain a statistical descriptor. After the corresponding point pairs of the target point cloud and the source point cloud are obtained with the FPFH descriptors, the optimal transformation matrix is computed iteratively so that the pose of the target point cloud relative to the source point cloud is approximately correct. The ICP algorithm is then used for fine registration, and the source point cloud data is repaired. The RGB-D roadway point cloud and the 3D laser scanning point cloud were repaired by the FPFH + ICP algorithm; the RMSE value was 19.0657 mm and the repair results are shown in Figure 14a,b.
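To make the histogram construction concrete, the following is a much-simplified, SPFH-style sketch of the per-point angular statistics (full FPFH additionally mixes each point's histogram with distance-weighted neighbour histograms); the neighbourhood size, bin counts, and random normals are illustrative assumptions, not library defaults:

```python
import numpy as np

def pair_features(p, n_p, q, n_q):
    """The three PFH angles (alpha, phi, theta) for one ordered point pair,
    following the Darboux-frame construction used by PFH/FPFH."""
    dpq = q - p
    dist = np.linalg.norm(dpq)
    u = n_p                                  # frame axis 1: source normal
    v = np.cross(u, dpq / dist)              # frame axis 2
    v /= np.linalg.norm(v)
    w = np.cross(u, v)                       # frame axis 3
    alpha = np.dot(v, n_q)
    phi = np.dot(u, dpq / dist)
    theta = np.arctan2(np.dot(w, n_q), np.dot(u, n_q))
    return alpha, phi, theta

def spfh_histogram(points, normals, idx, k=5, bins=5):
    """Simplified SPFH: histogram the three angles between one point and its
    k nearest neighbours, then normalise into a descriptor."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
    feats = np.array([pair_features(points[idx], normals[idx],
                                    points[j], normals[j]) for j in nbrs])
    hist = np.concatenate([
        np.histogram(feats[:, 0], bins=bins, range=(-1, 1))[0],
        np.histogram(feats[:, 1], bins=bins, range=(-1, 1))[0],
        np.histogram(feats[:, 2], bins=bins, range=(-np.pi, np.pi))[0],
    ])
    return hist / hist.sum()

rng = np.random.default_rng(2)
pts = rng.uniform(size=(50, 3))
nrm = rng.normal(size=(50, 3))
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)   # unit normals
h = spfh_histogram(pts, nrm, idx=0)
print(h.shape, h.sum())   # 15-bin descriptor, normalised to 1
```

Matching such descriptors between the source and target clouds yields the coarse correspondences that the subsequent ICP stage refines.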

5.2. Point Cloud Repair with the ISS + ICP Algorithm

The intrinsic shape signatures (ISS) descriptor represents the three-dimensional geometry of a target shape. Its principle is to establish a local coordinate system for each key point in the point cloud data, define a weighting radius centered on each key point, and calculate the covariance matrix of all points covered by that radius. The relationships between the eigenvalues of the covariance matrix describe how salient the point is. After the ISS algorithm detects the feature points of the target point cloud and matches the descriptors, the target point cloud and the source point cloud already have a good initial pose; the ICP algorithm is then used for fine registration between the source and target point clouds to meet the requirements of point cloud repair. The RGB-D roadway point cloud and the 3D laser scanning point cloud were repaired by the ISS + ICP algorithm; the RMSE value was 13.8524 mm and the repair results are shown in Figure 15a,b.
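The eigenvalue test at the heart of ISS can be sketched as follows; the radius and the two ratio thresholds are illustrative values, not the parameters used in our experiments:

```python
import numpy as np

def iss_keypoints(points, radius=0.2, gamma21=0.8, gamma32=0.8):
    """Minimal ISS detector: a point is salient when the sorted eigenvalues
    l1 >= l2 >= l3 of its neighbourhood covariance satisfy
    l2/l1 < gamma21 and l3/l2 < gamma32."""
    keypoints = []
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 5:                     # too few points for a stable covariance
            continue
        cov = np.cov(nbrs.T)
        l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
        if l2 / l1 < gamma21 and l3 / l2 < gamma32:
            keypoints.append(i)
    return np.array(keypoints)

# A featureless planar patch: interior points have near-isotropic in-plane
# spread (l1 ~ l2), so few of them pass the saliency test.
rng = np.random.default_rng(3)
flat = np.c_[rng.uniform(size=(200, 2)), np.zeros(200)]
kp = iss_keypoints(flat)
print(len(kp))
```

On real roadway scans, salient points concentrate on edges and surface irregularities, which is what makes them useful anchors for the coarse alignment.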

5.3. Point Cloud Repair with the SVD + ICP Algorithm

The singular value decomposition (SVD) algorithm is a matrix decomposition method that solves a least-squares objective function. It can reduce the dimension of the target feature space, thereby obtaining a feature description of the target and improving the noise resistance of the algorithm. After rough registration of the target and source point clouds based on the SVD algorithm, the ICP algorithm is used to further refine the registration and successfully repair the source point cloud. The RGB-D roadway point cloud and the 3D laser scanning point cloud were repaired by the SVD + ICP algorithm; the RMSE value was 22.3248 mm and the repair results are shown in Figure 16a,b.

5.4. Point Cloud Repair with the 3D-NDT Algorithm

The normal distributions transform (NDT) algorithm represents three-dimensional point cloud data with a set of normal distributions, evaluates the probability density of the point cloud data through the normal distribution values, and carries out point cloud registration on that basis. It does not need to calculate features of the target and source point clouds or find feature point pairs. 3D-NDT voxelizes and partitions all point clouds and then applies the NDT procedure to each voxel. The RGB-D roadway point cloud and the 3D laser scanning point cloud were repaired by the 3D-NDT algorithm; the RMSE value was 31.9413 mm and the repair results are shown in Figure 17a,b.
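The voxelization stage of 3D-NDT can be sketched as below: each occupied voxel is summarised by the mean and covariance of its points, and registration (omitted here) then maximises the likelihood of the transformed source points under these per-voxel Gaussians. The voxel size and minimum point count are illustrative choices:

```python
import numpy as np

def ndt_cells(points, voxel=0.5):
    """First stage of 3D-NDT: bucket points into voxels and summarise each
    occupied voxel by the mean and covariance of its points."""
    keys = np.floor(points / voxel).astype(int)
    cells = {}
    for key in {tuple(k) for k in keys}:
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= 4:                      # need enough support for a covariance
            cells[key] = (pts.mean(axis=0), np.cov(pts.T))
    return cells

rng = np.random.default_rng(4)
cloud = rng.uniform(0, 1, size=(500, 3))       # unit cube -> 8 half-size voxels
cells = ndt_cells(cloud)
print(len(cells))                              # occupied voxels with enough points
```

Because the score is evaluated against these smooth per-voxel distributions rather than raw points, NDT avoids the explicit correspondence search that ICP and the feature-based methods require.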
The main steps of point cloud visual patching are coarse registration and fine registration of two kinds of point cloud data. Therefore, the accuracy measurement of repair effect is the evaluation of point cloud registration accuracy. The repair effect is evaluated by the root mean square error (RMSE) and the registration verification diagram. The comparison of repair accuracy based on different registration algorithms is shown in Table 1. The formula of RMSE is as follows:
$$ \mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \lVert d_i \rVert^2} $$
where n is the number of matched point pairs and di is the distance error between a source point and its matching corresponding point.
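In code, the RMSE formula above reduces to a one-line reduction over the per-pair distance errors:

```python
import numpy as np

def rmse(errors):
    """Root mean square of the per-pair distance errors d_i."""
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(errors ** 2))

print(rmse([3.0, 4.0]))   # sqrt((9 + 16) / 2) = 3.5355...
```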
From Table 1 and the registration verification diagram of each registration algorithm, it is evident that the registration algorithm based on ISS + ICP has advantages over others with high registration accuracy of the source point cloud and target point cloud. Therefore, the ISS + ICP registration algorithm is the best way to repair the point cloud.

6. Conclusions

A missing 3D point cloud region not only affects the display effect but also has a great impact on subsequent point cloud data processing. Therefore, it is necessary to repair the 3D point cloud data to ensure its integrity and authenticity. The underground space of mines (such as goafs and access shafts) is narrow and hidden, so it is difficult to collect complete data with a single acquisition method. To solve this problem, this paper used an RGB-D depth camera to repair the holes caused by missing three-dimensional laser scanning points. The RGB-D camera is compact and convenient for data acquisition in the narrow roadway and can collect roadway data at close range. We used the ICP algorithm to align each key frame of the point cloud data collected by the RGB-D camera. The maximum error between the point clouds constructed by the two different devices was 2.617 cm and the minimum was 0.031 cm, which satisfies the visual repair of the parts missing from the 3D laser scanner data. The RGB-D camera point cloud and the 3D laser scanning point cloud with holes could be repaired by the FPFH + ICP, ISS + ICP, SVD + ICP, or 3D-NDT algorithms. Our results indicate that the ISS + ICP registration algorithm performed better than the others, with 28,153 points participating in the RMSE calculation and an RMSE value of 13.8524 mm; its relative accuracy meets the requirements of visual repair. The repair results show that the two point clouds fit well, which not only meets the requirements of hole repair but also maintains the original characteristics of the underground access shaft roadway for quantitative analysis.

Author Contributions

Conceptualization, H.T. and Y.X.; validation, H.T., C.L. and M.Y. (Min Yan); formal analysis, C.L.; resources, X.H. and Y.X.; data curation, X.K.; writing—original draft preparation, H.T.; writing—review and editing, X.W.; project administration, M.Y. (Minglong Yang). All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (Grant No. 41861054).

Data Availability Statement

The data used to support the findings of this study are presented in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Teng, C.-H.; Chuo, K.-Y.; Hsieh, C.-Y. Reconstructing three-dimensional models of objects using a Kinect sensor. Vis. Comput. 2017, 34, 1507–1523. [Google Scholar] [CrossRef]
  2. Liu, G.H.; Duan, J.C. A deep image hole repairing method and a superpixel segmentation algorithm based on Kinect camera. Comput. Eng. Sci. 2020, 42, 851–858. [Google Scholar] [CrossRef]
  3. Khorasani, M.; Ghasemi, A.; Leary, M.; Sharabian, E.; Cordova, L.; Gibson, I.; Downing, D.; Bateman, S.; Brandt, M.; Rolfe, B. The effect of absorption ratio on meltpool features in laser-based powder bed fusion of IN718. Opt. Laser Technol. 2022, 153, 108263. [Google Scholar] [CrossRef]
  4. Wang, D.; Wei, X.; Liu, J.; Xiao, Y.; Yang, Y.; Liu, L.; Tan, C.; Yang, X.; Han, C. Lightweight design of an AlSi10Mg aviation control stick additively manufactured by laser powder bed fusion. Rapid Prototyp. J. 2022; ahead of print. [Google Scholar] [CrossRef]
  5. Linares, J.-M.; Chaves-Jacob, J.; Lopez, Q.; Sprauel, J.-M. Fatigue life optimization for 17-4Ph steel produced by selective laser melting. Rapid Prototyp. J. 2022, 28, 1182–1192. [Google Scholar] [CrossRef]
  6. Ettaieb, K.; Godineau, K.; Lavernhe, S.; Tournier, C. Offline laser power modulation in LPBF additive manufacturing including kinematic and technological constraints. Rapid Prototyp. J. 2022; ahead of print. [Google Scholar] [CrossRef]
  7. Mathey, M. Modelling coal pillar stability from mine survey plans in a geographic information system. J. S. Afr. Inst. Min. Metall. 2018, 118, 157–164. [Google Scholar] [CrossRef]
  8. Gikas, V. Three-Dimensional Laser Scanning for Geometry Documentation and Construction Management of Highway Tunnels during Excavation. Sensors 2012, 12, 11249–11270. [Google Scholar] [CrossRef]
  9. Jaboyedoff, M.; Demers, D.; Locat, J.; Locat, A.; Locat, P.; Oppikofer, T.; Robitaille, D.; Turmel, D. Use of terrestrial laser scanning for the characterization of retrogressive landslides in sensitive clay and rotational landslides in river banks. Can. Geotech. J. 2009, 46, 1379–1390. [Google Scholar] [CrossRef]
  10. Wajs, J.; Kasza, D.; Zagożdżon, P.P.; Zagożdżon, K.D. 3D modeling of underground objects with the use of SLAM technology on the example of historical mine in Ciechanowice (Ołowiane Range, The Sudetes). E3S Web Conf. 2018, 29, 24. [Google Scholar] [CrossRef]
  11. Yan, K.; Hu, Q.; Wang, H.; Huang, X.; Li, L.; Ji, S. Continuous Mapping Convolution for Large-Scale Point Clouds Semantic Segmentation. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  12. Zhou, Q.-Y.; Koltun, V. Color map optimization for 3D reconstruction with consumer depth cameras. ACM Trans. Graph. 2014, 33, 1–10. [Google Scholar] [CrossRef]
  13. Zhou, Q.-Y.; Koltun, V. Dense scene reconstruction with points of interest. ACM Trans. Graph. 2013, 32, 1–8. [Google Scholar] [CrossRef]
  14. Cui, W.; Chen, H.; Liu, W.H. Hole Surface Repairing for Laser Triangular Mesh Point Cloud, Laser. Optoelectronics. Progress 2021, 58, 344–355. [Google Scholar] [CrossRef]
  15. Wang, C.X.; Hao, L.W.; Wang, Y.; Ji, K.H.; Liu, L. Review of hole repair in point cloud model, Modern. Manufacturing. Engineering 2020, 9, 156–162. [Google Scholar] [CrossRef]
  16. Chen, F.Z.; Chen, Z.Y.; Ding, Z.; Ye, X.Z.; Zhang, S.Y. Filling Holes in Cloud with Radial Basis Function. J. Comput. Aided Des. Comput. Graph. 2009, 18, 1414–1419. [Google Scholar] [CrossRef]
  17. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160. [Google Scholar] [CrossRef]
  18. Dorrington, A.A.; Kelly, C.B.D.; McClure, S.H.; Payne, A.D.; Cree, M.J. Advantages of 3d time-of-flight range imaging cameras in machine vision applications. In Proceedings of the 16th Electronics New Zealand Conference (ENZCon), Dunedin, New Zealand, 18–20 November 2009; pp. 95–99. Available online: http://hdl.handle.net/10289/4033 (accessed on 18 November 2009).
  19. Ganapathi, V.; Plagemann, C.; Koller, D.; Thrun, S. Real time motion capture using a single time-of-flight camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 755–762. [Google Scholar] [CrossRef]
  20. Zhu, H.; Wang, J.; Li, J. Variable support-weight approach for correspondence search based on modified census transform. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012; Volume 2, pp. 1040–1043. [Google Scholar] [CrossRef]
  21. Bisheng, Y.; Fuxun, L.; Ronggang, H. Progress, challenges and perspectives of 3D LiDAR point cloud processing. Acta Geod. Cartogr. Sin. 2017, 46, 1509–1516. [Google Scholar] [CrossRef]
  22. Zang, Y.F. Spatial Alignment of Multi-platform Point Clouds and On-demand 3D Modeling. Acta Geod. Cartogr. Sin. 2018, 47, 1693. [Google Scholar] [CrossRef]
  23. Tadic, V.; Odry, A.; Burkus, E.; Kecskes, I.; Kiraly, Z.; Klincsik, M.; Sari, Z.; Vizvari, Z.; Toth, A.; Odry, P. Painting Path Planning for a Painting Robot with a RealSense Depth Sensor. Appl. Sci. 2021, 11, 1467. [Google Scholar] [CrossRef]
  24. Zheng, M.H.; Zang, Y.F.; Liang, F.X.; Yang, B.S. Registration Method Research of Terrestrial Laser Point Clouds from Different Scenarios. Bull. Surv. Mapp. 2015, 8, 30–34. [Google Scholar] [CrossRef]
  25. Chen, L.L.; Sui, L.C.; Jiang, T.; Xue, Y.; Huang, W.C. The Methods of Data Registration in Terrestial 3D Laser Scanning. Bull. Surv. Mapp. 2014, 5, 80–82. [Google Scholar] [CrossRef]
  26. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  27. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar] [CrossRef]
  28. Zhong, Y. Intrinsic shape signatures: A shape descriptor for 3D object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 689–696. [Google Scholar] [CrossRef]
  29. Audette, M.A.; Ferrie, F.P.; Peters, T.M. An algorithmic overview of surface registration techniques for medical imaging. Med. Image Anal. 2000, 4, 201–217. [Google Scholar] [CrossRef]
  30. Merzouk, N.K.; Merzouk, M.; Messen, N. A mass consistent model application to the study of phenomenon in advance of sand towards the Algerian High Plains. Renew. Energy 2003, 28, 655–663. [Google Scholar] [CrossRef]
  31. Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3384–3391. [Google Scholar] [CrossRef]
Figure 1. Depth cameras with different imaging principles. (a) Structured light camera; (b) Time of Flight camera; (c) Binocular vision camera.
Figure 2. Original image of RGB-D camera. (a) Color image; (b) Depth image.
Figure 3. Overview of the study area (model constructed by oblique photogrammetry). (a) Overall view of the mine; (b) Front view of the mining area of the mine.
Figure 4. Flow chart of 3D laser data acquisition.
Figure 5. Point cloud data acquisition in the access shaft roadway. (a) Point cloud acquisition with the 3D laser scanner; (b) Coordinate acquisition of the target points.
Figure 6. Data acquisition flow chart of the depth camera.
Figure 7. Data acquisition diagram of depth camera. (a) Depth camera data acquisition site; (b) Offline dataset recorded by Intel Realsense SDK.
Figure 8. Before and after point cloud denoising. (a) Point cloud before denoising; (b) Point cloud after denoising.
Figure 9. Before and after 3D laser point cloud registration. (a) Point cloud before registration; (b) Point cloud after registration.
Figure 10. Conversion of RGB image and depth image into point cloud data. (a) Color image; (b) Depth image; (c) Point cloud.
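The conversion in Figure 10 follows the standard pinhole back-projection: each depth pixel (u, v) with depth d maps to a 3D point through the camera intrinsics, x = (u − cx)·z/fx, y = (v − cy)·z/fy. A minimal numpy sketch, assuming illustrative intrinsics and a RealSense-style millimetre depth scale rather than the actual calibration used in the paper:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H x W, raw sensor units) into an N x 3 cloud.

    depth_scale converts raw units to metres (0.001 for a sensor reporting
    millimetres). Zero-depth (invalid) pixels are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0
    x = (u - cx) * z / fx   # pinhole model, x axis
    y = (v - cy) * z / fy   # pinhole model, y axis
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

Colouring each point then only requires sampling the registered RGB image at the same (u, v).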
Figure 11. (a) Python point cloud data processing interface. (b) Process diagram of point cloud data registration of depth camera. (b1) Before registration; (b2) Registration in progress; (b3) After registration.
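The registration loop behind Figure 11 is the classic ICP cycle: match every point to its nearest neighbour in the target cloud, solve for the rigid transform in closed form via SVD, apply it, and repeat. A compact numpy sketch with brute-force matching (illustrative only; a practical implementation would use a k-d tree and reject outlier pairs):

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Point-to-point ICP; returns src aligned onto dst."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = rigid_from_correspondences(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Because nearest-neighbour matching is only reliable once the clouds are roughly aligned, ICP is preceded by a coarse step (FPFH, ISS, or SVD over feature points in the comparisons below).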
Figure 12. Point cloud data measurement map. (a) 3D laser point cloud data measurement; (b) Depth camera point cloud data measurement.
Figure 13. Missing 3D laser point cloud data. (a) Side view; (b) Front view.
Figure 14. (a) Point cloud patching diagram of the FPFH + ICP algorithm. (a1) Initial pose; (a2) Coarse registration; (a3) Fine registration side view; (a4) Fine registration top view. (b) Fine registration verification diagram.
Figure 15. (a) Point cloud patching diagram of ISS + ICP algorithm. (a1) Initial pose; (a2) Coarse registration; (a3) Fine registration side view; (a4) Fine registration top view. (b) Fine registration verification diagram.
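The ISS detector used for the coarse stage in Figure 15 keeps points whose local covariance has three clearly separated eigenvalues λ1 ≥ λ2 ≥ λ3, i.e. neighbourhoods that are neither flat nor linear. A simplified, unweighted sketch (the full detector adds distance weighting and non-maximum suppression; the radius and the two ratio thresholds here are illustrative values):

```python
import numpy as np

def iss_keypoints(pts, radius=0.1, g21=0.8, g32=0.8):
    """Simplified Intrinsic Shape Signatures saliency test.

    A point is a keypoint when the descending eigenvalues l1 >= l2 >= l3 of
    its local covariance satisfy l2/l1 < g21 and l3/l2 < g32, meaning the
    neighbourhood varies distinctly along all three principal axes.
    """
    keep = []
    for i, p in enumerate(pts):
        nbrs = pts[np.linalg.norm(pts - p, axis=1) < radius]
        if len(nbrs) < 5:                       # too sparse to estimate a covariance
            continue
        l = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # descending order
        if l[0] < 1e-12 or l[1] < 1e-12:        # degenerate neighbourhood
            continue
        if l[1] / l[0] < g21 and l[2] / l[1] < g32:
            keep.append(i)
    return np.array(keep, dtype=int)
```

Matching only these salient points, instead of the full cloud, is what keeps the ISS + ICP coarse alignment cheap and distinctive.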
Figure 16. (a) Point cloud patching diagram of SVD + ICP algorithm. (a1) Initial pose; (a2) Coarse registration; (a3) Fine registration side view; (a4) Fine registration top view. (b) Fine registration verification diagram.
Figure 17. (a) Point cloud patching diagram of 3D-NDT algorithm. (a1) Initial pose; (a2) Fine registration side view; (a3) Fine registration top view. (b) Fine registration verification diagram.
Table 1. Comparison of repair accuracy for the different registration algorithms.

| Algorithm  | Registered RMSE Interval (m) | Total Points (RGB-D/TLS) | Points Used in RMSE Calculation | RMSE (mm) |
|------------|------------------------------|--------------------------|---------------------------------|-----------|
| FPFH + ICP | 0.003                        | 1,388,470/726,820        | 27,057                          | 19.0657   |
| ISS + ICP  | 0.003                        | 1,388,470/726,820        | 28,153                          | 13.8524   |
| SVD + ICP  | 0.003                        | 1,388,470/726,820        | 27,251                          | 22.3248   |
| 3D-NDT     | 0.003                        | 1,388,470/726,820        | 27,449                          | 31.9413   |
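The registered RMSE interval means that only nearest-neighbour pairs closer than 0.003 m after registration contribute to the RMSE, so points with no counterpart in the other cloud (e.g. inside the hole region) do not distort the score. A sketch of this inlier RMSE, assuming simple brute-force matching:

```python
import numpy as np

def registration_rmse(src, dst, max_dist=0.003):
    """RMSE (metres) over nearest-neighbour pairs closer than max_dist.

    Returns (rmse, n_inliers); pairs farther apart than max_dist are treated
    as non-overlapping regions and excluded from the statistic.
    """
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    nn = np.sqrt(d2.min(axis=1))            # distance to the closest dst point
    inliers = nn[nn < max_dist]
    if inliers.size == 0:
        return float("nan"), 0
    return float(np.sqrt((inliers ** 2).mean())), int(inliers.size)
```

The differing inlier counts in Table 1 (27,057 to 28,153) thus reflect how much of the RGB-D patch each algorithm brought within 3 mm of the laser cloud.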
Tai, H.; Xia, Y.; He, X.; Wu, X.; Li, C.; Yan, M.; Kong, X.; Yang, M. RGB-D Camera for 3D Laser Point Cloud Hole Repair in Mine Access Shaft Roadway. Appl. Sci. 2022, 12, 8910. https://doi.org/10.3390/app12178910
