Coarse Alignment Methodology of Point Cloud Based on Camera Position/Orientation Estimation Model
Abstract
1. Introduction
1.1. Background
1.2. Related Work
1.2.1. Estimating Camera Position and Orientation
1.2.2. Point Cloud Registration
1.2.3. Position and Orientation Estimation of Sensors Using Point Cloud
2. Methodology
2.1. Camera Geometric Model and Estimating Position/Orientation Method
2.1.1. Pinhole Camera Model
2.1.2. Absolute Position/Orientation Estimation and Calibration Using Pinhole Camera Model
2.2. Data Acquisition
2.2.1. Sensors
2.2.2. Test Site
3. Results and Discussions
3.1. Point Cloud of Each Station
3.2. LiDAR Position and Orientation Estimation
3.3. Point Cloud Registration
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Item | Specification |
|---|---|
| Dimensions | Height: 165 mm / Diameter: 100 mm |
| Distance measurement system | High-speed time-of-flight enhanced by Waveform Digitizing (WFD) technology |
| Laser class | 1 (in accordance with IEC 60825-1:2014) |
| Wavelength | 830 nm |
| Field of view | 360° (horizontal) / 300° (vertical) |
| Range | 0.6 m (min.) up to 60 m |
| Point measurement rate | Up to 360,000 pts/s |
| Ranging accuracy | 4 mm @ 10 m / 7 mm @ 20 m |
| Camera system | 15 Mpix 3-camera system; 150 Mpix full-dome capture; HDR; LED flash; calibrated spherical image, 360° × 300° |
| Thermal camera | FLIR-based longwave infrared camera; thermal panoramic image, 360° × 70° |
| Station Number | Translation X (m) | Translation Y (m) | Translation Z (m) | Local Camera Orientation X | Local Camera Orientation Y | Local Camera Orientation Z | World Camera Orientation X | World Camera Orientation Y | World Camera Orientation Z |
|---|---|---|---|---|---|---|---|---|---|
| Reference station | 0.489 | 0.387 | 0.002 | - | - | - | - | - | - |
| Station 1 | −12.715 | 7.464 | 0.021 | 0.359 | −0.934 | −0.005 | 0.999 | −0.017 | −0.002 |
| Station 2 | 0.249 | −4.358 | 0.014 | 0.535 | −0.845 | 0.013 | 0.002 | 0.999 | −0.001 |
| Station 3 | 9.012 | −4.254 | −0.026 | 0.566 | −0.825 | −0.001 | −0.999 | 0.013 | 0.006 |
| Station 4 | 4.154 | 4.189 | −0.012 | 0.550 | −0.835 | 0.008 | 0.002 | −0.999 | 0.001 |
| Station 5 | −6.718 | 4.249 | 0.029 | 0.932 | 0.363 | 0.002 | 0.999 | −0.004 | 0.001 |

(Orientation vector entries are unit-vector components and therefore dimensionless.)
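As a point of reference, the sketch below shows how a camera pose of this kind can be recovered from 2D–3D point correspondences under a pinhole camera model. It is a minimal illustration using OpenCV's `solvePnP` with simulated correspondences and assumed intrinsics, not the authors' estimation model; the recovered camera position corresponds to the translation vector, and the optical axis rotated into world coordinates corresponds to the world camera orientation vector tabulated above.

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics (focal length in pixels, principal point).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion has already been removed

# Hypothetical 3D control points surveyed in the world frame (metres).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.2, 0.0, 0.1],
                          [1.2, 0.9, 0.0],
                          [0.0, 0.9, 0.3],
                          [0.6, 0.45, 0.5],
                          [0.3, 0.2, 0.8]])

# Simulate an image of those points from a known ground-truth pose ...
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.2, -0.1, 3.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# ... then recover the absolute pose (world -> camera) from the 2D-3D pairs.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)

# Camera position in world coordinates and its viewing direction (the
# camera's +Z optical axis expressed in the world frame) -- the same kinds
# of quantities as the translation and world orientation vectors above.
camera_position = (-R.T @ tvec).ravel()
world_view_dir = R.T @ np.array([0.0, 0.0, 1.0])
print("position:", camera_position, "view direction:", world_view_dir)
```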
| Station Number | Position Error X (m) | Position Error Y (m) | Position Error Z (m) | Euclidean Error (m) | Error/Distance (%) | Reprojection Error (pixels) |
|---|---|---|---|---|---|---|
| Station 1 | 0.041 | 0.022 | 0.026 | 0.054 | 0.36 | 1.88 |
| Station 2 | 0.034 | 0.029 | 0.025 | 0.051 | 1.17 | 2.00 |
| Station 3 | 0.009 | 0.065 | 0.021 | 0.069 | 0.69 | 3.10 |
| Station 4 | 0.030 | 0.067 | 0.010 | 0.074 | 1.25 | 4.08 |
| Station 5 | 0.056 | 0.096 | 0.030 | 0.115 | 1.43 | 3.61 |
| Mean | - | - | - | 0.072 | 0.98 | 2.93 |
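The Euclidean error column follows directly from the per-axis errors as sqrt(ΔX² + ΔY² + ΔZ²). A minimal check using the tabulated per-axis values (the results agree with the table up to the rounding of those inputs):

```python
import numpy as np

# Per-axis position errors (m) for Stations 1-5, copied from the table above.
xyz_err = np.array([[0.041, 0.022, 0.026],
                    [0.034, 0.029, 0.025],
                    [0.009, 0.065, 0.021],
                    [0.030, 0.067, 0.010],
                    [0.056, 0.096, 0.030]])

# Euclidean error per station: sqrt(dX^2 + dY^2 + dZ^2).
euclidean = np.linalg.norm(xyz_err, axis=1)
print(np.round(euclidean, 3))             # ~[0.053 0.051 0.069 0.074 0.115]
print(round(float(euclidean.mean()), 3))  # ~0.073 (table: 0.072, likely from unrounded data)
```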
| Target Number | Proposed Method — Error (m) | Proposed + Fine Registration — Error (m) | Semi-Automatic — Error (m) |
|---|---|---|---|
| #1 | 0.084 | 0.037 | 0.053 |
| #2 | 0.108 | 0.024 | 0.038 |
| #3 | 0.077 | 0.048 | 0.038 |
| #4 | 0.058 | 0.008 | 0.030 |
| #5 | 0.194 | 0.078 | 0.069 |
| #6 | 0.191 | 0.073 | 0.075 |
| #7 | 0.210 | 0.035 | 0.056 |
| #8 | 0.077 | 0.081 | 0.149 |
| #9 | 0.158 | 0.021 | 0.114 |
| #10 | 0.085 | 0.097 | 0.096 |
| Mean | 0.124 | 0.050 | 0.072 |
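The "Proposed + Fine Registration" column reflects the usual coarse-to-fine pipeline: the pose-based coarse transform seeds an ICP refinement. Below is a minimal sketch of that refinement step using the Open3D library, with hypothetical file names and a correspondence threshold chosen purely for illustration; the paper does not specify Open3D or these parameters.

```python
import numpy as np
import open3d as o3d

# Load two station scans (hypothetical file names).
source = o3d.io.read_point_cloud("station_1.ply")
target = o3d.io.read_point_cloud("reference_station.ply")

# Coarse alignment: the 4x4 transform built from the estimated camera
# position/orientation (identity used here as a placeholder).
coarse_T = np.eye(4)

# Fine registration: point-to-point ICP seeded with the coarse transform.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.2,  # metres; an assumption, tune per scene
    init=coarse_T,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)  # overlap ratio and inlier RMSE
print(result.transformation)               # refined 4x4 alignment
```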
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).