Point Cloud Validation: On the Impact of Laser Scanning Technologies on the Semantic Segmentation for BIM Modeling and Evaluation
Abstract
1. Introduction
1. A detailed literature study on point cloud processing, from static and mobile Lidar data acquisition to semantic segmentation;
2. A capacity study of four state-of-the-art static and mobile Lidar mapping solutions;
3. An empirical study of the impact on the semantic segmentation step based on international specifications;
4. An in-depth overview of the BIM information that can be reliably extracted from each system for modeling and evaluation.
2. Background and Related Work
2.1. Data Acquisition
2.2. Data Processing
2.3. Reconstruction Methods and Inputs
2.4. Validation Methods and Specifications
3. Sensors
3.1. Leica Scanstation P30
3.2. NavVis M6
3.3. NavVis VLX
3.4. Microsoft Hololens 2
4. Methodology
4.1. Sensor Capacity
4.2. Point Cloud Suitability
5. Test Setups
6. Experimental Results
6.1. Sensor Capacity
6.1.1. Impact of Control Points
6.1.2. Impact of Loop Closure
6.2. Point Cloud Suitability
6.2.1. Impact of Quality
6.2.2. Impact of Completeness
6.2.3. Impact of Detailing
6.2.4. Impact of Semantic Segmentation
7. Discussion
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Level of Accuracy | LOA10 | LOA20 | LOA30 | LOA40 | LOA50
---|---|---|---|---|---
Deviation range | [0.05 m; −] | [0.015 m; 0.05 m] | [0.005 m; 0.015 m] | [0.001 m; 0.005 m] | [0; 0.001 m]

Level of Development | LOD100 | LOD200 | LOD300 | LOD350 | LOD400
---|---|---|---|---|---
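
To make the LOA bands above concrete, the following minimal Python sketch maps an absolute deviation (in metres) to its USIBD LOA band using the ranges from the table. The function name and the strict-inequality treatment of the band edges are illustrative assumptions, not part of the paper.

```python
# Minimal sketch: map an absolute point deviation (in metres) to a USIBD LOA
# band using the ranges from the table above. Helper name and edge handling
# are illustrative, not the authors' implementation.

def loa_class(deviation_m: float) -> str:
    """Return the LOA band for an absolute deviation in metres."""
    if deviation_m < 0.001:
        return "LOA50"   # [0; 0.001 m]
    if deviation_m < 0.005:
        return "LOA40"   # [0.001 m; 0.005 m]
    if deviation_m < 0.015:
        return "LOA30"   # [0.005 m; 0.015 m]
    if deviation_m < 0.05:
        return "LOA20"   # [0.015 m; 0.05 m]
    return "LOA10"       # [0.05 m; -]

# Example: a 12 mm deviation falls in the LOA30 band.
print(loa_class(0.012))  # -> LOA30
```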
Classes | Level of Accuracy | Level of Development | Quality: LOA (m) | Completeness: Coverage (%) | Detailing: Resolution (m) | Semantic Segmentation: mIoU S3DIS (%)
---|---|---|---|---|---|---
Ceilings | LOA20 | LOD300 | <0.05 | >50% | <0.05 | 93.1 |
Floors | LOA20 | LOD300 | <0.05 | >50% | <0.05 | 96.1 |
Walls | LOA30 | LOD350 | <0.015 | >50% | <0.015 | 80.6 |
Columns | LOA30 | LOD350 | <0.015 | >50% | <0.015 | 48.0 |
Beams | LOA30 | LOD350 | <0.015 | >50% | <0.015 | 62.4 |
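
The last column of the table reports class-wise IoU scores in the style of the S3DIS benchmark. As a reference for how such scores are typically obtained from point-wise labels, here is a minimal NumPy sketch of per-class IoU and mIoU; the arrays, class count, and function name are illustrative and do not reproduce the authors' evaluation code.

```python
import numpy as np

# Minimal sketch of per-class IoU / mIoU for point-wise semantic labels.
def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))   # points labelled c in both
        union = np.sum((pred == c) | (gt == c))   # points labelled c in either
        if union > 0:
            ious[c] = inter / union
    return ious

# Toy example with three classes and six points.
pred = np.array([0, 0, 1, 2, 2, 2])   # predicted labels per point
gt   = np.array([0, 1, 1, 2, 2, 0])   # ground-truth labels per point
ious = per_class_iou(pred, gt, num_classes=3)
print(ious, np.nanmean(ious))          # per-class IoU and mIoU
```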
Test Setup | Metric | NavVis M6 | NavVis VLX | Leica Scanstation P30 | Microsoft Hololens 2
---|---|---|---|---|---
D-hall | Points | 9,998,247 | 17,838,269 | 74,301,732 | 346,206
D-hall | Capture time | ±10 min | ±10 min | 1 h 45 min | ±35 min
D-hall | Process time | ±20 min | ±20 min | 35 min | ±20 min
E-hall | Points | 14,906,837 | 21,411,219 | 49,415,423 | 193,611
E-hall | Capture time | ±10 min | ±10 min | 1 h 30 min | ±15 min
E-hall | Process time | ±20 min | ±20 min | 30 min | ±20 min
Lab 1 | Points | 28,387,570 | 62,441,871 | 70,176,074 | 306,300
Lab 1 | Capture time | ±20 min | ±20 min | ±1 h 45 min | ±27 min
Lab 1 | Process time | ±20 min | ±20 min | ±45 min | ±25 min
Lab 2 | Points | 19,855,995 | 32,255,982 | 55,171,563 | 218,790
Lab 2 | Capture time | ±20 min | ±20 min | ±1 h 30 min | ±30 min
Lab 2 | Process time | ±20 min | ±20 min | ±30 min | ±20 min
Lab 3 | Points | 17,827,807 | 45,715,878 | 26,501,599 | 93,002
Lab 3 | Capture time | ±10 min | ±10 min | ±45 min | ±20 min
Lab 3 | Process time | ±20 min | ±20 min | ±15 min | ±15 min
Classes | Data | Quality: LOA30 (%) | Quality: LOA20 (%) | Quality: LOA10 (%) | Completeness: Coverage (%) | Detailing: Resolution (m) | Semantic Segmentation: IoU (%)
---|---|---|---|---|---|---|---
Ceilings | Synth | - | - | - | - | 0.01 | 49.6
Ceilings | P30 | 52.9 | 87.7 | 98.2 | 63.5 | 0.003 | 69.4
Ceilings | VLX | 38.1 | 86.4 | 97.5 | 77.6 | 0.004 | 70.9
Ceilings | M6 | 22.1 | 89.1 | 97.9 | 69.9 | 0.005 | 72.2
Ceilings | Hololens 2 | 33.4 | 89.1 | 98.5 | 51.3 | 0.05 | 72.4
Floors | Synth | - | - | - | - | 0.01 | 39.5
Floors | P30 | 93.7 | 97.3 | 98.3 | 23.0 | 0.003 | 64.1
Floors | VLX | 56.3 | 97.3 | 98.8 | 39.2 | 0.004 | 72.1
Floors | M6 | 16.4 | 92.2 | 99.4 | 32.3 | 0.005 | 76.5
Floors | Hololens 2 | 27.9 | 70.2 | 96.8 | 29.0 | 0.05 | 69.3
Walls | Synth | - | - | - | - | 0.01 | 84.9
Walls | P30 | 29.7 | 79.2 | 96.7 | 22.9 | 0.003 | 55.3
Walls | VLX | 36.5 | 80.5 | 96.5 | 58.3 | 0.004 | 53.0
Walls | M6 | 44.2 | 82.2 | 95.5 | 49.6 | 0.005 | 69.5
Walls | Hololens 2 | 31.7 | 79.6 | 97.5 | 23.4 | 0.05 | 69.7
Columns | Synth | - | - | - | - | 0.01 | 0.4
Columns | P30 | 53.2 | 94.0 | 97.9 | 44.4 | 0.003 | 0.4
Columns | VLX | 47.2 | 71.0 | 94.8 | 65.3 | 0.004 | 0.00
Columns | M6 | 54.8 | 96.4 | 98.4 | 52.3 | 0.005 | 0.00
Columns | Hololens 2 | 17.6 | 73.8 | 97.8 | 47.4 | 0.05 | 0.00
Beams | Synth | - | - | - | - | 0.01 | 7.63
Beams | P30 | 65.4 | 98.7 | 99.6 | 59.8 | 0.003 | 1.5
Beams | VLX | 58.1 | 98.2 | 99.2 | 83.1 | 0.004 | 17.4
Beams | M6 | 49.1 | 98.0 | 99.1 | 72.2 | 0.005 | 36.7
Beams | Hololens 2 | 31.5 | 79.2 | 97.7 | 55.9 | 0.05 | 1.3
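
The "Quality" columns express how many points of each segmented class stay within a given LOA tolerance of the reference data. A minimal sketch of that computation is shown below; it assumes the per-point deviations (e.g. from a cloud-to-cloud or cloud-to-mesh comparison) are already available, the tolerances follow the LOA requirement table above, and the synthetic deviations are only for demonstration. This is an illustration, not the authors' exact evaluation pipeline.

```python
import numpy as np

# Share of points whose absolute deviation from the reference stays within an
# LOA tolerance. `deviations_m` is assumed to be a precomputed array of
# per-point distances in metres.
def share_within(deviations_m: np.ndarray, tolerance_m: float) -> float:
    return 100.0 * float(np.mean(np.abs(deviations_m) <= tolerance_m))

deviations = np.random.normal(0.0, 0.01, size=100_000)  # synthetic deviations
print(f"within LOA30 tolerance (0.015 m): {share_within(deviations, 0.015):.1f}%")
print(f"within LOA20 tolerance (0.05 m):  {share_within(deviations, 0.05):.1f}%")
```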
Classes | Data | Quality: LOA30 (%) | Quality: LOA20 (%) | Quality: LOA10 (%) | Completeness: Coverage (%) | Detailing: Resolution (m) | Semantic Segmentation: IoU (%)
---|---|---|---|---|---|---|---
Ceilings | Synth | - | - | - | - | 0.01 | 49.6
Ceilings | P30 | 51.7 | 91.6 | 98.9 | 43.9 | 0.003 | 73.5
Ceilings | VLX | 36.2 | 91.3 | 98.9 | 74.5 | 0.004 | 83.6
Ceilings | M6 | 35.7 | 91.7 | 98.9 | 64.8 | 0.005 | 87.5
Ceilings | Hololens 2 | 10.3 | 85.7 | 99.2 | 41.0 | 0.05 | 80.2
Floors | Synth | - | - | - | - | 0.01 | 53.6
Floors | P30 | 95.3 | 98.2 | 99.2 | 44.5 | 0.003 | 80.3
Floors | VLX | 78.7 | 95.2 | 98.3 | 43.8 | 0.004 | 89.2
Floors | M6 | 47.3 | 69.6 | 98.3 | 36.0 | 0.005 | 94.2
Floors | Hololens 2 | 27.0 | 94.9 | 98.4 | 32.0 | 0.05 | 90.8
Walls | Synth | - | - | - | - | 0.01 | 85.1
Walls | P30 | 38.7 | 90.5 | 98.8 | 39.2 | 0.003 | 58.8
Walls | VLX | 48.1 | 92.4 | 98.6 | 69.9 | 0.004 | 51.7
Walls | M6 | 41.5 | 88.6 | 98.3 | 64.2 | 0.005 | 67.7
Walls | Hololens 2 | 27.3 | 87.2 | 98.2 | 35.4 | 0.05 | 67.6
Columns | Synth | - | - | - | - | 0.01 | 0.0
Columns | P30 | 37.9 | 91.0 | 96.8 | 21.0 | 0.003 | 0.1
Columns | VLX | 40.6 | 87.2 | 97.9 | 66.7 | 0.004 | 0.0
Columns | M6 | 43.9 | 88.2 | 97.9 | 53.2 | 0.005 | 0.0
Columns | Hololens 2 | 33.1 | 86.0 | 98.4 | 23.0 | 0.05 | 0.0
Beams | Synth | - | - | - | - | 0.01 | 0.8
Beams | P30 | 48.7 | 95.7 | 99.2 | 60.8 | 0.003 | 3.3
Beams | VLX | 61.9 | 98.8 | 99.0 | 80.0 | 0.004 | 21.2
Beams | M6 | 60.3 | 99.0 | 99.2 | 70.9 | 0.005 | 42.7
Beams | Hololens 2 | 33.7 | 93.6 | 99.5 | 42.8 | 0.05 | 3.6
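
The "Detailing" column lists a resolution value per point cloud. One common way to estimate such a resolution is the median nearest-neighbour spacing, sketched below with SciPy; the random input cloud is a stand-in and the paper's exact resolution definition may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

# Estimate cloud resolution as the median distance between each point and its
# nearest neighbour.
def median_point_spacing(points: np.ndarray) -> float:
    tree = cKDTree(points)
    # k=2: the first neighbour is the point itself (distance 0), take the second.
    dists, _ = tree.query(points, k=2)
    return float(np.median(dists[:, 1]))

points = np.random.rand(50_000, 3)  # synthetic points in a 1 m cube
print(f"estimated resolution: {median_point_spacing(points):.4f} m")
```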
Classes | Data | Quality: LOA30 (%) | Quality: LOA20 (%) | Quality: LOA10 (%) | Completeness: Coverage (%) | Detailing: Resolution (m) | Semantic Segmentation: IoU (%)
---|---|---|---|---|---|---|---
Ceilings | Synth | - | - | - | - | 0.01 | 48.1
Ceilings | P30 | 40.3 | 99.8 | 99.9 | 30.8 | 0.003 | 85.3
Ceilings | VLX | 11.9 | 98.3 | 98.8 | 37.5 | 0.004 | 86.7
Ceilings | M6 | 26.3 | 99.7 | 99.8 | 34.0 | 0.005 | 83.1
Ceilings | Hololens 2 | 49.3 | 95.2 | 97.7 | 26.0 | 0.05 | 94.6
Floors | Synth | - | - | - | - | 0.01 | 40.4
Floors | P30 | 76.1 | 99.1 | 99.6 | 15.0 | 0.003 | 91.4
Floors | VLX | 32.5 | 93.7 | 99.2 | 22.0 | 0.004 | 89.9
Floors | M6 | 31.8 | 93.2 | 99.8 | 24.0 | 0.005 | 81.2
Floors | Hololens 2 | 14.4 | 96.6 | 99.7 | 21.0 | 0.05 | 94.1
Walls | Synth | - | - | - | - | 0.01 | 82.4
Walls | P30 | 9.8 | 46.5 | 83.7 | 21.0 | 0.003 | 79.9
Walls | VLX | 15.0 | 60.0 | 82.7 | 41.0 | 0.004 | 70.5
Walls | M6 | 15.4 | 60.0 | 83.0 | 36.0 | 0.005 | 86.2
Walls | Hololens 2 | 12.9 | 48.9 | 90.6 | 18.0 | 0.05 | 81.2
Beams | Synth | - | - | - | - | 0.01 | 1.5
Beams | P30 | 25.0 | 80.9 | 97.8 | 90.0 | 0.003 | 0.0
Beams | VLX | 43.7 | 85.4 | 97.4 | 95.7 | 0.004 | 0.0
Beams | M6 | 46.4 | 93.8 | 98.7 | 81.3 | 0.005 | 0.0
Beams | Hololens 2 | 20.0 | 53.0 | 91.6 | 75.1 | 0.05 | 0.0
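
The "Completeness" column reports the coverage of the reference geometry by each captured cloud. A minimal sketch of one way to compute such a coverage percentage follows: it counts the reference points that have a captured point within a search radius. The radius, the synthetic clouds, and the function name are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

# Coverage as the share of reference points (e.g. sampled on the as-built
# geometry) that have at least one captured point within `radius` metres.
def coverage_percent(reference: np.ndarray, captured: np.ndarray, radius: float) -> float:
    tree = cKDTree(captured)
    dists, _ = tree.query(reference, k=1)
    return 100.0 * float(np.mean(dists <= radius))

# Synthetic example: every second reference point has a noisy captured counterpart.
reference = np.random.rand(20_000, 3)
captured = reference[::2] + np.random.normal(0, 0.002, size=(10_000, 3))
print(f"coverage: {coverage_percent(reference, captured, radius=0.01):.1f}%")
```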
Classes | Leica P30 | NavVis VLX | NavVis M6 | Microsoft Hololens 2 |
---|---|---|---|---|
Ceilings | LOA40/LOD200 | LOA20-30/LOD300 | LOA20-30/LOD300 | LOA10-20/LOD100 |
Floors | LOA40/LOD200 | LOA20-30/LOD300 | LOA20-30/LOD200 | LOA10-20/LOD200 |
Walls | LOA40/LOD300 | LOA20-30/LOD350 | LOA20-30/LOD350 | LOA10-20/LOD200 |
Columns | LOA40/LOD300 | LOA20-30/LOD350 | LOA20-30/LOD350 | LOA10-20/LOD100 |
Beams | LOA40/LOD300 | LOA20-30/LOD350 | LOA20-30/LOD350 | LOA10-20/LOD200 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
De Geyter, S.; Vermandere, J.; De Winter, H.; Bassier, M.; Vergauwen, M. Point Cloud Validation: On the Impact of Laser Scanning Technologies on the Semantic Segmentation for BIM Modeling and Evaluation. Remote Sens. 2022, 14, 582. https://doi.org/10.3390/rs14030582