Line Structure Extraction from LiDAR Point Cloud Based on the Persistence of Tensor Feature
Abstract
1. Introduction
2. Related Works
3. Methodology
- (1) initialize the tensor for each unstructured point and obtain geometric features in different dimensions (a code sketch of this step follows the list);
- (2) revote the tensor feature in different dimensions and compute the refined geometric feature;
- (3) represent the geometric feature and construct the line candidate subset (referred to as LCS);
- (4) construct the Morse–Smale complex and extract line structures based on the discrete Morse theory.
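As a rough illustration of step (1), the Python sketch below initializes a second-order tensor for each point from its r-neighborhood and decomposes it into eigenvalues and eigenvectors. It uses an unweighted ball vote per neighbor and omits the distance decay of Equations (1)–(3); the function name initial_tensors and the use of SciPy are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def initial_tensors(points: np.ndarray, r: float):
    """points: (N, 3) array. Returns eigenvalues (N, 3), sorted descending,
    and the matching eigenvectors (N, 3, 3) with eigenvectors as rows."""
    tree = cKDTree(points)
    eigvals = np.zeros((len(points), 3))
    eigvecs = np.zeros((len(points), 3, 3))
    for i, p in enumerate(points):
        T = np.zeros((3, 3))
        for j in tree.query_ball_point(p, r):
            if j == i:
                continue
            d = points[j] - p
            norm = np.linalg.norm(d)
            if norm > 0.0:
                v = d / norm
                # unweighted ball vote from neighbor j (decay of Eqs. (1)-(3) omitted)
                T += np.eye(3) - np.outer(v, v)
        w, V = np.linalg.eigh(T)          # eigenvalues in ascending order
        eigvals[i] = w[::-1]              # store as lambda1 >= lambda2 >= lambda3
        eigvecs[i] = V[:, ::-1].T
    return eigvals, eigvecs
```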
3.1. Initial Tensor Voting and Dimensional Feature Presentation
3.2. The Refinement of the Initial Tensor Using Different Dimensional Geometric Features
3.3. The Construction of LCS Based on Geometric Saliency
- (1) in the 1-dimensional normal space, the geometric feature turns out to be "surface", since there is only the 1D stick-shaped normal vector, and the geometric saliency is $s_{surface} = \lambda_1 - \lambda_2$;
- (2) in the 2-dimensional normal space, the geometric feature turns out to be "line", since there is the 2D surface-shaped normal vector, and the geometric saliency is $s_{line} = \lambda_2 - \lambda_3$;
- (3) in the 3-dimensional normal space, the geometric feature turns out to be "point", since there is the 3D ball-shaped normal vector, and the geometric saliency is $s_{point} = \lambda_3$ (a code sketch of these saliencies follows this list).
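Assuming eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ from the decomposition above, a minimal sketch of the three dimensional saliencies and of a simple line-candidate selection is shown below. The single threshold s_thr and the helper names are illustrative placeholders; Equation (10), which actually defines the LCS (and, per the tables in Section 4, also draws candidates from the surface and point saliencies), is not reproduced here.

```python
import numpy as np

def dimensional_saliency(eigvals: np.ndarray):
    """eigvals: (N, 3) with lambda1 >= lambda2 >= lambda3 in each row."""
    l1, l2, l3 = eigvals[:, 0], eigvals[:, 1], eigvals[:, 2]
    s_surface = l1 - l2   # 1D normal space (stick component)
    s_line = l2 - l3      # 2D normal space (plate component)
    s_point = l3          # 3D normal space (ball component)
    return s_surface, s_line, s_point

def line_candidate_subset(points: np.ndarray, eigvals: np.ndarray, s_thr: float = 0.5):
    """Simplified stand-in for Equation (10): keep points whose line saliency
    dominates the other saliencies and exceeds a hypothetical threshold s_thr."""
    s_surface, s_line, s_point = dimensional_saliency(eigvals)
    keep = (s_line > s_thr) & (s_line >= s_surface) & (s_line >= s_point)
    return points[keep], s_line[keep]
```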
3.4. Line Structure Extraction Based on the Discrete Morse Theory
- (1) compute the bounding box of space S, take the minimum edge as the reference length, and divide it into n sub-edges of equal length l;
- (2) divide the edges of the bounding box in the other 2 dimensions using the length l. Then, space S is divided into subspaces of equal size, and the center position of each subspace is taken as the coordinate of the corresponding point in LCS';
- (3) compute the relation between each point pi in the LCS and the subspace that contains it, and take the point with the maximum line saliency as the new attribute of that subspace, as denoted in Equation (12);
- (4) count the number of points in each subspace, and label the non-empty subspaces as the mask area of the mask space, for the computation of persistent homology (a code sketch of this resampling follows this list).
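The grid resampling above can be sketched as follows. The symbol n (number of sub-edges) and the helper name resample_lcs are generic placeholders, and the equal-size subspaces are realized here as a regular voxel grid derived from the minimum bounding-box edge; this is a sketch of the procedure, not the paper's code.

```python
import numpy as np

def resample_lcs(lcs_points: np.ndarray, lcs_saliency: np.ndarray, n: int):
    """Resample the LCS on a regular grid: cell centers become the points of LCS',
    each cell keeps the maximum line saliency of the points it contains, and the
    occupancy mask marks the area used for the persistence computation."""
    mins = lcs_points.min(axis=0)
    extent = lcs_points.max(axis=0) - mins
    l = extent.min() / n                                   # sub-edge length from the minimum edge
    dims = np.maximum(np.ceil(extent / l).astype(int), 1)  # grid size per dimension
    cells = np.minimum(((lcs_points - mins) / l).astype(int), dims - 1)
    sal_grid = np.zeros(dims)
    count_grid = np.zeros(dims, dtype=int)
    for (i, j, k), s in zip(cells, lcs_saliency):
        count_grid[i, j, k] += 1
        sal_grid[i, j, k] = max(sal_grid[i, j, k], s)      # keep the max line saliency per cell
    mask = count_grid > 0                                  # mask area for persistent homology
    occupied = np.argwhere(mask)
    centers = mins + (occupied + 0.5) * l                  # cell-center coordinates of LCS'
    return centers, sal_grid[mask], mask
```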
3.5. The Algorithm of the Line Structure Extraction Framework
Algorithm 1. The algorithm of the line structure extraction framework.
Line structure extraction framework LSE(P, r, s_thr, g, per_thr)
INPUT: point cloud P, searching distance for neighborhood r, saliency thresholds s_thr, resampled grid g, persistence threshold for line segment per_thr
OUTPUT: line structure with connection relations
// step 1: compute the initial tensor and decompose geometric features in different dimensions
FOREACH p IN P                            // for each point in P
    Np = Neighborhood(p, r);
    Tp = TensorVoting(p, Np);             // compute initial tensor based on Equations (1)–(3)
    {λd, ed} = GeoFeatureDec(Tp);         // eigen decomposition for tensor Tp
END
// step 2: revote tensor and compute refined geometric feature
FOREACH p IN P                            // for each point in P
    FOREACH d IN N                        // for each dimension in N, based on Equations (4)–(7)
        sd = SaliencyInDimD(λd, ed);      // compute dimensional saliency
        Td = TensorInDimD(λd, ed);        // compute dimensional tensor
    END
    Tp = AggTensor(sd, Td);               // aggregate tensor in each dimension based on Equation (8)
    T = RevoteTensorFromNeig(Tp, Np);     // refine the voting result based on Equation (9)
END
// step 3: represent geometric feature and construct the LCS
S = DimSaliency(T);                       // compute dimensional saliency for each point
LCS = LineCandidateSubset(S, s_thr);      // compute the LCS based on Equation (10)
// step 4: extract line structure based on the discrete Morse theory
LCS' = ResampleLCS(P, LCS, g);            // resample the LCS based on Equation (11)
LinePer = MorseSmaleComplex(LCS');        // compute line structure based on Equation (12)
LineSeg = LineExtract(LinePer, per_thr);  // extract line segments using the persistence threshold
LineStr = BuildLineStructure(LineSeg);    // build line structure
RETURN LineStr
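To make the role of the persistence threshold in LineExtract concrete, the self-contained toy example below computes superlevel-set persistence pairs of a 1D saliency profile with a small union-find, then discards features whose persistence falls below the threshold. This only illustrates the persistence-filtering idea; it is not the 3D Morse–Smale computation of Equation (12), and the profile and threshold values are made up.

```python
def superlevel_persistence(values):
    """Return (birth, death) persistence pairs of the superlevel-set components
    of a 1D sequence, processed from the highest value downwards (elder rule)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    parent, birth = {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i], birth[i] = i, values[i]
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if roots:
            keep = max(roots, key=lambda r: birth[r])     # the oldest component survives
            for r in roots:
                if r != keep:
                    pairs.append((birth[r], values[i]))   # younger component dies here
                parent[r] = keep
            parent[find(i)] = keep
    pairs.append((max(values), min(values)))              # essential (global) component
    return pairs

# made-up saliency profile and threshold, for illustration only
profile = [0.1, 0.9, 0.2, 0.8, 0.15, 0.3, 0.25]
per_thr = 0.3
significant = [(b, d) for b, d in superlevel_persistence(profile) if b - d > per_thr]
print(significant)   # only the maxima whose persistence exceeds per_thr remain
```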
4. Experiments and Discussions
4.1. The Dataset Collected by the iPhone-Based LiDAR Sensor
4.2. Tensor Voting and Geometric Dimension Representation
4.3. The LCS Construction and Resampling
4.4. Comparisons with Other Line Structure Extraction Methods
4.5. Line Structure Extraction in the Complex Area
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Different Geometric Feature | The Number of Points |
|---|---|
| LCS by $s_{surface}$ | 86 |
| LCS by $s_{line}$ | 3326 |
| LCS by $s_{point}$ | 982 |
| LCS without duplication | 4155 |
| ground truth LCS | 4006 |
| Different Results | Total Length (m) | Effective Length (m) |
|---|---|---|
| Figure 7a | 17.16 | 17.16 |
| Figure 7b | 186.94 | 33.50 |
| Figure 7c | 30.47 | 23.79 |
| Figure 7d | 24.49 | 21.39 |
| Figure 7e | 24.46 | 21.36 |
| Different Geometric Feature | The Number of Points |
|---|---|
| LCS by $s_{surface}$ | 1225 |
| LCS by $s_{line}$ | 20,137 |
| LCS by $s_{point}$ | 220 |
| LCS without duplication | 21,106 |
| ground truth LCS | 48,307 |