FEC: Fast Euclidean Clustering for Point Cloud Segmentation
Abstract
1. Introduction
1.1. Related Works
1.1.1. Edge-Based Method
1.1.2. Region Growing Based Method
1.1.3. Clustering Based Method
1.1.4. Learning Based Method
1.2. Motivations
1.2.1. GPU vs. CPU
1.2.2. Pre-Knowledge vs. Unorganized
1.3. Contributions
- We present a new Euclidean clustering algorithm for the point cloud instance segmentation problem, using a point-wise scheme instead of the cluster-wise scheme applied in existing works.
- The proposed fast Euclidean clustering (FEC) algorithm can handle a general point cloud as input without relying on pre-knowledge or GPU assistance.
- Extensive synthetic and real data experiments demonstrate that the proposed solution achieves similar instance segmentation quality but a significant speedup over state-of-the-art approaches. The source code (implemented in C++, Matlab, and Python) is publicly available at https://github.com/YizhenLAO/FEC.
2. Materials and Methods
2.1. Ground Surface Removal
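The ground-removal step itself is not reproduced in this outline; as a rough illustration of the idea, the sketch below drops points near the lowest height quantile. The function name and the `z_tol` threshold are ours, not the authors': the paper relies on more robust filters (e.g., plane fitting or cloth simulation), so treat this as a deliberately naive stand-in.

```python
import numpy as np

def remove_ground(points, z_tol=0.2):
    # Estimate the ground height from the lowest 5% of z values, then
    # discard every point within z_tol of that height. A naive stand-in
    # for the robust ground filtering used in the paper.
    ground_z = np.quantile(points[:, 2], 0.05)
    keep = points[:, 2] > ground_z + z_tol
    return points[keep], points[~keep]

# Toy cloud: 10 "ground" points at z = 0, 5 "object" points at z = 1.
cloud = np.vstack([np.c_[np.random.rand(10, 2), np.zeros(10)],
                   np.c_[np.random.rand(5, 2), np.ones(5)]])
objects, ground = remove_ground(cloud)
print(len(objects), len(ground))  # 5 10
```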
2.2. Fast Euclidean Clustering
Algorithm 1: Proposed FEC
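The pseudocode of Algorithm 1 did not survive extraction. The following is a minimal sketch of point-wise Euclidean clustering in the spirit of the paper's description: points are scanned in index order, each unlabeled point seeds a cluster, and labels propagate to neighbors within a distance tolerance. The function name is ours, and the brute-force range query stands in for the k-d tree used in the real implementation.

```python
import numpy as np

def euclidean_cluster(points, tol):
    # Point-wise scan: every unlabeled point seeds a new cluster, and the
    # label is propagated breadth-first to unlabeled neighbors within tol.
    labels = np.zeros(len(points), dtype=int)  # 0 = unlabeled
    next_label = 0
    for i in range(len(points)):
        if labels[i]:
            continue
        next_label += 1
        labels[i] = next_label
        queue = [i]
        while queue:
            p = queue.pop(0)
            # Brute-force range query; the paper uses a k-d tree here.
            dist = np.linalg.norm(points - points[p], axis=1)
            for j in np.flatnonzero((dist < tol) & (labels == 0)):
                labels[j] = next_label
                queue.append(j)
    return labels

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 5.0, 0.0], [5.1, 5.0, 0.0]])
labels = euclidean_cluster(pts, tol=0.5)
print(labels)  # [1 1 2 2]
```

With `tol=0.5` the two nearby pairs form two separate clusters; the point-wise scan avoids the repeated cluster-wise region queries of classical EC.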
2.3. Efficiency Analysis
- In the general setting, the complexity bounds yield an efficiency gain of FEC over EC and RG.
- Substituting the lower and upper bounds into Equation (2) shows that FEC remains faster than EC and RG even in the extreme setting.
3. Experiments and Results
3.1. Method Comparison
- SGPN: Recent learning-based method [34] designed for small-scale indoor scenes (https://github.com/laughtervv/SGPN, Salt Lake City, USA).
- VoxelNet: Recent learning-based method [35] which learns sparse point-wise features for voxels and uses 3D convolutions for feature propagation (https://github.com/steph1793/Voxelnet, Salt Lake City, USA).
- LiDARSeg: State-of-the-art instance segmentation method [37] for large-scale outdoor LiDAR point clouds (https://github.com/feihuzhang/LiDARSeg, Paris, France).
3.2. Metrics Evaluation
- Average precision: The average precision (AP) is a widely accepted point-based metric for evaluating segmentation quality [37], similar to the criterion used in the COCO instance segmentation challenges [47]. The equation for AP can be presented as:
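The AP formula itself was lost in extraction; the standard COCO-style definition that the text appears to invoke is the following reconstruction (ours, not a verbatim copy of the paper's equation):

```latex
% Precision and recall from true positives (TP), false positives (FP),
% and false negatives (FN):
p = \frac{TP}{TP + FP}, \qquad r = \frac{TP}{TP + FN}
% AP is the area under the precision-recall curve, in practice
% approximated by a finite sum over sampled recall levels r_k:
\mathrm{AP} = \int_0^1 p(r)\,\mathrm{d}r \;\approx\; \sum_{k} \left(r_k - r_{k-1}\right) p(r_k)
```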
- Complexity: Both the running time and the memory consumption of the real data experiments are measured to evaluate time complexity and space complexity. Our method was executed on a computer with an Intel Core i9 CPU and 32 GB of RAM, and compared against two classical geometry-based methods, EC and RG. An NVIDIA 3090 GPU was used to evaluate the learning-based methods SGPN and VoxelNet.
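A minimal sketch of how such time/memory measurements can be collected in Python; the `profile` helper and the pairwise-distance workload are illustrative stand-ins, not the paper's benchmark harness.

```python
import time
import tracemalloc
import numpy as np

def profile(fn, *args):
    # Measure wall-clock time and peak traced memory of a single call.
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    peak_mb = tracemalloc.get_traced_memory()[1] / 1e6
    tracemalloc.stop()
    return result, elapsed, peak_mb

# Stand-in workload: all pairwise distances on a small random cloud.
pts = np.random.rand(500, 3)
_, secs, mb = profile(lambda p: np.linalg.norm(p[:, None] - p[None, :], axis=-1), pts)
print(f"{secs:.4f} s, {mb:.1f} MB peak")
```

Running the same harness on each method with identical inputs is what makes the time and memory columns in the tables below comparable.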
3.3. Synthetic Data
3.3.1. Setting
3.3.2. Varying Density of Cluster
3.3.3. Varying Number of Clusters
3.3.4. Varying Uniformity of Cluster
3.3.5. Segmentation Quality
3.4. Real Data
3.4.1. FEC vs. Geometry-Based Methods on KITTI
- Inter-class segmentation: the input for inter-class segmentation is a point cloud representing a single class, such as car, building, or tree. After the classification step, instance segmentation is carried out separately for each class.
- Intra-class segmentation: intra-class segmentation takes a multiple-class point cloud as input. In this mode, the original LiDAR point cloud is used directly, without classification.
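The two evaluation modes can be summarized in a small driver sketch. The `segment` function, its toy `cluster_fn`, and all names are hypothetical illustrations of the experimental protocol, not the paper's code.

```python
import numpy as np

def segment(points, class_ids, cluster_fn, mode="inter"):
    # intra-class mode: cluster the raw cloud directly, no classification.
    if mode == "intra":
        return cluster_fn(points)
    # inter-class mode: cluster each semantic class separately, offsetting
    # labels so instances from different classes never collide.
    labels = np.empty(len(points), dtype=int)
    offset = 0
    for c in np.unique(class_ids):
        mask = class_ids == c
        sub = cluster_fn(points[mask])
        labels[mask] = sub + offset
        offset += sub.max() + 1
    return labels

# Toy clusterer: split instances on the x coordinate.
cluster_fn = lambda p: (p[:, 0] > 2).astype(int)
pts = np.array([[0.0, 0, 0], [5, 0, 0], [0, 1, 0], [5, 1, 0]])
cls = np.array([0, 0, 1, 1])  # two semantic classes
inst = segment(pts, cls, cluster_fn, mode="inter")
print(inst)  # [0 1 2 3]
```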
3.4.2. FEC vs. Learning-Based Methods on KITTI
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100.
- Dewan, A.; Caselitz, T.; Tipaldi, G.D.; Burgard, W. Motion-based detection and tracking in 3D LiDAR scans. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4508–4513.
- Zucker, S.W.; Hummel, R.A. A Three-Dimensional Edge Operator. IEEE Trans. Pattern Anal. Mach. Intell. 1981, PAMI-3, 324–331.
- Monga, O.; Deriche, R. 3D edge detection using recursive filtering: Application to scanner images. In Proceedings of the 1989 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; IEEE Computer Society: Washington, DC, USA, 1989; pp. 28–35.
- Monga, O.; Deriche, R.; Rocchisani, J.M. 3D edge detection using recursive filtering: Application to scanner images. CVGIP Image Underst. 1991, 53, 76–87.
- Wani, M.; Batchelor, B. Edge-region-based segmentation of range images. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 314–319.
- Sappa, A.D.; Devy, M. Fast range image segmentation by an edge detection strategy. In Proceedings of the International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001.
- Wani, M.A.; Arabnia, H.R. Parallel Edge-Region-Based Segmentation Algorithm Targeted at Reconfigurable MultiRing Network. J. Supercomput. 2003, 25, 43–62.
- Öztireli, A.C.; Guennebaud, G.; Gross, M. Feature Preserving Point Set Surfaces based on Non-Linear Kernel Regression. Comput. Graph. Forum 2009, 28, 493–501.
- Rabbani, T.; Van Den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
- Rusu, R.B. Semantic 3D object maps for everyday manipulation in human living environments. KI-Künstliche Intelligenz 2010, 24, 345–348.
- Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U. Voxel- and graph-based point cloud segmentation of 3D scenes using perceptual grouping laws. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 43–50.
- Huang, M.; Wei, P.; Liu, X. An Efficient Encoding Voxel-Based Segmentation (EVBS) Algorithm Based on Fast Adjacent Voxel Search for Point Cloud Plane Segmentation. Remote Sens. 2019, 11, 2727.
- Jagannathan, A.; Miller, E.L. Three-dimensional surface mesh segmentation using curvedness-based region growing approach. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 2195–2204.
- Klasing, K.; Althoff, D.; Wollherr, D.; Buss, M. Comparison of surface normal estimation methods for range sensing applications. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3206–3211.
- Che, E.; Olsen, M.J. Multi-scan segmentation of terrestrial laser scanning data based on normal variation analysis. ISPRS J. Photogramm. Remote Sens. 2018, 143, 233–248.
- Belton, D.; Lichti, D.D. Classification and segmentation of terrestrial laser scanner point clouds using local variance information. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 44–49.
- Li, D.; Cao, Y.; Tang, X.S.; Yan, S.; Cai, X. Leaf Segmentation on Dense Plant Point Clouds with Facet Region Growing. Sensors 2018, 18, 3625.
- Habib, A.; Lin, Y.J. Multi-Class Simultaneous Adaptive Segmentation and Quality Control of Point Cloud Data. Remote Sens. 2016, 8, 104.
- Dimitrov, A.; Golparvar-Fard, M. Segmentation of building point cloud models including detailed architectural/structural features and MEP systems. Autom. Constr. 2015, 51, 32–45.
- Grilli, E.; Menna, F.; Remondino, F. A review of point clouds segmentation and classification algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339.
- Lavoué, G.; Dupont, F.; Baskurt, A. A new CAD mesh segmentation method, based on curvature tensor analysis. Comput.-Aided Des. 2005, 37, 975–987.
- Yamauchi, H.; Gumhold, S.; Zayer, R.; Seidel, H.P. Mesh segmentation driven by Gaussian curvature. Vis. Comput. 2005, 21, 659–668.
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. KDD-96 Proc. 1996, 96, 226–231.
- Xu, R.; Xu, J.; Wunsch, D.C. Clustering with differential evolution particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
- Shi, B.Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput.-Aided Des. 2011, 43, 910–922.
- Kong, D.; Xu, L.; Li, X.; Li, S. K-Plane-Based Classification of Airborne LiDAR Data for Accurate Building Roof Measurement. IEEE Trans. Instrum. Meas. 2014, 63, 1200–1214.
- Chehata, N.; David, N.; Bretar, F. LIDAR data classification using hierarchical K-means clustering. In Proceedings of the ISPRS Congress Beijing 2008, Beijing, China, 2–10 September 2008; Volume 37, pp. 325–330.
- Sun, X.; Ma, H.; Sun, Y.; Liu, M. A Novel Point Cloud Compression Algorithm Based on Clustering. IEEE Robot. Autom. Lett. 2019, 4, 2132–2139.
- Zhang, L.; Zhu, Z. Unsupervised Feature Learning for Point Cloud Understanding by Contrasting and Clustering Using Graph Convolutional Neural Networks. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec City, QC, Canada, 16–19 September 2019; pp. 395–404.
- Xu, Y.; Arai, S.; Liu, D.; Lin, F.; Kosuge, K. FPCC: Fast point cloud clustering-based instance segmentation for industrial bin-picking. Neurocomputing 2022, 494, 255–268.
- Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
- Wang, W.; Yu, R.; Huang, Q.; Neumann, U. SGPN: Similarity group proposal network for 3D point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2569–2578.
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-end learning for point cloud based 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499.
- Lahoud, J.; Ghanem, B.; Pollefeys, M.; Oswald, M.R. 3D instance segmentation via multi-task metric learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 9256–9266.
- Zhang, F.; Guan, C.; Fang, J.; Bai, S.; Yang, R.; Torr, P.H.; Prisacariu, V. Instance segmentation of LiDAR point clouds. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9448–9455.
- Wang, B.H.; Chao, W.L.; Wang, Y.; Hariharan, B.; Weinberger, K.Q.; Campbell, M. LDLS: 3-D object segmentation through label diffusion from 2-D images. IEEE Robot. Autom. Lett. 2019, 4, 2902–2909.
- Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Close-range scene segmentation and reconstruction of 3D point cloud maps for mobile manipulation in domestic environments. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 1–6.
- Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
- Nguyen, A.; Cano, A.M.; Edahiro, M.; Kato, S. Fast Euclidean Cluster Extraction Using GPUs. J. Robot. Mechatron. 2020, 32, 548–560.
- Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5067–5073.
- Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D laser scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169.
- Himmelsbach, M.; Hundelshausen, F.V.; Wuensche, H.J. Fast segmentation of 3D point clouds for ground vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 560–565.
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
- Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016, 8, 501.
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 740–755.
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; Gall, J. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 9297–9307.
| Ref | [37] | [44] | [42] | [43] | [39] | [35] | [34] | [12] | [10] | [16] | [18] | [18] | FEC (Ours) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GPU-free | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | | | |
| Unorganized point cloud | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | | | |
| Dataset (Point Number) | Mode | Class (Point Number) | EC [12] (s) | RG [10] (s) | FEC (Ours) (s) |
|---|---|---|---|---|---|
| #00 (1,498,667) | intra-class | - | 414 | 780 | 140 |
| | inter-class | car (55,048) | 23.0 | 26.9 | 0.5 |
| | | building (131,351) | 13.0 | 58.1 | 1.3 |
| | | tree (351,878) | 51.9 | 167.0 | 9.9 |
| #01 (1,249,590) | intra-class | - | 289.4 | 651.0 | 103.3 |
| | inter-class | car (62,469) | 21.7 | 30.0 | 0.5 |
| | | building (204,911) | 26.3 | 95.9 | 2.2 |
| | | tree (455,519) | 150.3 | 232.7 | 6.3 |
| #02 (1,406,316) | intra-class | - | 323.2 | 721.3 | 105.9 |
| | inter-class | car (85,528) | 37.7 | 42.0 | 0.87 |
| | | building (209,704) | 21.1 | 98.3 | 2.4 |
| | | tree (476,077) | 118.1 | 242.2 | 23.3 |
| #03 (1,714,700) | intra-class | - | 931.1 | 908.1 | 90.4 |
| | inter-class | car (89,337) | 31.9 | 42.4 | 2.8 |
| | | building (210,300) | 21.4 | 97.8 | 2.39 |
| | | tree (582,929) | 167.1 | 298.6 | 17.2 |
| #04 (1,324,991) | intra-class | - | 313.6 | 682.6 | 82.1 |
| | inter-class | car (89,646) | 36.1 | 43.9 | 0.9 |
| | | building (212,195) | 98.2 | 99.3 | 2.33 |
| | | tree (653,342) | 292.4 | 335.9 | 18.0 |
| #05 (1,239,293) | intra-class | - | 319.3 | 631.3 | 44.0 |
| | inter-class | car (96,472) | 45.8 | 48.2 | 0.9 |
| | | building (241,977) | 173.6 | 122.7 | 2.6 |
| | | tree (658,529) | 127.3 | 335.1 | 21.0 |
| #06 (1,276,859) | intra-class | - | 319.3 | 631.3 | 44.0 |
| | inter-class | car (99,480) | 39.5 | 48.3 | 0.9 |
| | | building (241,977) | 36.0 | 124.0 | 3.1 |
| | | tree (679,665) | 158.9 | 345.8 | 25.5 |
| #07 (1,556,851) | intra-class | - | 413.1 | 801.3 | 123.5 |
| | inter-class | car (102,205) | 38.8 | 51.0 | 1.0 |
| | | building (274,741) | 195.1 | 134.6 | 3.4 |
| | | tree (751,354) | 156.6 | 383.3 | 32.2 |
| #08 (1,828,399) | intra-class | - | 265.7 | 417.3 | 49.1 |
| | inter-class | car (113,404) | 33.5 | 54.2 | 1.8 |
| | | building (307,844) | 55.4 | 156.8 | 3.9 |
| | | tree (754,161) | 189 | 395.9 | 26.3 |
| #09 (1,116,058) | intra-class | - | 225.7 | 563.5 | 62.7 |
| | inter-class | car (115,489) | 36.0 | 55.8 | 1.3 |
| | | building (296,751) | 55.9 | 153.53 | 3.8 |
| | | tree (828,874) | 229.3 | 428.6 | 35.1 |
| #10 (1,172,405) | intra-class | - | 280.4 | 598.5 | 59.9 |
| | inter-class | car (115,617) | 39.4 | 55.8 | 1.0 |
| | | building (311,047) | 43.0 | 152.9 | 3.9 |
| | | tree (841,600) | 230.4 | 431.2 | 24.8 |
| #11 (2,002,769) | intra-class | - | 948.2 | 1062.9 | 120.9 |
| | inter-class | car (127,417) | 51.7 | 61.6 | 1.3 |
| | | building (337,911) | 48.6 | 166.6 | 4.3 |
| | | tree (870,785) | 274.7 | 447.1 | 23.2 |
| Method | Time (Intra-Class) | Time (Inter-Class: Car) | Time (Inter-Class: Building) | Time (Inter-Class: Tree) | Memory (MB) | AP (%) |
|---|---|---|---|---|---|---|
| EC | 387.8 | 36.3 | 65.7 | 195.6 | 211 | 65.8 |
| RG | 706.0 | 46.7 | 121.7 | 337.0 | 145 | 61.1 |
| FEC | 87.8 | 1.2 | 3.0 | 21.9 | 89 | 65.5 |
SGPN, VoxelNet, and LiDARSeg run on the GPU (NVIDIA 3090 ×1); FEC, RG, and EC run on the CPU (Intel i9).

| Sequence | SGPN [34] (ms) | VoxelNet [35] (ms) | LiDARSeg [37] (ms) | FEC (Ours) (ms) | RG [10] (ms) | EC [12] (ms) |
|---|---|---|---|---|---|---|
| #00 | 1510.9 | 442.7 | 470.2 | 140.8 | 780.8 | 414.6 |
| #01 | 1705.4 | 454.1 | 471.9 | 103.3 | 651.0 | 289.4 |
| #02 | 1605.6 | 427.4 | 504.8 | 105.9 | 721.3 | 323.2 |
| #03 | 1875.3 | 380.6 | 386.0 | 90.4 | 908.1 | 931.1 |
| #04 | 1523.2 | 469.0 | 506.2 | 82.2 | 682.7 | 313.7 |
| #05 | 1675.0 | 453.4 | 383.1 | 44.0 | 631.4 | 319.3 |
| #06 | 1749.4 | 606.8 | 481.1 | 71.7 | 653.7 | 318.5 |
| #07 | 1496.2 | 460.1 | 439.9 | 53.6 | 801.4 | 123.6 |
| #08 | 1816.3 | 644.2 | 452.7 | 49.1 | 417.3 | 165.8 |
| #09 | 1509.2 | 472.7 | 465.9 | 62.8 | 563.5 | 225.8 |
| #10 | 1558.0 | 396.4 | 485.6 | 60.0 | 598.5 | 280.4 |
| #11 | 1485.6 | 362.9 | 422.0 | 121.0 | 1062.9 | 948.3 |
| Average time (ms) | 1625.8 | 464.2 | 455.8 | 87.9 | 706.0 | 387.8 |
| AP (%) | 46.3 | 55.7 | 59.0 | 63.2 | 61.0 | 63.1 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cao, Y.; Wang, Y.; Xue, Y.; Zhang, H.; Lao, Y. FEC: Fast Euclidean Clustering for Point Cloud Segmentation. Drones 2022, 6, 325. https://doi.org/10.3390/drones6110325