A 3D Point Cloud Feature Identification Method Based on Improved Point Feature Histogram Descriptor
Abstract
1. Introduction
2. Related Work
2.1. General Process
2.2. Description Methods for Point Cloud Features
2.3. Identification Methods for Point Cloud Features
3. Methodology
3.1. Overview of the Proposed Framework
3.2. PFH and FPFH Descriptor
3.2.1. PFH Descriptor
3.2.2. Fast PFH Descriptor (FPFH)
3.3. Improved PFH Method
3.3.1. Disadvantage of PFH
3.3.2. Improvement of PFH
3.4. Identification Method
3.4.1. Point Cloud Feature Descriptor Database
- The Name field includes a feature’s name, ID, corresponding file name, and other descriptive information.
- The Class field provides category information for the feature, defining its place within the wider set of point cloud features.
- The Overall Descriptor provides an overview of the feature, including its spatial boundary range, length, width, height, and volume, where the volume is calculated from the minimum bounding cube of the point cloud feature.
- The Detailed Descriptor contains the statistical histogram’s peak values calculated using the improved PFH or FPFH method, providing more specific information about the feature.
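The four fields above can be sketched as a single database record. The following is a minimal illustration only; all class and field names are assumptions for this sketch, not the authors' implementation, and the volume is computed as the product of the bounding-box dimensions as described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PointCloudFeatureRecord:
    # Name field: feature name, ID, and corresponding file name
    name: str
    feature_id: str
    file_name: str
    # Class field: category within the wider set of point cloud features
    feature_class: str  # e.g. "Street Lamp", "Tree", "Building"
    # Overall Descriptor: spatial extent of the minimum bounding cube
    length: float
    width: float
    height: float
    volume: float = field(init=False)
    # Detailed Descriptor: peak values of the improved PFH/FPFH histogram
    histogram_peaks: List[float] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Volume derived from the minimum bounding cube of the feature
        self.volume = self.length * self.width * self.height

# Hypothetical record for a street-lamp feature
record = PointCloudFeatureRecord(
    name="street_lamp_01", feature_id="S1", file_name="s1.pcd",
    feature_class="Street Lamp",
    length=0.6, width=0.6, height=4.5,
    histogram_peaks=[0.12, 0.31, 0.27],
)
print(record.volume)
```

Storing the histogram peaks rather than the full histogram keeps each record compact while still supporting the matching step in Section 3.4.2.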
3.4.2. Identification Method
4. Materials and Experiments
4.1. Datasets
4.2. Experiments and Analysis
4.2.1. Labeling the Point Cloud Features
4.2.2. Point Set Sampling
4.3. Feature Description
4.4. Feature Identification Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Table: original and sampled point sets for the three feature types (street lamp, tree, building); the point cloud renderings are not reproduced here.
Table: improved PFH and improved FPFH descriptors for the nine sampled point cloud features (street lamps S1–S3, trees T1–T3, buildings B1–B3); the histogram plots are not reproduced here.
Descriptor | Class | Total | Correct (Original) | Accuracy (Original) | Correct (Improved) | Accuracy (Improved)
---|---|---|---|---|---|---
PFH | Street lamp | 83 | 65 | 78.31% | 74 | 89.10%
PFH | Tree | 190 | 143 | 75.26% | 167 | 87.90%
PFH | Building | 13 | 8 | 61.54% | 10 | 84.60%
PFH | Average | - | - | 71.70% | - | 87.20%
FPFH | Street lamp | 71 | 51 | 71.83% | 58 | 81.70%
FPFH | Tree | 177 | 125 | 70.62% | 140 | 79.10%
FPFH | Building | 10 | 6 | 60.00% | 8 | 80.00%
FPFH | Average | - | - | 67.48% | - | 80.27%
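The "Average" rows in the identification-accuracy table are the unweighted means of the three per-class accuracies (street lamp, tree, building), not totals-weighted averages. This can be verified directly from the tabulated values:

```python
# Per-class accuracy values (%) taken from the table above.
per_class_accuracy = {
    ("PFH", "original"): [78.31, 75.26, 61.54],
    ("PFH", "improved"): [89.10, 87.90, 84.60],
    ("FPFH", "original"): [71.83, 70.62, 60.00],
    ("FPFH", "improved"): [81.70, 79.10, 80.00],
}

# Unweighted mean of the three classes, rounded to two decimals,
# reproduces each "Average" row of the table.
averages = {
    key: round(sum(values) / len(values), 2)
    for key, values in per_class_accuracy.items()
}
print(averages)
```

The computed means (71.70, 87.20, 67.48, and 80.27) match the reported averages, confirming how the summary rows were derived.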
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, C.; Xiong, X.; Zhang, X.; Liu, L.; Tan, W.; Liu, X.; Yang, H. A 3D Point Cloud Feature Identification Method Based on Improved Point Feature Histogram Descriptor. Electronics 2023, 12, 3736. https://doi.org/10.3390/electronics12173736