Editorial

Intelligent Point Cloud Processing, Sensing, and Understanding

1 Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518052, China
2 School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518037, China
3 School of Communications and Information Engineering, Nanjing University of Post and Telecommunications, Nanjing 210042, China
4 School of Stomatology, Peking University, Beijing 100081, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(1), 283; https://doi.org/10.3390/s24010283
Submission received: 25 December 2023 / Accepted: 28 December 2023 / Published: 3 January 2024
(This article belongs to the Special Issue Intelligent Point Cloud Processing, Sensing and Understanding)

1. Introduction

Point clouds are considered one of the fundamental pillars for representing the 3D digital landscape [1], despite the irregular topology between their discrete data points. Recent advances in sensor technologies [2] for acquiring point cloud data have enabled flexible and scalable geometric representations, paving the way for new ideas, methodologies, and solutions in ubiquitous sensing and understanding applications. Existing sensor technologies, such as LiDAR, stereo cameras, and laser scanners [3], can be deployed from a variety of platforms (e.g., satellites, aircraft, drones, vehicles, backpacks, handheld devices, and static terrestrial stations) [4,5], viewpoints (e.g., nadir, oblique, and side view) [6], spectra (e.g., multispectral) [7], and granularities (e.g., point density and completeness) [8]. Meanwhile, many promising methods based on computer vision and deep learning have been developed to process point cloud data [9,10]. However, the expanding applications of point clouds in complex and diverse scenarios, such as autonomous driving [11], robotics [12], augmented reality [13], and urban planning [14], pose new challenges [15] to existing intelligent point cloud approaches.
Recently, artificial intelligence has greatly facilitated the extraction of valuable information from complex point cloud data [16]. Deep learning-based models [16] have shown impressive performance in various point cloud tasks, such as completion [17], compression [18], 3D reconstruction [19], semantic segmentation [19], and object detection [20]. However, as 3D application scenarios become increasingly complex and dynamic, the need for more accurate, efficient, and effective methods grows ever more urgent [21]. Further investigation into improving intelligent point cloud processing, sensing, and understanding capabilities is therefore of great significance.
This Special Issue collects promising approaches that develop innovative technologies for generating, processing, and analyzing various formats of point cloud data. A total of ten contributions (nine regular articles and one survey) from China, Turkey, Romania, Portugal, the USA, Italy, and the Republic of Korea were ultimately accepted for publication. These contributions delve into diverse aspects of point clouds, including structural analysis, instance segmentation, registration, texture mapping of 3D meshes, model acceleration and deployment, 3D modeling, up-sampling, plant part segmentation, image-to-point-cloud reconstruction, and LiDAR point cloud (LPC) object detection. The next section provides a concise introduction to each contribution collected in this Special Issue.

2. Overview of Contributions

Contribution 1 explored the application of graph kernels in the structural analysis of point clouds, emphasizing their effectiveness in preserving topological structures and enabling machine learning methods on evolving vector data represented as graphs. Specifically, a dedicated kernel function tailored to similarity determination on point cloud data was introduced. To reflect the underlying discrete geometry, the kernel was formulated from the proximity of geodesic path distributions in graphs. Experimental results on supervised classification with a convolutional neural network (CNN) validated the efficiency of the proposed kernel for capturing the geometric and topological properties of 3D point clouds.
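To make the idea concrete, the following is a minimal sketch of a geodesic distribution-based kernel, not the authors' exact formulation: each cloud is turned into a k-NN graph, pairwise geodesic (shortest-path) distances are histogrammed, and two clouds are compared with an RBF kernel on their histograms. The parameters k, bins, r_max, and gamma are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_histogram(points, k=8, bins=32, r_max=2.0):
    """Histogram of pairwise geodesic distances on a k-NN graph.

    Assumes clouds are pre-scaled to a comparable range so r_max is shared.
    """
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dists[:, 1:].ravel(), (rows, idx[:, 1:].ravel())), shape=(n, n))
    geo = shortest_path(graph, method="D", directed=False)  # Dijkstra geodesics
    finite = geo[np.isfinite(geo) & (geo > 0)]
    hist, _ = np.histogram(finite, bins=bins, range=(0.0, r_max))
    return hist / (hist.sum() + 1e-12)  # normalized geodesic distribution

def geodesic_kernel(cloud_a, cloud_b, gamma=10.0):
    """RBF kernel on the distance between two geodesic distributions."""
    h_a, h_b = geodesic_histogram(cloud_a), geodesic_histogram(cloud_b)
    return np.exp(-gamma * np.sum((h_a - h_b) ** 2))
```

The resulting kernel matrix over a set of clouds can then be fed to any kernel-based classifier, which is what makes distribution-based graph kernels attractive for irregular 3D data.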
Contribution 2 presented a weakly supervised instance segmentation approach for point clouds, addressing the challenge of inaccurate bounding-box annotations. To avoid labor-intensive point-level annotations, the authors first developed a self-distillation architecture that leveraged consistency regularization, and then utilized data perturbation and historical predictions to enhance generalization and prevent over-fitting to noisy labels. They subsequently selected reliable samples and corrected labels based on historical consistency. Experimental results on the benchmark dataset demonstrated the effectiveness and robustness of their approach, which achieved performance comparable to existing supervised methods and outperformed recent weakly supervised methods.
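As an illustration of the general self-distillation idea (not the authors' exact pipeline), the sketch below keeps a teacher network as an exponential moving average (EMA) of the student and penalizes disagreement between the teacher's prediction on clean input and the student's prediction on perturbed input. Here, perturb is a hypothetical augmentation function (e.g., jitter or random rotation).

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """Frozen copy of the student that will track it via EMA."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, momentum=0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1.0 - momentum)

def consistency_step(student, teacher, points, perturb, optimizer):
    """One step: student on perturbed input must match teacher on clean input."""
    with torch.no_grad():
        target = F.softmax(teacher(points), dim=-1)  # stable teacher prediction
    logits = student(perturb(points))                # augmented student view
    loss = F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

Because the teacher averages the student over time, its outputs act as the "historical predictions" against which noisy samples can be checked and relabeled.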
Contribution 3 proposed a robust alignment scheme for point clouds, in which the rotation and translation parameters were computed from the angle between the normal vectors of building facades and the distance between their outer endpoints. Experimental results demonstrated the feasibility and robustness of this alignment method on homologous and cross-source point clouds. The authors also noted that future work could further optimize the efficiency of the parameter-dependent building-facade point extraction and explore applications to point cloud registration across sensors of varying quality.
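For intuition, here is a minimal sketch, under the assumption of a 2D (yaw-only) rigid alignment, of how a transform can be recovered from one matched facade pair: the rotation comes from the angle between the horizontal projections of the two facade normals, and the translation maps a source outer endpoint onto its counterpart. Function and variable names are illustrative, not the paper's.

```python
import numpy as np

def facade_alignment(normal_src, normal_dst, endpoint_src, endpoint_dst):
    """Rigid yaw alignment from facade normals and outer endpoints.

    normal_*: 3D facade normals; endpoint_*: matched 3D outer endpoints.
    """
    # Rotation: angle between the horizontal projections of the normals.
    a_src = np.arctan2(normal_src[1], normal_src[0])
    a_dst = np.arctan2(normal_dst[1], normal_dst[0])
    theta = a_dst - a_src
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])  # rotation about the vertical axis
    # Translation: offset mapping the rotated source endpoint onto the target.
    t = endpoint_dst - R @ endpoint_src
    return R, t

# Usage: aligned = (R @ source_points.T).T + t
```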
Contribution 4 developed a novel sequential pairwise color-correction approach to mitigate the texture seams that arise when a mesh is textured from multiple images. By selecting a reference image and computing color-correction paths through a weighted graph, this approach effectively enhanced the color similarity among different images, resulting in high-quality textured meshes. Experimental results showed that the proposed method outperformed existing schemes in both qualitative and quantitative evaluations on an indoor dataset, especially in scenarios with many triangle transitions between source images.
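The path computation itself can be illustrated with a standard Dijkstra search over the image graph. This is a generic sketch under the assumption that each edge weight reflects the cost of correcting one image against an adjacent one (e.g., inversely related to their overlap), not the paper's exact weighting.

```python
import heapq
import numpy as np

def correction_paths(weights, ref):
    """Dijkstra shortest paths from every image to the reference image.

    weights: list of dicts, weights[i][j] = cost of correcting image i
    against image j. Returns a predecessor map: following prev[i] back
    to ref yields image i's sequential correction chain.
    """
    n = len(weights)
    dist = [np.inf] * n
    prev = [None] * n
    dist[ref] = 0.0
    heap = [(0.0, ref)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in weights[u].items():
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return prev
```

Each edge on the resulting chain would then apply one pairwise correction (for instance, per-channel gains fitted on overlapping pixels), composed sequentially back to the reference image.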
Contribution 5 designed a lightweight CNN model for moving-object segmentation in LPCs, addressing the challenge of real-time processing on embedded platforms. The proposed network achieved a substantial reduction in parameters compared to the state of the art, demonstrating efficient processing on an RTX 3090 GPU. It was also successfully implemented on an FPGA platform, achieving 32 fps for moving-object segmentation and thus meeting the real-time requirements of autonomous driving. Despite achieving comparable error performance with significantly fewer parameters, this lightweight model still faces open challenges, such as simplifying the network structure further without compromising performance and recovering the low-level details sacrificed for computational acceleration.
Contribution 6 addressed the challenge of accurately representing cultural heritage objects for finite element analysis (FEA) to understand their mechanical behavior. Instead of traditional CAD 3D models and non-uniform rational B-spline (NURBS) surfaces, the authors employed an alternative method based on a re-topology procedure to create simplified yet accurate 3D models for FEA. The study emphasized the importance of retaining a formal definition compatible with FEA software, demonstrating its effectiveness for morphologically complex objects. Experimental results demonstrated that the proposed method can reduce the mesh size while maintaining high accuracy compared to high-resolution reality-based models. Future work could improve interoperability, material segmentation, and detailed parameterization for a more comprehensive understanding of the structural behavior of cultural heritage objects.
Contribution 7 proposed point cloud up-sampling via multi-scale features attention (PU-MFA), leveraging a U-Net structure to combine multi-scale features with a cross-attention mechanism. PU-MFA was designed to use multi-scale features adaptively and effectively, demonstrating superior performance in generating high-quality dense points. Experimental validations on synthetic and real-scanned datasets showed the effectiveness of PU-MFA. It is worth noting that PU-MFA currently has limitations in addressing arbitrary up-sampling ratios.
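The cross-attention ingredient can be sketched as follows: decoder queries attend jointly to encoder features gathered from several scales. This is a generic PyTorch illustration of the mechanism, not the exact PU-MFA block.

```python
import torch
import torch.nn as nn

class MultiScaleCrossAttention(nn.Module):
    """Decoder features query multi-scale encoder features via cross-attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, scale_feats):
        # query_feats: (B, N, C); scale_feats: list of (B, Mi, C) encoder maps.
        kv = torch.cat(scale_feats, dim=1)     # stack all scales as keys/values
        fused, _ = self.attn(query_feats, kv, kv)
        return self.norm(query_feats + fused)  # residual connection
```

The appeal of this design is that each up-sampled point can draw on coarse global context and fine local detail simultaneously, rather than on a single fixed feature scale.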
Contribution 8 introduced MASPC_Transform, a segmentation network for plant point clouds designed to address the challenges posed by the intricate and small-scale nature of plant organs. Leveraging multi-head attention separation and a spatially grounded attention separation loss, MASPC_Transform established connections between similar points scattered across different areas of the point cloud space. Additionally, a position-coding method was proposed to enhance feature extraction on disordered point clouds. Experimental results demonstrated that MASPC_Transform outperformed existing approaches in plant segmentation. The authors also emphasized the need for further testing on new open-source datasets to validate the generalizability of MASPC_Transform.
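One simple way to realize an attention separation objective (a generic sketch, not the exact MASPC loss) is to penalize the pairwise cosine similarity between the attention maps of different heads, pushing the heads toward distinct regions of the cloud:

```python
import torch
import torch.nn.functional as F

def attention_separation_loss(attn_maps):
    """Encourage different attention heads to attend to different regions.

    attn_maps: (B, H, N) per-head attention over N points (H >= 2).
    Penalizes average pairwise cosine similarity between heads.
    """
    a = F.normalize(attn_maps, dim=-1)              # unit-norm per head
    sim = torch.einsum("bhn,bgn->bhg", a, a)        # (B, H, H) cosine matrix
    h = a.shape[1]
    off_diag = sim - torch.eye(h, device=a.device)  # drop self-similarity
    return off_diag.clamp(min=0).sum(dim=(1, 2)).mean() / (h * (h - 1))
```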
Contribution 9 presented 3D-SSRecNet, a novel network for efficient 3D point cloud reconstruction from a single image. 3D-SSRecNet is composed of a 2D image feature extraction network built on an object-detection backbone and a point cloud prediction network that minimizes the reconstruction loss. A carefully chosen activation function was employed for better shape prediction and lower reconstruction error. Experimental results on two datasets demonstrated the promising performance of 3D-SSRecNet. Although 3D-SSRecNet can be considered a computationally efficient solution for point cloud reconstruction, future work could further improve local reconstruction quality while maintaining computational efficiency.
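A common choice of reconstruction loss for predicted point sets is the Chamfer distance, shown below as a minimal PyTorch sketch; the paper's exact loss formulation may differ.

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two batched point sets.

    pred: (B, N, 3) predicted points; gt: (B, M, 3) ground-truth points.
    Sums the mean nearest-neighbor distance in both directions.
    """
    diff = pred.unsqueeze(2) - gt.unsqueeze(1)  # (B, N, M, 3) pairwise offsets
    d2 = (diff ** 2).sum(-1)                    # squared pairwise distances
    pred_to_gt = d2.min(dim=2).values.mean(dim=1)  # each prediction to nearest GT
    gt_to_pred = d2.min(dim=1).values.mean(dim=1)  # each GT to nearest prediction
    return (pred_to_gt + gt_to_pred).mean()
```

Because it is permutation-invariant over the point sets, this loss suits networks that emit unordered point clouds directly from image features.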
Contribution 10 provided a comprehensive survey on deep learning-based LiDAR 3D object detection for autonomous driving. It summarized the commonly used feature extraction and processing techniques for LPCs, the coordinate systems used in LiDAR object detection, and the stages of autonomous driving. Furthermore, deep learning-based LPC object detection methods were classified into three categories: projection-based, voxel-based, and raw-point-cloud-based. The authors also conducted in-depth analyses, comparisons, and summaries of the advantages and disadvantages of existing LPC object detection methods. Finally, they pointed out that many open issues remain in improving model speed and accuracy to achieve real-time processing for Level 4 to Level 5 autonomous driving.

3. Conclusions

This Special Issue serves as a portfolio, bringing together a wide range of contributions that address crucial challenges and advancements in the area of point cloud processing, sensing, and understanding. The selected papers represent a collective endeavor to push the boundaries of point cloud knowledge, offering intelligent solutions to existing challenges while also unlocking new applications for 3D point clouds. We believe that the above papers will provide valuable insights for researchers and practitioners in this field, stimulating ongoing evolution towards academic and industrial solutions that are not only more accurate, but also more efficient and effective.

Author Contributions

Original draft preparation, M.W.; review and editing, G.Y., J.X. and S.T. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The related datasets are available from the individual contributions listed in this Editorial.

Acknowledgments

The authors express their sincere gratitude to Runnan Huang for his extensive support and assistance in the preparation of this Editorial.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Balcı, M.A.; Akgüller, Ö.; Batrancea, L.M.; Gaban, L. Discrete Geodesic Distribution-Based Graph Kernel for 3D Point Clouds. Sensors 2023, 23, 2398.
  • Peng, Y.; Feng, H.; Chen, T.; Hu, B. Point Cloud Instance Segmentation with Inaccurate Bounding-Box Annotations. Sensors 2023, 23, 2343.
  • Pang, L.; Liu, D.; Li, C.; Zhang, F. Automatic Registration of Homogeneous and Cross-Source TomoSAR Point Clouds in Urban Areas. Sensors 2023, 23, 852.
  • Dal’Col, L.; Coelho, D.; Madeira, T.; Dias, P.; Oliveira, M. A Sequential Color Correction Approach for Texture Mapping of 3D Meshes. Sensors 2023, 23, 607.
  • Xie, X.; Wei, H.; Yang, Y. Real-Time LiDAR Point-Cloud Moving Object Segmentation for Autonomous Driving. Sensors 2023, 23, 547.
  • Gonizzi Barsanti, S.; Guagliano, M.; Rossi, A. 3D Reality-Based Survey and Retopology for Structural Analysis of Cultural Heritage. Sensors 2022, 22, 9593.
  • Lee, H.; Lim, S. PU-MFA: Point Cloud Up-Sampling via Multi-Scale Features Attention. Sensors 2022, 22, 9308.
  • Li, B.; Guo, C. MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code. Sensors 2022, 22, 9225.
  • Li, B.; Zhu, S.; Lu, Y. A Single Stage and Single View 3D Point Cloud Reconstruction Network Based on DetNet. Sensors 2022, 22, 8235.
  • Alaba, S.Y.; Ball, J.E. A Survey on Deep-Learning-Based LiDAR 3D Object Detection for Autonomous Driving. Sensors 2022, 22, 9577.

References

  1. Xu, H.; Li, C.; Hu, Y.; Li, S.; Kong, R.; Zhang, Z. Quantifying the effects of 2D/3D urban landscape patterns on land surface temperature: A perspective from cities of different sizes. Build. Environ. 2023, 233, 110085.
  2. Rogers, C.; Piggott, A.Y.; Thomson, D.J.; Wiser, R.F.; Opris, I.E.; Fortune, S.A.; Compston, A.J.; Gondarenko, A.; Meng, F.; Chen, X.; et al. A universal 3D imaging sensor on a silicon photonics platform. Nature 2021, 590, 256–261.
  3. Schwarz, S.; Preda, M.; Baroncini, V.; Budagavi, M.; Cesar, P.; Chou, P.A.; Cohen, R.A.; Krivokuća, M.; Lasserre, S.; Li, Z.; et al. Emerging MPEG standards for point cloud compression. IEEE J. Emerg. Sel. Top. Circuits Syst. 2018, 9, 133–148.
  4. Wang, Q.; Tan, Y.; Mei, Z. Computational methods of acquisition and processing of 3D point cloud data for construction applications. Arch. Comput. Methods Eng. 2020, 27, 479–499.
  5. Amarasingam, N.; Salgadoe, A.S.A.; Powell, K.; Gonzalez, L.F.; Natarajan, S. A review of UAV platforms, sensors, and applications for monitoring of sugarcane crops. Remote Sens. Appl. Soc. Environ. 2022, 26, 100712.
  6. Lei, J.; Song, J.; Peng, B.; Li, W.; Pan, Z.; Huang, Q. C2FNet: A coarse-to-fine network for multi-view 3D point cloud generation. IEEE Trans. Image Process. 2022, 31, 6707–6718.
  7. Wang, C.; Gu, Y.; Li, X. A robust multispectral point cloud generation method based on 3D reconstruction from multispectral images. IEEE Trans. Geosci. Remote Sens. 2023, 62, 5407612.
  8. Rebolj, D.; Pučko, Z.; Babič, N.Č.; Bizjak, M.; Mongus, D. Point cloud quality requirements for Scan-vs-BIM based automated construction progress monitoring. Autom. Constr. 2017, 84, 323–334.
  9. Xie, W.; Wang, M.; Lin, D.; Shi, B.; Jiang, J. Surface geometry processing: An efficient normal-based detail representation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 13749–13765.
  10. Mirzaei, K.; Arashpour, M.; Asadi, E.; Masoumi, H.; Bai, Y.; Behnood, A. 3D point cloud data processing with machine learning for construction and infrastructure applications: A comprehensive review. Adv. Eng. Inform. 2022, 51, 101501.
  11. Sun, X.; Wang, M.; Du, J.; Sun, Y.; Cheng, S.S.; Xie, W. A task-driven scene-aware LiDAR point cloud coding framework for autonomous vehicles. IEEE Trans. Ind. Inform. 2022, 1, 1–11.
  12. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics; Foundations and Trends in Robotics; Now Publishers: Hanover, MA, USA, 2015; Volume 4, pp. 1–104.
  13. Chen, Y.; Wang, Q.; Chen, H.; Song, X.; Tang, H.; Tian, M. An overview of augmented reality technology. J. Phys. Conf. Ser. 2019, 1237, 022082.
  14. Alexander, C.; Tansey, K.; Kaduk, J.; Holland, D.; Tate, N.J. An approach to classification of airborne laser scanning point cloud data in an urban environment. Int. J. Remote Sens. 2011, 32, 9151–9169.
  15. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep learning for LiDAR point clouds in autonomous driving: A review. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3412–3432.
  16. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep learning for 3D point clouds: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4338–4364.
  17. Huang, Z.; Yu, Y.; Xu, J.; Ni, F.; Le, X. PF-Net: Point fractal network for 3D point cloud completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 7662–7670.
  18. Huang, R.; Wang, M. Patch-wise LiDAR point cloud geometry compression based on autoencoder. In Proceedings of the International Conference on Image and Graphics (ICIG), Nanjing, China, 22–24 September 2023; pp. 299–310.
  19. Ma, B.; Liu, Y.S.; Zwicker, M.; Han, Z. Surface reconstruction from point clouds by learning predictive context priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 6326–6337.
  20. Zhou, Y.; Sun, P.; Zhang, Y.; Anguelov, D.; Gao, J.; Ouyang, T.; Guo, J.; Ngiam, J.; Vasudevan, V. End-to-end multi-view fusion for 3D object detection in LiDAR point clouds. In Proceedings of the Conference on Robot Learning (CoRL), Virtual, 16–18 November 2020; pp. 923–932.
  21. Yang, B.; Haala, N.; Dong, Z. Progress and perspectives of point cloud intelligence. Geo-Spat. Inf. Sci. 2023, 26, 189–205.
