

Point Cloud Processing in Remote Sensing Technology

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 33656

Special Issue Editors


Guest Editor
TU Wien, Department of Geodesy and Geoinformation, Wiedner Hauptstraße 8/E120, 1040 Vienna, Austria
Interests: point cloud processing; laser scanning; spatial indices; efficient processing concepts; least-squares; point cloud orientation and strip adjustment

Guest Editor
Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology (KIT), Englerstraße 7, 76131 Karlsruhe, Germany
Interests: computer vision; pattern recognition; machine learning; photogrammetry; remote sensing

Guest Editor
Department of Remote Sensing Science and Technology, School of Electronic Engineering, Xidian University, South Taibai Road 2, Xi'an 710071, China
Interests: LiDAR remote sensing; point cloud processing; 3D reconstruction; tree modeling; vegetation structure analysis

Guest Editor
Department of Infrastructure Engineering, The University of Melbourne, Melbourne, VIC 3010, Australia
Interests: photogrammetry; 3D computer vision; remote sensing; machine learning; deep learning; automated interpretation of imagery and point clouds

Special Issue Information

Dear Colleagues,

Modern data acquisition with active or passive remote sensing techniques often results in 3D point clouds. While point clouds were long regarded as an intermediate product for deriving 2.5D or 3D models, they are nowadays accepted as a primary data product that plays a central role in a huge variety of applications.

Technological advances and the miniaturisation of remote sensing hardware have led to the development of a large number of distinct devices for capturing 3D point clouds at different scales, resolutions and precisions. For instance, laser scanners, single-camera systems and multi-camera systems (in conjunction with image matching), RGBD cameras, time-of-flight sensors, synthetic aperture radar systems, ground-penetrating radar systems, echo sounding systems, index arms with tactile tips or scanning heads, etc., are used on static (e.g., tripod) or kinematic platforms (e.g., robot, car, boat, UAV, helicopter, airplane, or satellite) to capture objects or scenes of different scales via close-range, mid-range or far-range measurements.

Although the capturing procedure is the starting point for many applications, the processing of 3D point clouds is essential to visualise, enrich, analyse, quantify, evaluate, model, and understand the measured object or scene. A processing pipeline typically consists of multiple stages, such as point cloud orientation, co-registration, quality control, feature extraction, semantic segmentation and classification, object detection and recognition, change detection, and object modelling. This Special Issue will report cutting-edge methods, algorithms, and data structures for individual stages or comprehensive processing pipelines for specific applications or sensors.
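To make one such early pipeline stage concrete, the sketch below shows voxel-grid downsampling with NumPy; the function name and parameters are illustrative, not taken from any of the papers in this issue:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse each occupied voxel to the centroid of its points.

    `points` is an (N, 3) array; the result has one point per occupied voxel.
    """
    # Integer voxel index for every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel index and average them.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)   # flatten for NumPy-version portability
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 1, size=(10_000, 3))        # synthetic 1 m^3 cloud
reduced = voxel_downsample(cloud, voxel_size=0.2)  # at most 5*5*5 = 125 points
```

Steps such as co-registration or feature extraction would then operate on the reduced cloud rather than the raw measurements.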

The Special Issue invites authors to submit contributions on (but not limited to) the following topics:

  • Point cloud generation and quality analyses for new or improved sensors, such as miniaturised cameras and laser scanners, integrated sensors, Geigermode and single-photon LiDAR systems, UAV-based laser scanning systems, mobile mapping systems, multi-beam echo sounding systems, tomographic synthetic aperture radar systems, etc.;
  • Deep learning methods, specific network designs, transfer learning, and data organisation strategies for realising new or improved classification and object detection tasks as required for self-driving cars, indoor navigation, object modelling, etc.;
  • Classical semantic segmentation and classification methods, which remain relevant for many tasks, particularly for huge point clouds where deep learning is limited by its computational burden and by the lack of sufficient training data;
  • Innovative 2.5D and 3D modelling algorithms, as often used in mobile and corridor mapping, but also for traditional topographic point clouds, such as terrain, surface, building and tree modelling;
  • Data fusion of point clouds acquired from different sensors, scales, and accuracies. Today's sensor heads typically combine multiple sensor elements, such as differently oriented cameras (forward, nadir, backward and oblique views), laser scanners with multiple channels or different wavelengths (infrared and green laser diodes), etc.;
  • Multi-temporal analyses, which are used, for example, for change detection, updating inventory databases, landslide monitoring, or disaster management;
  • Methods and algorithms for interacting with point clouds to visualise, inspect, and highlight specific aspects of the dataset;
  • Optimised algorithms, strategies and data structures for efficiently processing huge point clouds.
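On the last topic, a spatial index is the usual workhorse for efficient processing of huge clouds. As a hedged sketch (the papers in this issue use various structures), SciPy's cKDTree supports the two neighbourhood queries most local point cloud features are built on:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 100, size=(50_000, 3))   # synthetic scene, metres

tree = cKDTree(cloud)   # O(N log N) build; each query then avoids a full scan

# Fixed-radius neighbourhood of a query point, e.g. for local feature estimation.
neighbours = tree.query_ball_point([50.0, 50.0, 50.0], r=2.0)

# k nearest neighbours, the basis of most local geometric descriptors.
dists, idx = tree.query([50.0, 50.0, 50.0], k=8)
```

Octrees and voxel hashes serve the same role when points must also be streamed from disk or updated incrementally.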

Dr. Johannes Otepka
Dr. Martin Weinmann
Dr. Di Wang
Prof. Kourosh Khoshelham
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Point cloud generation
  • Mobile mapping
  • LiDAR
  • Photogrammetric point clouds
  • Dense image matching
  • 3D modelling
  • Point cloud analysis
  • Quality and accuracy estimation
  • Feature extraction
  • Semantic segmentation
  • Supervised and unsupervised machine learning
  • Deep learning
  • Data fusion
  • Change detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

23 pages, 21179 KiB  
Article
Deep Ground Filtering of Large-Scale ALS Point Clouds via Iterative Sequential Ground Prediction
by Hengming Dai, Xiangyun Hu, Zhen Shu, Nannan Qin and Jinming Zhang
Remote Sens. 2023, 15(4), 961; https://doi.org/10.3390/rs15040961 - 9 Feb 2023
Cited by 6 | Viewed by 2137
Abstract
Ground filtering (GF) is a fundamental step for airborne laser scanning (ALS) data processing. The advent of deep learning techniques provides new solutions to this problem. Existing deep-learning-based methods utilize a segmentation or classification framework to extract ground/non-ground points, which suffers from a dilemma in keeping high spatial resolution while acquiring rich contextual information when dealing with large-scale ALS data due to computing resource limits. To this end, we propose SeqGP, a novel deep-learning-based GF pipeline that explicitly converts the GF task into an iterative sequential ground prediction (SeqGP) problem using points-profiles. The proposed SeqGP utilizes deep reinforcement learning (DRL) to optimize the prediction sequence and retrieve the bare terrain gradually. The 3D sparse convolution is integrated with the SeqGP strategy to generate high-precision classification results with memory efficiency. Extensive experiments on two challenging test sets demonstrate the state-of-the-art filtering performance and universality of the proposed method in dealing with large-scale ALS data. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
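For readers new to ground filtering, a crude classical baseline (not the SeqGP method described above) labels as ground every point that lies close to the lowest point of its horizontal grid cell; the cell size and tolerance below are illustrative:

```python
import numpy as np

def grid_minimum_ground(points, cell=5.0, height_tol=0.5):
    """Label a point as ground if it is within `height_tol` of the lowest
    point in its horizontal grid cell (a crude baseline, not SeqGP)."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    _, inverse = np.unique(ij, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    cell_min = np.full(inverse.max() + 1, np.inf)
    np.minimum.at(cell_min, inverse, points[:, 2])   # per-cell minimum height
    return points[:, 2] - cell_min[inverse] <= height_tol

rng = np.random.default_rng(2)
terrain = rng.uniform(0, 50, size=(5000, 3)); terrain[:, 2] = 0.0
canopy = rng.uniform(0, 50, size=(200, 3)); canopy[:, 2] = rng.uniform(5, 15, 200)
labels = grid_minimum_ground(np.vstack([terrain, canopy]))
```

Such baselines fail on sloped terrain and low vegetation, which is precisely the regime where learned methods like SeqGP are aimed.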

30 pages, 14431 KiB  
Article
AdaSplats: Adaptive Splatting of Point Clouds for Accurate 3D Modeling and Real-Time High-Fidelity LiDAR Simulation
by Jean Pierre Richa, Jean-Emmanuel Deschaud, François Goulette and Nicolas Dalmasso
Remote Sens. 2022, 14(24), 6262; https://doi.org/10.3390/rs14246262 - 10 Dec 2022
Cited by 8 | Viewed by 5538
Abstract
LiDAR sensors provide rich 3D information about their surroundings and are becoming increasingly important for autonomous vehicle tasks such as localization, semantic segmentation, object detection, and tracking. Simulation accelerates the testing, validation, and deployment of autonomous vehicles while also reducing cost and eliminating the risks of testing in real-world scenarios. We address the problem of high-fidelity LiDAR simulation and present a pipeline that leverages real-world point clouds acquired by mobile mapping systems. Point-based geometry representations, more specifically splats (2D oriented disks with normals), have proven their ability to accurately model the underlying surface in large point clouds, mainly with uniform density. We introduce an adaptive splat generation method that accurately models the underlying 3D geometry to handle real-world point clouds with variable densities, especially for thin structures. Moreover, we introduce a fast LiDAR sensor simulator, operating on the splatted model, that leverages the GPU parallel architecture with an acceleration structure while focusing on efficiently handling large point clouds. We test our LiDAR simulation in real-world conditions, showing qualitative and quantitative results compared to basic splatting and meshing techniques and demonstrating the value of our modeling technique. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)

18 pages, 6264 KiB  
Article
NanoMap: A GPU-Accelerated OpenVDB-Based Mapping and Simulation Package for Robotic Agents
by Violet Walker, Fernando Vanegas and Felipe Gonzalez
Remote Sens. 2022, 14(21), 5463; https://doi.org/10.3390/rs14215463 - 30 Oct 2022
Cited by 4 | Viewed by 3104
Abstract
Encoding sensor data into a map is a problem that must be undertaken by any robotic agent operating in unknown or uncertain environments, and real-time updates are crucial to safe planning and control. Most modern robotic sensors produce some form of depth data or point cloud information that is only useful to the agent after being processed into the appropriate data structure, oftentimes an occupancy map. However, as the quality of sensor technology improves, so does the magnitude of the input data, which can create a problem when trying to construct occupancy maps in real time. Populating such an occupancy map using these dense point clouds can quickly become an expensive process, and many robotic agents have limited onboard computational bandwidth and memory. This results in delayed map updates and reduced operational performance in dynamic environments where real-time information is crucial for safe operation. However, while many modern robotic agents are still relatively limited by the power of onboard central processing units (CPUs), many platforms are gaining access to onboard graphics processing units (GPUs), and these resources remain underutilised with respect to the problem of occupancy mapping. We propose a novel probabilistic mapping solution that leverages a combination of OpenVDB, NanoVDB, and Nvidia's Compute Unified Device Architecture (CUDA) to encode dense point clouds into OpenVDB data structures, leveraging the parallel compute strength of GPUs to provide significant speed advantages and further free up resources for tasks that cannot as easily be performed in parallel. An evaluation of our solution is provided, with performance benchmarks for both a laptop and a low-power single-board computer with an onboard GPU. Similar performance improvements should be accessible on any system with access to a CUDA-compatible GPU.
Additionally, our library provides the means to simulate one or more sensors on an agent operating within a randomly generated 3D-grid environment and create a live map for the purposes of evaluating planning and control techniques and for training agents via deep reinforcement learning. We also provide interface packages for the Robot Operating System (ROS1) and the Robot Operating System 2 (ROS2), and a ROS2 visualisation (RVIZ2) plugin for the underlying OpenVDB data structure. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
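The log-odds occupancy update that such packages accelerate can be sketched in a few lines. The dict-based map below is purely illustrative: it omits the free-space (miss) updates along each sensor ray, and of course the OpenVDB/CUDA machinery that is the paper's contribution:

```python
import numpy as np
from collections import defaultdict

class VoxelOccupancyMap:
    """Minimal CPU log-odds occupancy map keyed by integer voxel index."""

    def __init__(self, voxel=0.5, l_hit=0.85):
        self.voxel, self.l_hit = voxel, l_hit
        self.logodds = defaultdict(float)   # unseen voxels: log-odds 0 (p = 0.5)

    def _key(self, p):
        return tuple(np.floor(np.asarray(p) / self.voxel).astype(int))

    def integrate(self, points):
        # Each returned point raises the occupancy belief of its voxel.
        for p in points:
            self.logodds[self._key(p)] += self.l_hit

    def p_occupied(self, p):
        # Convert accumulated log-odds back to a probability.
        return 1.0 / (1.0 + np.exp(-self.logodds[self._key(p)]))

m = VoxelOccupancyMap()
m.integrate([[1.0, 1.0, 1.0]] * 3)   # three hits on the same voxel
```

Log-odds make the update a single addition per point, which is what lets the GPU version process dense clouds in parallel.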

22 pages, 2189 KiB  
Article
Robust Cuboid Modeling from Noisy and Incomplete 3D Point Clouds Using Gaussian Mixture Model
by Woonhyung Jung, Janghun Hyeon and Nakju Doh
Remote Sens. 2022, 14(19), 5035; https://doi.org/10.3390/rs14195035 - 9 Oct 2022
Cited by 2 | Viewed by 1994
Abstract
A cuboid is a geometric primitive characterized by six planes with spatial constraints, such as orthogonality and parallelism. These characteristics uniquely define a cuboid. Therefore, previous modeling schemes have used these characteristics as hard constraints, which narrowed the solution space for estimating the parameters of a cuboid. However, under high noise and occlusion conditions, a narrowed solution space may contain only false solutions or none at all, a situation called an over-constraint. In this paper, we propose a robust cuboid modeling method for point clouds under high noise and occlusion conditions. The proposed method estimates the parameters of a cuboid using soft constraints, which, unlike hard constraints, do not limit the solution space. For this purpose, a cuboid is represented as a Gaussian mixture model (GMM). The point distribution of each cuboid surface under noise is assumed to follow a Gaussian model. Because each Gaussian model is a face of the cuboid, the GMM shares the cuboid parameters and satisfies the spatial constraints, regardless of occlusion. To avoid an over-constraint in the optimization, only soft constraints, namely the expectation of the GMM, are employed. Subsequently, the expectation is maximized using analytic partial derivatives. The proposed method was evaluated using both synthetic and real data. The synthetic data were hierarchically designed to test the performance under various noise and occlusion conditions. We then used real data, which are more dynamic than synthetic data and may not follow the Gaussian assumption. The real data were acquired by light detection and ranging-based simultaneous localization and mapping with actual boxes arbitrarily located in an indoor space. The experimental results indicated that the proposed method outperforms a previous cuboid modeling method in terms of robustness. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)

20 pages, 8133 KiB  
Article
MSIDA-Net: Point Cloud Semantic Segmentation via Multi-Spatial Information and Dual Adaptive Blocks
by Feng Shuang, Pei Li, Yong Li, Zhenxin Zhang and Xu Li
Remote Sens. 2022, 14(9), 2187; https://doi.org/10.3390/rs14092187 - 3 May 2022
Cited by 8 | Viewed by 2580
Abstract
Large-scale 3D point clouds are rich in geometric shape and scale information but they are also scattered, disordered and unevenly distributed. These characteristics lead to difficulties in learning point cloud semantic segmentations. Although many works have performed well in this task, most of them lack research on spatial information, which limits the ability to learn and understand the complex geometric structure of point cloud scenes. To this end, we propose the multispatial information and dual adaptive (MSIDA) module, which consists of a multispatial information encoding (MSI) block and dual adaptive (DA) blocks. The MSI block transforms the information of the relative position of each centre point and its neighbouring points into a cylindrical coordinate system and spherical coordinate system. Then the spatial information among the points can be re-represented and encoded. The DA blocks include a Coordinate System Attention Pooling Fusion (CSAPF) block and a Local Aggregated Feature Attention (LAFA) block. The CSAPF block weights and fuses the local features in the three coordinate systems to further learn local features, while the LAFA block weights the local aggregated features in the three coordinate systems to better understand the scene in the local region. To test the performance of the proposed method, we conducted experiments on the S3DIS, Semantic3D and SemanticKITTI datasets and compared the proposed method with other networks. The proposed method achieved 73%, 77.8% and 59.8% mean Intersection over Union (mIoU) on the S3DIS, Semantic3D and SemanticKITTI datasets, respectively. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
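The core idea of the MSI block, re-expressing neighbour offsets in cylindrical and spherical coordinates alongside Cartesian ones, can be sketched as follows (the exact encoding used in the paper may differ):

```python
import numpy as np

def multi_coordinate_encoding(centre, neighbours):
    """Re-express neighbour offsets in Cartesian, cylindrical and spherical
    coordinates; columns are (x, y, z, rho, phi, r, theta)."""
    d = np.asarray(neighbours) - np.asarray(centre)   # relative positions
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    rho = np.hypot(x, y)                              # cylindrical radius
    phi = np.arctan2(y, x)                            # shared azimuth angle
    r = np.linalg.norm(d, axis=1)                     # spherical radius
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # polar angle
    return np.column_stack([x, y, z, rho, phi, r, theta])

feats = multi_coordinate_encoding(np.zeros(3),
                                  [[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
```

The redundant representations give the network several views of the same local geometry, which the attention blocks then weight and fuse.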

19 pages, 3619 KiB  
Article
Single-Stage Adaptive Multi-Scale Point Cloud Noise Filtering Algorithm Based on Feature Information
by Zhen Zheng, Bingting Zha, Yu Zhou, Jinbo Huang, Youshi Xuchen and He Zhang
Remote Sens. 2022, 14(2), 367; https://doi.org/10.3390/rs14020367 - 13 Jan 2022
Cited by 26 | Viewed by 3184
Abstract
This paper proposes a single-stage adaptive multi-scale noise filtering algorithm for point clouds, based on feature information, which addresses the difficulty current laser point cloud noise filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. The feature information of each point is obtained using an efficient k-dimensional (k-d) tree data structure and amended normal vector estimation methods, and an adaptive threshold is used to divide the point cloud into large-scale noise, a feature-rich region, and a flat region to reduce computation time. The large-scale noise is removed directly; the feature-rich and flat regions are filtered via an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm performs better than the state-of-the-art comparison algorithms. It was thus verified that the algorithm proposed in this paper can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively maintain the geometric features of the point cloud. The developed algorithm offers a pre-processing filtering approach applicable to 3D measurement, remote sensing, and point-cloud-based target recognition. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
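As background, a classical statistical outlier removal filter (not the paper's algorithm) also rests on k-d tree neighbourhood queries; the parameters below are conventional defaults, not values from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_mask(points, k=8, std_ratio=2.0):
    """Keep a point if its mean distance to its k nearest neighbours is
    within `std_ratio` standard deviations of the cloud-wide mean."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d <= threshold

rng = np.random.default_rng(3)
surface = rng.normal(0.0, 0.01, size=(2000, 3))   # dense surface patch
outliers = rng.uniform(-5.0, 5.0, size=(20, 3))   # sparse large-scale noise
mask = statistical_outlier_mask(np.vstack([surface, outliers]))
```

This global threshold smooths nothing and ignores features, which is exactly the gap the paper's region-adaptive bilateral and weighted average filters target.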

29 pages, 9759 KiB  
Article
A LiDAR/Visual SLAM Backend with Loop Closure Detection and Graph Optimization
by Shoubin Chen, Baoding Zhou, Changhui Jiang, Weixing Xue and Qingquan Li
Remote Sens. 2021, 13(14), 2720; https://doi.org/10.3390/rs13142720 - 10 Jul 2021
Cited by 49 | Viewed by 7961
Abstract
LiDAR (light detection and ranging), as an active sensor, is investigated in the simultaneous localization and mapping (SLAM) system. Typically, a LiDAR SLAM system consists of front-end odometry and back-end optimization modules. Loop closure detection and pose graph optimization are the key factors determining the performance of the LiDAR SLAM system. However, the LiDAR works at a single wavelength (905 nm), and few textures or visual features are extracted, which restricts the performance of point-cloud-matching-based loop closure detection and graph optimization. With the aim of improving LiDAR SLAM performance, in this paper, we propose a LiDAR and visual SLAM backend, which utilizes LiDAR geometry features and visual features to accomplish loop closure detection. Firstly, a bag-of-words (BoW) model describing the visual similarities was constructed to assist in the loop closure detection; secondly, point cloud re-matching was conducted to verify the loop closure detection and accomplish graph optimization. Experiments with different datasets were carried out to assess the proposed method, and the results demonstrated that the inclusion of the visual features effectively helped with the loop closure detection and improved LiDAR SLAM performance. In addition, the source code is open source and is available from the corresponding author upon request. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
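The BoW similarity test that gates loop-closure candidates reduces, in its simplest form, to a cosine similarity between visual word histograms; the vocabulary and counts below are hypothetical, and real systems typically apply TF-IDF weighting first:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms; a high score
    flags a loop-closure candidate for geometric re-verification."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))

# Hypothetical 5-word visual vocabulary; counts per keyframe.
kf_a = [10, 0, 3, 0, 1]
kf_b = [9, 1, 4, 0, 0]    # revisited place: similar word distribution
kf_c = [0, 8, 0, 7, 2]    # different place
```

High-scoring pairs are then passed to point cloud re-matching, which either confirms the loop and adds a constraint to the pose graph or rejects it.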

22 pages, 8823 KiB  
Article
Hyperspectral LiDAR-Based Plant Spectral Profiles Acquisition: Performance Assessment and Results Analysis
by Jianxin Jia, Changhui Jiang, Wei Li, Haohao Wu, Yuwei Chen, Peilun Hu, Hui Shao, Shaowei Wang, Fan Yang, Eetu Puttonen and Juha Hyyppä
Remote Sens. 2021, 13(13), 2521; https://doi.org/10.3390/rs13132521 - 28 Jun 2021
Cited by 4 | Viewed by 3573
Abstract
In precision agriculture, efficient fertilization is one of the most important pursued goals. Vegetation spectral profiles and the corresponding spectral parameters are usually employed to indicate vegetation growth status, i.e., for vegetation classification, bio-chemical content mapping, and efficient fertilization guidance. Because spectrometers rely on ambient lighting conditions, hyperspectral/multi-spectral LiDAR (HSL/MSL) was invented to collect spectral profiles actively. However, most HSL/MSL systems work with wavelengths specially selected for specific applications. For precision agriculture applications, an HSL capable of collecting spectral profiles over a wide wavelength range is necessary to extract various spectral parameters. In this paper, we developed a hyperspectral LiDAR (HSL) with 10 nm spectral resolution covering 500–1000 nm. Different vegetation leaf samples were scanned by the HSL, and it was comprehensively assessed for wide-range spectral profile acquisition, spectral parameter extraction, vegetation classification, and the laser incident angle effect. Specifically, three experiments were carried out: (1) spectral profiles were compared with those from an SVC spectrometer (HR-1024, Spectra Vista Corporation); (2) the spectral parameters extracted from the HSL were assessed and employed as input features of a support vector machine (SVM) classifier with multiple labels to classify the vegetation; (3) in view of the influence of the laser incident angle on the reflected laser intensities, we analyzed its effect on the spectral parameter values. The experimental results demonstrated that the developed HSL is well suited to acquiring spectral profiles over a wide wavelength range, and the spectral parameter and vegetation classification results indicated its great potential in precision agriculture applications. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing Technology)
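As an example of the spectral parameters such wide-range profiles enable, NDVI can be read directly off a 500–1000 nm profile; the reflectance values below are a toy illustration, not data from the paper:

```python
import numpy as np

# Hypothetical reflectance profile in 10 nm bins from 500 to 1000 nm
# (toy values approximating a green leaf: low red, high NIR reflectance).
wavelengths = np.arange(500, 1001, 10)
reflectance = np.where(wavelengths < 700, 0.05, 0.45)

def ndvi(wl, refl, red=670.0, nir=800.0):
    """NDVI from the spectral bins nearest the red and NIR wavelengths."""
    r = refl[np.argmin(np.abs(wl - red))]
    n = refl[np.argmin(np.abs(wl - nir))]
    return (n - r) / (n + r)
```

Because the HSL measures the full profile actively, such indices can be computed per return without depending on ambient illumination.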
