Remote Sensing with Geodetic Laser Scanning: Technologies and Methods for Data Acquisition

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 20344

Special Issue Editors


Dr. Tania Landes
Guest Editor
Department of Surveying and Civil Engineering, National Institute of Applied Sciences of Strasbourg, 67084 Strasbourg, France
Interests: architectural laser scanning; mobile mapping systems; remote sensing; accuracy of data and 3D models

Prof. Dr. Pierre Grussenmeyer
Guest Editor
Department of Surveying and Civil Engineering, National Institute of Applied Sciences of Strasbourg, 67084 Strasbourg, France
Interests: close-range photogrammetry; architectural photogrammetry & laser scanning; mobile mapping systems and photogrammetric computer systems; integration and accuracy of data in 3D city and building models

Prof. Dr. Grazia Tucci
Guest Editor
Department of Civil and Environmental Engineering (DICEA), University of Florence, Via di Santa Marta 3, 50139 Firenze, Italy
Interests: geomatics; laser scanning; photogrammetry; GIS/BIM; landscape; built heritage

Prof. Dr. Stephan Nebiker
Guest Editor
Institute of Geomatics Engineering, FHNW University of Applied Sciences and Arts Northwestern Switzerland, Hofackerstrasse 30, 4132 Muttenz, Switzerland
Interests: photogrammetry; 3D imaging; mobile mapping; unmanned aerial vehicles; remote sensing; cloud computing; infrastructure management; smart cities

Special Issue Information

Dear Colleagues,

Progress in geodetic laser scanning technologies has led to sensors and methods with growing capabilities in automated acquisition and processing. These developments deserve in-depth treatment in this Special Issue, entitled “Remote Sensing with Geodetic Laser Scanning: Technologies and Methods for Data Acquisition”.

Recently, geodetic scanning systems have been developed that perform registration in real time while scanning areas of varying complexity. New trends in real-time registration and georeferencing solutions are of interest for this Special Issue, as are innovative localization and positioning solutions based on LiDAR SLAM.
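As a purely illustrative sketch of the pairwise scan registration that underlies such real-time pipelines, the following Python snippet aligns two consecutive scans with Open3D's point-to-plane ICP; the file names, voxel size, and distance threshold are placeholder assumptions, not settings of any particular system discussed here.

```python
# Illustrative sketch only: pairwise registration of two consecutive scans with
# point-to-plane ICP, as used in many scan-matching / LiDAR-SLAM front ends.
# File names, voxel size, and distance threshold are placeholder assumptions.
import numpy as np
import open3d as o3d

def register_scans(source_path, target_path, voxel=0.05, max_dist=0.2):
    source = o3d.io.read_point_cloud(source_path)   # newly acquired scan
    target = o3d.io.read_point_cloud(target_path)   # previous scan / local map

    # Downsample and estimate normals (required for point-to-plane ICP).
    source_d = source.voxel_down_sample(voxel)
    target_d = target.voxel_down_sample(voxel)
    for pc in (source_d, target_d):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))

    result = o3d.pipelines.registration.registration_icp(
        source_d, target_d, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 pose of the source scan relative to the target

if __name__ == "__main__":
    T = register_scans("scan_001.pcd", "scan_002.pcd")
    print("Estimated relative pose:\n", T)
```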

Another important aspect is the assessment of the accuracy of recent laser scanning systems. Evaluating the accuracy of acquired data in relation to the network design, as well as the accuracy of the resulting models and of multisensor calibration methods, remains a major challenge in the era of digital transition.
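A minimal sketch of one routine accuracy check, comparing an acquired cloud against a reference cloud via nearest-neighbour distances, is given below; the array shapes and the synthetic data in the usage example are assumptions for illustration only.

```python
# Illustrative sketch: cloud-to-reference accuracy statistics via
# nearest-neighbour distances. The synthetic data below are placeholders.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_stats(test_xyz: np.ndarray, ref_xyz: np.ndarray):
    """test_xyz, ref_xyz: (N, 3) coordinate arrays in the same reference frame."""
    tree = cKDTree(ref_xyz)
    dist, _ = tree.query(test_xyz, k=1)      # distance of each test point to the reference
    return {
        "rmse_m": float(np.sqrt(np.mean(dist ** 2))),
        "mean_m": float(dist.mean()),
        "p95_m": float(np.percentile(dist, 95)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 10, size=(100_000, 3))        # stand-in for a reference scan
    test = ref + rng.normal(0, 0.005, size=ref.shape)  # stand-in for a scan with 5 mm noise
    print(cloud_to_reference_stats(test, ref))
```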

The fusion of laser scanners and multispectral sensors in static or mobile mapping systems opens up a wide range of applications. Their effectiveness for vegetation discrimination, cultural heritage documentation, climate-related studies, and real-time monitoring will be emphasized in this Special Issue.
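The basic operation behind such fusion, sampling a co-registered image at the projected positions of scanner points, can be sketched as follows; the intrinsic matrix K and the extrinsics (R, t) are assumed to come from a prior calibration, and all shapes are illustrative assumptions.

```python
# Illustrative sketch: colourising scanner points with a co-registered image,
# the basic operation behind laser-scanner / multispectral sensor fusion.
# K, R, t are assumed known from a prior calibration; values are placeholders.
import numpy as np

def sample_image_for_points(xyz, image, K, R, t):
    """xyz: (N, 3) points in the scanner frame; image: (H, W, C); K: (3, 3); R: (3, 3); t: (3,)."""
    cam = xyz @ R.T + t                     # transform points into the camera frame
    in_front = cam[:, 2] > 0                # keep points in front of the camera
    uvw = cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective division to pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colours = np.zeros((xyz.shape[0], image.shape[2]), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[valid]
    colours[idx] = image[v[valid], u[valid]]
    return colours                          # per-point spectral values (zeros if unseen)
```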

Moreover, point cloud processing is evolving to assist users even further in the production of final models. The trend toward deep learning methods for improving semantic segmentation affects the entire “scan-to-BIM” workflow. The advantages of simultaneous object detection and recognition during acquisition will be highlighted.
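To make the notion of point-wise semantic segmentation concrete, a minimal PointNet-style per-point classifier is sketched below; the channel sizes and number of classes are arbitrary assumptions and do not correspond to any specific scan-to-BIM pipeline.

```python
# Illustrative sketch: a minimal PointNet-style per-point classifier.
# Channel sizes and the number of classes are arbitrary assumptions.
import torch
import torch.nn as nn

class PerPointClassifier(nn.Module):
    def __init__(self, in_channels=6, num_classes=8):
        super().__init__()
        # Shared MLP applied to every point independently (1x1 Conv1d over points).
        self.local = nn.Sequential(
            nn.Conv1d(in_channels, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        # Per-point head that also sees a globally max-pooled context vector.
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, x):                       # x: (B, C, N) points, e.g. XYZ + colour
        feat = self.local(x)                    # (B, 128, N) per-point features
        ctx = feat.max(dim=2, keepdim=True).values.expand_as(feat)   # global context
        return self.head(torch.cat([feat, ctx], dim=1))              # (B, classes, N) logits

if __name__ == "__main__":
    logits = PerPointClassifier()(torch.randn(2, 6, 4096))
    print(logits.shape)                         # torch.Size([2, 8, 4096])
```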

Dr. Tania Landes
Prof. Dr. Pierre Grussenmeyer
Prof. Dr. Grazia Tucci
Prof. Dr. Stephan Nebiker
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time registration
  • LiDAR SLAM
  • multi-sensor fusion
  • multi-sensor calibration
  • object detection
  • semantic segmentation
  • deep learning
  • accuracy assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

20 pages, 13974 KiB  
Article
Online Extrinsic Calibration on LiDAR-Camera System with LiDAR Intensity Attention and Structural Consistency Loss
by Pei An, Yingshuo Gao, Liheng Wang, Yanfei Chen and Jie Ma
Remote Sens. 2022, 14(11), 2525; https://doi.org/10.3390/rs14112525 - 25 May 2022
Cited by 7 | Viewed by 3777
Abstract
Extrinsic calibration of a LiDAR-camera system is an essential task for advanced perception applications in intelligent vehicles. In the offline situation, calibration-object-based methods can estimate the extrinsic parameters with high precision. However, during long-term operation of a LiDAR-camera system in real scenarios, the relative pose of the LiDAR and the camera undergoes small, accumulating drift, so the offline calibration result is no longer accurate. To correct the extrinsic parameters conveniently, we present a deep-learning-based online extrinsic calibration method in this paper. From the Lambertian reflection model, it is found that an object with higher LiDAR intensity is more likely to exhibit salient RGB features. Based on this fact, we present a LiDAR intensity attention based backbone network (LIA-Net) to extract significant co-observed calibration features from LiDAR data and RGB images. In the later stage of training, the loss on the extrinsic parameters changes slowly, causing the risk of vanishing gradients and limiting training efficiency. To deal with this issue, we present the structural consistency (SC) loss to minimize the difference between the projected LiDAR image (i.e., the LiDAR depth image and LiDAR intensity image) and its ground truth (GT) LiDAR image. It aims to accurately align LiDAR points and RGB pixels. With LIA-Net and the SC loss, we present the convolutional neural network (CNN) based calibration network LIA-SC-Net. Comparison experiments on the KITTI dataset demonstrate that LIA-SC-Net achieves more accurate calibration results than state-of-the-art learning-based methods. The proposed method offers both accurate and real-time performance. Ablation studies also show the effectiveness of the proposed modules. Full article
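To make the idea of a structural consistency style loss concrete, the sketch below (not the authors' code) renders a sparse LiDAR image with predicted and with ground-truth extrinsics and compares the two with an L1 loss; tensor shapes and intrinsics are assumptions, and the hard scatter used here is not differentiable with respect to the extrinsics, unlike a real training loss.

```python
# Simplified, illustrative rendition of a structural-consistency style loss
# (NOT the authors' implementation). A real training loss would need a
# differentiable projection; shapes and intrinsics here are assumptions.
import torch
import torch.nn.functional as F

def project_to_image(points, values, T, K, height, width):
    """points: (N, 3) LiDAR points; values: (N,) e.g. intensity or depth;
    T: (4, 4) extrinsics; K: (3, 3) camera intrinsics."""
    homog = torch.cat([points, torch.ones_like(points[:, :1])], dim=1)  # (N, 4)
    cam = (homog @ T.T)[:, :3]                 # points in the camera frame
    keep = cam[:, 2] > 0.1                     # keep points in front of the camera
    cam, values = cam[keep], values[keep]
    uvw = cam @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).round().long()
    v = (uvw[:, 1] / uvw[:, 2]).round().long()
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    img = torch.zeros(height, width, dtype=values.dtype, device=values.device)
    img[v[inside], u[inside]] = values[inside] # sparse projected LiDAR image
    return img

def structural_consistency_loss(points, intensity, T_pred, T_gt, K, h, w):
    """L1 difference between LiDAR images rendered with predicted vs. GT extrinsics."""
    pred_img = project_to_image(points, intensity, T_pred, K, h, w)
    gt_img = project_to_image(points, intensity, T_gt, K, h, w)
    return F.l1_loss(pred_img, gt_img)
```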

28 pages, 38603 KiB  
Article
Point Cloud Validation: On the Impact of Laser Scanning Technologies on the Semantic Segmentation for BIM Modeling and Evaluation
by Sam De Geyter, Jelle Vermandere, Heinder De Winter, Maarten Bassier and Maarten Vergauwen
Remote Sens. 2022, 14(3), 582; https://doi.org/10.3390/rs14030582 - 26 Jan 2022
Cited by 26 | Viewed by 5448
Abstract
Building Information Models created from laser scanning inputs are becoming increasingly commonplace, but the automation of the modeling and evaluation is still a subject of ongoing research. Current advancements mainly target the data interpretation steps, i.e., the instance and semantic segmentation, by developing advanced deep learning models. However, these steps are highly influenced by the characteristics of the laser scanning technologies themselves, which also impact the reconstruction/evaluation potential. In this work, the impact of different data acquisition techniques and technologies on these procedures is studied. More specifically, we quantify the capabilities of static, trolley-based, backpack, and head-worn mapping solutions and of their semantic segmentation results for BIM modeling and analysis procedures. For the analysis, international standards and specifications are used wherever possible. From the experiments, the suitability of each platform is established, along with the pros and cons of each system. Overall, this work provides a much-needed update on point cloud validation that is required to further fuel BIM automation. Full article
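For readers who want to reproduce this kind of platform comparison, a minimal sketch of the usual point-wise evaluation metrics (overall accuracy and per-class IoU) is given below; the integer label encoding and the synthetic example are assumptions, not the paper's evaluation code.

```python
# Illustrative sketch: overall accuracy and per-class IoU for a point-wise
# semantic segmentation result. Labels are assumed to be integers 0..C-1.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """pred, gt: (N,) integer class labels per point."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)                      # confusion matrix: rows = ground truth
    tp = np.diag(cm).astype(float)
    iou = tp / np.maximum(cm.sum(0) + cm.sum(1) - tp, 1)
    return {"overall_accuracy": tp.sum() / cm.sum(),
            "per_class_iou": iou,
            "mean_iou": iou.mean()}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gt = rng.integers(0, 4, size=10_000)              # synthetic ground-truth labels
    pred = np.where(rng.random(10_000) < 0.9, gt, rng.integers(0, 4, size=10_000))
    print(segmentation_metrics(pred, gt, num_classes=4))
```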

24 pages, 9300 KiB  
Article
Automated Storey Separation and Door and Window Extraction for Building Models from Complete Laser Scans
by Kate Pexman, Derek D. Lichti and Peter Dawson
Remote Sens. 2021, 13(17), 3384; https://doi.org/10.3390/rs13173384 - 26 Aug 2021
Cited by 11 | Viewed by 2692
Abstract
Heritage buildings are often lost without being adequately documented. Significant research has gone into automated building modelling from point clouds, challenged by irregularities in building design and the presence of occlusion-causing clutter and non-Manhattan World features. Previous work has been largely focused on the extraction and representation of walls, floors, and ceilings from either interior or exterior single storey scans. Significantly less effort has been concentrated on the automated extraction of smaller features such as windows and doors from complete (interior and exterior) scans. In addition, the majority of the work done on automated building reconstruction pertains to the new-build and construction industries, rather than for heritage buildings. This work presents a novel multi-level storey separation technique as well as a novel door and window detection strategy within an end-to-end modelling software for the automated creation of 2D floor plans and 3D building models from complete terrestrial laser scans of heritage buildings. The methods are demonstrated on three heritage sites of varying size and complexity, achieving overall accuracies of 94.74% for multi-level storey separation and 92.75% for the building model creation. Additionally, the automated door and window detection methodology achieved absolute mean dimensional errors of 6.3 cm. Full article
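One common way to approach storey separation (not necessarily the authors' exact method) is to detect peaks in the histogram of point heights, since floor and ceiling slabs concentrate many points at nearly constant elevations; the sketch below illustrates this idea, with bin size, peak parameters, and the synthetic building as assumptions.

```python
# Illustrative sketch of a generic storey-separation idea: floor/ceiling slabs
# produce strong peaks in the Z-histogram of a building point cloud.
# Bin size and peak parameters are assumptions.
import numpy as np
from scipy.signal import find_peaks

def storey_boundaries(z: np.ndarray, bin_size=0.05, min_separation=2.0):
    """z: (N,) point heights in metres. Returns candidate slab elevations."""
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    peaks, _ = find_peaks(hist,
                          height=0.5 * hist.max(),
                          distance=int(min_separation / bin_size))
    return edges[peaks] + bin_size / 2        # approximate floor/ceiling heights

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic two-storey building: dense slabs at 0.0, 3.0 and 6.0 m plus wall points.
    slabs = np.concatenate([rng.normal(h, 0.02, 50_000) for h in (0.0, 3.0, 6.0)])
    walls = rng.uniform(0.0, 6.0, 30_000)
    print(storey_boundaries(np.concatenate([slabs, walls])))
```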

23 pages, 7367 KiB  
Article
Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method
by Elyta Widyaningrum, Qian Bai, Marda K. Fajari and Roderik C. Lindenbergh
Remote Sens. 2021, 13(5), 859; https://doi.org/10.3390/rs13050859 - 25 Feb 2021
Cited by 25 | Viewed by 6346
Abstract
Classification of aerial point clouds with high accuracy is significant for many geographical applications, but it is not trivial, as the data are massive and unstructured. In recent years, deep learning for 3D point cloud classification has been actively developed and applied, but notably for indoor scenes. In this study, we implement the point-wise deep learning method Dynamic Graph Convolutional Neural Network (DGCNN) and extend its classification application from indoor scenes to airborne point clouds. This study proposes an approach to provide cheap training samples for point-wise deep learning using an existing 2D base map. Furthermore, essential features and spatial contexts to effectively classify airborne point clouds colored by an orthophoto are also investigated, in particular to deal with class imbalance and relief displacement in urban areas. Two airborne point cloud datasets of different areas are used: Area-1 (city of Surabaya—Indonesia) and Area-2 (cities of Utrecht and Delft—the Netherlands). Area-1 is used to investigate different input feature combinations and loss functions. The point-wise classification for four classes achieves a remarkable result, with 91.8% overall accuracy when using the full combination of spectral color and LiDAR features. For Area-2, different block size settings (30, 50, and 70 m) are investigated. It is found that using an appropriate block size of, in this case, 50 m improves the classification up to 93% overall accuracy but does not necessarily ensure better classification results for each class. Based on the experiments on both areas, we conclude that using DGCNN with proper settings is able to provide results close to production. Full article
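The core building block of DGCNN is the EdgeConv operation; the sketch below shows a single EdgeConv layer in that spirit (not the authors' implementation), with k and the channel sizes chosen arbitrarily.

```python
# Illustrative sketch of a single EdgeConv block in the spirit of DGCNN:
# for every point, the features of its k nearest neighbours are combined as
# (x_i, x_j - x_i), passed through a shared MLP and max-pooled over neighbours.
# k and channel sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_channels, out_channels, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_channels, out_channels, 1),
            nn.BatchNorm2d(out_channels), nn.ReLU(),
        )

    def forward(self, x):                       # x: (B, C, N)
        B, C, N = x.shape
        # k-nearest-neighbour graph in feature space (recomputed per layer in DGCNN).
        dist = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))          # (B, N, N)
        idx = dist.topk(self.k + 1, largest=False).indices[..., 1:]       # (B, N, k), drop self
        neighbours = torch.gather(
            x.unsqueeze(2).expand(B, C, N, N), 3,
            idx.unsqueeze(1).expand(B, C, N, self.k))                     # (B, C, N, k)
        centre = x.unsqueeze(3).expand_as(neighbours)
        edge = torch.cat([centre, neighbours - centre], dim=1)            # (B, 2C, N, k)
        return self.mlp(edge).max(dim=3).values                           # (B, out, N)

if __name__ == "__main__":
    feats = EdgeConv(in_channels=6, out_channels=64)(torch.randn(2, 6, 1024))
    print(feats.shape)                          # torch.Size([2, 64, 1024])
```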
