Deep Learning for Simultaneous Localization and Mapping (SLAM)

Special Issue Editors


Dr. Yasir Latif
Guest Editor
Australian Institute for Machine Learning (AIML), School of Computer Science, University of Adelaide, Adelaide, SA 5005, Australia
Interests: localization; mapping; deep learning; robust inference; place recognition; visual localization

Dr. Pulak Purkait
Guest Editor
Australian Institute for Machine Learning (AIML), School of Computer Science, University of Adelaide, Adelaide, SA 5005, Australia
Interests: Structure from Motion (SfM); deep learning; robust statistics; inverse problems

Special Issue Information

Simultaneous Localization and Mapping (SLAM) is a fundamental problem in mobile robotics: it allows a robot to localize itself in a previously unseen environment while simultaneously constructing a representation of that environment. With the recent resurgence of deep learning, challenges in traditional geometry-based SLAM have been addressed with learning-based techniques. Conversely, the multiview localization capability of SLAM has been exploited to learn better models. While progress continues in geometry-based SLAM, deep learning introduces a new set of tools that can be leveraged to further improve the performance of SLAM systems.

This Special Issue focuses on Simultaneous Localization and Mapping in general and encourages submissions that advance the state of the art both in geometry-based SLAM and in methods that apply deep learning to SLAM. Topics of interest for the Special Issue include, but are not limited to:

  • localization;
  • mapping;
  • place recognition under appearance change;
  • loop closure detection;
  • topological and metric relocalization;
  • deep learning for localization;
  • learned methods for mapping;
  • end-to-end deep-learned SLAM;
  • outlier-robust SLAM;
  • SLAM with novel sensors; and
  • deep-learned priors for SLAM.
Dr. Yasir Latif
Dr. Pulak Purkait
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • SLAM
  • deep learning
  • localization
  • mapping
  • robust inference
  • place recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)


Research

21 pages, 12287 KiB  
Article
EnvSLAM: Combining SLAM Systems and Neural Networks to Improve the Environment Fusion in AR Applications
by Giulia Marchesi, Christian Eichhorn, David A. Plecher, Yuta Itoh and Gudrun Klinker
ISPRS Int. J. Geo-Inf. 2021, 10(11), 772; https://doi.org/10.3390/ijgi10110772 - 12 Nov 2021
Cited by 11 | Viewed by 4387
Abstract
Augmented Reality (AR) has increasingly benefited from the use of Simultaneous Localization and Mapping (SLAM) systems. This technology has enabled developers to create markerless AR applications, but these still lack semantic understanding of their environment. Including such information would allow AR applications to react to their surroundings more realistically. To gain semantic knowledge, recent work has focused on fusing SLAM systems with neural networks, giving birth to the field of Semantic SLAM. Building on existing research, this paper aims to create a SLAM system that generates a 3D map using ORB-SLAM2 and enriches it with semantic knowledge obtained from the Fast-SCNN network. The key novelty of our approach is a new method for improving the predictions of neural networks, employed to balance the loss of accuracy introduced by efficient real-time models. Exploiting the sensors available on a smartphone, GPS coordinates are used to query the OpenStreetMap database; the returned information identifies which classes are absent from the current environment, so that they can be removed from the network's prediction to improve its accuracy. We achieved 87.40% Pixel Accuracy with Fast-SCNN on our custom version of COCO-Stuff, and incorporating GPS data raised Pixel Accuracy to 90.24% on our self-recorded smartphone dataset. With smartphone deployment in mind, the implementation seeks a trade-off between accuracy and efficiency, relying on a carefully designed system and lightweight neural networks. The result is an above-real-time Semantic SLAM system that we call EnvSLAM (Environment SLAM). Our extensive evaluation confirms the efficiency of the system and its above-real-time operation (48.1 frames per second at an input image resolution of 640 × 360 pixels). Moreover, the GPS integration yields an effective improvement in the network's prediction accuracy.
(This article belongs to the Special Issue Deep Learning for Simultaneous Localization and Mapping (SLAM))
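The GPS-assisted prediction filtering described in the abstract can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' code: it assumes per-pixel class probabilities from a segmentation network, and `classes_near` is a hypothetical stand-in for the OpenStreetMap query.

```python
# Minimal sketch of GPS-assisted prediction filtering (illustrative only).
# `classes_near` is a hypothetical stand-in for the OpenStreetMap lookup.
import numpy as np

def classes_near(lat: float, lon: float) -> set:
    """Hypothetical OSM query: class IDs plausible at this GPS location."""
    return {0, 1, 2}  # placeholder result

def filter_prediction(probs: np.ndarray, lat: float, lon: float) -> np.ndarray:
    """probs: (H, W, C) per-pixel class probabilities from the network."""
    allowed = classes_near(lat, lon)
    masked = probs.copy()
    for c in range(probs.shape[-1]):
        if c not in allowed:
            masked[..., c] = 0.0         # class judged absent at this location
    return masked.argmax(axis=-1)         # refined per-pixel labels
```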

16 pages, 23165 KiB  
Article
A Visual SLAM Robust against Dynamic Objects Based on Hybrid Semantic-Geometry Information
by Sheng Miao, Xiaoxiong Liu, Dazheng Wei and Changze Li
ISPRS Int. J. Geo-Inf. 2021, 10(10), 673; https://doi.org/10.3390/ijgi10100673 - 4 Oct 2021
Cited by 5 | Viewed by 2370
Abstract
A visual localization approach robust against dynamic objects, based on hybrid semantic-geometry information, is presented. Moving objects in real environments can corrupt a traditional simultaneous localization and mapping (SLAM) system. To address this problem, we propose a method for static/dynamic image segmentation that leverages semantic and geometric modules, including optical flow residual clustering, epipolar constraint checks, semantic segmentation, and outlier elimination. We integrated the proposed approach into the state-of-the-art ORB-SLAM2 and evaluated its performance on both public datasets and a quadcopter platform. Experimental results show that the root-mean-square error of the absolute trajectory error improves, on average, by 93.63% on highly dynamic benchmarks compared with ORB-SLAM2. Thus, the proposed method can improve the performance of state-of-the-art SLAM systems in challenging scenarios.
(This article belongs to the Special Issue Deep Learning for Simultaneous Localization and Mapping (SLAM))
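One of the geometric modules mentioned above, the epipolar constraint check, can be sketched generically: a matched point is flagged as potentially dynamic if it lies far from the epipolar line induced by its correspondence in the previous frame. The snippet below is an illustration under assumptions (OpenCV, a RANSAC-estimated fundamental matrix, a pixel threshold), not the paper's implementation.

```python
# Generic epipolar-constraint check for flagging dynamic feature matches
# (illustrative sketch; estimator and threshold are assumptions).
import cv2
import numpy as np

def dynamic_match_mask(pts1: np.ndarray, pts2: np.ndarray, thresh_px: float = 1.0) -> np.ndarray:
    """pts1, pts2: (N, 2) matched pixel coordinates in two frames."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
    lines = (F @ pts1_h.T).T                              # epipolar lines a*x + b*y + c = 0 in frame 2
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    dist = np.abs(a * pts2[:, 0] + b * pts2[:, 1] + c) / np.sqrt(a ** 2 + b ** 2)
    return dist > thresh_px                               # large residual -> likely a moving point
```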

16 pages, 4179 KiB  
Article
A Wireless Fingerprint Positioning Method Based on Wavelet Transform and Deep Learning
by Da Li and Zhao Niu
ISPRS Int. J. Geo-Inf. 2021, 10(7), 442; https://doi.org/10.3390/ijgi10070442 - 29 Jun 2021
Cited by 3 | Viewed by 2005
Abstract
As the demand for location services increases, research on positioning technology has attracted great interest. In particular, signal-based fingerprint positioning has become a research hotspot owing to its high positioning performance. In general, the received signal strength indicator (RSSI) is used as the location feature to build a fingerprint database; however, at different locations this feature may not be sufficiently distinctive, resulting in low positioning accuracy. Since the wavelet transform can extract valuable features from signals, long-term evolution (LTE) signals were converted into wavelet feature images to construct the fingerprint database. To fully extract the signal features, a two-level hierarchical positioning system is proposed to achieve satisfactory positioning accuracy. A deep residual network (ResNet) rough locator learns useful features from the wavelet-feature fingerprint image database. Then, inspired by transfer learning, a fine locator based on a multilayer perceptron (MLP) further learns the features of the wavelet fingerprint images to obtain better localization performance. Additionally, several data augmentation techniques were adopted to increase the richness of the fingerprint dataset, thereby enhancing the robustness of the positioning system. Experimental results indicate that the proposed system improves positioning performance in outdoor environments.
(This article belongs to the Special Issue Deep Learning for Simultaneous Localization and Mapping (SLAM))
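As a rough illustration of the wavelet feature images described above, the sketch below converts an RSSI sequence into a scalogram using PyWavelets. The wavelet ("morl") and scale range are assumptions made for illustration; the paper's exact transform is not reproduced here.

```python
# Sketch: turn an RSSI time series into a wavelet "feature image"
# (wavelet and scales are illustrative assumptions).
import numpy as np
import pywt

def rssi_to_wavelet_image(rssi: np.ndarray, scales=None, wavelet: str = "morl") -> np.ndarray:
    """rssi: (T,) signal strength samples. Returns a (num_scales, T) scalogram in [0, 1]."""
    if scales is None:
        scales = np.arange(1, 65)
    coeffs, _ = pywt.cwt(rssi, scales, wavelet)
    img = np.abs(coeffs)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)   # normalized feature image
```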

21 pages, 8570 KiB  
Article
PLD-SLAM: A New RGB-D SLAM Method with Point and Line Features for Indoor Dynamic Scene
by Chenyang Zhang, Teng Huang, Rongchun Zhang and Xuefeng Yi
ISPRS Int. J. Geo-Inf. 2021, 10(3), 163; https://doi.org/10.3390/ijgi10030163 - 13 Mar 2021
Cited by 26 | Viewed by 4067
Abstract
RGB-D SLAM (Simultaneous Localization and Mapping) generally performs well in static environments. In dynamic scenes, however, dynamic features often cause wrong data associations, which degrade accuracy and robustness. To address this problem, a new RGB-D dynamic SLAM method, PLD-SLAM, based on point and line features, is proposed. First, to avoid the under- and over-segmentation caused by deep learning, PLD-SLAM combines deep semantic segmentation with a K-Means clustering algorithm that considers depth information to detect the underlying dynamic features. Next, two consistency-check strategies are used to verify and filter out dynamic features more reliably. Then, unlike most published dynamic SLAM algorithms, which rely on point features alone, both point and line features are used for pose estimation: an optimization model over point and line features is constructed to compute the camera pose with higher accuracy. Finally, extensive experiments on the public TUM RGB-D dataset and on real-world scenes verify the localization accuracy and performance of PLD-SLAM. We compare our results with several state-of-the-art dynamic SLAM methods in terms of average localization error and the visual difference between the estimated and ground-truth trajectories. These comprehensive comparisons demonstrate that PLD-SLAM achieves comparable or better performance in dynamic scenes. Moreover, the benefit of estimating the camera pose from both point and line features is confirmed by a comparison with a point-only variant of PLD-SLAM.
(This article belongs to the Special Issue Deep Learning for Simultaneous Localization and Mapping (SLAM))
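The combination of a semantic mask with depth-aware K-Means clustering mentioned in the abstract can be pictured with the sketch below. It is an illustration, not the authors' pipeline: the choice of two clusters and of keeping the nearest-depth cluster as the dynamic object are assumptions.

```python
# Illustrative sketch: refine a coarse semantic mask of a potentially dynamic
# object with K-Means over depth values (assumptions: k=2, nearest cluster kept).
import numpy as np
from sklearn.cluster import KMeans

def refine_dynamic_mask(semantic_mask: np.ndarray, depth: np.ndarray, k: int = 2) -> np.ndarray:
    """semantic_mask: (H, W) bool mask from the network; depth: (H, W) depth map."""
    ys, xs = np.nonzero(semantic_mask)
    d = depth[ys, xs].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(d)
    nearest = int(np.argmin([d[labels == i].mean() for i in range(k)]))
    refined = np.zeros_like(semantic_mask, dtype=bool)
    keep = labels == nearest
    refined[ys[keep], xs[keep]] = True    # keep only the nearest-depth cluster as dynamic
    return refined
```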

15 pages, 8034 KiB  
Article
Deep Learning for Fingerprint Localization in Indoor and Outdoor Environments
by Da Li, Yingke Lei, Xin Li and Haichuan Zhang
ISPRS Int. J. Geo-Inf. 2020, 9(4), 267; https://doi.org/10.3390/ijgi9040267 - 20 Apr 2020
Cited by 17 | Viewed by 3332
Abstract
Wi-Fi and magnetic field fingerprinting-based localization has gained increasing attention owing to its satisfactory accuracy and global availability. Common signal-based fingerprint localization deteriorates due to well-known signal fluctuations. In this paper, we propose a Wi-Fi and magnetic field-based localization system built on deep learning. Owing to the low discernibility of magnetic field strength (MFS) over large areas, an unsupervised density peak clustering algorithm based on comparison distance (CDPC) is first used to select several MFS center points as geotagged features to assist localization. Motivated by the success of deep learning in image classification, we design a location fingerprint image combining Wi-Fi and magnetic field fingerprints. Localization is cast as learning in a proposed deep residual network (ResNet) capable of extracting key features from a massive fingerprint image database. To further enhance localization accuracy, an MLP-based fine localizer is introduced that leverages, via transfer learning, the prior information in the pre-trained ResNet coarse localizer. Additionally, we dynamically adjust the learning rate (LR) and adopt several data augmentation methods to increase the robustness of our localization system. Experimental results show that the proposed system achieves satisfactory localization performance in both indoor and outdoor environments.
(This article belongs to the Special Issue Deep Learning for Simultaneous Localization and Mapping (SLAM))
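The coarse-to-fine transfer-learning idea, a pre-trained ResNet coarse localizer whose features feed an MLP fine localizer, can be sketched as follows. This is a minimal PyTorch illustration under assumptions (ResNet-18 backbone, 2-D position regression), not the authors' architecture.

```python
# Minimal sketch of a coarse-to-fine fingerprint localizer (illustrative assumptions:
# ResNet-18 backbone standing in for the pre-trained coarse locator, 2-D position output).
import torch
import torch.nn as nn
from torchvision import models

class FineLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)              # would be the pre-trained coarse locator
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False                            # transfer learning: freeze coarse features
        self.head = nn.Sequential(                             # MLP fine locator
            nn.Flatten(), nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2)
        )

    def forward(self, fingerprint_image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(fingerprint_image))    # predicted (x, y) position
```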
