Article

Integrating Sparse Learning-Based Feature Detectors into Simultaneous Localization and Mapping—A Benchmark Study

1 Dipartimento di Ingegneria, Università degli Studi di Perugia, 06125 Perugia, Italy
2 ART S.p.A. Company, 06065 Perugia, Italy
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 2286; https://doi.org/10.3390/s23042286
Submission received: 19 January 2023 / Revised: 14 February 2023 / Accepted: 17 February 2023 / Published: 18 February 2023
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)

Abstract
Simultaneous localization and mapping (SLAM) is one of the cornerstones of autonomous navigation systems in robotics and the automotive industry. Visual SLAM (V-SLAM), which relies on image features, such as keypoints and descriptors, to estimate the pose transformation between consecutive frames, is a highly efficient and effective approach for gathering environmental information. With the rise of representation learning, feature detectors based on deep neural networks (DNNs) have emerged as an alternative to handcrafted solutions. This work examines the integration of sparse learned features into a state-of-the-art SLAM framework and benchmarks handcrafted and learning-based approaches through in-depth experiments. Specifically, we replace the ORB detector and BRIEF descriptor of the ORBSLAM3 pipeline with those provided by Superpoint, a DNN model that jointly computes keypoints and descriptors. Experiments on three publicly available datasets from different application domains were conducted to evaluate the pose estimation performance and resource usage of both solutions.

1. Introduction

Autonomous driving is certainly one of the most popular research directions in the robotics and intelligent transportation communities. The core capabilities of an autonomous driving agent are grounded on the navigation stack, which is composed of the following components: localization, mapping, path planning, and collision avoidance. Among these, the localization module is probably the most crucial one, being the prerequisite for the proper functioning of all the others. Therefore, its performance is of utmost importance for the success of the overall navigation pipeline.
Localization aims to estimate and describe the agent pose, i.e., its position and orientation in 3D space. This information is extracted from the input data provided by the available sensors of the agent, such as LIDAR, lasers, monocular or stereo cameras, and IMUs. Cameras are particularly attractive, due to their low cost, weight, and energy consumption, and the significant amount of information about the surrounding environment contained in the collected images. Stereo vision is probably the most common configuration, and several works and commercial products have proven its effectiveness. Nonetheless, its accuracy is tightly linked to the correctness of the stereo calibration, the estimated parameters of which might drift over time due to physical modifications of the rig (e.g., thermal expansion and contraction). The monocular setup considered in this paper does not suffer from this limitation. This advantage comes at the cost of more challenging image processing algorithms (He et al. [1]), due to the well-known scale drift and single-point-of-failure (SPOF) problems (Aqel et al. [2], Yang et al. [3]).
From a methodological point of view, localization is achieved by relying on Visual Odometry (VO) or Simultaneous Localization And Mapping (SLAM), as presented in numerous literature works e.g., Yousif et al. [4], Agostinho et al. [5] and Chen et al. [6]. Most of these approaches leverage image features (i.e., keypoints) tracking across multiple frames to estimate the camera ego-motion.
For decades, feature detection and description methods were hand-engineered to guarantee invariance to geometric or photometric changes and robustness in the matching procedures. State-of-the-art feature extraction algorithms have shown impressive results in many applications (Chen et al. [7], Macario et al. [8]). Starting from the pioneering work by Moravec [9], we have witnessed a rapid improvement of feature extractors, from the Harris detector by Harris et al. [10] and the blob detector by Matas et al. [11], to the famous SIFT by Lowe et al. [12], FAST by Rosten et al. [13] and SURF by Bay et al. [14].
More recently, the ORB extractor by Rublee et al. [15] has become quite popular as the pillar of the famous ORB–SLAM framework by Mur-Artal et al. [16], which has been further improved in its subsequent versions, i.e., ORBSLAM2 (Mur et al. [17]) and ORBSLAM3 (Campos et al. [18]).
Choosing the feature extractor that best suits the application at hand is, in general, not trivial. Each method might be appropriate for specific conditions (e.g., illumination or blur degree) or scenarios (e.g., automotive, indoor, aerial). In addition, these approaches rely on many hyper-parameters whose optimal values are hardly known a priori and could change significantly from one context to another.
Viable alternatives to hand-engineered feature extractors have recently been proposed through the exploitation of data-driven representation learning approaches (Jing et al. [19]). Deep learning is widely utilized for this objective, and new techniques based on both supervised and self-supervised approaches are continuously being developed. Models based on Convolutional Neural Networks (CNNs), in particular, are capable of computing features that exhibit robust invariance to geometric and photometric changes, including illumination, background, viewpoint, and scale (Wang et al. [20]).
A pioneering contribution to the field of learned features and descriptors was made by Verdie et al. [21], which laid the foundations for learning-based keypoint and descriptor detection by proposing a simple method to identify potentially stable points in training images, and a regression model that detects keypoints invariant to weather and illumination changes. Lenc et al. [22], instead, proposed DetNet, which learns to compute local covariant features. This approach was improved by Zhang et al. [23] with TCDET, which ensures the detection of discriminative image features and guarantees repeatability. Finally, Barroso-Laguna et al. [24] fused hand-engineered and learned filters to obtain a lightweight and efficient detector with real-time performance.
The aforementioned methods treat feature detection and description as separate problems. In the last few years, strategies that jointly detect features and compute the associated descriptors have gained traction. LIFT by Yi et al. [25] was among the first attempts in this direction and was followed by other successful models, including Superpoint by Detone et al. [26], LF-Net by Ono et al. [27] and D2-Net by Dusmanu et al. [28].
While the previous works propose general purpose algorithms, recent research has steered attention toward the integration of learning-based feature extractors into VO and SLAM systems. DXSLAM by Li et al. [29] exploits CNN-based features to obtain scene representations that are robust to environmental changes and combines them into the SLAM pipeline. To meet the high demand for neural network models for graph data, Graph Convolutional Networks (GCNs) were introduced by Derr et al. [30]. GCN approaches bring significant improvements in a variety of network analysis tasks, one of which is learning node representations. In particular, they were successfully used by Tang et al. in [31], and in the improved version GCNv2 (Tang et al. [32]), for the task of keypoint and descriptor extraction.
DROID–SLAM by Teed et al. [33] relies, instead, on the end-to-end paradigm and is characterized by a neural architecture that can be easily adapted to different setups (monocular, stereo and RGB-D).
Inspired by the discussions above, this work intended to provide a benchmark study for researchers and practitioners that highlights the differences between learning-based descriptors and hand-engineered ones in visual SLAM algorithms. For this purpose, we selected one of the most popular visual SLAM algorithms, i.e., ORBSLAM3 by Campos et al. [18], and studied how the localization performance changed when its standard feature extraction pipeline was replaced with learning-based feature extractors. Namely, we employed Superpoint as the CNN-based keypoint detector and descriptor extractor. The two versions of ORBSLAM3 were compared on three different reference datasets widely employed in the robotics community. Different metrics were considered to provide an extensive quantitative analysis.
An attempt to combine Superpoint with ORBSLAM2 was also made by Deng et al. [34]. However, their analysis was limited to a single dataset, without studying the impact that the different hyper-parameters of ORBSLAM2 had on the overall performance. Instead, we propose an in-depth benchmark that includes and discusses localization performance and memory and computational resource consumption, and compares the two ORBSLAM3 versions (the first with Superpoint and the second with standard hand-engineered ORB features) under several hyper-parameter configurations.

Contribution

As highlighted by the literature review, in recent years learned features have emerged as a promising alternative to traditional handcrafted features. Despite their demonstrated robustness to image non-idealities and their generalization capabilities, to the best of our knowledge, there has been limited research directly comparing learned and handcrafted features in visual odometry applications. Thus, we believed it was crucial to conduct a thorough benchmark between learned and handcrafted features to assess their relative strengths and limitations in the context of visual odometry and SLAM.
This study could be beneficial to inform future development efforts, guiding the design and implementation of more effective and efficient algorithms. Moreover, comparing the performance of learned and handcrafted features on a diverse range of datasets provides a better understanding of the generalization capabilities of the approaches and the applicability of the algorithms to real-world scenarios.
To summarize, our contributions are as follows:
  • A study on the integration of learned features into the ORBSLAM3 pipeline.
  • A thorough evaluation of both learned and hand-crafted features across three diverse datasets covering different application domains.
  • A performance comparison between learned and handcrafted features in terms of computational efficiency and execution timing.
The present study is structured as follows. Section 2 provides a comprehensive overview of the algorithms employed in the work and summarizes the main contribution of our work. The methodology and implementation process are explained in detail in Section 3. The results of the experiments carried out in this study are presented in Section 4, and their implications are discussed and analyzed in Section 5.
Finally, Section 6 concludes the study by offering insights that are valuable for future research and development.

2. Background

The high-level pipeline of a SLAM system can be divided into two main blocks: front-end and back-end. The front-end block is responsible for feature extraction, tracking, and 3D triangulation. The back-end, on the other hand, integrates the information provided by the front-end and fuses the IMU measurements (in the case of VIO approaches) to update the current and past pose estimates.
The aim of this work was to evaluate the performance of the overall SLAM pipeline when the hand-engineered feature extractors of the front-end were replaced with learning-based ones. More specifically, as mentioned in the previous section, we integrated Superpoint in the front-end of the ORBSLAM3 framework and compared this configuration against the standard one. Therefore, in the following, we briefly summarize the main characteristics of the two methods, while in Section 3 we describe the changes made to the ORBSLAM3 pipeline in order to perform the integration with Superpoint.

2.1. ORBSLAM3

ORBSLAM3 (Campos et al. [18]) has become one of the most popular modern keyframe-based SLAM systems, thanks to the impressive results shown in different scenarios. It can work with monocular, stereo, and RGB-D camera setups and can be used in visual, visual–inertial, and multi-map SLAM settings.
When ORBSLAM3 is used in the monocular configuration, we can identify three main threads that run in parallel:
  • The Tracking thread receives, as input, the current camera frame and outputs an estimated pose. If the incoming frame is selected as a new keyframe, this information is forwarded to the back-end for global optimization purposes. In this stage, the algorithm extracts keypoints and descriptors from the input images using the ORB feature detector and the BRIEF feature descriptor. Moreover, the algorithm matches the keypoints and descriptors from the current image to those from previous images. This stage uses the global descriptor index, a data structure that allows the efficient matching of features across multiple images. A prior motion estimation is also performed in this thread.
  • The Local Mapping thread handles the insertion and removal of map points and performs map optimization. The local mapping thread is responsible for incorporating new keyframes and points, pruning any unnecessary information, and improving the accuracy of the map through the visual (or visual–inertial) bundle adjustment (BA) optimization process. This is accomplished by focusing on a small group of keyframes near the current frame.
  • The Loop & Map Matching thread identifies loop closures. If a loop is detected, a global optimization procedure is triggered to update the map points and poses to achieve global consistency.
ORBSLAM3 employs ORB (Rublee et al. [15]) as a feature extractor, which relies on FAST (Rosten et al. [13]) for keypoint detection and BRIEF by Calonder et al. [35] to compute a rotation-invariant 256-bit descriptor vector. ORB is a fast detector that extracts features that exhibit different invariances, such as viewpoint and illumination, and has high repeatability. This makes it well-suited for challenging environments, where the camera motion is fast and erratic. On the other hand, ORB features are not completely scale-invariant and are sensitive to orientation changes.
To overcome these limitations, in the ORBSLAM3 implementation, the ORB feature extraction process leverages an image pyramid strategy: multiple scaled versions of the same frame are used to compute features at different image resolutions. While this improves the robustness with respect to scale variations, it entails a higher computational cost.
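Although the extraction itself runs inside the ORBSLAM3 C++ code base, the multi-scale behaviour described above can be illustrated with OpenCV's ORB implementation. The sketch below is a stand-in under that assumption, not the ORBSLAM3 extractor; the frame path and the 1.2 pyramid scale factor are illustrative values.

```python
# Illustrative multi-scale ORB extraction with OpenCV (a stand-in for the
# ORBSLAM3 C++ extractor, not the actual implementation). The nfeatures,
# nlevels, and scaleFactor arguments mirror the parameters discussed above.
import cv2

def extract_orb_multiscale(gray, n_features=1000, n_levels=8, scale_factor=1.2):
    """Detect ORB keypoints and their 256-bit binary descriptors on an image pyramid."""
    orb = cv2.ORB_create(nfeatures=n_features,
                         scaleFactor=scale_factor,
                         nlevels=n_levels)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # Each descriptor row is 32 bytes (256 bits) and is matched with the Hamming distance.
    return keypoints, descriptors

if __name__ == "__main__":
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    kps, desc = extract_orb_multiscale(img)
    print(len(kps), None if desc is None else desc.shape)
```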
In this work, we replaced the ORB feature extractor with the Superpoint one in the ORBSLAM3 pipeline to assess the benefits and the limitations of sparse learning-based features for pose estimation applications. Therefore, in the following, we describe the characteristics of Superpoint before providing details on its integration with ORBSLAM3.

2.2. Superpoint

The Superpoint learning model is based on a self-supervised strategy able to jointly extract keypoints and the associated descriptors. It exploits a convolutional encoder–decoder architecture to extract features and descriptors through two different decoder pipelines. Specifically, the Superpoint pipeline (Figure 1) receives a 3-channel color image, which is converted into a single-channel grayscale image with dimensions H × W × 1 (where H and W are the height and the width of the image in pixels, respectively), and outputs a dense H × W map whose pixel values express the probability of being a Superpoint. The descriptor decoder pipeline, instead, computes a unique 256-element descriptor for each detected keypoint.
Superpoint takes advantage of a homographic adaptation strategy during the learning phase to achieve robustness to scale and geometric scene structure changes without the need to have ground truth keypoints and descriptors. In addition, by exploiting GPU parallelism, the evaluation phase of the algorithm is fast and compatible with the real-time constraints of most applications.
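To make the tensor shapes concrete, the following stand-in module mimics only the data flow described above: a grayscale H × W input, a shared 1/8-resolution encoding, a dense keypoint probability map, and 256-D descriptors. It is not the trained Superpoint network; the layer sizes are arbitrary, and the 65-channel detector head (8 × 8 cell bins plus a "no keypoint" dustbin) follows the description in the original paper by Detone et al. [26].

```python
# Minimal shape-only sketch (not the real Superpoint weights or architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperpointLikeStub(nn.Module):
    def __init__(self, desc_dim=256):
        super().__init__()
        # Stand-in encoder: three stride-2 convolutions -> 1/8 resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detector head: 65 channels = 8x8 cell bins + "no keypoint" dustbin.
        self.detector = nn.Conv2d(128, 65, 1)
        # Descriptor head: desc_dim channels, later upsampled and L2-normalized.
        self.descriptor = nn.Conv2d(128, desc_dim, 1)

    def forward(self, gray):                      # gray: (B, 1, H, W)
        feat = self.encoder(gray)                 # (B, 128, H/8, W/8)
        logits = self.detector(feat)              # (B, 65, H/8, W/8)
        prob = F.softmax(logits, dim=1)[:, :-1]   # drop the dustbin channel
        heatmap = F.pixel_shuffle(prob, 8)        # (B, 1, H, W) keypoint probabilities
        desc = F.normalize(self.descriptor(feat), dim=1)  # (B, 256, H/8, W/8)
        return heatmap, desc

if __name__ == "__main__":
    model = SuperpointLikeStub().eval()
    heatmap, desc = model(torch.rand(1, 1, 480, 640))
    print(heatmap.shape, desc.shape)  # (1, 1, 480, 640), (1, 256, 60, 80)
```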

3. Superpoint Integration with ORBSLAM3

This section details the integration of the Superpoint feature extraction pipeline with the front-end of the ORBSLAM3 framework. Specifically, we replace the ORB extraction module of ORBSLAM3 with the keypoints and descriptors computed by feeding input frames into the Superpoint network.
Figure 1 shows the implemented pipeline. Specifically, the learned encoder sub-block of the Superpoint convolutional encoder–decoder architecture (sub-block A in Figure 1) is used to extract robust visual cues from the image, and consists of convolutional layers, spatial downsampling via pooling, and non-linear activation functions. The H × W encoder input is converted into an H/8 × W/8 feature map by the network's convolutional layers.
Features and descriptors are then extracted through two different non-learned decoder pipelines, namely the interest point decoder and the descriptor decoder, respectively. Both decoders receive as input M × N × 1 images. The interest point decoder outputs a M × N × 1 matrix of “Superpointness” probability. On the other hand, the descriptor decoder outputs the associated M × N × D (with D = 256 ) matrix of keypoint descriptors.
Both decoders operate on a shared, and spatially reduced, representation of the input and use non-learned up-sampling to bring the representation back to H × W .
The integration of Superpoint into the ORBSLAM3 back-end is not a direct process. This is due to the fact that the FAST detector in ORBSLAM3 was specifically designed to operate across multiple pyramid levels and various image sub-blocks, to ensure an evenly distributed keypoint arrangement. Superpoint, instead, processes the full-resolution image and outputs a dense map of keypoint probabilities and associated descriptors. Therefore, we modified the ORBSLAM3 sub-blocks extraction methods to meet this specification.
To allow Superpoint to process the image at multiple pyramid levels similarly to ORB, we scaled the input frame according to the number of levels required and forwarded each scaled image through the neural network independently. Concerning the image sub-block division, we noticed that Superpoint could intrinsically extract features over the entire image. Therefore, differing from ORBSLAM3, feature sparsity could be achieved without additional image processing strategies, such as sub-cell division. Instead, since Superpoint computes a dense keypoint probability map, we thresholded it to select the K features with the highest probability. This procedure enabled the control of the number of keypoints extracted and imposed constraints on both the Superpoint and ORB extractors by ensuring a consistent number of features.
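A minimal sketch of this selection step is given below, assuming a dense probability map has already been produced by the network; run_superpoint is a placeholder for the forward pass, and the confidence threshold and 1.2 scale factor are illustrative values, not the ones tuned in our pipeline.

```python
# Sketch of multi-scale Superpoint extraction with top-K keypoint selection:
# each scaled copy of the frame is forwarded independently, the dense
# probability map is thresholded, and only the K strongest responses are kept.
import numpy as np
import cv2

def top_k_keypoints(heatmap, k=1000, conf_thresh=0.015):
    """Return (row, col, score) for the K highest-probability pixels above threshold."""
    ys, xs = np.where(heatmap > conf_thresh)
    scores = heatmap[ys, xs]
    order = np.argsort(-scores)[:k]
    return np.stack([ys[order], xs[order], scores[order]], axis=1).astype(float)

def extract_multiscale(gray, run_superpoint, n_levels=4, scale_factor=1.2, k=1000):
    """Forward each pyramid level through the network and merge the selected keypoints."""
    keypoints = []
    for level in range(n_levels):
        scale = 1.0 / (scale_factor ** level)
        scaled = cv2.resize(gray, None, fx=scale, fy=scale)
        heatmap, _descriptors = run_superpoint(scaled)   # dense keypoint probability map
        kps = top_k_keypoints(heatmap, k=k)
        kps[:, :2] /= scale                              # map coordinates back to full resolution
        keypoints.append(kps)
    return np.concatenate(keypoints, axis=0)
```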
Differing from the ORB descriptors, which are characterized by a 256-bit vector, Superpoint provides vectors of float values that cannot be matched with the bit-wise Hamming distance used in ORBSLAM3. Instead, for Superpoint features, we adopted the L2-norm between descriptor vectors and tuned the matching thresholds accordingly.
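The difference between the two matching strategies can be sketched as follows; the brute-force nearest-neighbour search and the distance thresholds are illustrative, not the tuned values used in SUPERSLAM3.

```python
# ORB's binary descriptors are compared with the Hamming distance, while the
# float-valued Superpoint descriptors are compared with the L2 norm.
import numpy as np

def match_hamming(desc_a, desc_b, max_dist=50):
    """Brute-force match of 256-bit (32-byte) ORB descriptors given as uint8 arrays."""
    # XOR + popcount over the byte representation gives the Hamming distance.
    dists = np.unpackbits(desc_a[:, None, :] ^ desc_b[None, :, :], axis=-1).sum(-1)
    best = dists.argmin(axis=1)
    return [(i, j) for i, j in enumerate(best) if dists[i, j] <= max_dist]

def match_l2(desc_a, desc_b, max_dist=0.7):
    """Brute-force match of float-valued Superpoint descriptors with the L2 norm."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    best = dists.argmin(axis=1)
    return [(i, j) for i, j in enumerate(best) if dists[i, j] <= max_dist]
```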
Finally, we also adapted the loop & map matching thread of ORBSLAM3 to use Superpoint features. In ORBSLAM3, the Bag-of-Words (BoW) representation is used to match local features between images, allowing the system to identify common locations across different frames. This approach is efficient and effective for place recognition, as it allows quick matching between images and robust recognition of similar environments, despite changes in lighting, viewpoint, and other factors. By utilizing the BoW representation, ORBSLAM3 can maintain an accurate and consistent mapping of environments, even in challenging conditions. For this purpose, we used the BoW vocabulary made available by Deng et al. [34]. The vocabulary was trained with the DBoW3 library (https://github.com/rmsalinas/DBow3, [36] accessed on 15 October 2022) on Bovisa_2008-09-01 (Bonarini et al. [37], Ceriani et al. [38]), using both outdoor and mixed sequences.
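As a conceptual illustration of how a bag-of-words representation supports place recognition (a toy stand-in, not the DBoW3 vocabulary trees used here), frames can be compared through quantized descriptor histograms:

```python
# Toy bag-of-words illustration: descriptors are quantized against a visual
# vocabulary and frames are compared through their (optionally TF-IDF-weighted)
# word histograms. Conceptual stand-in only.
import numpy as np

def bow_vector(descriptors, vocabulary, idf=None):
    """Quantize descriptors to the nearest vocabulary word and build a normalized histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=-1)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    if idf is not None:
        hist *= idf
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def bow_similarity(v1, v2):
    """Cosine similarity between two BoW vectors; higher means more likely the same place."""
    return float(np.dot(v1, v2))
```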
The full pipeline from input frames to output pose estimation can be summarized as follows:
  1. Image pre-processing: the image is resized and, if needed, rectified to remove lens distortion. The resulting frame is then converted into a single-channel grayscale format.
  2. Feature detection and description: the pipeline detects and describes salient features in the image using the Superpoint feature detector and descriptor. The feature extraction operation is performed by taking into account several scale levels of the input image to enhance scale invariance and increase the number of detected features.
  3. Keyframe selection: the system selects keyframes from the input images, which are frames deemed to be representative of the environment.
  4. Map initialization and update: the system uses the information from the keyframes to initialize a map of the environment and continually updates this map as new frames are processed.
  5. Motion estimation: the pipeline uses monocular visual odometry motion estimation algorithms to estimate the relative motion between the current and the previous frame.
  6. Pose estimation: the system integrates the motion estimates to compute the absolute pose of the camera in the environment.
  7. Loop closure detection: the pipeline regularly checks for loop closures, which occur when the camera revisits a previously visited location. If a loop closure is detected, the pipeline updates the map and refines the pose estimates.
In the following, to distinguish the standard ORBSLAM3 pipeline from the one that integrates Superpoint, we refer to the latter as SUPERSLAM3.

4. Experiments

In this section, we describe the experimental setup and discuss the results. Performance and resource utilization comparisons were designed to investigate the different effects of the Superpoint and standard ORB feature extraction pipelines on the ORBSLAM3 back-end.
We performed two different types of analysis: a performance analysis suited to comparing the localization accuracy achieved with ORB and Superpoint feature extractors, and a computational analysis that examined the memory and resource usage of the GPU and CPU. Finally, we provide and discuss the impact that the standard ORB extractor and the Superpoint network had on the execution times of the ORBSLAM3 feature extraction and feature matching blocks.
This work mainly focused on the comparison of a deep feature extractor against ORB. Therefore, we used the monocular configuration of ORBSLAM3, although the considerations we drew could be extended to the stereo, visual–inertial, and RGB-D cases.
To provide an extensive and in-depth analysis, multiple datasets from different domains (aerial, automotive, and hand-held) and scenarios (outdoor, indoor) were selected. To compare ORBSLAM3 and SUPERSLAM3 in hand-held camera scenarios, we considered the TUM-VI dataset by Schubert et al. [39]. It offered numerous challenges, including blur due to camera shaking, six degrees of freedom (6-DoF) motions, and rapid camera movements. To evaluate the algorithms in automotive and aerial scenarios, we chose the KITTI by Geiger et al. [40] and the EuRoC by Burri et al. [41] datasets, which, in general, were characterized by sudden appearance and photometric changes, and were subject to various motion constraints. Specifically, we ran tests on all six Vicon Room and five Machine Hall sequences of EuRoC (for a total of 27,049 frames), and on the eleven sequences (from 00 to 10) of the KITTI dataset (for a total of 23,201 frames). On the other hand, for TUM-VI we selected only the six Room sequences (from Room_1 to Room_6) (for a total of 16,235 frames), where ground truth poses estimated by the motion capture system were available for the entire trajectory.

4.1. Parameter Exploration

The original ORBSLAM3 implementation relies on different parameters that control tracking, loop closure, and matching behaviors. Among all of these, we focused on the two with the highest impact on the system performance, i.e., nFeatures and nLevels. Specifically, we considered several combinations of the latter parameters to assess how they affected the performance of the ORBSLAM3 and SUPERSLAM3 pipelines.
For both the algorithms analyzed, nFeatures defines the number N of features extracted from an image. In SUPERSLAM3, this aspect is controlled by extracting the N features with the highest Superpoint probability value. Instead, in ORBSLAM3, N represents the maximum number of features that are extracted for each level of the image pyramid scale. nLevels describes, for both approaches, the number of pyramid scale levels processed in the feature extraction pipeline, i.e., the number of times a frame is scaled before computing features and descriptors.
Other parameters introduced in the original ORBSLAM3 implementation are mostly dataset-dependent and, therefore, for both SUPERSLAM3 and ORBSLAM3 we experimentally found the best possible values and kept them unchanged during all tests. We also decided not to change the scaleFactor parameter.
Similar considerations were made for the (init/min)ThFAST FAST threshold parameters, with the exception that they were meaningless in SUPERSLAM3 and, therefore, considered only in the ORBSLAM3 evaluations.

4.2. Experimental Setup

The comparisons were performed by running ORBSLAM3 and SUPERSLAM3 on the sequences previously listed. For each sequence, multiple parameter configurations were analyzed by changing the number of pyramid scales used for image scaling (i.e., nLevels) among the values {1, 4, 8} and the number of keypoints extracted (i.e., nFeatures) among the values {500, 700, 900, 1000}.
The intervals for the parameters nLevels and nFeatures were established with the consideration that both ORBSLAM3 and SUPERSLAM3 often encounter initialization problems in most sequences when the number of features is below 500. On the other hand, increasing the number of features beyond 1000 does not result in improved performance for the algorithms and instead leads to a decrease in computational efficiency.
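The resulting experimental grid can be written compactly as below; run_slam is a placeholder for launching either ORBSLAM3 or SUPERSLAM3 on a sequence with a given configuration.

```python
# Parameter grid explored in the experiments: every combination of
# nFeatures in {500, 700, 900, 1000} and nLevels in {1, 4, 8},
# i.e. 12 configurations per sequence.
from itertools import product

N_FEATURES = [500, 700, 900, 1000]
N_LEVELS = [1, 4, 8]

def sweep(sequence, run_slam):
    results = {}
    for n_features, n_levels in product(N_FEATURES, N_LEVELS):
        config_name = f"{sequence}_{n_features}_{n_levels}"   # e.g. EuRoC_MH_05_700_8
        results[config_name] = run_slam(sequence, n_features, n_levels)
    return results
```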
Additionally, we conducted experiments on a subset of sequences, the images of which were deliberately blurred to evaluate the performance of the algorithms on non-ideal inputs.
In particular, we selected the MH_02 sequence from the EuRoC dataset, the room_6 sequence from the TUM-VI dataset, and the 07 sequence from the KITTI dataset and applied a Gaussian blur filter to their images with a patch size of 12 pixels and a standard deviation of 2.15 pixels in both directions (as depicted in Figure 2). The parameters for this set of experiments were based on the analysis performed on the standard sequences.
For MH_02, we set nFeatures to 700 and nLevels to 8. For room_6 and 07, nFeatures was set to 900 and nLevels to 4.
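A sketch of this blurring step with OpenCV is shown below. Since cv2.GaussianBlur accepts only odd kernel sizes, the 12-pixel patch described above is approximated here with a 13 × 13 kernel; the file paths are placeholders.

```python
# Sketch of the blurring used for the degraded-input experiments: a Gaussian
# filter with a standard deviation of 2.15 px in both directions.
import cv2

def blur_frame(image, sigma=2.15, ksize=13):
    # OpenCV requires an odd, positive kernel size, hence 13 x 13 instead of 12 x 12.
    return cv2.GaussianBlur(image, (ksize, ksize), sigmaX=sigma, sigmaY=sigma)

if __name__ == "__main__":
    frame = cv2.imread("kitti_07/000000.png")                 # placeholder input path
    cv2.imwrite("kitti_07_blurred/000000.png", blur_frame(frame))  # placeholder output path
```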

4.3. Evaluation Metrics

4.3.1. Pose Estimation Metrics

To quantitatively assess the accuracy of both approaches, we utilized various commonly used metrics.
In particular, we considered the absolute pose error (APE), which is composed of a rotational part (expressed in degrees) and a positional part (expressed in meters). The absolute pose error $E_i$ is a metric for investigating the global consistency of a trajectory. Given the ground-truth poses $P_{ref}$ and the aligned estimate $P_{est}$, we can define as $X_{ref,i}$ and $\hat{X}_{est,i}$, respectively, the $i$-th pose point of the ground truth and the estimated trajectories. The APE error can then be evaluated as the absolute relative pose between the estimated pose (1) and the real one (2) at timestamp $i$:

$\hat{X}_{est,i} \in P_{est}, \quad 0 \le i \le M$    (1)

$X_{ref,i} \in P_{ref}, \quad 0 \le i \le N, \quad M \le N$    (2)

where $N$ and $M$ are, respectively, the numbers of poses in the ground truth and the estimated trajectories. The APE statistic can be easily calculated from relation (3):

$E_i = P_{est,i} \ominus P_{ref,i} = P_{ref,i}^{-1} P_{est,i} \in SE(3)$    (3)
It can be decomposed into its translational (4) and rotational (5) components:

$APE_{tr,i} = \lVert \mathrm{trans}(E_i) \rVert$    (4)

$APE_{rot,i} = \lvert \mathrm{angle}(\log_{SO(3)}(\mathrm{rot}(E_i))) \rvert$    (5)

where $E_i$ belongs to $SE(3)$, the special Euclidean group of 3D rotations and translations, $\mathrm{trans}(\cdot)$ is a function that extracts the translation component of a pose, $\mathrm{rot}(\cdot)$ extracts the rotation matrix from the pose matrix $E_i$, and $\mathrm{angle}(\cdot)$ is a function that extracts the rotation angle from a rotation matrix. The exact form of the $\mathrm{angle}(\cdot)$ function may vary, depending on the convention used for the representation of the rotational part. Specifically, the logarithm of $\mathrm{rot}(E_i)$ gives the axis and angle of the relative rotation, which can then be converted into a scalar angle using the $\mathrm{angle}(\cdot)$ function. $\lvert\cdot\rvert$ and $\lVert\cdot\rVert$ are the absolute value and the Euclidean norm operators, respectively.
To measure the overall quality of the trajectory, we used the Root Mean Square Error (RMSE) of the Absolute Pose Error (APE), which was further divided into the Absolute Trajectory Error (ATE, as described in Equation (6)) and the Absolute Rotational Error (ARE, as described in Equation (7)). For simplicity, we refer to the RMSE values of ATE and ARE as ATE and ARE, respectively.
$ATE = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1} \lVert APE_{tr,i} \rVert^2}$    (6)

$ARE = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1} \lVert APE_{rot,i} \rVert^2}$    (7)
ATE and ARE are compact metrics to evaluate the position and rotation estimations and provide an immediate and quantitative measure to compare the tracking algorithms.
To generate the evaluation metrics, we used EVO (Grupp et al. [42]). However, EVO only considers the correspondence between the estimated trajectory and ground truth based on the timestamps, which may result in inaccurate outcomes and incorrect conclusions. This is because both SUPERSLAM3 and ORBSLAM3 may lose tracking and produce fewer poses than those provided by the ground truth.
Therefore, in our analysis, we chose to also include the Trajectory Ratio metric (TR, as defined in Equation (8)), along with ATE and ARE, to evaluate the proportion of estimated poses relative to the total number of ground truth poses:
$TR = (M/N) \times 100$    (8)
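For reference, the sketch below computes APE, ATE, ARE, and TR from timestamp-associated 4 × 4 SE(3) pose matrices; it mirrors Equations (3)–(8) under that assumption and is not the EVO implementation used to produce the reported numbers.

```python
# Minimal sketch of the evaluation metrics, assuming the estimated poses have
# already been aligned and associated with their ground-truth counterparts.
import numpy as np

def ape(P_ref, P_est):
    """Relative pose error E_i = P_ref^-1 @ P_est; returns (APE_tr in m, APE_rot in degrees)."""
    E = np.linalg.inv(P_ref) @ P_est
    ape_tr = np.linalg.norm(E[:3, 3])
    # angle(log_SO(3)(rot(E))) via the trace formula for rotation matrices.
    cos_angle = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    ape_rot = np.degrees(np.abs(np.arccos(cos_angle)))
    return ape_tr, ape_rot

def ate_are_tr(ref_poses, est_poses, n_ground_truth):
    """RMSE of the translational (ATE) and rotational (ARE) APE, plus the trajectory ratio TR."""
    errors = [ape(r, e) for r, e in zip(ref_poses, est_poses)]
    tr_err, rot_err = np.array(errors).T
    ate = np.sqrt(np.mean(tr_err ** 2))
    are = np.sqrt(np.mean(rot_err ** 2))
    tr = 100.0 * len(est_poses) / n_ground_truth        # TR = (M / N) * 100
    return ate, are, tr
```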

4.3.2. Memory Resource and Computational Metrics

We analyzed the computational statistics of both ORBSLAM3 and SUPERSLAM3, based on two main aspects:
  • Resource analysis, including the average allocated memory for CPU and GPU (both expressed in MB) and the utilization of computational resources (expressed as a percentage of the total available resources).
  • Time analysis (in milliseconds) for the main functional blocks of SUPERSLAM3, including the feature extraction module and the descriptor matching module.
We evaluated the average CPU and GPU memory allocation and computational resource utilization for all combinations of parameters. In addition, we conducted the time analysis by considering the average extraction and matching times for a 512 × 512 image as a reference. We evaluated the matching time, based on an average of 200 matched features. On the other hand, the extraction time depended on the values selected for nFeatures and nLevels. Hence, we provided time statistics for nFeatures = 1000 and nLevels = 1 both for SUPERSLAM3 and ORBSLAM3.
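A minimal timing harness of the kind used for this analysis is sketched below; extract is a placeholder for either feature extraction front-end, and the values reported in Table 6 were obtained with the actual implementations rather than with this Python sketch.

```python
# Illustrative timing harness: average per-call time, in milliseconds, over
# repeated runs on a 512 x 512 frame.
import time
import numpy as np

def average_time_ms(fn, *args, repeats=100):
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return 1000.0 * (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    extract = lambda img: None   # placeholder for the ORB or Superpoint front-end
    print(f"extraction: {average_time_ms(extract, frame):.2f} ms")
```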

4.4. Implementation and Training Details

For the SUPERSLAM3 tests, we used a set of pre-trained network weights, i.e., the original COCO-based training weight file provided by the Magic Leap research team (Detone et al. [43]). As the authors state in the original paper (Detone et al. [26]), the SuperPoint model is first trained on the Synthetic Shapes dataset and then refined on the 240 × 320 grayscale COCO generic image dataset (Lin et al. [44]) by exploiting the homographic adaptation process.
We ran all our tests using an Nvidia GeForce RTX 2080 Ti with 12.0 GB of dedicated RAM and an Intel(R) Core(TM) i7-9800X CPU @ 3.80 GHz with 64.0 GB of RAM.

5. Results and Discussion

For the purpose of clarity in presenting the results, we adopted the following compact notation to represent a dataset and the corresponding parameter configuration: dataset-name_sequence-name_nFeatures-value_nLevels-value (e.g., EuRoC_MH_05_700_8).
Based on the performance results, presented in Table 1, Table 2 and Table 3, we observed varying trends in the performance of ORBSLAM3 and SUPERSLAM3. While some sequences showed good tracking performance, in terms of ATE, ARE, and TR for both algorithms, there were others in which SUPERSLAM3 failed to initialize (indicated by fail in Table 1, Table 2 and Table 3). Conversely, there were also sequences where ORBSLAM3 was outperformed by SUPERSLAM3.
More specifically, in EuRoC (see Table 1) the results of both algorithms were comparable for all of the medium and low complexity sequences (MH_01, MH_02, MH_03, V1_01). However, in complex scenes, the performance of SUPERSLAM3 dropped, while ORBSLAM3 maintained a reasonable estimation accuracy. In our opinion, to explain this different behavior, it is important to note that EuRoC included sequences recorded by drones flying in indoor environments, which resulted in images affected by non-idealities specifically related to fast motions and poor illumination. While in some instances SUPERSLAM3 performed slightly better than ORBSLAM3 (e.g., for EuRoC_MH_05_700_8), upon visual inspection we noticed that Superpoint failed to detect keypoints in scenes with high levels of blur (e.g., due to rapid motion or rapid panning).
The KITTI dataset (Table 2) included outdoor automotive scenarios and had the highest number of sequences in which the algorithms tended to experience initialization failures. This was the case, for example, for SUPERSLAM3 when configured with nFeatures = 500 and nLevels = 1. We believe that most of the difficulties were related to the low amount of texture and distinctive cues in the sequences, e.g., due to the high portion of the images characterized by sky or asphalt, which reduced the number of informative features that could be detected. In general, from our understanding, this caused the results on KITTI to be worse than those obtained with the other two datasets, both for ORBSLAM3 and for SUPERSLAM3.
The TUM-VI dataset (see Table 3) included handheld scenes from an indoor environment and was the only dataset where both algorithms provided reasonable performance across all sequences and parameter configurations. From a quantitative point of view, performance was generally comparable, with ORBSLAM3 achieving slightly better results.
In Figure 3, we present three qualitative plots depicting the estimated trajectories of three sample sequences. By comparing these plots with the quantitative results in Table 1, Table 2 and Table 3, we observe that, on EuRoC_MH_05_700_8, SUPERSLAM3 outperformed ORBSLAM3 in terms of ATE and ARE. The trajectory ratio was comparable, indicating that both algorithms never lost the position tracking. On the other hand, in TUM_room_4_1000_8, SUPERSLAM3 performance was significantly worse than ORBSLAM3, with ATE and ARE values of 1.01 and 26.70 for SUPERSLAM3, and 0.08 and 2.48 for ORBSLAM3, respectively. The third trajectory from the KITTI dataset showed poor performance for both algorithms, particularly with regard to the ATE metric.
As expected, the results suggested that, as the number of features and pyramid levels increased, the values of TR increased, and ATE and ARE values decreased. In general, a larger number of features and pyramid levels could potentially improve the accuracy of the estimated trajectories, although the computational cost increased. It is worth noting that none of the trajectories estimated by SUPERSLAM3 had notably higher accuracy than those provided by ORBSLAM3. However, to better understand these results it is important to note that the set of Superpoint weights used in SUPERSLAM3 was trained on the COCO dataset. Hence, the detected keypoints were not specifically designed for automotive, handheld, or aerial applications. Therefore, the fact that SUPERSLAM3 could achieve performance comparable to ORBSLAM3 in most of the sequences was remarkable and showed its robustness and generalization capabilities.
On the blurred sequences, SUPERSLAM3 often lost feature tracking. This was particularly evident in the KITTI dataset, where turnings and curves were, in most cases, poorly estimated due to directional blur effects. This can also be observed in Table 4 which shows how ORBSLAM3 achieved low errors, even on blurred sequences, while SUPERSLAM3’s performance dropped significantly. We believe that this phenomenon was related to the absence of blurred images in the homographic adaptation technique used in the training process of the Superpoint network.
Computational analysis results are presented in Table 5 and Table 6 for resource and time analysis, respectively. It can be observed that the average CPU memory utilization was higher for SUPERSLAM3 compared to ORBSLAM3. This could be ascribed to the larger amount of feature data stored during the tracking process. In particular, both Superpoint and ORB features were represented by 256-element vectors. However, while each element of the ORB vector was represented by a binary value, the Superpoint descriptor contained 64-bit floats, which led to higher memory usage. The average GPU utilization of SUPERSLAM3 was mainly dependent on the number of pyramid stages that needed to be forwarded through the network and remained almost constant when changing the values of both the nFeatures and nLevels parameters. Indeed, Superpoint computed keypoints and descriptors for the entire image in a single forward pass, repeated for each level of the pyramid. On the other hand, the average GPU memory was mainly used to store the network weights.
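A back-of-the-envelope comparison of the per-frame descriptor footprint implied by these storage formats, assuming 1000 features per frame (the float32 line is included only as a reference point):

```python
# Descriptor memory per frame: a 256-bit ORB descriptor vs. a 256-element float descriptor.
ORB_BYTES = 256 // 8                 # 32 bytes per ORB descriptor
SP_BYTES_F64 = 256 * 8               # 2048 bytes if stored as 64-bit floats
SP_BYTES_F32 = 256 * 4               # 1024 bytes if stored as 32-bit floats

n_features = 1000
print(f"ORB:        {n_features * ORB_BYTES / 1024:.1f} KiB per frame")
print(f"Superpoint: {n_features * SP_BYTES_F64 / 1024:.1f} KiB per frame (float64)")
print(f"Superpoint: {n_features * SP_BYTES_F32 / 1024:.1f} KiB per frame (float32)")
```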
According to the time statistics analysis, the computation of Superpoint features and descriptors was faster than that of ORB keypoints and descriptors. Specifically, Table 6 shows that the average extraction time for ORBSLAM3 was approximately double that of Superpoint. In contrast, the feature matching time for Superpoint features was higher than that of ORBSLAM3. This was expected, since the feature matching process in SUPERSLAM3, which utilized the L2 norm, was slower than ORBSLAM3's bit-wise descriptor matching method.

6. Conclusions

State estimation and tracking are fundamental in robotics and automotive applications, as they enable high-precision real-time SLAM systems for navigation tasks. These tasks require the selection of high-quality keyframes across images for accurate tracking, which can be achieved even in single-camera setups.
In addition to traditional methods, there has been a surge in the use of learning-based methods, which can automatically learn robust feature representations from large datasets and simultaneously estimate feature keypoints and descriptors with low computational costs and strong generalization capabilities.
In this study, we integrated Superpoint features within the ORBSLAM3 pipeline. We then presented a quantitative evaluation of the tracking and computational performance of the integration of Superpoint learned features into the ORBSLAM3 pipeline (i.e., SUPERSLAM3). We tested both SUPERSLAM3 and ORBSLAM3 on three datasets from different domains, including aerial, automotive, and handheld.
Our findings indicated that learned features could achieve good pose estimation results. However, by analyzing the results obtained, we hypothesized that including blurry image patterns and rotations in the training phase could enhance the system’s robustness and reliability. Training on a larger dataset could also enhance the extraction of robust Superpoint features, while increasing the generalization capabilities of the overall algorithm.
In our computational analysis, we observed that SUPERSLAM3 was faster than ORBSLAM3 in keypoint and descriptor extraction. However, it was slower in the feature matching phase.
Future work could consider the use of learned features trained on a large dataset to improve generalization capabilities and overall performance in terms of tracking estimation. The Superpoint matching phase could be enhanced through the integration of a GPU-based matching process, such as SuperGLUE (Sarlin et al. [45]). Finally, based on our experimental results, we concluded that incorporating artificially blurred and non-ideal images into the training process of the network could enhance the robustness and consistency of the detector.

Author Contributions

Conceptualization, G.C. and P.V.; methodology, G.M., M.L. and A.D.; software, G.M. and M.L.; validation, G.M., M.L. and A.D.; formal analysis G.M., M.L. and G.C.; investigation, G.M., M.L. and A.D.; resources, P.V. and G.C.; data curation, G.M., M.L. and A.D.; writing—original draft preparation, G.M. and M.L.; writing—review and editing, G.M., M.L. and G.C.; visualization, G.M. and M.L.; supervision, P.V. and G.C.; project administration, P.V. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

Work in part supported by the University of Perugia, Fondi di Ricerca di Ateneo 2021, Project “AIDMIX—Artificial Intelligence for Decision Making: Methods for Interpretability and eXplainability”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Source code will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
VO      Visual Odometry
VIO     Visual Inertial Odometry
CNN     Convolutional Neural Network
GCN     Graph Convolutional Network
IMU     Inertial Measurement Unit
LIDAR   Light Detection and Ranging or Laser Imaging Detection and Ranging
APE     Absolute Pose Error
ATE     Absolute Trajectory Error
ARE     Absolute Rotational Error

References

  1. He, M.; Zhu, C.; Huang, Q.; Ren, B.; Liu, J. A review of monocular visual odometry. Vis. Comput. 2020, 36, 1053–1065. [Google Scholar] [CrossRef]
  2. Aqel, M.O.; Marhaban, M.H.; Saripan, M.I.; Ismail, N.B. Review of visual odometry: Types, approaches, challenges, and applications. SpringerPlus 2016, 5, 1897. [Google Scholar] [CrossRef] [Green Version]
  3. Yang, N.; Wang, R.; Gao, X.; Cremers, D. Challenges in monocular visual odometry: Photometric calibration, motion bias, and rolling shutter effect. IEEE Robot. Autom. Lett. 2018, 3, 2878–2885. [Google Scholar] [CrossRef] [Green Version]
  4. Yousif, K.; Bab-Hadiashar, A.; Hoseinnezhad, R. An overview to visual odometry and visual SLAM: Applications to mobile robotics. Intell. Ind. Syst. 2015, 1, 289–311. [Google Scholar] [CrossRef]
  5. Agostinho, L.R.; Ricardo, N.M.; Pereira, M.I.; Hiolle, A.; Pinto, A.M. A Practical Survey on Visual Odometry for Autonomous Driving in Challenging Scenarios and Conditions. IEEE Access 2022, 10, 72182–72205. [Google Scholar] [CrossRef]
  6. Chen, W.; Shang, G.; Ji, A.; Zhou, C.; Wang, X.; Xu, C.; Li, Z.; Hu, K. An overview on visual slam: From tradition to semantic. Remote Sens. 2022, 14, 3010. [Google Scholar] [CrossRef]
  7. Chen, F.; Wang, X.; Zhao, Y.; Lv, S.; Niu, X. Visual object tracking: A survey. Comput. Vis. Image Underst. 2022, 222, 103508. [Google Scholar] [CrossRef]
  8. Macario Barros, A.; Michel, M.; Moline, Y.; Corre, G.; Carrel, F. A comprehensive survey of visual slam algorithms. Robotics 2022, 11, 24. [Google Scholar] [CrossRef]
  9. Moravec, H.P. Techniques towards automatic visual obstacle avoidance. In Proceedings of the 5th International Joint Conference on Artificial Intelligence (IJCAI’77), Cambridge, UK, 22–25 August 1977. [Google Scholar]
  10. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey vision conference, Manchester, UK, 31 August–2 September 1988; Citeseer: Forest Grove, OR, USA, 1988; Volume 15, pp. 10–5244. [Google Scholar]
  11. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767. [Google Scholar] [CrossRef]
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  13. Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443. [Google Scholar]
  14. Bay, H.; Tuytelaars, T.; Gool, L.V. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
  15. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  16. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  17. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
  18. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  19. Jing, L.; Tian, Y. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4037–4058. [Google Scholar] [CrossRef]
  20. Wang, K.; Ma, S.; Chen, J.; Ren, F.; Lu, J. Approaches challenges and applications for deep visual odometry toward to complicated and emerging areas. IEEE Trans. Cogn. Dev. Syst. 2020, 14, 35–49. [Google Scholar] [CrossRef]
  21. Verdie, Y.; Yi, K.; Fua, P.; Lepetit, V. Tilde: A temporally invariant learned detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5279–5288. [Google Scholar]
  22. Lenc, K.; Vedaldi, A. Learning covariant feature detectors. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 100–117. [Google Scholar]
  23. Zhang, X.; Yu, F.X.; Karaman, S.; Chang, S.F. Learning discriminative and transformation covariant local feature detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6818–6826. [Google Scholar]
  24. Barroso-Laguna, A.; Riba, E.; Ponsa, D.; Mikolajczyk, K. Key. net: Keypoint detection by handcrafted and learned cnn filters. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5836–5844.
  25. Yi, K.M.; Trulls, E.; Lepetit, V.; Fua, P. Lift: Learned invariant feature transform. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 467–483. [Google Scholar]
  26. DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superpoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 19–25 June 2018; pp. 224–236. [Google Scholar]
  27. Ono, Y.; Trulls, E.; Fua, P.; Yi, K.M. LF-Net: Learning local features from images. Adv. Neural Inf. Process. Syst. 2018, 31, 6237–6247. [Google Scholar]
  28. Dusmanu, M.; Rocco, I.; Pajdla, T.; Pollefeys, M.; Sivic, J.; Torii, A.; Sattler, T. D2-net: A trainable cnn for joint detection and description of local features. arXiv 2019, arXiv:1905.03561. [Google Scholar]
  29. Li, D.; Shi, X.; Long, Q.; Liu, S.; Yang, W.; Wang, F.; Wei, Q.; Qiao, F. DXSLAM: A robust and efficient visual SLAM system with deep features. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 4958–4965. [Google Scholar]
  30. Derr, T.; Ma, Y.; Tang, J. Signed graph convolutional networks. In Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM), Singapore, 17–20 November 2018; pp. 929–934. [Google Scholar]
  31. Tang, J.; Folkesson, J.; Jensfelt, P. Geometric correspondence network for camera motion estimation. IEEE Robot. Autom. Lett. 2018, 3, 1010–1017. [Google Scholar] [CrossRef]
  32. Tang, J.; Ericson, L.; Folkesson, J.; Jensfelt, P. GCNv2: Efficient correspondence prediction for real-time SLAM. IEEE Robot. Autom. Lett. 2019, 4, 3505–3512. [Google Scholar] [CrossRef] [Green Version]
  33. Teed, Z.; Deng, J. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. Adv. Neural Inf. Process. Syst. 2021, 34, 16558–16569. [Google Scholar]
  34. Deng, C.; Qiu, K.; Xiong, R.; Zhou, C. Comparative study of deep learning based features in SLAM. In Proceedings of the 2019 4th IEEE Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), Nagoya, Japan, 13–15 July 2019; pp. 250–254. [Google Scholar]
  35. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. Brief: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792. [Google Scholar]
  36. Bags of Binary Words for Fast Place Recognition in Image Sequences. 2017. Available online: https://github.com/rmsalinas/DBow3 (accessed on 15 October 2022).
  37. Bonarini, A.; Burgard, W.; Fontana, G.; Matteucci, M.; Sorrenti, D.G.; Tardos, J.D. RAWSEEDS: Robotics Advancement through Web-publishing of Sensorial and Elaborated Extensive Data Sets. In Proceedings of the IROS’06 Workshop on Benchmarks in Robotics Research, Beijing, China, 9–15 October 2006. [Google Scholar]
  38. Ceriani, S.; Fontana, G.; Giusti, A.; Marzorati, D.; Matteucci, M.; Migliore, D.; Rizzi, D.; Sorrenti, D.G.; Taddei, P. Rawseeds ground truth collection systems for indoor self-localization and mapping. Auton. Robot. 2009, 27, 353–371. [Google Scholar] [CrossRef]
  39. Schubert, D.; Goll, T.; Demmel, N.; Usenko, V.; Stückler, J.; Cremers, D. The TUM VI benchmark for evaluating visual-inertial odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1680–1687. [Google Scholar]
  40. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets Robotics: The KITTI Dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
  41. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  42. Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. 2017. Available online: https://github.com/MichaelGrupp/evo (accessed on 30 October 2022).
  43. DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description Implementation and Pretrained Network. 2022. Available online: https://github.com/magicleap/SuperPointPretrainedNetwork (accessed on 6 July 2022).
  44. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  45. Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superglue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 4938–4947. [Google Scholar]
Figure 1. The figure depicts the integration of the Superpoint feature detector and descriptor into the ORB-SLAM3 pipeline. The traditional ORB and BRIEF feature detection and description methods have been replaced with the Superpoint pipeline. The input images are transformed into the grayscale format and fed into the Superpoint detector pipeline (A). The Superpoint encoder–decoder pipeline is composed of a learned encoder, based on several convolutional layers, and two non-learned decoders for the joint features and descriptors extraction. The detected features are then processed by the ORBSLAM3 back-end, which consists of three main components that operate as three parallel threads: the Tracking, Local Mapping, and Loop & Map Merging threads (B). The back-end extracts keyframes, initializes and updates the map, and performs motion and pose estimation, both locally within the Local Mapping Thread and globally within the Loop & Map Merging thread. If a loop closure is detected, the pose estimation is further refined.
Figure 2. Sample image from the original (A) and blurred (B) KITTI dataset (sequence 07). The original image was blurred using a Gaussian filter with a patch size of 12 pixels and a standard deviation of 2.15 pixels in both directions.
Figure 3. Sample trajectories comparisons using EVO tools shows that, while SUPERSLAM3 outperformed ORBSLAM3 in terms of ATE and ARE for the EuRoC MH_05 sequence with nFeatures and nLevels set to 700 and 8, respectively, it performed significantly worse than ORBSLAM3 for the room_4 sequence (TUM-VI) with nFeatures and nLevels set to 1000 and 8, respectively. Both algorithms also showed poor performance for the KITTI dataset, particularly in terms of ATE.
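For reference, the two pose-error metrics used throughout the following tables can be computed as in the sketch below, which assumes the estimated trajectory has already been time-associated and aligned to the ground truth (EVO performs this alignment with Umeyama's method, including a similarity transform to handle the monocular scale ambiguity). This is a simplified formulation for illustration; the statistics reported by EVO may differ in detail.

```python
# Simplified computation of ATE (RMSE of translational residuals) and one
# common formulation of ARE (RMS of the per-frame absolute rotation angle),
# assuming trajectories are already time-associated and aligned.
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """gt_xyz, est_xyz: (N, 3) arrays of aligned positions; returns metres."""
    residuals = np.linalg.norm(gt_xyz - est_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

def are_deg(gt_R: np.ndarray, est_R: np.ndarray) -> float:
    """gt_R, est_R: (N, 3, 3) aligned rotation matrices; returns degrees."""
    rel = np.einsum('nij,nik->njk', gt_R, est_R)   # R_gt^T @ R_est per frame
    cos = np.clip((np.trace(rel, axis1=1, axis2=2) - 1.0) / 2.0, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos))
    return float(np.sqrt(np.mean(angles ** 2)))
```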
Table 1. Quantitative results on EuRoC. For each sequence and each performance metric (listed in the right-hand header), the parameter configuration that attained the best result is highlighted in bold, separately for ORBSLAM3 and SUPERSLAM3. The better-performing of the two algorithms is indicated by coloring the corresponding cells and shading the algorithm name in gray. The parameter sets in the top header are given in the form 'nFeatures_nLevels'.
SequenceAlgorithm500_1500_4500_8700_1700_4700_8900_1900_4900_81000_11000_41000_8
MH_01SUPERSLAM3fail0.060.040.301.520.050.560.080.090.240.100.18ATE (m)
ORBSLAM30.030.030.060.030.030.040.090.040.040.040.040.04
SUPERSLAM3fail2.571.892.1112.322.152.652.472.802.122.322.00ARE (deg)
ORBSLAM32.061.91.932.072.022.192.391.992.202.051.942.09
SUPERSLAM30.0055.3893.8984.5589.6388.6586.0999.2799.9785.0988.1087.53TR (%)
ORBSLAM399.9799.9599.9599.9799.9599.9599.9799.9599.9599.9799.9299.97
MH_02SUPERSLAM30.040.070.060.130.040.030.330.060.030.322.730.06ATE (m)
ORBSLAM30.040.040.040.030.040.030.030.030.030.030.030.03
SUPERSLAM31.811.871.981.891.761.712.031.791.764.0517.031.82ARE (deg)
ORBSLAM31.431.641.541.371.681.511.621.661.801.661.771.73
SUPERSLAM338.4538.2639.4173.6899.2499.7081.0290.7291.6483.8589.8791.74TR (%)
ORBSLAM399.8499.9399.7799.9099.9399.9099.9099.9099.8799.9099.9099.90
MH_03SUPERSLAM3fail0.030.030.490.060.050.220.110.060.080.140.05ATE (m)
ORBSLAM30.040.030.040.040.040.040.040.040.040.040.040.04
SUPERSLAM3fail1.41.421.891.631.722.051.931.681.721.561.70ARE (deg)
ORBSLAM31.741.621.681.591.581.641.661.631.641.641.691.65
SUPERSLAM30.0025.6726.9683.2698.0097.9684.3398.0497.8196.7497.8597.85TR (%)
ORBSLAM398.0098.0098.0098.0097.9697.8998.0098.0498.0098.0098.0097.96
MH_04SUPERSLAM30.030.550.050.100.080.060.120.080.444.000.220.22ATE (m)
ORBSLAM30.160.130.100.120.150.060.110.050.090.160.060.53
SUPERSLAM31.411.481.331.451.531.451.631.511.638.461.451.67ARE (deg)
ORBSLAM31.551.531.471.521.661.381.691.421.381.681.452.12
SUPERSLAM38.0285.1536.6055.8392.5776.5465.9177.7297.4471.1395.1395.47TR (%)
ORBSLAM397.6497.9898.0397.6497.9397.5497.6497.5997.9397.6497.5997.93
MH_05SUPERSLAM30.080.120.110.160.140.150.160.210.160.270.920.18ATE (m)
ORBSLAM30.070.060.100.060.052.370.060.080.050.060.050.05
SUPERSLAM31.801.51.621.811.782.322.151.821.932.439.262.05ARE (deg)
ORBSLAM31.841.791.861.831.799.751.851.711.751.771.761.77
SUPERSLAM339.7343.1142.1963.2295.1695.0762.8295.0394.6866.2694.5096.13TR (%)
ORBSLAM396.1796.2295.7896.1778.1896.3996.1796.3596.2296.1796.3596.35
V1_01SUPERSLAM30.120.100.090.090.090.090.200.090.090.200.290.28ATE (m)
ORBSLAM30.100.090.090.090.090.090.100.090.090.090.090.09
SUPERSLAM36.075.615.626.605.705.626.465.595.637.799.237.42ARE (deg)
ORBSLAM35.955.565.545.805.545.555.525.555.605.525.575.50
SUPERSLAM328.5093.5894.7159.6592.5595.6054.6495.4794.6457.6290.6395.23TR (%)
ORBSLAM396.2695.9596.0596.1296.2696.2696.1996.1996.1996.2696.2696.19
V1_02SUPERSLAM30.030.080.070.060.110.170.030.130.130.020.110.26ATE (m)
ORBSLAM30.070.060.060.100.060.060.100.060.070.090.060.09
SUPERSLAM31.661.471.432.991.771.643.651.492.543.082.243.71ARE (deg)
ORBSLAM31.431.281.351.521.261.302.161.271.291.411.281.90
SUPERSLAM312.2267.1980.2919.8887.5490.5314.3388.6592.118.5480.5885.67TR (%)
ORBSLAM388.7194.7483.2293.8094.7489.1893.8694.1594.7494.6294.8594.80
V1_03SUPERSLAM30.050.330.040.020.110.090.020.100.500.030.040.17ATE (m)
ORBSLAM30.110.120.070.140.110.070.160.600.120.130.080.08
SUPERSLAM32.1610.851.993.533.783.252.933.3824.472.742.0210.67ARE (deg)
ORBSLAM35.592.671.784.132.841.884.2416.512.693.162.042.07
SUPERSLAM37.5822.6223.226.2837.1324.487.2125.9736.117.0719.8718.47TR (%)
ORBSLAM331.2278.0893.0258.3193.7693.2574.5093.8188.5164.2294.1493.86
V2_01SUPERSLAM30.04fail0.640.030.100.11fail0.090.190.030.400.11ATE (m)
ORBSLAM30.070.060.060.850.060.060.060.060.060.240.110.06
SUPERSLAM32.50fail2.370.941.181.24fail1.451.280.813.181.46ARE (deg)
ORBSLAM31.271.081.155.571.191.221.061.101.128.771.731.26
SUPERSLAM317.810.0073.5528.6492.5092.890.0092.7292.6822.7292.5992.81TR (%)
ORBSLAM393.2993.6493.6493.3393.6493.6493.4293.6893.6893.6493.6893.68
V2_02SUPERSLAM30.020.090.130.040.13fail0.060.330.190.030.170.20ATE (m)
ORBSLAM30.060.090.070.060.060.110.630.070.060.050.080.06
SUPERSLAM30.711.261.470.951.63fail1.394.361.421.252.534.16ARE (deg)
ORBSLAM30.832.090.810.800.761.681.731.070.900.781.320.84
SUPERSLAM312.3173.0466.1015.6790.330.0033.2270.7493.2327.7781.1384.16TR (%)
ORBSLAM393.6196.5996.8593.6196.9396.7296.8996.9397.0293.6597.2397.02
V2_03SUPERSLAM30.010.510.040.010.010.060.010.010.08fail0.010.08ATE (m)
ORBSLAM30.060.080.080.060.090.320.080.090.340.060.270.11
SUPERSLAM32.0525.632.473.212.223.484.763.562.79fail0.662.82ARE (deg)
ORBSLAM33.202.011.793.322.285.152.922.107.772.985.292.40
SUPERSLAM34.8426.1222.795.315.1028.565.315.1522.630.005.6223.52TR (%)
ORBSLAM346.3183.5188.4047.4587.0491.7352.1388.9290.4351.8781.7492.92
Table 2. Quantitative results on KITTI. For each sequence and each performance metric (listed in the right-hand header), the parameter configuration that attained the best result is highlighted in bold, separately for ORBSLAM3 and SUPERSLAM3. The better-performing of the two algorithms is indicated by coloring the corresponding cells and shading the algorithm name in gray. The parameter sets in the top header are given in the form 'nFeatures_nLevels'.
SequenceAlgorithm500_1500_4500_8700_1700_4700_8900_1900_4900_81000_11000_41000_8
00SUPERSLAM3fail0.040.041.5014.220.594.8613.8016.774.1114.2810.39ATE (m)
ORBSLAM30.980.801.071.068.4511.140.4119.5122.411.8720.0718.66
SUPERSLAM3fail156.21158.091.181.790.460.881.911.700.892.061.65ARE (deg)
ORBSLAM31.711.841.540.701.825.950.881.220.981.181.171.11
SUPERSLAM30.020.880.8810.5535.215.8123.3643.5178.2222.3560.8057.59TR (%)
ORBSLAM36.567.998.8113.3227.1331.257.2259.6168.1614.8960.6782.01
01SUPERSLAM3failfailfailfailfail0.35fail3.340.57fail1.080.52ATE (m)
ORBSLAM31.722.32fail0.290.807.4125.2251.72462.5996.409.81317.25
SUPERSLAM3failfailfailfailfail2.01fail12.21158.17fail129.342.25ARE (deg)
ORBSLAM3153.02163.07fail0.952.050.5522.02176.3821.4819.231.104.04
SUPERSLAM30.000.000.000.000.006.360.004.725.360.004.816.36TR (%)
ORBSLAM36.908.45fail9.3612.7247.9631.4337.1596.5533.1540.9696.82
02SUPERSLAM3failfailfail0.243.153.390.255.033.040.490.383.00ATE (m)
ORBSLAM30.030.270.490.160.720.630.033.8116.360.338.1129.13
SUPERSLAM3failfailfail3.142.891.391.292.251.871.510.711.43ARE (deg)
ORBSLAM31.760.851.020.820.770.510.581.850.981.400.993.76
SUPERSLAM30.000.000.001.678.268.902.627.535.172.306.845.45TR (%)
ORBSLAM31.332.534.481.824.035.261.938.4137.442.0619.9545.98
03SUPERSLAM3fail0.010.010.080.020.170.030.911.000.021.001.02ATE (m)
ORBSLAM30.090.090.100.080.311.030.342.191.310.191.561.33
SUPERSLAM3fail6.4594.106.492.001.734.500.590.364.440.630.74ARE (deg)
ORBSLAM37.304.954.753.300.500.440.460.320.361.810.290.39
SUPERSLAM30.005.125.1216.8513.9818.7311.9927.0940.959.1130.4627.47TR (%)
ORBSLAM315.8621.3521.7220.4732.2159.1817.8576.0360.4227.4759.9360.42
04SUPERSLAM3failfail0.09fail0.010.03fail0.100.05fail0.130.07ATE (m)
ORBSLAM30.010.120.080.010.100.460.220.101.080.580.360.27
SUPERSLAM3failfail11.64fail16.72140.35fail0.4536.27fail5.2718.13ARE (deg)
ORBSLAM3100.4746.2013.8780.30129.055.573.4810.0331.96135.8223.7513.99
SUPERSLAM30.000.0014.760.0016.2421.778.1231.7330.638.1241.731.73TR (%)
ORBSLAM317.3447.2345.3917.7143.9199.6338.0140.9699.6352.0386.7299.63
05SUPERSLAM3failfailfailfail11.4622.770.519.2524.730.576.1710.48ATE (m)
ORBSLAM30.250.800.261.008.865.840.3710.598.962.6111.008.84
SUPERSLAM3failfailfailfail0.994.180.331.883.830.681.851.35ARE (deg)
ORBSLAM31.151.075.880.881.860.631.870.981.610.871.061.44
SUPERSLAM30.001.410.000.0450.5350.5317.6761.2875.1915.8632.2064.90TR (%)
ORBSLAM36.417.2812.4210.5040.2053.3918.7351.2195.3625.2477.5899.82
06SUPERSLAM3fail0.180.03fail0.719.971.059.617.440.0710.6520.19ATE (m)
ORBSLAM30.021.710.040.832.2314.711.250.4725.150.1528.9025.65
SUPERSLAM3fail58.88156.07fail0.881.741.612.151.711.473.282.39ARE (deg)
ORBSLAM30.714.880.352.672.241.602.270.712.632.954.382.03
SUPERSLAM30.003.633.630.0011.4445.5910.5434.1556.4014.9072.3953.86TR (%)
ORBSLAM35.2710.817.908.6315.0860.9410.8138.6084.4712.7299.5599.46
07SUPERSLAM3failfailfail0.037.729.602.132.9819.553.1712.9823.93ATE (m)
ORBSLAM30.080.900.980.899.985.884.4623.0920.505.1914.582.15
SUPERSLAM3failfailfail0.783.574.451.591.146.401.965.068.37ARE (deg)
ORBSLAM32.071.971.442.264.442.042.988.877.452.125.790.88
SUPERSLAM30.000.000.006.2773.3976.3931.9799.6498.6443.4298.9198.82TR (%)
ORBSLAM315.1718.5320.6218.7173.1247.3241.1499.4699.0944.2379.2099.55
08SUPERSLAM3fail0.02fail0.145.3711.130.233.566.810.164.122.42ATE (m)
ORBSLAM30.160.160.180.110.613.250.183.763.960.186.2511.45
SUPERSLAM3fail137.49fail0.382.621.817.442.532.895.561.081.38ARE (deg)
ORBSLAM33.221.5738.844.052.721.659.031.651.908.911.222.07
SUPERSLAM30.020.980.022.4611.9421.594.699.8312.114.2013.9012.31TR (%)
ORBSLAM32.283.273.242.908.1613.764.6214.9816.704.6220.8129.92
09SUPERSLAM3failfailfail0.041.280.660.4011.146.260.776.033.96ATE (m)
ORBSLAM30.010.180.210.090.591.030.491.637.200.622.616.74
SUPERSLAM3failfailfail1.445.820.470.872.841.490.821.802.54ARE (deg)
ORBSLAM30.461.251.031.230.660.751.440.491.521.170.421.66
SUPERSLAM30.000.000.003.398.6116.728.6138.5941.8610.3729.6732.24TR (%)
ORBSLAM32.837.546.795.789.4319.048.1721.1256.699.3739.0357.07
10SUPERSLAM3failfailfail0.050.420.040.065.740.230.060.090.28ATE (m)
ORBSLAM30.040.240.050.220.950.420.261.231.560.251.110.54
SUPERSLAM3failfailfail1.260.734.273.002.643.432.593.012.06ARE (deg)
ORBSLAM31.271.306.921.500.860.681.340.731.131.470.690.66
SUPERSLAM30.000.000.005.0817.747.5810.0756.5414.249.3314.6521.15TR (%)
ORBSLAM37.168.839.089.6621.3218.3210.2424.7333.569.9122.2330.47
Table 3. Quantitative results on TUM-VI. For each sequence and each performance metric (listed in the right-hand header), the parameter configuration that attained the best result is highlighted in bold, separately for ORBSLAM3 and SUPERSLAM3. The better-performing of the two algorithms is indicated by coloring the corresponding cells and shading the algorithm name in gray. The parameter sets in the top header are given in the form 'nFeatures_nLevels'.
SequenceAlgorithm500_1500_4500_8700_1700_4700_8900_1900_4900_81000_11000_41000_8
room_1SUPERSLAM30.070.190.680.090.800.980.330.800.810.090.930.92ATE (m)
ORBSLAM30.070.070.070.070.180.070.070.210.210.070.140.07
SUPERSLAM31.972.255.513.272.7315.602.052.472.873.146.754.54ARE (deg)
ORBSLAM32.372.312.32.385.272.332.385.986.202.384.522.3
SUPERSLAM316.4595.0095.8545.5996.2496.1790.7596.2196.2182.7795.8296.24TR (%)
ORBSLAM396.696.3596.4696.696.696.5396.696.4296.696.696.696.6
room_2SUPERSLAM30.020.080.130.020.540.130.421.170.880.031.131.09ATE (m)
ORBSLAM30.080.050.060.060.050.050.050.050.050.050.050.05
SUPERSLAM35.345.225.095.184.625.238.4943.139.145.0718.818.15ARE (deg)
ORBSLAM35.335.105.024.925.075.065.095.065.045.105.075.07
SUPERSLAM338.2096.5696.3973.0098.5498.4793.1098.5498.5873.7798.6198.85TR (%)
ORBSLAM398.5898.4098.5198.5898.7598.5498.5898.7998.7298.5898.6898.75
room_3SUPERSLAM30.040.770.170.240.360.350.340.540.480.821.151.00ATE (m)
ORBSLAM30.040.040.040.041.180.040.040.140.050.040.110.16
SUPERSLAM33.799.194.224.273.694.673.363.769.393.564.314.14ARE (deg)
ORBSLAM33.893.873.753.9214.253.763.884.793.763.945.095.58
SUPERSLAM37.6283.9891.4676.9997.5986.6781.2186.8898.7974.2686.6098.83TR (%)
ORBSLAM398.9798.9498.9798.9799.0498.9098.9799.3398.9798.9799.2999.54
room_4SUPERSLAM30.030.330.040.030.750.570.111.090.870.041.001.02ATE (m)
ORBSLAM30.080.080.100.080.080.080.110.080.080.080.080.08
SUPERSLAM32.182.572.192.019.766.373.6617.605.952.0322.9426.70ARE (deg)
ORBSLAM32.542.492.392.552.492.463.832.482.482.562.502.48
SUPERSLAM311.9860.4622.6212.4895.7883.4430.4386.1889.6316.8891.4391.07TR (%)
ORBSLAM397.2297.9897.0497.2297.4997.4497.0898.2597.8997.2298.2597.98
room_5SUPERSLAM30.040.060.180.080.130.890.500.890.930.830.910.94ATE (m)
ORBSLAM30.080.080.090.080.150.450.070.080.080.080.080.09
SUPERSLAM33.095.293.922.644.5921.349.575.6327.0721.5217.039.70ARE (deg)
ORBSLAM34.274.294.214.214.774.684.234.194.124.264.224.26
SUPERSLAM37.9026.3868.0053.0769.2797.3787.8598.3188.1372.5784.1294.03TR (%)
ORBSLAM398.4591.6497.6898.4598.4997.7598.4598.0098.0098.4598.0098.14
room_6SUPERSLAM30.050.090.600.180.870.960.050.950.980.480.980.98ATE (m)
ORBSLAM30.070.080.080.080.080.080.080.080.070.080.080.07
SUPERSLAM33.112.953.062.483.802.462.5015.493.194.807.725.22ARE (deg)
ORBSLAM32.952.912.912.922.922.922.882.922.882.882.902.90
SUPERSLAM325.9185.3296.2477.1698.5698.6341.9299.0998.5667.6898.5698.63TR (%)
ORBSLAM398.6098.6097.8098.6098.3798.3798.6098.6098.9898.6098.6798.56
Table 4. Results of ORBSLAM3 and SUPERSLAM3 on a subset of blurred sequences.
Algorithm | Sequence | Params | ATE (m) | ARE (deg) | TR (%)
SUPERSLAM3 | EuRoC MH_02 | 700_8 | 0.27 | 2.50 | 0.40
ORBSLAM3 | EuRoC MH_02 | 700_8 | 0.08 | 1.71 | 0.99
SUPERSLAM3 | TUM room_6 | 900_4 | 0.09 | 4.32 | 0.37
ORBSLAM3 | TUM room_6 | 900_4 | 0.07 | 2.91 | 0.98
SUPERSLAM3 | KITTI 07 | 900_4 | 7.00 | 4.07 | 0.5
ORBSLAM3 | KITTI 07 | 900_4 | 0.07 | 1.81 | 0.15
Table 5. Analysis of the CPU and GPU performance statistics in relation to the parameter set used. AM: Average Memory, AU: Average Utilization. The parameter sets in the top header are given in the form 'nFeatures_nLevels'.
Statistic | 500_1 | 500_4 | 500_8 | 700_1 | 700_4 | 700_8 | 900_1 | 900_4 | 900_8 | 1000_1 | 1000_4 | 1000_8
ORBSLAM3 CPU AM (MB) | 748.8 | 780 | 780 | 811.2 | 904.8 | 873.6 | 904.8 | 936 | 999.4 | 904.8 | 967.2 | 967.2
SUPERSLAM3 CPU AM (MB) | 5522.4 | 7644 | 9391.2 | 6021.6 | 7706.4 | 10,046.4 | 6177.6 | 8361.6 | 10,639.2 | 6614.4 | 8049.6 | 11,575.2
SUPERSLAM3 GPU AM (MB) | 2695 | 2628 | 2647 | 2634 | 2682 | 2633 | 2669 | 2703 | 2578 | 2625 | 2617 | 2712
SUPERSLAM3 GPU AU (%) | 17 | 19 | 20 | 16 | 18 | 21 | 17 | 18 | 21 | 16 | 18 | 19
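Statistics of this kind can be collected with off-the-shelf monitoring tools; the sketch below shows one possible sampling loop based on psutil and NVIDIA's NVML Python bindings. It is not necessarily the setup used for the measurements in Table 5: the process ID, sampling period, and run duration are arbitrary, and the GPU memory sampled here is device-wide rather than per-process.

```python
# One possible way (not necessarily the authors' setup) to gather average CPU
# memory, GPU memory and GPU utilization during a SLAM run: poll the process
# with psutil and the GPU with NVML at a fixed period, then average the samples.
import time
import psutil
import pynvml

def monitor(pid: int, duration_s: int = 60, period_s: float = 1.0):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    proc = psutil.Process(pid)
    cpu_mb, gpu_mb, gpu_util = [], [], []
    for _ in range(int(duration_s / period_s)):
        cpu_mb.append(proc.memory_info().rss / 1e6)          # resident set size, MB
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        gpu_mb.append(mem.used / 1e6)                        # device-wide GPU memory, MB
        gpu_util.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
        time.sleep(period_s)
    pynvml.nvmlShutdown()
    return (sum(cpu_mb) / len(cpu_mb),
            sum(gpu_mb) / len(gpu_mb),
            sum(gpu_util) / len(gpu_util))
```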
Table 6. Time statistics: comparison of maximum and average time performance for feature extraction and descriptor matching in ORBSLAM3 and SUPERSLAM3.
Statistic | Feature Extraction (ms) | Feature Matching (ms)
MAX SUPERSLAM3 | 2.13 | 41.45
MAX ORBSLAM3 | 4.35 | 12.45
AVG SUPERSLAM3 | 1.75 | 33.45
AVG ORBSLAM3 | 3.92 | 9.23
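The larger matching times of SUPERSLAM3 are consistent with the descriptor types involved: BRIEF descriptors are 256-bit binary strings compared with the Hamming distance, whereas Superpoint produces 256-dimensional floating-point descriptors compared with the L2 distance, which is more expensive per comparison. The snippet below illustrates the two matcher configurations with placeholder descriptor arrays; it is an illustration of the cost difference, not the matching strategy used inside the SLAM pipelines.

```python
# Illustration (with placeholder random descriptors) of the two matching
# regimes: 32-byte binary ORB/BRIEF descriptors matched with the Hamming
# distance vs. 256-D float Superpoint descriptors matched with the L2 distance.
import numpy as np
import cv2

rng = np.random.default_rng(0)
orb_desc_a = rng.integers(0, 256, size=(500, 32), dtype=np.uint8)   # 256-bit binary
orb_desc_b = rng.integers(0, 256, size=(500, 32), dtype=np.uint8)
sp_desc_a = rng.standard_normal((500, 256)).astype(np.float32)      # 256-D float
sp_desc_b = rng.standard_normal((500, 256)).astype(np.float32)

bf_hamming = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
bf_l2 = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

orb_matches = bf_hamming.match(orb_desc_a, orb_desc_b)
sp_matches = bf_l2.match(sp_desc_a, sp_desc_b)
```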
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
