
Simultaneous Localization and Mapping (SLAM) and Artificial Intelligence (AI) Based Localization for Positioning Applications and Mobile Robot Navigation—Second Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 15157

Special Issue Editors


Dr. Henrik Hesse
Guest Editor
James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
Interests: robotics; unmanned systems; sensor fusion; perception; artificial intelligence; GPS-denied localization; simultaneous localization and mapping

Dr. Chee Kiat Seow
Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8RZ, UK
Interests: cyber-physical security; localization and navigation with wireless communication systems; Internet of Things (IoT) using Machine Learning (ML) or Artificial Intelligence (AI) methodologies

Special Issue Information

Dear Colleagues,

With the proliferation of 5G technologies and the Internet of Things (IoT), there has been a surge of mobile robot technologies and location-based services entering our daily lives. This trend accelerated during the COVID-19 pandemic, amplifying the need for automated solutions, which require knowledge of the sensor/robot location and perception of the dynamic environment, e.g., robots/drones in indoor and outdoor environments for delivery, surveillance, inspection, or mapping applications. Simultaneous Localization and Mapping (SLAM) and Artificial Intelligence (AI) are seen as key enablers for precise localization and mobile robot navigation. Despite the popularity of these methods, it remains a challenge for them to work robustly in dynamic, poorly lit, or unknown environments with possible multipath effects. Hence, data from computer vision, inertial, LiDAR, and other time-of-flight sensors are typically coupled with the latest AI and Machine Learning techniques to meet the challenging requirements of high precision in location accuracy, especially in dynamic indoor environments.

This Special Issue explores novel techniques in SLAM and AI for high-precision localization to enable applications of intelligent mobile robots in realistic indoor and outdoor environments. It provides the opportunity to break new ground and uncover new applications for precise localization and mobile robot navigation.

Dr. Henrik Hesse
Dr. Chee Kiat Seow
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • applications of SLAM for mobile robot navigation
  • AI and machine learning algorithms for precise localization
  • location-based AI applications for (mobile) robots
  • data fusion for localization/navigation using vision, inertial, LiDAR, UWB, or other time-of-flight sensors
  • fast SLAM and localization for edge deployment
  • map-based or landmark-based navigation
  • 3D SLAM for indoor mapping
  • algorithms and methods for mobile robot navigation
  • co-operative localization and SLAM
  • ultra-wideband (UWB)-based and other GPS-denied localization approaches
  • AI for non-line-of-sight (NLOS) detection and mitigation
  • Wi-Fi, 5G technology, and Bluetooth low-energy (BLE) applications for localization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (13 papers)


Research

15 pages, 8780 KiB  
Article
A Lightweight, Centralized, Collaborative, Truncated Signed Distance Function-Based Dense Simultaneous Localization and Mapping System for Multiple Mobile Vehicles
by Haohua Que, Haojia Gao, Weihao Shan, Xinghua Yang and Rong Zhao
Sensors 2024, 24(22), 7297; https://doi.org/10.3390/s24227297 - 15 Nov 2024
Viewed by 363
Abstract
Simultaneous Localization And Mapping (SLAM) algorithms play a critical role in exploration tasks that require mobile robots to autonomously explore and gather information in unknown or hazardous environments where human access may be difficult or dangerous. However, due to the resource-constrained nature of mobile robots, they are hindered from performing long-term and large-scale tasks. In this paper, we propose an efficient multi-robot dense SLAM system that utilizes a centralized structure to alleviate the computational and memory burdens on the agents (i.e., mobile robots). To enable real-time dense mapping of the agent, we design a lightweight and accurate dense mapping method. On the server, to find correct loop closure inliers, we design a novel loop closure detection method based on both visual and dense geometric information. To correct the drifted poses of the agents, we integrate the dense geometric information along with the trajectory information into a multi-robot pose graph optimization problem. Experiments based on pre-recorded datasets have demonstrated our system's efficiency and accuracy. Real-world online deployment of our system on mobile vehicles achieved a dense mapping update rate of ∼14 frames per second (fps), an onboard mapping RAM usage of ∼3.4%, and a bandwidth usage of ∼302 KB/s with a Jetson Xavier NX.
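For context on the core operation behind TSDF-based dense mapping, the sketch below shows the standard weighted per-voxel update that fuses one depth observation into a signed distance grid. This is a minimal illustration under assumed grid size, truncation distance, and constant-weight averaging, not the authors' implementation.

```python
import numpy as np

# Minimal TSDF fusion sketch: fold one depth observation into a voxel grid.
# Grid resolution, truncation distance, and constant-weight averaging are
# assumptions for illustration, not the paper's system.
VOXEL_DIM = 64
TRUNC = 0.15  # truncation distance tau in metres (assumed)

tsdf = np.ones((VOXEL_DIM,) * 3, dtype=np.float32)  # init to "far from surface"
weights = np.zeros_like(tsdf)

def update_voxel(ijk, voxel_center_cam, depth_at_pixel):
    """Update one voxel given its centre in camera coordinates and the depth
    measured at the pixel the voxel projects to."""
    sdf = depth_at_pixel - voxel_center_cam[2]       # signed distance along the ray
    if sdf < -TRUNC:
        return                                       # voxel hidden behind the surface
    d = min(1.0, sdf / TRUNC)                        # truncate to [-1, 1]
    i, j, k = ijk
    w = weights[i, j, k]
    tsdf[i, j, k] = (tsdf[i, j, k] * w + d) / (w + 1.0)  # running weighted average
    weights[i, j, k] = w + 1.0
```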

17 pages, 6898 KiB  
Article
SLAM Algorithm for Mobile Robots Based on Improved LVI-SAM in Complex Environments
by Wenfeng Wang, Haiyuan Li, Haiming Yu, Qiuju Xie, Jie Dong, Xiaofei Sun, Honggui Liu, Congcong Sun, Bin Li and Fang Zheng
Sensors 2024, 24(22), 7214; https://doi.org/10.3390/s24227214 - 11 Nov 2024
Viewed by 580
Abstract
Autonomous robot movement depends on quickly determining the robot's position and surroundings, for which SLAM technology provides essential support. Owing to complex and dynamic environments, single-sensor SLAM methods often suffer from degeneracy. In this paper, a multi-sensor fusion SLAM method based on the LVI-SAM framework is proposed. First, the state-of-the-art SuperPoint feature detection algorithm is used to extract feature points in the visual-inertial system, enhancing feature detection in complex scenarios. In addition, to improve loop-closure detection in complex scenarios, scan context is used to optimize the loop-closure detection. The experimental results show that the trajectory RMSE on the 05 sequence of the KITTI dataset and the Street07 sequence of the M2DGR dataset is reduced by 12% and 11%, respectively, compared to LVI-SAM. In simulated complex environments of animal farms, the error of this method at the starting and ending points of the trajectory is also lower than that of LVI-SAM. These comparisons show that the proposed method achieves higher precision and robustness in localization and mapping within complex animal-farm environments.
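Scan context, used above to strengthen loop-closure detection, encodes a LiDAR scan as a ring-sector matrix of maximum point heights. The sketch below follows the original scan context formulation; the bin counts and maximum range are assumptions rather than the paper's settings.

```python
import numpy as np

# Scan Context descriptor sketch (Kim & Kim, 2018): bin a LiDAR scan into
# rings x sectors and keep the maximum point height per bin. Bin counts and
# the range cap are assumed values, not the paper's configuration.
N_RING, N_SECTOR, MAX_RANGE = 20, 60, 80.0

def scan_context(points):
    """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
    desc = np.zeros((N_RING, N_SECTOR), dtype=np.float32)
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi   # angle in [0, 2*pi]
    ring = np.minimum((r / MAX_RANGE * N_RING).astype(int), N_RING - 1)
    sector = np.minimum((theta / (2 * np.pi) * N_SECTOR).astype(int), N_SECTOR - 1)
    for ri, si, z in zip(ring, sector, points[:, 2]):
        desc[ri, si] = max(desc[ri, si], z)                  # max height per bin
    return desc
```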

16 pages, 13027 KiB  
Article
A Real-Time Global Re-Localization Framework for a 3D LiDAR-Based Navigation System
by Ziqi Chai, Chao Liu and Zhenhua Xiong
Sensors 2024, 24(19), 6288; https://doi.org/10.3390/s24196288 - 28 Sep 2024
Viewed by 1048
Abstract
Place recognition is widely used to re-localize robots in pre-built point cloud maps for navigation. However, current place recognition methods can only recognize previously visited places. Moreover, these methods require the same types of sensors in the re-localization process, and the process is time-consuming. In this paper, a template-matching-based global re-localization framework is proposed to address these challenges. The proposed framework includes an offline building stage and an online matching stage. In the offline stage, virtual LiDAR scans are densely resampled in the map and rotation-invariant descriptors are extracted as templates. These templates are hierarchically clustered to build a template library. The map used to collect virtual LiDAR scans can be built either by the robot itself previously or by other heterogeneous sensors, so an important feature of the proposed framework is that it can be used in environments the robot has never visited before. In the online stage, a cascade coarse-to-fine template matching method is proposed for efficient matching, considering both computational efficiency and accuracy. In a simulation with 100 K templates, the proposed framework achieves a 99% success rate and a matching speed of around 11 Hz at a re-localization error threshold of 1.0 m. In validation on The Newer College Dataset with 40 K templates, it achieves a 94.67% success rate and around 7 Hz at the same threshold. All the results show that the proposed framework has high accuracy, excellent efficiency, and the capability to achieve global re-localization in heterogeneous maps.
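The cascade coarse-to-fine idea can be sketched as follows: the query descriptor is first compared against cluster centroids of the template library, and exhaustive matching is run only inside the shortlisted clusters. Descriptor type, distance metric, and shortlist size are assumptions for illustration.

```python
import numpy as np

# Coarse-to-fine template matching sketch: rank clusters by centroid
# distance, then search exhaustively only within the best few clusters.
# Euclidean distance and top_k=3 are assumed choices.

def match(query, centroids, clusters, top_k=3):
    """query: (D,) descriptor; centroids: (C, D); clusters: list of (M_c, D)
    template arrays. Returns (cluster_index, template_index) of best match."""
    coarse = np.linalg.norm(centroids - query, axis=1)   # coarse stage
    candidates = np.argsort(coarse)[:top_k]
    best = (None, None, np.inf)
    for c in candidates:                                 # fine stage
        d = np.linalg.norm(clusters[c] - query, axis=1)
        m = int(np.argmin(d))
        if d[m] < best[2]:
            best = (int(c), m, float(d[m]))
    return best[0], best[1]
```

Restricting the fine stage to a few shortlisted clusters is what keeps matching tractable at the 40 K to 100 K template scales reported above.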

15 pages, 6346 KiB  
Article
BA-CLM: A Globally Consistent 3D LiDAR Mapping Based on Bundle Adjustment Cost Factors
by Bohan Shi, Wanbiao Lin, Wenlan Ouyang, Chenyu Shen, Siyang Sun, Yan Sun and Lei Sun
Sensors 2024, 24(17), 5554; https://doi.org/10.3390/s24175554 - 28 Aug 2024
Viewed by 741
Abstract
Constructing a globally consistent high-precision map is essential for the application of mobile robots. Existing optimization-based mapping methods typically constrain robot states in pose space during the graph optimization process, without directly optimizing the structure of the scene, thereby causing the map to be inconsistent. To address the above issues, this paper presents a three-dimensional (3D) LiDAR mapping framework (i.e., BA-CLM) based on LiDAR bundle adjustment (LBA) cost factors. We propose a multivariate LBA cost factor, which is built from a multi-resolution voxel map, to uniformly constrain the robot poses within a submap. The framework proposed in this paper applies the LBA cost factors for both local and global map optimization. Experimental results on several public 3D LiDAR datasets and a self-collected 32-line LiDAR dataset demonstrate that the proposed method achieves accurate trajectory estimation and consistent mapping.
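As a rough illustration of the kind of structural constraint an LBA cost factor imposes, the sketch below computes point-to-plane residuals for all points falling in one voxel: if the poses are correct, points observed from different poses should lie on a common local plane. This is a generic LiDAR bundle-adjustment-style cost, not the paper's exact multivariate factor.

```python
import numpy as np

# Point-to-plane residual sketch for one voxel of a LiDAR BA problem.
# The plane normal comes from the eigen-decomposition of the point scatter;
# residuals are signed distances of each point to that plane.

def plane_residuals(points_world):
    """points_world: (N, 3) points from all poses, mapped into the world frame."""
    centroid = points_world.mean(axis=0)
    centered = points_world - centroid
    cov = centered.T @ centered / len(points_world)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest eigenvalue -> plane normal
    return centered @ normal                 # signed point-to-plane distances
```

Minimizing these residuals jointly over the poses that produced the points constrains scene structure directly, rather than only constraining relative poses.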

25 pages, 7789 KiB  
Article
Mix-VIO: A Visual Inertial Odometry Based on a Hybrid Tracking Strategy
by Huayu Yuan, Ke Han and Boyang Lou
Sensors 2024, 24(16), 5218; https://doi.org/10.3390/s24165218 - 12 Aug 2024
Viewed by 1188
Abstract
In this paper, we propose Mix-VIO, a monocular and binocular visual-inertial odometry, to address the issue that conventional visual front-end tracking often fails under dynamic lighting and image blur. Mix-VIO adopts a hybrid tracking approach, combining traditional handcrafted tracking techniques with Deep Neural Network (DNN)-based feature extraction and matching pipelines. The system employs deep learning methods for rapid feature point detection, while integrating traditional optical flow methods and deep learning-based sparse feature matching methods to enhance front-end tracking performance under rapid camera motion and environmental illumination changes. In the back-end, we utilize sliding window and bundle adjustment (BA) techniques for local map optimization and pose estimation. We conduct extensive experimental validation of the hybrid feature extraction and matching methods, demonstrating the system's capability to maintain optimal tracking results under illumination changes and image blur.
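A hybrid front-end of this kind can be sketched with OpenCV: track points with pyramidal Lucas-Kanade optical flow and fall back to descriptor matching when too few tracks survive (e.g., under blur). The survival threshold is an assumption, and ORB matching stands in here for the paper's DNN-based sparse matcher.

```python
import cv2
import numpy as np

# Hybrid tracking sketch: LK optical flow first, descriptor matching as a
# fallback. MIN_TRACKS and the ORB fallback are assumptions standing in for
# the paper's learned detection/matching pipeline.
MIN_TRACKS = 50

def track(prev_gray, gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 keypoint locations in the previous frame."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    if good.sum() >= MIN_TRACKS:
        return prev_pts[good], nxt[good]         # flow succeeded
    # Fallback: re-detect and match features across the two frames.
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    return p1, p2
```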

17 pages, 6246 KiB  
Article
YPL-SLAM: A Simultaneous Localization and Mapping Algorithm for Point–line Fusion in Dynamic Environments
by Xinwu Du, Chenglin Zhang, Kaihang Gao, Jin Liu, Xiufang Yu and Shusong Wang
Sensors 2024, 24(14), 4517; https://doi.org/10.3390/s24144517 - 12 Jul 2024
Viewed by 1068
Abstract
Simultaneous Localization and Mapping (SLAM) is one of the key technologies with which to address the autonomous navigation of mobile robots, utilizing environmental features to determine a robot's position and create a map of its surroundings. Currently, visual SLAM algorithms typically yield precise and dependable outcomes in static environments, and many algorithms opt to filter out the feature points in dynamic regions. However, when there is an increase in the number of dynamic objects within the camera's view, this approach might result in decreased accuracy or tracking failures. Therefore, this study proposes a solution called YPL-SLAM based on ORB-SLAM2. The solution adds a target recognition and region segmentation module to determine the dynamic region, potential dynamic region, and static region; determines the state of the potential dynamic region using the RANSAC method with epipolar geometric constraints; and removes the dynamic feature points. It then extracts the line features of the non-dynamic region and finally performs the point–line fusion optimization process using a weighted fusion strategy, considering the image dynamic score and the number of successful feature point–line matches, thus ensuring the system's robustness and accuracy. A large number of experiments have been conducted using the publicly available TUM dataset to compare YPL-SLAM with globally leading SLAM algorithms. The results demonstrate that the new algorithm surpasses ORB-SLAM2 in terms of accuracy (with a maximum improvement of 96.1%) while also exhibiting a significantly enhanced operating speed compared to Dyna-SLAM.
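The epipolar-constraint check for potentially dynamic points can be sketched as follows: estimate the fundamental matrix from the matches with RANSAC, then flag matches whose distance to the corresponding epipolar line is large. The pixel threshold is an assumption for illustration.

```python
import cv2
import numpy as np

# Dynamic-point check sketch: points consistent with the dominant epipolar
# geometry are treated as static; large epipolar error suggests motion.
# The 3-pixel threshold is an assumed value.
DYN_THRESH = 3.0

def dynamic_mask(pts1, pts2):
    """pts1, pts2: (N, 2) float32 matched pixel coordinates in two frames."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    ones = np.ones((len(pts1), 1), dtype=np.float32)
    x1 = np.hstack([pts1, ones])                 # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = x1 @ F.T                             # epipolar lines in image 2
    num = np.abs(np.sum(lines * x2, axis=1))     # |x2^T F x1|
    den = np.hypot(lines[:, 0], lines[:, 1])     # line normalization
    return (num / den) > DYN_THRESH              # True = likely dynamic point
```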

16 pages, 4129 KiB  
Article
Place Recognition through Multiple LiDAR Scans Based on the Hidden Markov Model
by Linqiu Gui, Chunnian Zeng, Jie Luo and Xu Yang
Sensors 2024, 24(11), 3611; https://doi.org/10.3390/s24113611 - 3 Jun 2024
Viewed by 544
Abstract
Autonomous driving systems for unmanned ground vehicles (UGVs) operating in enclosed environments rely strongly on LiDAR localization with a prior map. Precise initial pose estimation is critical during system startup or when tracking is lost, ensuring safe UGV operation. Existing LiDAR-based place recognition methods often suffer from reduced accuracy because they match descriptors from individual LiDAR keyframes only. This paper proposes a multi-frame descriptor-matching approach based on the hidden Markov model (HMM) to address this issue. The method enhances place recognition accuracy and robustness by leveraging information from multiple frames. Experimental results on the KITTI dataset demonstrate that the proposed method significantly enhances place recognition performance compared with the scan context-based single-frame descriptor-matching approach, with an average performance improvement of 5.8% and a maximum improvement of 15.3%.
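A minimal version of the multi-frame idea is Viterbi decoding over an HMM whose states are map places, whose emissions are per-frame descriptor similarities, and whose transitions favour staying at or near the current place. The Gaussian transition width is an assumption; the paper's exact model may differ.

```python
import numpy as np

# HMM smoothing sketch for place recognition: Viterbi decoding over place
# states with descriptor-similarity emissions. The Gaussian transition model
# (a vehicle rarely jumps far along the route in one frame) is assumed.

def viterbi(similarity, sigma=2.0):
    """similarity: (T, P) frame-vs-place similarity scores in [0, 1]."""
    T, P = similarity.shape
    idx = np.arange(P)
    trans = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    trans /= trans.sum(axis=1, keepdims=True)        # row-stochastic transitions
    log_delta = np.log(similarity[0] + 1e-9)
    back = np.zeros((T, P), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(trans + 1e-12)
        back[t] = scores.argmax(axis=0)              # best predecessor per state
        log_delta = scores.max(axis=0) + np.log(similarity[t] + 1e-9)
    path = [int(log_delta.argmax())]                 # backtrace the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]                                # most likely place per frame
```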

23 pages, 4627 KiB  
Article
An Enhanced Indoor Three-Dimensional Localization System with Sensor Fusion Based on Ultra-Wideband Ranging and Dual Barometer Altimetry
by Le Bao, Kai Li, Joosun Lee, Wenbin Dong, Wenqi Li, Kyoosik Shin and Wansoo Kim
Sensors 2024, 24(11), 3341; https://doi.org/10.3390/s24113341 - 23 May 2024
Cited by 2 | Viewed by 1218
Abstract
Accurate three-dimensional (3D) localization within indoor environments is crucial for enhancing item-based application services, yet current systems often struggle with localization accuracy and height estimation. This study introduces an advanced 3D localization system that integrates updated ultra-wideband (UWB) sensors and dual barometric pressure (BMP) sensors. Utilizing three fixed UWB anchors, the system employs geometric modeling and Kalman filtering for precise 3D localization of the tag. Building on our previous research on indoor height measurement with dual BMP sensors, the proposed system demonstrates significant improvements in data processing speed and stability. Our enhancements include a new geometric localization model and an optimized Kalman filtering algorithm, which are validated against a high-precision motion capture system. The results show that the localization error is significantly reduced, with a height accuracy of approximately ±0.05 m and a Root Mean Square Error (RMSE) of 0.0740 m for the 3D localization system. The system offers expanded locatable space and faster data output rates, delivering reliable performance that supports advanced applications requiring detailed 3D indoor localization.
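The geometric part of such a system can be sketched as a Gauss-Newton range solve: with the tag height fixed by the barometric altimeter, ranges to three fixed anchors determine the horizontal position. Anchor coordinates and iteration count are assumptions; the paper additionally smooths the estimate with a Kalman filter.

```python
import numpy as np

# Trilateration sketch: solve tag (x, y) from three UWB ranges with the
# height z supplied by the dual-barometer altimeter. Anchor positions and
# the fixed iteration count are assumed for illustration.
ANCHORS = np.array([[0.0, 0.0, 2.5],
                    [6.0, 0.0, 2.5],
                    [3.0, 5.0, 2.5]])            # assumed anchor layout (m)

def solve_xy(ranges, z_baro, x0=np.array([3.0, 2.0])):
    """ranges: (3,) measured UWB distances; z_baro: barometric tag height."""
    xy = x0.astype(float)
    for _ in range(10):                          # Gauss-Newton iterations
        p = np.array([xy[0], xy[1], z_baro])
        diff = p - ANCHORS                       # (3, 3) anchor-to-tag vectors
        pred = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff[:, :2] / pred[:, None]          # Jacobian d(range)/d(x, y)
        r = ranges - pred                        # range residuals
        xy += np.linalg.lstsq(J, r, rcond=None)[0]
    return np.array([xy[0], xy[1], z_baro])
```

Fixing z from the barometer is what lets three anchors suffice; a pure UWB solve of all three coordinates from three ranges is poorly conditioned in height.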

16 pages, 3398 KiB  
Article
Enhancing Pure Inertial Navigation Accuracy through a Redundant High-Precision Accelerometer-Based Method Utilizing Neural Networks
by Qinyuan He, Huapeng Yu, Dalei Liang and Xiaozhuo Yang
Sensors 2024, 24(8), 2566; https://doi.org/10.3390/s24082566 - 17 Apr 2024
Cited by 1 | Viewed by 1240
Abstract
The pure inertial navigation system, crucial for autonomous navigation in GPS-denied environments, faces challenges of error accumulation over time, impacting its effectiveness for prolonged missions. Traditional methods to enhance accuracy have focused on improving instrumentation and algorithms but face limitations due to complexity and costs. This study introduces a novel device-level redundant inertial navigation framework using high-precision accelerometers combined with a neural network-based method to refine navigation accuracy. Experimental validation confirms that this integration significantly boosts navigational precision, outperforming conventional system-level redundancy approaches. The proposed method utilizes the advanced capabilities of high-precision accelerometers and deep learning to achieve superior predictive accuracy and error reduction. This research paves the way for the future integration of cutting-edge technologies like high-precision optomechanical and atom interferometer accelerometers, offering new directions for advanced inertial navigation systems and enhancing their application scope in challenging environments.

18 pages, 17778 KiB  
Article
A Compact Handheld Sensor Package with Sensor Fusion for Comprehensive and Robust 3D Mapping
by Peng Wei, Kaiming Fu, Juan Villacres, Thomas Ke, Kay Krachenfels, Curtis Ryan Stofer, Nima Bayati, Qikai Gao, Bill Zhang, Eric Vanacker and Zhaodan Kong
Sensors 2024, 24(8), 2494; https://doi.org/10.3390/s24082494 - 12 Apr 2024
Cited by 2 | Viewed by 1898
Abstract
This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution offers good performance in conditions where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-Inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process these data by fusing them with images from the RGB and thermal cameras, producing a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcase the effectiveness and applicability of our sensor package and fusion pipeline. This system can be applied to a wide range of applications, from autonomous navigation to smart agriculture, and has the potential to deliver substantial benefits across diverse fields.
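The fusion step that color-enriches the point cloud amounts to projecting each mapped 3D point into a camera frame and sampling the pixel. A minimal sketch, assuming the SLAM trajectory supplies the camera pose (R, t) and calibration supplies the intrinsics K:

```python
import numpy as np

# Point cloud colorization sketch: project world points into an RGB (or
# thermal) frame and sample pixel colors. Pose and intrinsics are assumed
# given; occlusion handling is omitted for brevity.

def colorize(points_world, image, K, R, t):
    """points_world: (N, 3); image: (H, W, 3); camera model x_cam = R @ x + t."""
    cam = points_world @ R.T + t
    in_front = cam[:, 2] > 0.1                   # keep points ahead of the camera
    uvw = cam[in_front] @ K.T                    # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    H, W = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    colors = np.zeros((len(points_world), 3), dtype=np.uint8)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image[uv[ok, 1], uv[ok, 0]]    # sample pixel per visible point
    return colors                                # black for points never seen
```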

20 pages, 5360 KiB  
Article
An Appearance-Semantic Descriptor with Coarse-to-Fine Matching for Robust VPR
by Jie Chen, Wenbo Li, Pengshuai Hou, Zipeng Yang and Haoyu Zhao
Sensors 2024, 24(7), 2203; https://doi.org/10.3390/s24072203 - 29 Mar 2024
Viewed by 868
Abstract
In recent years, semantic segmentation has made significant progress in visual place recognition (VPR) by using semantic information that is relatively invariant to appearance and viewpoint, demonstrating great potential. However, in some extreme scenarios, there may be semantic occlusion and semantic sparsity, which can lead to confusion when relying solely on semantic information for localization. Therefore, this paper proposes a novel VPR framework that employs a coarse-to-fine image matching strategy, combining semantic and appearance information to improve algorithm performance. First, we construct SemLook global descriptors using semantic contours, which can preliminarily screen images to enhance the accuracy and real-time performance of the algorithm. Based on this, we introduce SemLook local descriptors for fine screening, combining robust appearance information extracted by deep learning with semantic information. These local descriptors can address issues such as semantic overlap and sparsity in urban environments, further improving the accuracy of the algorithm. Through this refined screening process, we can effectively handle the challenges of complex image matching in urban environments and obtain more accurate results. The performance of SemLook descriptors is evaluated on three public datasets (Extended-CMU Season, Robot-Car Seasons v2, and SYNTHIA) and compared with six state-of-the-art VPR algorithms (HOG, CoHOG, AlexNet_VPR, Region VLAD, Patch-NetVLAD, Forest). In the experimental comparison, considering both real-time performance and evaluation metrics, the SemLook descriptors are found to outperform the other six algorithms. Evaluation metrics include the area under the curve (AUC) based on the precision–recall curve, Recall@100%Precision, and Precision@100%Recall. On the Extended-CMU Season dataset, SemLook descriptors achieve a 100% AUC value, and on the SYNTHIA dataset, they achieve a 99% AUC value, demonstrating outstanding performance. The experimental results indicate that introducing global descriptors for initial screening and utilizing local descriptors combining both semantic and appearance information for precise matching can effectively address the issue of location recognition in scenarios with semantic ambiguity or sparsity. This algorithm enhances descriptor performance, making it more accurate and robust in scenes with variations in appearance and viewpoint.

29 pages, 3153 KiB  
Article
Ultra-Wideband Ranging Error Mitigation with Novel Channel Impulse Response Feature Parameters and Two-Step Non-Line-of-Sight Identification
by Hongchao Yang, Yunjia Wang, Shenglei Xu, Jingxue Bi, Haonan Jia and Cheekiat Seow
Sensors 2024, 24(5), 1703; https://doi.org/10.3390/s24051703 - 6 Mar 2024
Cited by 1 | Viewed by 1529
Abstract
The effective identification and mitigation of non-line-of-sight (NLOS) ranging errors are essential for achieving high-precision positioning and navigation with ultra-wideband (UWB) technology in harsh indoor environments. In this paper, an efficient UWB ranging-error mitigation strategy is proposed that uses novel channel impulse response parameters based on the results of a two-step NLOS identification, composed of a decision tree and a feedforward neural network, to realize indoor localization. NLOS ranging errors are classified into three types, and corresponding mitigation strategies and recall mechanisms are developed, which are also extended to partial line-of-sight (LOS) errors. Extensive experiments involving three obstacles (humans, walls, and glass) and two sites show an average NLOS identification accuracy of 95.05%, with LOS/NLOS recall rates of 95.72%/94.15%. The mitigated LOS errors are reduced by 50.4%, while the average improvement in the accuracy of the three types of NLOS ranging errors is 61.8%, reaching up to 76.84%. Overall, this method achieves a reduction in LOS and NLOS ranging errors of 25.19% and 69.85%, respectively, resulting in a 54.46% enhancement in positioning accuracy. This performance surpasses that of state-of-the-art techniques such as the convolutional neural network (CNN), long short-term memory–extended Kalman filter (LSTM-EKF), least-squares–support vector machine (LS-SVM), and k-nearest neighbor (K-NN) algorithms.
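A two-step identifier of this shape can be sketched with scikit-learn: a shallow decision tree classifies cheaply first, and only samples below a confidence cutoff are passed to a feedforward network. Feature choice, tree depth, network size, and the cutoff are all assumptions, not the paper's design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Two-step LOS/NLOS identification sketch: tree first, network second.
# All hyperparameters and the confidence cutoff are assumed values.
CONF_THRESH = 0.9

def fit(X, y):
    """X: (N, D) channel-impulse-response features; y: 0 = LOS, 1 = NLOS."""
    tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
    return tree, net

def predict(tree, net, X):
    proba = tree.predict_proba(X)
    labels = proba.argmax(axis=1)
    unsure = proba.max(axis=1) < CONF_THRESH     # escalate only uncertain samples
    if unsure.any():
        labels[unsure] = net.predict(X[unsure])
    return labels
```

Running the network only on the tree's uncertain cases keeps the common case cheap, which matters when identification runs per ranging measurement.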

20 pages, 9873 KiB  
Article
GY-SLAM: A Dense Semantic SLAM System for Plant Factory Transport Robots
by Xiaolin Xie, Yibo Qin, Zhihong Zhang, Zixiang Yan, Hang Jin, Man Xu and Cheng Zhang
Sensors 2024, 24(5), 1374; https://doi.org/10.3390/s24051374 - 20 Feb 2024
Cited by 1 | Viewed by 1739
Abstract
Simultaneous Localization and Mapping (SLAM), as one of the core technologies in intelligent robotics, has gained substantial attention in recent years. Addressing the limitations of SLAM systems in dynamic environments, this research proposes a system specifically designed for plant factory transportation environments, named GY-SLAM. GY-SLAM incorporates a lightweight target detection network, GY, based on YOLOv5, which utilizes GhostNet as the backbone network. This integration is further enhanced with CoordConv coordinate convolution, CARAFE up-sampling operators, and the SE attention mechanism, leading to simultaneous improvements in detection accuracy and model complexity reduction. While mAP@0.5 increased by 0.514% to 95.364%, the model simultaneously reduced the number of parameters by 43.976%, computational cost by 46.488%, and model size by 41.752%. Additionally, the system constructs purely static octree maps and grid maps. Tests conducted on the TUM dataset and a proprietary dataset demonstrate that GY-SLAM significantly outperforms ORB-SLAM3 in dynamic scenarios in terms of system localization accuracy and robustness. It shows a remarkable 92.59% improvement in RMSE for Absolute Trajectory Error (ATE), along with a 93.11% improvement in RMSE for the translational drift of Relative Pose Error (RPE) and a 92.89% improvement in RMSE for the rotational drift of RPE. Compared to YOLOv5s, the GY model brings a 41.5944% improvement in detection speed and a 17.7975% increase in SLAM operation speed to the system, indicating strong competitiveness and real-time capabilities. These results validate the effectiveness of GY-SLAM in dynamic environments and provide substantial support for the automation of logistics tasks by robots in specific contexts.
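The detection-guided filtering that such systems apply before tracking can be sketched simply: discard feature points that fall inside bounding boxes of classes treated as dynamic. The class names below are placeholders; any detector producing labelled boxes (the paper uses its GY network) fits this pattern.

```python
import numpy as np

# Dynamic-feature filtering sketch: reject keypoints inside detection boxes
# of classes assumed dynamic. Class names are hypothetical placeholders.
DYNAMIC_CLASSES = {"person", "transport_robot"}

def keep_static(keypoints, detections):
    """keypoints: (N, 2) pixel coords; detections: list of (cls, x1, y1, x2, y2)."""
    keep = np.ones(len(keypoints), dtype=bool)
    for cls, x1, y1, x2, y2 in detections:
        if cls not in DYNAMIC_CLASSES:
            continue
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        keep &= ~inside                          # drop points on dynamic objects
    return keypoints[keep]
```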
