Autonomous Navigation of Mobile Robots in Unstructured Environments

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (31 August 2024) | Viewed by 12872

Special Issue Editor


Dr. Yugang Liu
Guest Editor
Department of Electrical and Computer Engineering, Royal Military College of Canada, Kingston, ON, Canada
Interests: autonomous navigation; autonomous exploration; learning-based robotics control; mobile manipulation; robotic search and rescue

Special Issue Information

Dear Colleagues,

The field of autonomous navigation for mobile robots in unstructured environments has gained significant attention in recent years. With advancements in technology, there is a growing need for robots that can navigate and operate in dynamic and unpredictable surroundings. This Special Issue aims to present the latest research and developments in this field, showcasing innovative approaches, algorithms, and applications that enable mobile robots to autonomously navigate through unstructured environments.

This Special Issue invites researchers and practitioners to contribute original research articles, reviews, case studies, and short communications on various aspects of the autonomous navigation of mobile robots in unstructured environments. The topics of interest include, but are not limited to:

  1. Sensing and perception for autonomous navigation;
  2. Mapping and localization techniques;
  3. Path planning and obstacle avoidance algorithms;
  4. Machine learning and artificial intelligence for autonomous navigation;
  5. Multi-robot systems and coordination in unstructured environments;
  6. Human-robot interaction in unstructured environments;
  7. Robustness and fault tolerance in autonomous navigation;
  8. Navigation in challenging terrains (e.g., forests, disaster zones, underwater);
  9. Applications of autonomous navigation in industry, agriculture, search and rescue, etc.

Dr. Yugang Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

27 pages, 17512 KiB  
Article
An ANFIS-Based Strategy for Autonomous Robot Collision-Free Navigation in Dynamic Environments
by Stavros Stavrinidis and Paraskevi Zacharia
Robotics 2024, 13(8), 124; https://doi.org/10.3390/robotics13080124 - 22 Aug 2024
Viewed by 748
Abstract
Autonomous navigation in dynamic environments is a significant challenge in robotics, with smooth and safe movement as the primary goals. This study introduces a control strategy based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) that enhances autonomous robot navigation in dynamic environments, with a focus on collision-free path planning. The strategy uses a path-planning technique to develop a trajectory that allows the robot to navigate smoothly while avoiding both static and dynamic obstacles. The developed control system incorporates four ANFIS controllers: two are tasked with guiding the robot toward its end point, and the other two are activated for obstacle avoidance. The experimental setup, implemented in CoppeliaSim, involves a mobile robot equipped with ultrasonic sensors navigating in an environment with static and dynamic obstacles. Simulation experiments demonstrate the model’s capability to ensure collision-free navigation, employing a path-planning algorithm to ascertain the shortest route to the target destination. The simulation results highlight the superiority of the ANFIS-based approach over conventional methods, particularly in terms of computational efficiency and navigational smoothness.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
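
As a rough illustration of the controller structure this abstract describes (separate fuzzy controllers for goal seeking and for obstacle avoidance, with switching between them), the sketch below implements a zero-order Sugeno-style fuzzy inference step and a distance-based supervisor. All membership parameters, rule bases, and the safe-distance threshold are invented placeholders for illustration, not the authors' trained ANFIS models.

```python
# Illustrative sketch only: a zero-order Sugeno-style fuzzy inference step and a
# simple supervisor that switches between goal-seeking and obstacle-avoidance
# controllers, loosely mirroring the four-controller structure in the abstract.
# All parameters below are made-up placeholders, not the paper's trained models.
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function (premise layer)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_infer(inputs, rules):
    """Weighted-average output of a zero-order Sugeno fuzzy system.

    rules: list of (centers, sigmas, consequent) with one (c, sigma) pair per input.
    """
    weights, consequents = [], []
    for centers, sigmas, consequent in rules:
        # Rule firing strength = product of premise memberships.
        w = np.prod([gaussmf(x, c, s) for x, c, s in zip(inputs, centers, sigmas)])
        weights.append(w)
        consequents.append(consequent)
    weights = np.asarray(weights)
    return float(np.dot(weights, consequents) / (weights.sum() + 1e-9))

# Hypothetical rule bases: angular-velocity command from (heading error, distance).
goal_rules = [([0.0, 1.0], [0.5, 1.0], 0.0),    # aligned -> go straight
              ([1.0, 1.0], [0.5, 1.0], 0.8),    # goal to the left -> turn left
              ([-1.0, 1.0], [0.5, 1.0], -0.8)]  # goal to the right -> turn right
avoid_rules = [([0.3, 0.0], [0.3, 0.5], -1.0),  # obstacle ahead-left -> steer right
               ([-0.3, 0.0], [0.3, 0.5], 1.0)]  # obstacle ahead-right -> steer left

def steering_command(heading_err, goal_dist, obstacle_bearing, obstacle_dist,
                     safe_dist=0.6):
    """Switch to the avoidance controller when an obstacle is within safe_dist."""
    if obstacle_dist < safe_dist:
        return sugeno_infer([obstacle_bearing, obstacle_dist], avoid_rules)
    return sugeno_infer([heading_err, goal_dist], goal_rules)

print(steering_command(0.4, 2.0, 0.2, 0.4))  # avoidance active -> negative (steer right)
```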

28 pages, 7296 KiB  
Article
Autonomous Full 3D Coverage Using an Aerial Vehicle, Performing Localization, Path Planning, and Navigation towards Indoors Inventorying for the Logistics Domain
by Kosmas Tsiakas, Emmanouil Tsardoulias and Andreas L. Symeonidis
Robotics 2024, 13(6), 83; https://doi.org/10.3390/robotics13060083 - 23 May 2024
Viewed by 1155
Abstract
Over the last few years, a rapid evolution of unmanned aerial vehicle (UAV) usage in various applications has been observed. Their use in indoor environments requires a precise perception of the surrounding area, immediate response to its changes, and, consequently, a robust position estimation. This paper provides an implementation of navigation algorithms for solving the problem of fast, reliable, and low-cost inventorying in the logistics industry. The drone localization is achieved with a particle filter algorithm that uses an array of distance sensors and an inertial measurement unit (IMU). Navigation is based on a proportional–integral–derivative (PID) position controller that ensures an obstacle-free path within the known 3D map. For full 3D coverage, the coverage targets are first extracted and then sequenced toward optimal coverage. Finally, a series of experiments is carried out to examine the robustness of the positioning system using different motion patterns and velocities. At the same time, various ways of traversing the environment are examined using different configurations of the sensor that performs the area coverage.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
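
The abstract's navigation layer is a PID position controller acting on the particle-filter position estimate. The minimal sketch below shows a discrete PID loop of that general kind, with one controller per axis producing a velocity command toward a waypoint; the gains, limits, and time step are placeholder values chosen for illustration, not the paper's tuned controller.

```python
# Minimal sketch, not the paper's implementation: a discrete PID position controller
# producing a velocity command per axis from the error between a waypoint and the
# current position estimate. Gains and limits are illustrative placeholders.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd, self.out_limit = kp, ki, kd, out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return float(np.clip(u, -self.out_limit, self.out_limit))

# One controller per axis (x, y, z), as a UAV position controller might use.
axes = [PID(kp=1.2, ki=0.05, kd=0.4, out_limit=1.0) for _ in range(3)]

def velocity_command(target, estimate, dt=0.05):
    """Velocity setpoint toward `target` from the localization `estimate`."""
    return np.array([pid.step(t - e, dt) for pid, t, e in zip(axes, target, estimate)])

print(velocity_command(np.array([2.0, 1.0, 1.5]), np.array([0.0, 0.0, 1.0])))
```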

15 pages, 6675 KiB  
Article
Development of Local Path Planning Using Selective Model Predictive Control, Potential Fields, and Particle Swarm Optimization
by Mingeuk Kim, Minyoung Lee, Byeongjin Kim and Moohyun Cha
Robotics 2024, 13(3), 46; https://doi.org/10.3390/robotics13030046 - 8 Mar 2024
Cited by 2 | Viewed by 2380
Abstract
This paper focuses on the real-time obstacle avoidance and safe navigation of autonomous ground vehicles (AGVs). It introduces the Selective MPC-PF-PSO algorithm, which combines model predictive control (MPC), Artificial Potential Fields (APFs), and particle swarm optimization (PSO). The approach defines multiple sets of coefficients so that the planner can adapt to the surrounding environment. The simulation results demonstrate that the algorithm is appropriate for generating obstacle avoidance paths. The algorithm was implemented on the ROS platform using NVIDIA’s Jetson Xavier, and driving experiments were conducted with a steer-type AGV. Measurements of computation time and real obstacle avoidance experiments showed the approach to be practical in the real world.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
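
Of the three ingredients named in this abstract, the artificial-potential-field term is the easiest to show compactly. The sketch below computes the classic attractive-plus-repulsive APF force at a 2D position; the gains and influence radius are arbitrary illustrative values rather than the paper's coefficient sets, and the MPC and PSO layers are omitted entirely.

```python
# Illustrative APF sketch: attractive force toward the goal plus repulsive forces
# from nearby obstacles. Gains and influence radius are placeholders, not the
# paper's tuned coefficients.
import numpy as np

def apf_force(position, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=2.0):
    """Resultant APF force at `position` (2D), given goal and obstacle positions."""
    force = k_att * (goal - position)                 # attractive term
    for obs in obstacles:
        diff = position - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:                         # inside influence radius
            # Classic repulsive gradient: grows as the robot approaches the obstacle.
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([1.5, 0.2])]
print(apf_force(pos, goal, obstacles))  # pulled toward the goal, nudged away from the obstacle
```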

26 pages, 12281 KiB  
Article
MonoGhost: Lightweight Monocular GhostNet 3D Object Properties Estimation for Autonomous Driving
by Ahmed El-Dawy, Amr El-Zawawi and Mohamed El-Habrouk
Robotics 2023, 12(6), 155; https://doi.org/10.3390/robotics12060155 - 17 Nov 2023
Cited by 1 | Viewed by 2376
Abstract
Effective environmental perception is critical for autonomous driving; thus, the perception system requires collecting 3D information about the surrounding objects, such as their dimensions, locations, and orientation in space. Recently, deep learning has been widely used in perception systems that convert image features from a camera into semantic information. This paper presents the MonoGhost network, a lightweight monocular GhostNet-based deep learning technique for full 3D object property estimation from a single monocular frame. Unlike other techniques, the proposed MonoGhost network first estimates relatively reliable 3D object properties using an efficient feature extractor. The network estimates the orientation of the 3D object as well as its 3D dimensions, resulting in reasonably small dimension-estimation errors compared with other networks. These estimations, combined with the translation projection constraints imposed by the 2D detection coordinates, allow for the prediction of a robust and dependable Bird’s Eye View bounding box. The experimental outcomes show that the proposed MonoGhost network outperforms other state-of-the-art networks on the Bird’s Eye View benchmark of the KITTI dataset, scoring 16.73% on the moderate class and 15.01% on the hard class while preserving real-time requirements.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
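
As a hypothetical illustration of the final step this abstract mentions, turning estimated 3D dimensions and orientation into a Bird's Eye View bounding box, the snippet below rotates a rectangular footprint by the estimated yaw around an object center. The center value is simply assumed here; in the paper it is recovered from the 2D detection's translation constraints, and none of this reflects the MonoGhost network itself.

```python
# Hypothetical illustration (not the MonoGhost code): given estimated dimensions and
# yaw, draw the object's Bird's Eye View footprint as a rotated rectangle around an
# assumed object center.
import numpy as np

def bev_box(center_xz, length, width, yaw):
    """Return the four BEV corners (x, z) of a box with the given footprint and yaw."""
    # Axis-aligned footprint corners around the origin.
    corners = np.array([[ length / 2,  width / 2],
                        [ length / 2, -width / 2],
                        [-length / 2, -width / 2],
                        [-length / 2,  width / 2]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return corners @ rot.T + np.asarray(center_xz)

# Example: a car-sized object 12 m ahead, slightly to the left, rotated 30 degrees.
print(bev_box(center_xz=(-1.0, 12.0), length=4.2, width=1.8, yaw=np.deg2rad(30)))
```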

28 pages, 13987 KiB  
Article
Keypoint Detection and Description through Deep Learning in Unstructured Environments
by Georgios Petrakis and Panagiotis Partsinevelos
Robotics 2023, 12(5), 137; https://doi.org/10.3390/robotics12050137 - 30 Sep 2023
Cited by 2 | Viewed by 3976
Abstract
Feature extraction plays a crucial role in computer vision and autonomous navigation, offering valuable information for real-time localization and scene understanding. However, although multiple studies investigate keypoint detection and description algorithms in urban and indoor environments, far fewer studies concentrate on unstructured environments. In this study, a multi-task deep learning architecture is developed for keypoint detection and description, focused on feature-poor unstructured and planetary scenes with low or changing illumination. The proposed architecture was trained and evaluated using a training and benchmark dataset with earthy and planetary scenes. Moreover, the trained model was integrated into a visual SLAM (Simultaneous Localization and Mapping) system as a feature extraction module and tested in two feature-poor unstructured areas. Regarding the results, the proposed architecture achieves a mAP (mean Average Precision) of 0.95 for keypoint description, outperforming well-known handcrafted algorithms, while the proposed SLAM achieved a two times lower RMSE than ORB-SLAM2 in a feature-poor area with low illumination. To the best of the authors’ knowledge, this is the first study that investigates the potential of keypoint detection and description through deep learning in unstructured and planetary environments.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
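
To give a sense of where learned keypoint descriptors fit in a SLAM front end, the sketch below matches two sets of descriptor vectors using mutual nearest neighbours with Lowe's ratio test. The random arrays stand in for a feature network's outputs, and the matching scheme is a generic one, not the architecture proposed in the paper.

```python
# Not the paper's network: a generic sketch of matching dense descriptor vectors
# between two frames, using mutual nearest neighbours and Lowe's ratio test.
# The descriptor arrays are random placeholders for a learned extractor's output.
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) of mutual nearest neighbours passing the ratio test."""
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    matches = []
    for i, row in enumerate(dists):
        j, j2 = np.argsort(row)[:2]
        if row[j] < ratio * row[j2]:            # Lowe's ratio test
            if np.argmin(dists[:, j]) == i:     # mutual nearest neighbour check
                matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(50, 256))             # 50 keypoints, 256-D descriptors
desc_b = np.vstack([desc_a[:30] + 0.05 * rng.normal(size=(30, 256)),  # 30 true matches
                    rng.normal(size=(20, 256))])
print(len(match_descriptors(desc_a, desc_b)))   # most of the 30 perturbed pairs match
```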
