Article

Topological Map-Based Autonomous Exploration in Large-Scale Scenes for Unmanned Vehicles

1 School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
2 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(4), 124; https://doi.org/10.3390/drones8040124
Submission received: 29 February 2024 / Revised: 23 March 2024 / Accepted: 24 March 2024 / Published: 27 March 2024
(This article belongs to the Special Issue Path Planning, Trajectory Tracking and Guidance for UAVs)

Abstract:
Robot autonomous exploration is a challenging and valuable research field that has attracted widespread research interest in recent years. However, existing methods often suffer from incomplete exploration, repeated exploration paths, and low exploration efficiency in large-scale scenes. Considering that many indoor and outdoor scenes already have a prior topological map, such as road navigation maps, satellite road network maps, and indoor computer-aided design (CAD) maps, this paper incorporates this information into the autonomous exploration framework and proposes an innovative topological map-based autonomous exploration method for large-scale scenes. The key idea of the proposed method is to plan exploration paths with long-term benefits by tightly merging the information from the robot-collected map and the prior topological map. The exploration path follows a global exploration strategy but prioritizes exploring scenes outside the prior information, thereby preventing the robot from revisiting explored areas and duplicating effort. Furthermore, to improve the stability of exploration efficiency, the exploration path is refined by assessing the cost and reward of each candidate viewpoint with a fast method. Simulation results demonstrate that the proposed method outperforms state-of-the-art autonomous exploration methods in efficiency and stability and is better suited to exploration in large-scale scenes. A real-world experiment also proves the effectiveness of the proposed method.

1. Introduction

Autonomous robot exploration requires a robot to collect data within a given region and construct the corresponding environmental map. As a critical technology underpinning robotic autonomy, it has garnered significant attention in robotics research, driving widespread applications in geological exploration, 3D reconstruction, post-disaster rescue, and other fields.
Numerous autonomous exploration methods have been proposed in recent years, and they fall into sampling-based and frontier-based categories. Sampling-based methods originated from the Next Best View (NBV) algorithm in the field of 3D reconstruction. The receding horizon next-best-view planner (RH-NBV) first introduced the NBV algorithm into the autonomous exploration field [1]: the robot randomly samples viewpoints in explored free space, constructs a rapidly exploring random tree (RRT), and evaluates the utility of each branch of the RRT. The robot then follows the branch with the highest information reward and selects the first node of this branch as the local target. Since then, numerous researchers have extended and improved RH-NBV to meet the requirements of various application scenarios [2,3,4,5,6]. However, sampling-based autonomous exploration methods suffer from lower exploration efficiency and can leave the robot trapped. Ref. [7] first introduced the frontier-based exploration method, which groups free voxels adjacent to unknown voxels into frontier clusters and then drives the robot towards these clusters to explore unknown areas. Since then, many frontier-based exploration methods have been proposed to meet various application requirements [8,9,10]. Ref. [11] proposed selecting, within the sensor's Field of View (FOV), the viewpoint requiring minimal speed changes as the next goal, aiming to maintain the high movement speed of unmanned aerial vehicles (UAVs) and achieve efficient exploration. Fast UAV expLoration (FUEL) proposes the incremental frontier information structure (FIS) to address the high computational cost of frontier extraction and the low decision frequency of the path planner [12]. Based on FIS, UAVs can quickly and incrementally extract the environmental information that the planner needs and promptly plan the exploration path.
However, most autonomous exploration methods tend to greedily guide the robot towards scenes with immediate rewards and neglect targets with long-term rewards, resulting in lower global exploration efficiency [12]. Although some methods plan paths from a global standpoint, the robot inevitably overlooks some scenes during exploration because of the limited perception range of its sensors and the unpredictability of unknown environments [12,13]. To thoroughly explore a given region, the robot must revisit the areas containing those missed scenes, wasting resources. Furthermore, when exploring large-scale scenes, the more information the robot collects, the more the path planner must compute, which poses a significant challenge to onboard computers.
To solve the above problems, the work in [14] proposed that supplying robots with prior information about a given region can help them make decisions aligned with long-term benefits. Ref. [15] proposed a probabilistic information gain map as prior knowledge to guide exploration. Ref. [16] introduced a general information-theoretic framework for controlling multiple search-and-rescue robots, wherein prior human knowledge is modeled to capture target positions and dynamics. Ref. [17] employs hand-drawn sketches as prior information, enabling the robot to explore even when the metric description of the environment is incomplete.
Owing to the conciseness of topological maps, many researchers use them as prior information to guide robots in autonomous exploration. Ref. [18] proposed a novel autonomous exploration method based on a prior topometric graph, verifying that prior information can help the robot swiftly complete the exploration of unknown environments. Ref. [19] proposed a topology-based path planning method for 3D reconstruction, in which multi-view stereo path planning is decomposed into a collection of overlapping viewing optimization problems that can be processed in parallel. In [14], a prior topometric map is employed to improve exploration efficiency and to guide the robot to trigger loop closures, improving the localization accuracy of the Simultaneous Localization And Mapping (SLAM) system. The collected environmental information is then used to refine the prior information.
Furthermore, some researchers focus on the generation of topological maps. Ref. [20] proposed a framework called “topomap” to provide robots with customized maps to simplify robot navigation tasks, transforming the sparse feature-based map from visual SLAM into a three-dimensional topological map. Ref. [21] proposed an efficient and flexible algorithm that generates a trajectory-independent 3D sparse topological skeleton graph captured from the spatial structure of free space.
Inspired by the abovementioned research, we select the topological map as the prior information to guide robot exploration and employ a frontier-based exploration method, which is well suited to large-scale scenes, to plan exploration paths. As a form of map representation, the topological map concisely captures the relative position of and connectivity between critical places in complex scenes, which can guide the robot in planning paths that follow long-term benefits [20]. In practice, many methods can easily acquire the skeleton structure of the environment as a topological map [20,21,22,23].
To fully exploit the guiding function of prior topological maps, we propose an autonomous exploration method based on topological maps. The proposed method employs a hierarchical path planning framework that integrates frontier information with the prior topological map and plans exploration paths with long-term benefits. It first plans a global exploration path by solving a Priority Constrained Asymmetric Traveling Salesman Problem (PCATSP). The global exploration path follows an optimal or customized global exploration strategy to guide the robot to cover the frontiers, but it prioritizes exploring scenes outside the prior information, thus preventing the robot from revisiting previously visited areas. Then, the exploration path is refined from the global exploration path by quickly evaluating the rewards and costs of the candidate viewpoints of each frontier.
Because of the one-pass exploration process, our method keeps the number of frontier clusters small, preventing an excessive computational burden on the solver during exploration. These properties make our method well suited to autonomous exploration in large-scale scenes. The contributions of this paper are as follows:
(1) An autonomous exploration method based on prior topological maps. The robot follows an optimal or customized strategy to explore a given region autonomously but prioritizes exploring scenes outside the prior information, preventing the robot from revisiting explored areas.
(2) A path planning method that integrates information from frontiers and the prior topological map, making the topological map deeply involved in the path planning of robot exploration.
(3) A local path planning method that quickly evaluates the rewards and costs of each candidate viewpoint to optimize the global exploration path, enhancing the stability of exploration efficiency.

2. Design Objectives

The robot is given a region to explore autonomously, together with a topological map of that region. The topological map should reflect the fundamental layout of the region but may not represent all of its spaces. The objectives we address are as follows:
Objective 1: The robot completes a comprehensive exploration of the given region. When no frontier clusters can be extracted within the given region, the information gathering task is considered complete.
Objective 2: The robot utilizes real-time collected scene information and the prior topological map to plan exploration paths. When the robot encounters scenes that are not included in the prior information, the exploration path guides the robot to prioritize exploring the scenes beyond the prior information.
Objective 3: The exploration path enables the robot to complete the exploration of visited areas in a one-pass manner, preventing the robot from repeatedly visiting areas that have already been explored.

3. Methods

We define the topological map as follows. $G(S, E)$ consists of the set of global targets $S = \{s_1, s_2, \dots, s_n\}$ and the set of undirected edges $E = \{e_1, e_2, \dots, e_m\}$. A global target $s$ corresponds to a corner or intersection in the environment. $e_k = (s_i, s_j)$ is an undirected edge connecting $s_i$ and $s_j$ and represents a straight-line scene such as a road or corridor. With the support of a prior topological map, we can obtain a global exploration strategy $O = \{\bar{o}_0, \bar{o}_1, \bar{o}_2, \dots, \bar{o}_h\}$ for the given region, which is a priority queue, either by customizing it or by solving the Chinese Postman Problem (CPP) [24]. $\bar{o}_k = (s_i, s_j)$ denotes a directed line segment from $s_i$ to $s_j$, corresponding to the exploration guidance. We define $f$ as a frontier cluster and $F = \{f_1, f_2, \dots, f_n\}$ as the set of remaining frontier clusters in a scene. We adopt the incremental frontier information structure (FIS) proposed by FUEL to update frontier clusters efficiently [12]. The viewpoint queue $VP^k = \{vp_1^k, vp_2^k, \dots, vp_m^k\}$ of frontier cluster $f_k$ is extracted by random sampling, where $vp_1^k$ is the viewpoint with the largest reward for $f_k$ and replaces $f_k$ when constructing the PCATSP.
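For concreteness, the sketch below shows one possible in-memory representation of such a prior topological map and a global exploration strategy. The class names, fields, and the hand-written strategy are our own illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GlobalTarget:
    """A corner or intersection in the environment (a node s_i of the prior map)."""
    id: int
    position: tuple  # (x, y, z) in the world frame

@dataclass
class TopologicalMap:
    """Prior topological map G(S, E): global targets plus undirected edges."""
    targets: dict = field(default_factory=dict)   # id -> GlobalTarget
    edges: set = field(default_factory=set)       # frozenset({i, j}) for edge (s_i, s_j)

    def add_edge(self, i: int, j: int) -> None:
        self.edges.add(frozenset((i, j)))

    def neighbors(self, i: int):
        """Global targets connected to target i by an undirected edge."""
        return [j for e in self.edges if i in e for j in e if j != i]

# A global exploration strategy O is an ordered list of directed segments (s_i, s_j);
# here it is written by hand, but it could also be produced by a CPP solver.
strategy_O = [(0, 1), (1, 2), (2, 3), (3, 0)]
```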
Figure 1 shows an overview of the proposed method, which operates on a voxel grid map. We employ a hierarchical architecture to plan the exploration path, consisting of global path planning (Section 3.1) and local path planning (Section 3.4). The global path planning module takes the prior topological map, the global exploration strategy, and the frontier clusters as input and plans the global exploration path based on the Priority Constrained Asymmetric Traveling Salesman Problem (PCATSP). Nodes with priority in the PCATSP are extracted (Section 3.2), and the movement cost of some frontier clusters is updated (Section 3.3). The global exploration path is then passed to the local path planning module, which refines the input path based on the rewards and costs of each candidate viewpoint to improve the stability of exploration efficiency. Finally, the exploration path is output to the trajectory generation module. The exploration task is complete when no frontier clusters can be extracted from the environment.

3.1. PCATSP-Based Global Path Planning

PCATSP is a variation of the classic Traveling Salesman Problem (TSP) that aims to find a minimum-cost Hamiltonian circuit under the constraint that some nodes must be visited before others. If the start and end points of the PCATSP are split into two nodes, the PCATSP is equivalent to finding a path between the start and end points that satisfies the priority constraints, which is also known as the Sequential Ordering Problem (SOP) [25]. To address Objective 3, our basic idea for global path planning is to solve a PCATSP over the frontiers and the priority-constrained global targets. This is equivalent to inserting the frontiers into a sequential queue of global targets, using the global targets to influence the order in which some frontier clusters are covered.
However, constructing the PCATSP faces the following challenges. First, constrained by the sensor's perception range and obstructions, the robot cannot accurately calculate the movement distance between global targets in unknown space and viewpoints inside the free space. Second, in a TSP, the farther the metric distance between a frontier cluster and a global target, the less influence the global target can exert, so we need a way to let global targets influence specific frontier clusters.
To address the above challenges, we define the following: if a global target $s_i$ is inside the free space and another global target $s_j$ connected to $s_i$ is in unknown space, then the shortest path $P_{p_k, s_j}$ between any node $p_k$ in free space and the global target $s_j$ in unknown space is given by

$$P_{p_k, s_j} = P_{p_k, a} + P_{a, s_j}, \tag{1}$$

where $a$ is the intersection point of the frontier and the undirected edge $e_k$ that connects $s_i$ and $s_j$, as shown in Figure 2a. $P_{a, s_j}$ is the portion of $e_k$ in unknown space, and $P_{p_k, a}$ is a search path from $p_k$ to $a$.
Based on the above definition and a property of TSP solutions, if the same value is subtracted from every element of a row or column of the TSP cost matrix $D$, a new cost matrix $D^*$ is obtained, and $D$ and $D^*$ yield the same TSP solution [26]. Thus, we can subtract $P_{a, s_j}$ from every $P_{p, s_j}$ and leave the TSP solution unchanged. This is equivalent to placing an agent point for the global target on the frontier. When constructing the PCATSP, the intersection point $a_j$ can be extracted on a frontier cluster as an agent for the global target $s_j$, and a priority can be assigned to $a_j$ based on the global exploration strategy $O$.
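As a quick numerical check of this row-subtraction property (the cost values below are arbitrary and not from the paper), the brute-force snippet finds the optimal tour of a small asymmetric TSP before and after subtracting a constant from one row and confirms that the optimal visiting order is unchanged.

```python
from itertools import permutations

def best_tour(D):
    """Brute-force the cheapest closed tour starting and ending at node 0."""
    n = len(D)
    best, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best, best_cost = tour, cost
    return best

D = [[0, 5, 9, 4],
     [5, 0, 3, 7],
     [9, 3, 0, 6],
     [4, 7, 6, 0]]

# Subtract the same constant from every outgoing cost of node 2 (one row of D).
# Every tour uses exactly one outgoing edge of node 2, so all tour costs drop equally.
D_star = [row[:] for row in D]
D_star[2] = [c - 2 for c in D[2]]

assert best_tour(D) == best_tour(D_star)  # same optimal visiting order
```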
Hence, based on the above reasoning, we define the agent $a_j$ as the intersection point of a frontier cluster $f_k$ and an undirected edge $e_k = (s_i, s_j)$; the agent is associated with a global target $s_j$ in unknown space and possesses an access priority. The path between any point inside the free space and an agent on the frontier can then be found by a path-searching algorithm, and its length can be accurately calculated. The method of agent extraction and priority assignment is elaborated in Section 3.2.
Finally, we can naturally combine the prior topological map with real-time updated scene information in PCATSP-based path planning. We then solve the PCATSP over the extracted agent set $A = \{a_i, a_j, \dots, a_l\}$ and the frontier cluster set $F$, where agents with priority influence the visiting order of nearby frontier clusters.
In this section, we suppose that the movement cost $c_{p_i, p_j}$ between any two nodes $p_i$ and $p_j$ is calculated as

$$c_{p_i, p_j} = \frac{L\!\left(P_{p_i, p_j}\right)}{v_{\max}}, \tag{2}$$

where $P_{p_i, p_j}$ is the shortest search path between nodes $p_i$ and $p_j$, $L(\cdot)$ denotes the length of a search path, and $v_{\max}$ is the maximum velocity of the robot.
The PCATSP solver provides a path $tour$ composed of the input nodes. By removing $A$ from $tour$, we obtain a global exploration path $tour_{explore} = \{vp_1^i, vp_1^j, \dots, vp_1^l\}$ that satisfies the global exploration strategy $O$, as shown in Figure 2b [27]. However, $tour_{explore}$ alone cannot guarantee that the robot prioritizes exploring areas outside the prior information, i.e., Objective 2. To address this, frontier clusters that would guide the robot towards global targets are recognized, and the movement cost between these frontier clusters and the robot is increased. Details are discussed in Section 3.3.
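A minimal sketch of this post-processing step, with hypothetical node labels: the solver's tour is simply filtered so that only the best viewpoints of the frontier clusters remain.

```python
def extract_exploration_path(tour, agents):
    """Drop agent nodes from the PCATSP solution, keeping only frontier viewpoints."""
    agent_set = set(agents)
    return [node for node in tour if node not in agent_set]

# e.g. tour = ["vp_1^i", "a_j", "vp_1^j", "a_l", "vp_1^l"], agents = ["a_j", "a_l"]
# extract_exploration_path(tour, agents) -> ["vp_1^i", "vp_1^j", "vp_1^l"]
```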
Figure 3 is a schematic diagram of the robot exploration process. The blue arrows represent the global exploration strategy, and the green arrows represent the global exploration paths that are actually executed. With our method, the robot explores the given region according to the global exploration strategy but prioritizes exploring scenarios outside the prior information. Finally, following the global exploration strategy, the robot actively closes the loop and completes the exploration.

3.2. Agent Extraction and Priority Assignment

When the location of a global target $s_i$ has been explored by the robot, the other global targets $s_j$ connected to $s_i$ are looked up in the topological map. Then, the directed line segment $\bar{e}_k = (s_i, s_j)$ is constructed from $e_k = (s_i, s_j)$, indicating the direction from $s_i$ to $s_j$. During implementation, we set up several satellite points for each global target to prevent the robot from missing it; the satellite points are evenly distributed at a set distance around the global target.
The Oriented Bounding Box (OBB) of each new frontier cluster is extracted by principal component analysis (PCA) and used to extract agents. As shown in Figure 4a, if $f_k$ is crossed by $\bar{e}_k$, the OBB of $f_k$ must be crossed, and the following condition is met:

$$\left[(s_j - s_i) \times (obb_p - s_i)\right]_z \cdot \left[(s_j - s_i) \times (obb_q - s_i)\right]_z < 0, \quad p, q \in \{0, 1, 2, 3\},\ p \neq q, \tag{3}$$

where the $obb$ are the vertices of the OBB. For the new frontier clusters that meet this condition, the following secondary evaluation identifies which global target the forthcoming agent belongs to:

$$\pi > \cos^{-1}\frac{(R - f_{ave}^k) \cdot (s_j - f_{ave}^k)}{\lVert R - f_{ave}^k \rVert\, \lVert s_j - f_{ave}^k \rVert} \geq \chi, \qquad 0 < \cos^{-1}\frac{(R - f_{ave}^k) \cdot (s_i - f_{ave}^k)}{\lVert R - f_{ave}^k \rVert\, \lVert s_i - f_{ave}^k \rVert} \leq \psi, \tag{4}$$

where $R$ is the position of the robot, and $f_{ave}^k$ is the average position of $f_k$, as shown in Figure 4b.
If the newly extracted frontier cluster $f_k$ meets the above conditions, the cell of $f_k$ closest to $f_{ave}^k$ is defined as an agent $a_j$ for the global target $s_j$. An agent of a global target is not part of the frontier cluster but is associated with it; if the frontier cluster is deleted or updated, the corresponding agent is deleted.
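The sketch below illustrates the crossing test of Equation (3) in the horizontal plane; the helper names and the example box are our own, and the OBB vertices are assumed to be given as 2D points. Two vertices lying on opposite sides of the directed segment yield cross products with opposite z-signs.

```python
import numpy as np

def segment_crosses_obb(s_i, s_j, obb_vertices):
    """Check whether the directed segment s_i -> s_j separates the OBB vertices in the
    horizontal plane, i.e. some pair of vertices lies on opposite sides of the segment
    (the sign test of Eq. (3))."""
    s_i = np.asarray(s_i, float)
    d = np.asarray(s_j, float) - s_i                    # segment direction

    def cross_z(v):                                     # z-component of d x (v - s_i)
        w = np.asarray(v, float) - s_i
        return d[0] * w[1] - d[1] * w[0]

    signs = [cross_z(v) for v in obb_vertices]
    return min(signs) < 0.0 < max(signs)                # vertices on both sides

# Hypothetical example: an axis-aligned box straddling the segment from (0,0) to (4,0).
box = [(1.0, -1.0), (3.0, -1.0), (3.0, 1.0), (1.0, 1.0)]
print(segment_crosses_obb((0.0, 0.0), (4.0, 0.0), box))  # True
```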
The movement costs between an agent $a_j$ and the other nodes of the PCATSP are calculated as

$$c_{R, a_j} = C, \quad c_{a_j, R} = 0, \quad c_{a_j, x} = c_{x, a_j} = \frac{L\!\left(P_{a_j, x}\right)}{v_{\max}}, \quad x = vp_1^k, a_i, \tag{5}$$

where $C$ is a large value, ensuring that the robot prioritizes exploring scenes outside the prior information.
The priority $pry_{a_j}$ of agent $a_j$ is assigned as

$$pry_{a_j} = \begin{cases} u, & \bar{e}_k = \bar{o}_u \\ size(O) + 1, & \text{otherwise}, \end{cases} \tag{6}$$

which is determined by the sequential position of its $\bar{e}_k$ in $O$. $\bar{o}_u$ is a directed line segment in $O$, where $u$ is its ordinal position in $O$. The later $\bar{e}_k$ appears in $O$, the lower the priority of the agent extracted from $\bar{e}_k$. If $\bar{e}_k$ is not found in $O$, the priority of the agent equals $size(O) + 1$; these agents have the lowest priority and will not be actively explored, as shown in Figure 3. Additionally, during exploration, a directed line segment may generate more than one agent, all pointing towards the same global target. In such cases, we define the agent farther from the global target to have the higher priority, ensuring that the robot does not miss any scene during exploration.
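A minimal sketch of this priority rule under our own naming, using the ordered list of directed segments introduced earlier as the strategy.

```python
def assign_priority(e_k, strategy_O):
    """Priority of an agent extracted from directed segment e_k (Eq. (6)):
    the ordinal position u of e_k in O, or size(O) + 1 if the segment is absent."""
    try:
        return strategy_O.index(e_k)          # u such that e_k == o_u
    except ValueError:
        return len(strategy_O) + 1            # lowest priority: not actively explored

strategy_O = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(assign_priority((1, 2), strategy_O))    # 1: second segment of the strategy
print(assign_priority((2, 0), strategy_O))    # 5: segment absent, lowest priority
```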

3.3. Update of Movement Cost of Frontier Clusters

As mentioned earlier, to ensure that the robot prioritizes exploring scenes outside prior information, the system needs to recognize frontier clusters that would guide the robot towards a global target and set a higher cost between them and the robot. To achieve this, we classify frontier clusters into three categories:
The first class includes frontier clusters with agents and the other frontier clusters adjacent to them; these clusters will guide the robot towards global targets.
The second class includes frontier clusters adjacent to those of the first class; these frontier clusters may guide the robot towards global targets. We utilize the density-based spatial clustering of applications with noise (DBSCAN) algorithm to recognize these frontier clusters and rely on the following condition to identify whether they could guide the robot to global targets [28]:

$$e^{-r_{s_j, f_k} / r_{s_j, s_i}}\, e^{\cos\alpha_k - 1} \cos\beta_k \geq \varepsilon, \qquad r_{s_j, f_k} < r_{s_j, R} \leq r_{s_j, s_i} + \gamma, \tag{7}$$

where, as shown in Figure 5a, $\alpha_k$ is the angle between the vectors $\overrightarrow{f_{ave}^k s_j}$ and $\overrightarrow{s_i s_j}$, $\beta_k$ is the angle between the vectors $\overrightarrow{f_{ave}^k s_j}$ and $\overrightarrow{R f_{ave}^k}$, and $r$ is the Euclidean distance between two points.
However, as the robot explores, the relative position between the frontier clusters and the robot changes, and the frontier clusters cannot always satisfy Equation (7), as shown in Figure 5b,c. Thus, to keep the determination of these frontier clusters consistent during exploration, we employ the Dynamic Time Warping (DTW) algorithm to evaluate the similarity between the path $P_{R, a_j}$ and every path in $P_{R, F} = \{P_{R, f_1}, P_{R, f_2}, \dots, P_{R, f_n}\}$, as shown in Figure 5d [29]. If the similarity ranking of $f_k$ satisfies the following condition, $f_k$ is still believed to guide the robot towards the global target $s_j$:

$$rank\!\left(dtw\!\left(P_{R, a_j}, P_{R, f_k}\right)\right) \leq \varphi. \tag{8}$$
The third class consists of the remaining frontier clusters. They will guide the robot to explore scenes outside prior information.
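As an illustration of the consistency check in Equation (8) (our own simplified implementation, operating on lists of 2D waypoints rather than the planner's internal path type), the snippet below computes DTW distances between the path to the agent and the paths to each frontier cluster, then tests whether a given cluster ranks within the top φ most similar.

```python
import numpy as np

def dtw_distance(path_a, path_b):
    """Classic O(n*m) dynamic time warping distance between two waypoint sequences."""
    a, b = np.asarray(path_a, float), np.asarray(path_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def still_guides_to_target(path_to_agent, paths_to_frontiers, k, phi=6):
    """Eq. (8): frontier k keeps its 'guides to the global target' label if the DTW
    similarity of its path to the agent path ranks within the top phi."""
    dists = [dtw_distance(path_to_agent, p) for p in paths_to_frontiers]
    rank = sorted(range(len(dists)), key=lambda i: dists[i]).index(k) + 1
    return rank <= phi
```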
For the frontier clusters $f_k^*$ that guide the robot towards global targets, the movement cost from the robot to them is set as

$$c_{R, vp_1^{k*}} = C. \tag{9}$$
The movement cost from the robot to the other frontier clusters $f_k$ is computed as

$$c_{R, vp_1^k} = \max\!\left( \frac{L\!\left(P_{R, vp_1^k}\right)}{v_{\max}},\ \frac{\min\!\left( \left| \xi_1^k - \xi_R \right|,\ 2\pi - \left| \xi_1^k - \xi_R \right| \right)}{\omega_{\max}} \right) + \lambda \cos^{-1}\frac{(p_1^k - R) \cdot v_R}{\lVert p_1^k - R \rVert\, \lVert v_R \rVert}, \tag{10}$$

which considers the path length, yaw change, and motion consistency, where $p_1^k$ and $\xi_1^k$ are the coordinates and yaw angle of viewpoint $vp_1^k$, $\omega_{\max}$ is the maximum angular change rate, $\xi_R$ is the yaw angle of the robot, and $v_R$ is the current velocity.
As $c_{vp_1^{k*}, R}$ and $c_{vp_1^k, R}$ do not impact the solution of open-loop path planning, we set them to 0. The movement cost between any two frontier clusters is calculated as

$$c_{vp_1^i, vp_1^j} = c_{vp_1^j, vp_1^i} = \max\!\left( \frac{L\!\left(P_{vp_1^i, vp_1^j}\right)}{v_{\max}},\ \frac{\min\!\left( \left| \xi_1^i - \xi_1^j \right|,\ 2\pi - \left| \xi_1^i - \xi_1^j \right| \right)}{\omega_{\max}} \right). \tag{11}$$
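The sketch below expresses these cost terms in code (Equations (10) and (11)); the weighting constant lam and the data layout are assumptions on our part, while v_max and w_max take the values used later in the simulation experiments.

```python
import numpy as np

def yaw_diff(a, b):
    """Smallest absolute difference between two yaw angles, wrapped to [0, pi]."""
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def cost_robot_to_viewpoint(path_len, vp_pos, vp_yaw, robot_pos, robot_yaw, robot_vel,
                            v_max=0.6, w_max=0.9, lam=0.5):
    """Eq. (10): lower bound on travel time plus a motion-consistency penalty
    (lam is an assumed weight; the paper does not report its value)."""
    time_lb = max(path_len / v_max, yaw_diff(vp_yaw, robot_yaw) / w_max)
    to_vp = np.asarray(vp_pos, float) - np.asarray(robot_pos, float)
    v = np.asarray(robot_vel, float)
    if np.linalg.norm(to_vp) > 1e-6 and np.linalg.norm(v) > 1e-6:
        cos_angle = np.clip(to_vp @ v / (np.linalg.norm(to_vp) * np.linalg.norm(v)), -1, 1)
        time_lb += lam * np.arccos(cos_angle)   # penalize turning away from current motion
    return time_lb

def cost_between_viewpoints(path_len, yaw_i, yaw_j, v_max=0.6, w_max=0.9):
    """Eq. (11): symmetric cost between the best viewpoints of two frontier clusters."""
    return max(path_len / v_max, yaw_diff(yaw_i, yaw_j) / w_max)
```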

3.4. Local Path Planning

The global path planning module helps the robot make decisions from a global standpoint for efficient exploration. The local path planning module aims to find the best viewpoints for the robot to follow. Many previous works refine a path by evaluating cost and reward for efficient exploration, but they consume significant computational resources on information evaluation [1,30]. Thus, we define the potential reward of a candidate viewpoint as the volume of unknown space within its Field of View (FoV) and propose a simple, fast reward evaluation method based on the incremental frontier information structure (FIS). Finally, the local path is refined by synthesizing the reward and movement cost of each candidate viewpoint.
The cells of a frontier cluster are stored in the FIS [12]; we use them to evaluate the volume of unknown space in the FOV. As shown in Figure 6, a truncated pyramid is constructed for each cell that the candidate viewpoint can cover, and its volume is calculated as

$$V = \left[ h_{rw} \left( \frac{h_{rw}\, e_{cell}}{h_{cell}} \right)^{2} - h_{cell}\, e_{cell}^{2} \right] \Big/ 3, \tag{12}$$

where $e_{cell}$ is the width of the cell and $h_{cell}$ is the distance between the cell and the candidate viewpoint. $h_{rw}$ is the effective distance for calculating the reward, computed as

$$h_{rw} = \min\!\left( h_{\max},\ h_{cell} + \delta \right), \tag{13}$$

where $h_{\max}$ is the maximum range of the FOV, and $\delta$ controls the depth of the truncated pyramid to balance movement cost and expected reward.
The expected reward $rw_i^k$ of a candidate viewpoint $vp_i^k$ is evaluated by accumulating $V$:

$$rw_i^k = \sum_{x=1}^{n} V_x^k + \eta \sum_{y=1}^{m} V_y^l, \tag{14}$$

where $V_x^k$ is taken from the current frontier cluster $f_k$, $V_y^l$ is taken from the next adjacent frontier cluster $f_l$ to be visited, $n$ and $m$ are, respectively, the numbers of cells of $f_k$ and $f_l$ that $vp_i^k$ can cover, and $\eta$ is a weight coefficient.
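The following sketch evaluates this reward for one candidate viewpoint (Equations (12)–(14)); the cell width and the range-only visibility check are simplifying assumptions of ours, while h_max, delta, and eta take the values reported in the experiments.

```python
import numpy as np

def frustum_volume(h_cell, e_cell, h_max, delta):
    """Eqs. (12)-(13): volume of the truncated pyramid behind one visible frontier cell."""
    h_rw = min(h_max, h_cell + delta)
    return (h_rw * (h_rw * e_cell / h_cell) ** 2 - h_cell * e_cell ** 2) / 3.0

def viewpoint_reward(vp_pos, cells_current, cells_next,
                     e_cell=0.1, h_max=4.5, delta=1.5, eta=1.25):
    """Eq. (14): accumulate frustum volumes over the covered cells of the current
    frontier cluster and, weighted by eta, of the next cluster to be visited."""
    vp = np.asarray(vp_pos, float)

    def total(cells):
        vol = 0.0
        for c in cells:
            h_cell = np.linalg.norm(np.asarray(c, float) - vp)
            if 0.0 < h_cell <= h_max:            # simplified visibility: range check only
                vol += frustum_volume(h_cell, e_cell, h_max, delta)
        return vol

    return total(cells_current) + eta * total(cells_next)
```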
We formulate local path planning as a graph search problem and refine an optimal path from the global exploration path by balancing expected reward and movement cost, where the viewpoints of each frontier cluster serve as candidate nodes. The optimal exploration path $path = \{vp_i^1, vp_k^2, \dots, vp_j^{N_{rf}}, vp_1^{N_{rf}+1}\}$, provided by the Dijkstra algorithm, minimizes the cost/reward ratio

$$c\!\left(P_{R, vp_1^{N_{rf}+1}}\right) = \frac{c_{R, vp_i^1}}{W_i^1} + \frac{c_{vp_j^{N_{rf}},\, vp_1^{N_{rf}+1}}}{W_1^{N_{rf}+1}} + \sum_{n=1}^{N_{rf}-1} \frac{c_{vp_k^n,\, vp_k^{n+1}}}{W_k^{n+1}}, \tag{15}$$

where $N_{rf} + 1$ is the number of frontier clusters to be optimized, and

$$W_k^n = \frac{rw_k^n}{V_{FOV}}, \tag{16}$$

where $V_{FOV}$ is the volume of the FOV.
Finally, the exploration path $path$ is output to the trajectory generation module.
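To make the graph-search formulation concrete, the sketch below builds a layered graph with one layer of candidate viewpoints per frontier cluster, in the global path order, and runs Dijkstra over edges weighted by c/W; the graph layout and the cost callbacks are our own simplifications, not the authors' implementation.

```python
import heapq
from itertools import count

def refine_local_path(layers, edge_cost, reward_weight):
    """Dijkstra over a layered viewpoint graph: one candidate viewpoint is chosen per
    frontier cluster of the global path, and each edge is weighted c / W (Eqs. (15)-(16)).
    layers[i]     : candidate viewpoints of the i-th cluster on the global path
    edge_cost     : edge_cost(u, v) -> movement cost c between viewpoints (u=None is the robot)
    reward_weight : reward_weight(v) -> W of viewpoint v"""
    tie = count()
    start = (-1, None)                               # layer -1 represents the robot pose
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, next(tie), start)]
    goal = None
    while heap:
        d, _, node = heapq.heappop(heap)
        layer, vp = node
        if d > dist[node]:
            continue
        if layer == len(layers) - 1:                 # cheapest node in the last layer
            goal = node
            break
        for nxt_vp in layers[layer + 1]:             # expand into the next layer
            nxt = (layer + 1, nxt_vp)
            w = d + edge_cost(vp, nxt_vp) / max(reward_weight(nxt_vp), 1e-6)
            if w < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = w, node
                heapq.heappush(heap, (w, next(tie), nxt))
    path, node = [], goal                            # walk back to recover the viewpoints
    while node is not None and node != start:
        path.append(node[1])
        node = prev.get(node)
    return list(reversed(path))
```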

4. Experiments

4.1. Implementation Details

We set $\psi = \pi/3$, $\chi = 2\pi/3$, and $C = 500$ in Section 3.2; $\varepsilon = 0.4$, $\gamma = 3$ m, and $\varphi = 6$ in Section 3.3; and $\delta = 1.5$ m and $\eta = 1.25$ in Section 3.4. Additionally, we employ the SOP solver from LKH-3.0 to solve the PCATSP and implement the Chinese Postman Problem (CPP) solver ourselves to obtain the global exploration strategy [27,31]. All simulation experiments are conducted on the ROS-Kinetic Gazebo platform under Ubuntu 18.04, running on a CPU. For the real-world experiments, the unmanned ground vehicle shown in Figure 7 was used to explore a given region; it is equipped with a depth camera, an inertial measurement unit, and an onboard computer running Ubuntu 18.04.
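For reference, the parameter values above can be grouped in one place; the dictionary below is our own convenience structure, not the authors' configuration file.

```python
import math

# Parameter values reported in Sections 3.2-3.4 and Section 4.1 (simulation setting).
PARAMS = {
    "psi": math.pi / 3,       # angle threshold toward s_i (Eq. (4))
    "chi": 2 * math.pi / 3,   # angle threshold toward s_j (Eq. (4))
    "C": 500.0,               # large cost discouraging premature visits (Eqs. (5), (9))
    "epsilon": 0.4,           # frontier classification threshold (Eq. (7))
    "gamma": 3.0,             # distance margin in meters (Eq. (7))
    "phi": 6,                 # DTW similarity rank threshold (Eq. (8))
    "delta": 1.5,             # frustum depth control in meters (Eq. (13))
    "eta": 1.25,              # weight of the next cluster's reward (Eq. (14))
}
```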

4.2. Benchmark Comparisons

In this section, we conduct benchmark comparisons in simulation to verify the effectiveness and exploration efficiency of the proposed method. Robot exploration in maze scenes is an effective way to verify the efficiency of autonomous exploration [13]. Thus, we manually constructed two large-scale mazes, Maze-1 (48 × 63 × 2 m³) and Maze-2 (66 × 62 × 2 m³), in the Gazebo simulation platform. The cross-sectional length of the roads is 4–6 m in Maze-1 and 8–10 m in Maze-2. The topological maps of these mazes are generated by manually placing global targets at corners and intersections and connecting them according to the topology of the mazes to simulate maze information [18]. However, we leave some space for the robot to explore autonomously without prior information. The mazes and their prior topological maps are shown in Figure 8.
We employ the optimal exploration strategy provided by the CPP solver to guide robot exploration and compare our method with FUEL and FAEP [12,13], two state-of-the-art frontier-based methods proposed in recent years that exhibit high exploration efficiency and have open-sourced their code. In all experiments, we use a UAV as the exploration robot, with $v_{\max} = 0.6$ m/s, $\omega_{\max} = 0.9$ rad/s, and a maximum acceleration of 0.6 m/s². The UAV is equipped with a depth camera to collect environmental information; the FOV of the depth camera is configured as [80 × 60] deg, and $h_{\max}$ is 4.5 m. The local update range of the grid map is 5 × 5 × 2 m³. In all scenarios, the three methods are each run more than six times with the same configuration.
We evaluate the performance of the above methods in terms of exploration time, flight distance, and coverage efficiency. FAEP also utilizes the FIS to update frontier cluster information and uses a TSP to plan global exploration paths. Therefore, we set the frontier cluster length limit of all methods to 2 m and count the number of remaining frontier clusters during exploration. This number directly reflects the computational burden of the path planner: the more frontier clusters remain in the scene, the larger the solver computation.
Figure 8 displays exploration trajectories of different methods in two mazes. Our method prevents the robot from unnecessarily revisiting explored areas but ensures that the robot does not miss scenes outside prior information, even in scenes with large cross-sectional road lengths like Maze-2. In contrast, other methods re-explore already visited areas during exploration, leading to lower efficiency and inevitable resource wastage.
The detailed exploration process of our method in Maze-1 is illustrated in Figure 9. The robot prioritizes exploring areas outside the prior information in the given region. Subsequently, it follows the global exploration strategy to explore other areas, disregarding the directions of other frontier clusters during exploration. Finally, the robot actively closes the loop to cover the frontier clusters near the agent with the lowest priority. The exploration process aligns with the concept depicted in Figure 3.
Figure 10 displays the process of volume coverage for all methods in two mazes. Our method demonstrates higher efficiency and nearly linear performance in conducting exploration. In contrast, other methods exhibit coverage stagnation or slower growth during exploration, meaning the robot moved towards a previously missed or visited space.
The number of remaining frontier clusters during exploration for all methods is shown in Figure 11. Compared with the other methods, our method maintains a lower count of remaining frontier clusters, showing that it imposes a smaller computational burden on the onboard computer and is suitable for exploring large-scale scenes.
Table 1 presents the quantitative performance of the three methods in the two mazes. Compared with FUEL and FAEP, the average exploration time of our method is reduced by 18.76% and 20.87% in Maze-1 and by 18.05% and 21.77% in Maze-2, respectively. Moreover, the average flight distance of our method is reduced by 18.93% and 17.19% in Maze-1 and by 15.91% and 6.79% in Maze-2. Overall, our proposed method outperforms the other methods in exploration time, flight distance, and stability.
The average computation time of each module is shown in Table 2. The proposed method completes one round of path planning in approximately 130 ms, meeting the planning frequency requirements of most robots. Furthermore, only about 12 ms is consumed by local path planning, further proving the efficiency of the proposed reward evaluation method.
We deconstruct the proposed method to analyze the performance of the global and local path planning modules. OursLocal and OursGlobal correspond to implementing only the local and only the global path planning module, respectively, both built upon the FUEL framework. The exploration data of OursLocal and OursGlobal in the two mazes are listed in Table 1.
Comparing the simulation results of OursGlobal and FUEL in the two mazes, the standard deviations of OursGlobal in exploration time and flight distance are comparable to those of FUEL. However, the average exploration time of OursGlobal is reduced by 18.88% and 14.01% in Maze-1 and Maze-2, respectively, and the average flight distance of OursGlobal is reduced by 19.57% and 12.45%. The average performance of OursGlobal thus outperforms FUEL, indicating that the primary contribution of global path planning lies in improving exploration efficiency. The same inference can be drawn from comparing Ours and OursLocal.
The average exploration time and flight distance of OursLocal and FUEL in both mazes are similar, but OursLocal maintains a lower standard deviation, indicating greater stability. Similar behavior is observed between OursGlobal and Ours. This demonstrates that the proposed local path planning method effectively enhances the stability of exploration efficiency.
Finally, we can conclude that the exploration efficiency is primarily attributed to global path planning, while local path planning enhances the stability of exploration efficiency.

4.3. Real-World Experiment

To further validate the effectiveness of our proposed method, we conduct a real-world experiment with a ground vehicle. Based on the prior topological map, the ground vehicle explores an indoor corridor of size 55 × 15 × 2 m³ with a road cross-sectional length of 2 m, as shown in Figure 12. The prior information we provide is a rectangular topological map that outlines the basic structure of the indoor corridor. In the corridor, open spaces and nooks serve as regions beyond the prior information, testing the effectiveness of our method.
In the experiment, we run VINS-Fusion on the GPU to provide the positional state, while the proposed method runs on the CPU to plan exploration paths [32]. We set $v_{\max} = 0.5$ m/s, $\omega_{\max} = 0.8$ rad/s, the maximum acceleration to 0.5 m/s², and the local update range of the grid map to 4 × 4 × 2 m³. The FOV of the depth camera is set to [80 × 60] deg, and the maximum range $h_{\max}$ is 3.5 m.
Figure 13 shows the exploration trajectory of our method in the indoor environment, where the small labeled images mark key moments during exploration: (a) shows the robot exploring in the direction given by the global exploration strategy; (b) and (c) show instances where the vehicle prioritizes exploration directions beyond the prior information. The exploration time of the whole process is 738 s, and the movement distance is 167 m. The trajectory shows that the vehicle did not revisit explored areas during exploration, which proves the effectiveness of our proposed method.
Figure 14 shows the volume coverage progress and the number of remaining frontier clusters during exploration. The exploration process is stable and nearly linear, while the number of frontier clusters remains small, demonstrating the efficiency of our proposed method.

5. Conclusions

This paper introduces a novel autonomous exploration method based on a prior topological map, in which a robot explores the given large-scale region, following a global exploration strategy, but prioritizes exploring scenes outside prior information. The proposed method employs a hierarchical framework to plan exploration paths. Based on PCATSP, the global path planning module merges the prior topological map with real-time scene information, planning efficient global exploration paths. Subsequently, the local path planning module rapidly evaluates the rewards and movement costs for each candidate viewpoint to refine the input global exploration paths. Finally, the output exploration path is used to generate local trajectories. Simulation results prove that the proposed method enables the robot to efficiently and rapidly explore a given region and is suitable for operation in large-scale scenes. The ablation study also demonstrates that our proposed local path planning method could enhance the stability of exploration efficiency. The experiment conducted in the real world further validates the effectiveness of our method.

Author Contributions

Conceptualization, Z.C.; methodology, Z.C.; software, Z.C.; validation, Z.C.; formal analysis, Z.C.; investigation, Z.D.; resources, Z.D.; data curation, Z.C.; writing—original draft preparation, Z.C.; writing—review and editing, Z.C.; visualization, Z.C.; supervision, J.Y.; project administration, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding horizon “next-best-view” planner for 3d exploration. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1462–1468. [Google Scholar]
  2. Duberg, D.; Jensfelt, P. Ufoexplorer: Fast and scalable sampling-based exploration with a graph-based planning structure. IEEE Robot. Autom. Lett. 2022, 7, 2487–2494. [Google Scholar] [CrossRef]
  3. Selin, M.; Tiger, M.; Duberg, D.; Heintz, F.; Jensfelt, P. Efficient autonomous exploration planning of large-scale 3-d environments. IEEE Robot. Autom. Lett. 2019, 4, 1699–1706. [Google Scholar] [CrossRef]
  4. Xu, Z.; Deng, D.; Shimada, K. Autonomous UAV exploration of dynamic environments via incremental sampling and probabilistic roadmap. IEEE Robot. Autom. Lett. 2021, 6, 2729–2736. [Google Scholar] [CrossRef]
  5. Respall, V.M.; Devitt, D.; Fedorenko, R.; Klimchik, A. Fast sampling-based next-best-view exploration algorithm for a MAV. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 89–95. [Google Scholar]
  6. Zhu, H.; Cao, C.; Xia, Y.; Scherer, S.; Zhang, J.; Wang, W. DSVP: Dual-stage viewpoint planner for rapid exploration by dynamic expansion. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September– 1 October 2021; pp. 7623–7630. [Google Scholar]
  7. Yamauchi, B. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97. ‘Towards New Computational Principles for Robotics and Automation’, Monterey, CA, USA, 10–11 July 1997; pp. 146–151. [Google Scholar]
  8. Dornhege, C.; Kleiner, A. A frontier-void-based approach for autonomous exploration in 3d. Adv. Robot. 2013, 27, 459–468. [Google Scholar] [CrossRef]
  9. Zhong, P.; Chen, B.; Lu, S.; Meng, X.; Liang, Y. Information-driven fast marching autonomous exploration with aerial robots. IEEE Robot. Autom. Lett. 2021, 7, 810–817. [Google Scholar] [CrossRef]
  10. Deng, D.; Duan, R.; Liu, J.; Sheng, K.; Shimada, K. Robotic exploration of unknown 2d environment using a frontier-based automatic-differentiable information gain measure. In Proceedings of the 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Virtual, 6–10 July 2020; pp. 1497–1503. [Google Scholar]
  11. Cieslewski, T.; Kaufmann, E.; Scaramuzza, D. Rapid exploration with multi-rotors: A frontier selection method for high speed flight. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2135–2142. [Google Scholar]
  12. Zhou, B.; Zhang, Y.; Chen, X.; Shen, S. Fuel: Fast uav exploration using incremental frontier structure and hierarchical planning. IEEE Robot. Autom. Lett. 2021, 6, 779–786. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Yan, L.; Xie, H.; Dai, J.; Wei, P. Autonomous Exploration Method for Fast Unknown Environment Mapping by Using UAV Equipped with Limited FOV Sensor. IEEE Trans. Ind. Electron. 2023, 5, 4933–4943. [Google Scholar] [CrossRef]
  14. Soragna, A.; Baldini, M.; Joho, D.; Kümmerle, R.; Grisetti, G. Active SLAM using connectivity graphs as priors. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 340–346. [Google Scholar]
  15. Zhang, Y.; McCalmon, J.; Peake, A.; Alqahtani, S.; Pauca, P. A Symbolic-AI Approach for UAV Exploration Tasks. In Proceedings of the 2021 7th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, 4–6 February 2021; pp. 101–105. [Google Scholar]
  16. Krzysiak, R.; Butail, S. Information-based control of robots in search-and-rescue missions with human prior knowledge. IEEE Trans. Hum.-Mach. Syst. 2021, 52, 52–63. [Google Scholar] [CrossRef]
  17. Boniardi, F.; Valada, A.; Burgard, W.; Tipaldi, G.D. Autonomous indoor robot navigation using a sketch interface for drawing maps and routes. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2896–2901. [Google Scholar]
  18. Oßwald, S.; Bennewitz, M.; Burgard, W.; Stachniss, C. Speeding-up robot exploration by exploiting background information. IEEE Robot. Autom. Lett. 2016, 1, 716–723. [Google Scholar]
  19. Shang, Z.; Shen, Z. Topology-based UAV path planning for multi-view stereo 3D reconstruction of complex structures. Complex Intell. Syst. 2023, 9, 909–926. [Google Scholar] [CrossRef]
  20. Blochliger, F.; Fehr, M.; Dymczyk, M.; Schneider, T.; Siegwart, R. Topomap: Topological mapping and navigation based on visual slam maps. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 3818–3825. [Google Scholar]
  21. Chen, X.; Zhou, B.; Lin, J.; Zhang, Y.; Zhang, F.; Shen, S. Fast 3D Sparse Topological Skeleton Graph Generation for Mobile Robot Global Planning. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 10283–10289. [Google Scholar]
  22. Mei, J.; Li, R.J.; Gao, W.; Cheng, M.M. CoANet: Connectivity attention network for road extraction from satellite imagery. IEEE Trans. Image Process. 2021, 30, 8540–8552. [Google Scholar] [CrossRef]
  23. Tan, Y.Q.; Gao, S.H.; Li, X.Y.; Cheng, M.M.; Ren, B. Vecroad: Point-based iterative graph exploration for road graphs extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8910–8918. [Google Scholar]
  24. Sokmen, O.C.; Emec, S.; Yilmaz, M.; Akkaya, G. An overview of Chinese postman problem. In Proceedings of the 3rd International Conference on Advanced Engineering Technologies, Bayburt, Turkey, 19–21 September 2019; Volume 10, pp. 1175–1184. [Google Scholar]
  25. Gouveia, L.; Pesneau, P. On extended formulations for the precedence constrained asymmetric traveling salesman problem. Netw. Int. J. 2006, 48, 77–89. [Google Scholar] [CrossRef]
  26. Morrison, D.R.; Jacobson, S.H.; Sauppe, J.J.; Sewell, E.C. Branch-and-bound algorithms: A survey of recent advances in searching, branching, and pruning. Discret. Optim. 2016, 19, 79–102. [Google Scholar] [CrossRef]
  27. Helsgaun, K. An extension of the Lin-Kernighan-Helsgaun TSP solver for constrained traveling salesman and vehicle routing problems. Roskilde Rosk. Univ. 2017, 12, 966–980. [Google Scholar]
  28. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Kdd, Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231. [Google Scholar]
  29. Salvador, S.; Chan, P. Toward accurate dynamic time warping in linear time and space. Intell. Data Anal. 2007, 11, 561–580. [Google Scholar] [CrossRef]
  30. Tao, Y.; Wu, Y.; Li, B.; Cladera, F.; Zhou, A.; Thakur, D.; Kumar, V. SEER: Safe efficient exploration for aerial robots using learning to predict information gain. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 1235–1241. [Google Scholar]
  31. Helsgaun, K. An effective implementation of the Lin–Kernighan traveling salesman heuristic. Eur. J. Oper. Res. 2000, 126, 106–130. [Google Scholar] [CrossRef]
  32. Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
Figure 1. The overview of the proposed exploration method.
Figure 2. The basic scheme for autonomous exploration based on a prior topological map. (a) Extracting an agent for global target $s_j$ that is in unknown space. (b) Obtaining the exploration path $tour_{explore}$ from the $tour$ provided by the PCATSP solver.
Figure 3. The schematic diagram of the exploration process. (a,b) The robot prioritizes exploring scenarios outside the prior information. (c,d) After exploring scenarios outside the prior information, the robot explores the unknown environment according to the global exploration strategy. (e,f) The robot actively conducts loop exploration according to the global exploration strategy and finally covers the frontier clusters near the lowest-priority agent.
Figure 4. Method of agent extraction. (a) Crossing determination. (b) Angle determination.
Figure 5. Determination of the second category of frontier clusters. (a) Determine whether the second-category frontier $f_k$ guides the robot towards the global target based on angle and distance conditions. (b) The robot moves towards other scenes following the generated exploration path. (c) As the robot moves, $f_k$ no longer meets the initial judgment criteria. (d) To maintain consistency with the previous judgment of $f_k$, a secondary judgment of $f_k$ is performed by DTW.
Figure 6. The schematic diagram of calculating the reward of a viewpoint.
Figure 7. Real-world experiment vehicle platform.
Figure 8. Benchmark comparison of exploration trajectories of the proposed method, FUEL, and FAEP in two mazes.
Figure 9. Partial exploration trajectories generated by our method in Maze-1.
Figure 10. The exploration progress of the three methods in Maze-1 (left) and Maze-2 (right).
Figure 11. The number of remaining frontier clusters during exploration of the three methods in two mazes.
Figure 12. The experimental scene for the real-world experiment.
Figure 13. The exploration trajectory of the proposed method in the real-world experiment. (a) The robot explores in the direction following the global exploration strategy; (b,c) the vehicle prioritizes exploration directions beyond the prior information.
Figure 14. The process of volume coverage (left) and the number of remaining frontier clusters (right) during exploration in the real-world scenario.
Table 1. Exploration statistics in Maze-1 and Maze-2.

| Scene | Method | Expl. Time Avg (s) | Std | Max | Min | Flight Dist. Avg (m) | Std | Max | Min |
|---|---|---|---|---|---|---|---|---|---|
| Maze-1 | FUEL | 826.599 | 25.275 | 849.384 | 777.630 | 618.773 | 16.821 | 635.879 | 589.081 |
| Maze-1 | FAEP | 848.614 | 32.779 | 906.516 | 813.997 | 605.419 | 28.813 | 655.931 | 561.852 |
| Maze-1 | Ours | 671.101 | 13.880 | 694.328 | 663.532 | 501.523 | 12.490 | 520.579 | 486.109 |
| Maze-1 | OursLocal | 826.881 | 10.485 | 855.695 | 814.620 | 616.217 | 8.390 | 635.330 | 605.435 |
| Maze-1 | OursGlobal | 670.742 | 24.009 | 693.757 | 626.820 | 497.563 | 16.467 | 517.316 | 469.384 |
| Maze-2 | FUEL | 1407.048 | 46.729 | 1503.542 | 1356.909 | 1012.588 | 18.818 | 1036.984 | 984.151 |
| Maze-2 | FAEP | 1474.686 | 67.125 | 1569.142 | 1418.180 | 913.552 | 33.502 | 960.631 | 885.397 |
| Maze-2 | Ours | 1153.677 | 29.733 | 1200.470 | 1104.911 | 851.646 | 22.824 | 885.435 | 819.141 |
| Maze-2 | OursLocal | 1396.996 | 27.841 | 1439.908 | 1339.974 | 1002.997 | 18.952 | 1037.262 | 980.147 |
| Maze-2 | OursGlobal | 1210.028 | 52.735 | 1277.481 | 1138.502 | 886.096 | 41.552 | 940.927 | 831.196 |
Table 2. Average computation time of each module.

| Scene | Global Planning (ms) | Local Planning (ms) | Total Planning (ms) |
|---|---|---|---|
| Maze-1 | 141.79 | 17.39 | 159.19 |
| Maze-2 | 127.45 | 12.35 | 130.85 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
