1. Introduction
Trade transportation is the basis of economic development, and, with the growth in trade volume worldwide, shipping accounts for more than two thirds of international cargo transportation. Accidents, however, continuously pose risks to maritime transportation; according to one survey, more than 80% of accidents during ship navigation are caused by human factors [1]. Path planning, one of the key technologies for autonomous navigation, aims to find a safe and economical path for ships and has received increasing attention from both industry and academia in recent years [2].
Path planning provides the optimal route based on factors such as distance and time. Depending on the available information, path planning can be divided into local path planning and global path planning [3]. In local path planning, changes in the surrounding environment (mainly reefs and dynamic obstacles) are unknown, and the path must be corrected in real time based on sensor data. Global path planning plans a route from the starting point to the endpoint based on known environmental information at a large geographic scale (1 km to 100 km, or more).
This paper presents an in-depth study of global path planning. Traditional algorithms currently used for global path planning include the A*, Dijkstra, and artificial potential field methods. Most of these algorithms use spatial discretization to represent the environment as equal-sized raster maps and discretize the action space into four-connected or eight-connected moves [4]. These algorithms have the following limitations. First, they rely on prior knowledge of the environment; if the environment changes, for example when new obstacles appear, updated and accurate environmental information must be obtained before the route can be re-planned. In addition, they do not take the ship's navigational capability into account, searching only for a driving path without giving the bow heading and velocity vector at each node of the path. Moreover, they perform a topological search over the map, so the required computation grows and the planning efficiency drops as the planning scene becomes more complex.
To address the aforementioned challenges of traditional path planning, Sun proposed a dynamic trajectory generation model for ship rust-removal robots based on a bionic neural network and used a kinematic state matching search to achieve dynamic path planning while considering the robot's kinematic model [5]. However, actual testing showed that the method lacked robustness and could not guarantee successful planning. To improve the robustness and stability of path planning, Sheng et al. from Dalian Maritime University proposed an intelligent collision avoidance navigation method for unmanned ships that incorporates navigation experience and collision avoidance rules using Deep Competitive Q-learning (DCQN) combined with the A* algorithm [6]. Similarly, Wang et al., also from Dalian Maritime University, proposed the DRLOAD algorithm based on the Deep Q-Network (DQN), which achieved multi-dimensional navigation by designing a reward function specifically for safe obstacle avoidance [7]. Both methods employ value-based reinforcement learning to solve path planning in a discrete action space; nevertheless, they suffer from issues in the design of the reward function and an incomplete ship propulsion model during planning. Another relevant contribution is the path planning algorithm proposed by Kuczkowski, based on the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates the subgoals of path planning, collision avoidance, compliance with the Convention on the International Regulations for Preventing Collisions at Sea (COLREGs), and task completion by designing four reward functions [8]. Kuczkowski achieved path planning and dynamic obstacle avoidance satisfying COLREGs in complex scenarios. Although the reward functions are adequately designed and each path node's coordinates and velocity vector are provided during planning, the environment modeling does not consider the ship's kinematic model.
A summary of the above related work is presented in
Table 1.
Based on the above research, this paper proposes a distributed-sampler PPO algorithm based on the Beta policy to address the pain points of traditional path planning, namely that its computation grows with scenario complexity and that it cannot directly process high-dimensional data. Furthermore, it addresses the problems reinforcement learning algorithms face in path planning, such as unreasonable reward function design, planning only in discrete action spaces, and reliance on deterministic policies. The main technical contributions of this paper are as follows:
Designing and constructing a simulation environment that conforms to the ship propulsion characteristics, using the Box2D physics engine as the framework;
Constructing a reward function that integrates the mainline reward and auxiliary reward, where the mainline reward is determined based on the distance between the agent and the arrival point, while the auxiliary reward is potential-based, superimposed from boundary control, obstacles, and arrival point;
Proposing a distributed sampling policy on the basis of the stochastic policy algorithm PPO, which significantly enhances sampling performance and enables robust ship path planning in continuous domain spaces;
Introducing the Beta policy as a replacement for the Gaussian policy in the action selection policy. This substitution effectively resolves the issue of action boundary constraints.
The advantages of the proposed algorithm are as follows. By employing a stochastic policy algorithm, it achieves better planning results by simulating a ship's propulsion characteristics in continuous state and action spaces. The integration of the mainline and auxiliary rewards results in a smooth trajectory, with each planned path node providing position and velocity vectors. The proposed algorithm attains shorter planned paths than other reinforcement learning algorithms and, compared to traditional path planning methods with similar path lengths, generates smoother trajectories.
2. Background
In solving global path planning problems by reinforcement learning, the learning task chosen varies for the discrete and continuous action spaces encountered in global path planning. Reinforcement learning is usually described by a Markov decision process (MDP), which consists of a state-space $\mathcal{S}$ and an action space $\mathcal{A}$. In an MDP, the individual that performs the learning and decision-making is called the agent, and everything the agent interacts with is called the environment. At discrete time step $t$, the agent observes the state $s_t \in \mathcal{S}$ fed back by the environment and selects an action $a_t \in \mathcal{A}$ according to the policy $\pi(a_t \mid s_t)$, where the policy can be deterministic or stochastic. The environment provides a reward $r_{t+1}$ in response to the action and moves to the next state $s_{t+1}$; the state-action pairs generated by this agent-environment interaction constitute a trajectory:
$$\tau = (s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots).$$
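As a minimal illustration of this interaction loop, the sketch below collects one trajectory from a toy one-dimensional environment; `ToyEnv` and the random policy are illustrative stand-ins, not the ship environment defined later in this paper.

```python
import numpy as np

class ToyEnv:
    """A toy 1-D environment standing in for the ship environment: the state is a
    position, the action is a step size, and the episode ends near the goal."""
    def reset(self):
        self.pos = 0.0
        return np.array([self.pos])

    def step(self, action):
        self.pos += float(action)
        reward = -abs(10.0 - self.pos)          # closer to the goal -> higher reward
        done = abs(10.0 - self.pos) < 0.5
        return np.array([self.pos]), reward, done

def collect_trajectory(env, policy, max_steps=200):
    """Roll out one episode and return the list of (s_t, a_t, r_t) tuples."""
    trajectory, state = [], env.reset()
    for _ in range(max_steps):
        action = policy(state)                  # a_t ~ pi(. | s_t)
        next_state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
        state = next_state
        if done:
            break
    return trajectory

tau = collect_trajectory(ToyEnv(), policy=lambda s: np.random.uniform(0.0, 1.0))
print(len(tau), "transitions collected")
```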
The learning tasks of reinforcement learning (RL) can be divided into value-based and policy-based categories.
2.1. Value-Based Methods for Discrete Domain
Traditional RL uses the Q-learning method and stores state transitions in a Q-table when applied to path planning tasks. To address the drop in Q-learning update efficiency as the state-space complexity increases, DQN, proposed in 2013, adds a deep neural network and experience replay on top of Q-learning [9]. Although it improves policy performance dramatically and can handle high-dimensional state-space tasks, DQN suffers from an unavoidable overestimation problem when estimating the action value, because it performs the estimation based on the Bellman Equation (1):
$$y_t = r_{t+1} + \gamma \max_{a} Q\left(s_{t+1}, a; \theta^{-}\right). \tag{1}$$
In this context, the action-value target is the immediate reward obtained by taking action $a_t$ in the current state $s_t$, plus the discounted maximum action value of the next state $s_{t+1}$, corresponding to the action chosen by the target policy with parameters $\theta^{-}$. Here, $\theta^{-}$ represents the target network parameters, and $\gamma \in [0, 1]$ is the discount factor. The target network always selects the action that maximizes the action value $Q(s_{t+1}, a; \theta^{-})$ according to its own policy, which leads to bootstrapping. Van Hasselt et al. [10] replaced the bootstrap action generated by the target network with the online network policy to update the action-value estimation:
$$y_t = r_{t+1} + \gamma\, Q\!\left(s_{t+1}, \arg\max_{a} Q\left(s_{t+1}, a; \theta\right); \theta^{-}\right).$$
The online network is used for action selection, and the selected action is used in the target calculation; because the target network is updated with a delay, the action that maximizes the online network's Q value is not necessarily the action that maximizes the target network's Q value, and the overestimation problem is therefore reduced.
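The following PyTorch sketch shows the Double DQN target computation described above (the online network selects the action, the delayed target network evaluates it); the network and tensor names are illustrative assumptions rather than code from the cited works.

```python
import torch

def double_dqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Compute the Double DQN target: the online network selects the next action,
    the target network evaluates it, which reduces overestimation."""
    with torch.no_grad():
        # Action selection by the online network.
        next_actions = online_net(next_state).argmax(dim=1, keepdim=True)
        # Action evaluation by the (delayed) target network.
        next_q = target_net(next_state).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```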
2.2. Policy-Based Method for Continuous Domain
Value-based learning methods cannot handle stochastic policy problems well because they are deterministic policies in essence. Policy gradient-based [11] reinforcement learning algorithms instead optimize a policy function by fitting a continuous function that represents the current task performance metric and applying gradient ascent. Here, the value function can be used to assist policy training without being required for action selection. The policy gradient-based approach has more robust convergence than the action-value-based approach; the policy gradient $\nabla_{\theta} J(\theta)$ is expressed as follows:
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\pi},\, a \sim \pi_{\theta}}\!\left[\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi}(s, a)\right],$$
where $\rho^{\pi}$ is the state distribution, $\pi_{\theta}(a \mid s)$ denotes the probability of sampling action $a$ in the current state $s$ under the policy with parameters $\theta$, and $\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi}(s, a)$ is the policy gradient estimator with $Q^{\pi}(s, a)$ as the target. In practice, the policy gradient is estimated from a large number of samples, so the sample mean of the estimator converges to its expected value when the number of samples is sufficient.
However, the policy gradient method can fall into local optima and evaluates policies inefficiently. To address these problems, Sutton et al. proposed the Actor–Critic framework. The Actor is the policy function, i.e., the part of the agent that makes action decisions; the Critic is the value function, which can be regarded as an evaluator of the current state. The Critic and Actor parameters are updated with data obtained from the real-time interaction between the agent and the environment, which drives the algorithm to convergence.
Schulman et al. proposed the Proximal Policy Optimization (PPO) algorithm [12], which builds on the surrogate objective of Trust Region Policy Optimization (TRPO) [13]; PPO clips the surrogate objective to a smaller range and replaces the advantage function with a Generalized Advantage Estimator (GAE) [14], which yields a stable policy in high-dimensional continuous-action training. To ensure continued exploration, even in states where the policy is relatively mature, PPO adds an entropy reward when calculating the surrogate objective. Additionally, the inherent advantage of its stochastic policy in path exploration prevents the algorithm from becoming trapped in local optima during the search [15].
This paper uses PPO as the baseline algorithm for implementing and optimizing global path planning. We first designed the path planning environment, which required specifying the state-space and action space for deep reinforcement learning. For the algorithm to converge properly, the reward function must be closely matched to the state-space of the global path planning task; the rationality of the reward function design directly affects the algorithm's convergence speed and policy performance. PPO limits the policy update magnitude by clipping the surrogate objective, so that the old and new policies do not change significantly in a single update, thus yielding more stable policy updates.
3. Interaction Environment Design
Deep reinforcement learning updates the state through constant interaction between the agent and the environment, and optimizes the policy to maximize the reward. This method requires environment design, including state-space, action space, and reward function. The environment design determines the scene structure and interaction logic and adds constraints to each scene element.
The simulation environment is constructed using the Box2D physics engine, and the Pyglet module is used to render the plane frame. The size of the plane frame structure is set to $n \times n$, with a render scale of $s$. Obstacles are generated at the center of the frame according to a preset generation ratio. The obstacle area is divided into a $k \times k$ grid space with a grid radius of $a$. The obstacle radius and the number of generated obstacles are drawn from preset ranges. To prevent overlapping, an offset following a uniform distribution is applied to the obstacle generation coordinates, ensuring that no obstacles overlap with each other. Unlike other path planning simulation scenarios for RL, this simulation environment enlarges the ship and endpoint generation ranges to improve the algorithm's robustness. As shown in
Figure 1, the orange-covered area is the hull generation area, and the blue area is the arrival point generation area.
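A minimal sketch of the grid-based obstacle placement with a uniform coordinate offset described above; the grid size, cell size, radius range, obstacle count, and offset bound are placeholder assumptions, since the exact values belong to Table 2.

```python
import numpy as np

def generate_obstacles(k=5, cell=2.0, origin=(3.0, 3.0),
                       r_range=(0.3, 0.8), n_range=(4, 10), max_offset=0.2, seed=0):
    """Place obstacles on a k x k grid of cells and jitter each centre with a
    uniform offset chosen small enough (relative to the cell size and maximum
    radius) that obstacles in neighbouring cells do not overlap."""
    rng = np.random.default_rng(seed)
    cells = [(i, j) for i in range(k) for j in range(k)]
    rng.shuffle(cells)                                     # pick random, distinct cells
    n_obstacles = rng.integers(*n_range)
    obstacles = []
    for i, j in cells[:n_obstacles]:
        centre = np.array(origin) + cell * np.array([i, j]) + cell / 2.0
        offset = rng.uniform(-max_offset, max_offset, size=2)   # uniform jitter
        radius = rng.uniform(*r_range)
        obstacles.append((centre + offset, radius))
    return obstacles

for centre, radius in generate_obstacles():
    print(np.round(centre, 2), round(radius, 2))
```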
3.1. State-Space Design
The state information used for the global path planning task can be divided into the sensor data sequence and the image fed back from the environment. DRL extracts the relationship between the features and the policy during training. Because the sensor data sequence has a low dimension, the agent can easily acquire a policy from the feature shifts; the image contains higher-dimensional information, so the agent must learn the policy from features extracted from the image. As the network processes the two kinds of state information differently, we designed two state-spaces, described below.
3.1.1. Environmental Feedback Sensor Data Sequence-Based State-Space
To make the agent perceive feature shifts during motion, we designed a LIDAR-based ranging approach to measure distances for obstacle avoidance, as shown in Figure 2. We set the ship's radius to $b$; the effective reflection range of the LIDAR is within $h$ times the vessel's radius, i.e., $h \cdot b$. The horizontal field angle of the detection rays is set to $360^{\circ}$, and the $l$ ray beams are arranged uniformly within this field. During the ship's motion, if another object truncates ray $i$, that object is considered to be closing; in this case, the ray is given an identifier $c_i$ associated with the intercepted object ($c_i = 1$ if the ray is truncated and $c_i = 0$ otherwise), and the distance to the truncation point is recorded as $d_i$. The $l$ beams thus return, as detection feedback, the identifier and measured distance of each beam based on the environmental information sensed at the vessel's current position. To reduce the computational effort of the algorithm, we found that $l = 24$ beams gives the best efficiency.
The data sequence state-space consists of five different sensor states:
Position sensor: obtains the current vessel coordinates $(x, y)$;
Speed sensor: obtains the current vessel speed vector $(v_x, v_y)$;
Heading sensor: obtains the current ship's bow direction (true heading, TC);
Radar: obtains the perception information around the vessel (the LIDAR feedback described above);
Distance sensor: obtains the straight-line distance $D$ to the endpoint.
Having completed the state-space design, we process the data and feed it to the network for inference. During processing, the data are normalized, stacked, and formed into consecutive frames by superposition. The normalization, performed according to the maximum value of each sensor, reduces the scale difference between samples and makes the gradient update more stable, thus helping the policy converge faster to the optimum.
Before performing the consecutive-frame overlay, we reshape each frame, concatenate consecutive frames along dim = 1, and flatten the concatenated data along the last dimension. By overlaying multiple data frames, the agent can infer the ship's current motion direction and speed and thus make reasonable decisions, which is not possible from a single frame.
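A minimal sketch of the per-sensor maximum normalization and consecutive-frame stacking described above; the sensor layout (24 beams plus navigation values) and the number of stacked frames are assumptions.

```python
from collections import deque
import numpy as np

class SensorStacker:
    """Normalize each sensor reading by its maximum value and stack the last
    `n_frames` observations into one flat feature vector."""

    def __init__(self, sensor_max, n_frames=4):
        self.sensor_max = np.asarray(sensor_max, dtype=np.float32)  # per-channel maxima
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_obs):
        obs = self._normalize(first_obs)
        for _ in range(self.frames.maxlen):
            self.frames.append(obs)                # pad with the first frame
        return self.state()

    def step(self, obs):
        self.frames.append(self._normalize(obs))
        return self.state()

    def _normalize(self, obs):
        return np.asarray(obs, dtype=np.float32) / self.sensor_max

    def state(self):
        # Concatenate the stacked frames and flatten to a single vector.
        return np.concatenate(list(self.frames), axis=-1)

# Example: 24 lidar beams + position (2) + speed (2) + heading (1) + distance (1) = 30 values.
stacker = SensorStacker(sensor_max=np.full(30, 16.0), n_frames=4)
s0 = stacker.reset(np.random.uniform(0, 16, size=30))
print(s0.shape)   # (120,)
```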
3.1.2. Image-Based State-Space
The image-based state-space contains the current agent scenario image and navigation data. The agent uses the image to evaluate the relationship between the ship and the arrival point, boundary, and obstacles, realizing perception of the surrounding area. During data processing, we divide the image and navigation information into two parts: the image part (grayscale conversion and normalization) and the navigation part (normalization). After that, we stack the features of each part to the same dimension, and the network fuses the features of the two parts.
In image processing, the RGB frame returned from Pyglet rendering is converted from a three-channel RGB image into a single-channel grayscale map, the image size is appropriately reduced for policy making, and the grayscale image is normalized.
The image-based state-space consists of the following vectors:
Current scenario seascape: grayscale image data with the normalization operation applied;
Position vector: the spatial coordinates of the current ship;
Velocity vector: the current ship's velocity vector.
We apply the same frame-overlay operation as in the data sequence state-space and concatenate the two groups of processed image and navigation feature data along dim = 1. Finally, we obtain the overlaid feature data consisting of a stacked image tensor and a stacked navigation tensor.
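A minimal sketch of the image-branch preprocessing (grayscale conversion, down-scaling, normalization, and frame stacking); the source and target resolutions and the frame count are assumptions, since the exact shapes are not recoverable from the extracted text.

```python
import numpy as np
from PIL import Image

def preprocess_frame(rgb_frame, size=(96, 96)):
    """Convert an RGB rendering to a normalized single-channel grayscale map."""
    gray = Image.fromarray(rgb_frame).convert("L").resize(size)
    return np.asarray(gray, dtype=np.float32) / 255.0

def stack_frames(frames):
    """Stack consecutive grayscale frames along the channel dimension."""
    return np.stack(frames, axis=0)          # (n_frames, H, W)

frames = [preprocess_frame(np.random.randint(0, 255, (480, 480, 3), dtype=np.uint8))
          for _ in range(4)]
print(stack_frames(frames).shape)            # (4, 96, 96)
```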
3.2. Action Space Design
The Box2D physics engine is used in the environment design for ship action control. The simulated trajectory of the ship's rigid body is computed with the semi-implicit Euler method, so that the planning curve generated during path planning matches the ship's physical characteristics under simulated propulsion control. To ensure accurate planning paths, the action space is designed with two dimensions, i.e., angle and propulsion power:
Angle dimension: The angle is designed as a continuous space ranging from 0 to 360 degrees and is normalized to a unit interval. Unlike traditional discrete angle designs (such as four-connected or eight-connected actions), continuous angle control allows the policy to perform more accurate navigation control and, thus, more precise planning;
Propulsion power dimension: Propulsion power is the power generated by the ship's thrusters. In this model, we use Box2D to implement the propulsion design, which accounts for thrust derating and fluid resistance. The maximum main propulsion power is set to 70. Consistent with the angle treatment, the propulsion power is also normalized to a unit interval.
In summary, the action space is designed with continuous control in angle and propulsion power dimensions. The agent can flexibly adjust the thrust angle and power to competently adapt to the physical characteristics of the ship control in the path planning task and, thus, perform accurate planning in different scenarios.
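A minimal sketch of decoding the normalized two-dimensional action into a heading angle and a propulsion force; the [0, 1] normalization and the way the force would be applied to the Box2D body are assumptions consistent with the description above.

```python
import math

MAX_POWER = 70.0   # maximum main propulsion power (Section 3.2)

def decode_action(action):
    """Map a normalized action a = (angle_n, power_n) in [0, 1]^2 (an assumed
    normalization) to a heading angle in degrees and a propulsion force vector."""
    angle_n, power_n = action
    angle_deg = 360.0 * angle_n                 # continuous angle, 0-360 degrees
    power = MAX_POWER * power_n                 # thrust magnitude in [0, MAX_POWER]
    theta = math.radians(angle_deg)
    force = (power * math.cos(theta), power * math.sin(theta))
    # In the simulation, `force` would be applied to the Box2D body,
    # e.g. body.ApplyForceToCenter(force, True).
    return angle_deg, force

print(decode_action((0.25, 0.5)))               # 90 degrees, half thrust
```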
3.3. Reward Function Design
The reward function should correlate highly with the state-space so that the agent learns the expected policy [16]. We divide the reward function into a mainline reward and an auxiliary reward: the mainline reward mainly considers the distance between the ship and the destination, while the auxiliary reward focuses on the effect of the arrival point, boundary control, and obstacles on the traveling state. We record the initial distance between the vessel and the target position at the initialization stage as $d_{\min}$. While the agent interacts with the environment, we record the distance between the ship and the endpoint as $d_t$. As the hull position changes, if the current distance $d_t$ is less than $d_{\min}$, a positive reward is given and $d_{\min}$ is overwritten by $d_t$; otherwise, a negative reward is returned. The mainline reward is as follows:
$$r_{\text{main}} =
\begin{cases}
r^{+} > 0, & \text{if } d_t < d_{\min} \quad (\text{then } d_{\min} \leftarrow d_t),\\
r^{-} < 0, & \text{otherwise.}
\end{cases}$$
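A minimal sketch of the mainline reward logic described above, tracking the smallest distance reached so far; the scaling constant is an assumption.

```python
class MainlineReward:
    """Reward progress toward the endpoint: positive when the current distance
    improves on the best (smallest) distance seen so far, negative otherwise."""

    def __init__(self, initial_distance, scale=1.0):
        self.best_distance = initial_distance   # d_min, initialized to d_init
        self.scale = scale                      # assumed scaling constant

    def __call__(self, current_distance):
        if current_distance < self.best_distance:
            reward = self.scale * (self.best_distance - current_distance)
            self.best_distance = current_distance   # overwrite d_min with d_t
        else:
            reward = -self.scale * (current_distance - self.best_distance)
        return reward

r = MainlineReward(initial_distance=10.0)
print(r(9.5), r(9.7), r(9.0))   # ~0.5 (closer), ~-0.2 (farther), ~0.5 (closer again)
```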
To prevent the agent from abnormal state shifts during policy learning, which would prevent it from learning the expected policy, we construct the auxiliary reward with a potential-based reward shaping method [17]: a repulsive field contributed by the obstacles and the boundary control, and a gravitational field contributed by the arrival point. The agent receives a decreasing penalty as it approaches the endpoint and an increasing penalty as it approaches an obstacle or the boundary. This reward format helps the agent learn a robust correlation between state and reward, accelerating policy convergence while preserving the invariance of the optimal policy.
The reward function is constructed by remapping the rendered image onto a plane of size $m \times m$ and calculating the potential field value at each point on that plane. The specific details are as follows:
The obstacle-based repulsive field is generated by the potential decay within 2.5 times the obstacle radius. The distance from the ship to the obstacle's center is denoted $d_{\text{obs}}$. We design the reward function to satisfy the potential-field requirements, with a corresponding obstacle-based offset (bias) listed in Table 2. After the matrix is generated, the potential matrix of each obstacle is normalized and summed, as shown in Figure 3a.
The arrival point-based gravitational field is generated by calculating the distance between each position of the potential matrix and the arrival point, and computing the gravitational potential decay based on that span. The radius of the arrival point is denoted $r_{\text{goal}}$, and the distance between the current position and the arrival point is $d_{\text{goal}}$; the corresponding arrival point-based offset (bias) is listed in Table 2. Figure 3b shows the arrival point-based potential plane.
The repulsive field based on boundary control makes the agent account for the unknown conditions near the boundary during path planning: when the agent's output action drives the ship toward the boundary, the penalty gradually increases. This ensures that the route selected by the agent favors the known and clear center region, improving the planned route's safety. We denote the penalty boundary as $d_{\text{bound}}$ and the distance between the current position and the nearest edge as $d_{\text{edge}}$; the corresponding boundary-based offset (bias) is listed in Table 2. Figure 3c shows the potential plane based on boundary control.
From the above analysis, after obtaining the potential matrices with obstacles, arrival points, and boundary control, we can superimpose them and normalize them to obtain the complete potential-based reward function, as shown in
Figure 4.
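A minimal sketch of assembling the potential-based reward plane by normalizing and superimposing the obstacle, boundary, and arrival-point fields; the decay shapes and constants are illustrative assumptions rather than the exact functions used in the paper.

```python
import numpy as np

def potential_plane(m, obstacles, goal, obstacle_scale=2.5, boundary_margin=10):
    """Build an m x m potential-based reward plane by superimposing normalized
    obstacle, boundary, and arrival-point potential matrices."""
    ys, xs = np.mgrid[0:m, 0:m]

    def normalized(field):
        return field / (np.abs(field).max() + 1e-8)

    # Repulsive field: penalty decays linearly out to `obstacle_scale` x radius.
    obstacle_field = np.zeros((m, m))
    for (cx, cy), radius in obstacles:
        dist = np.hypot(xs - cx, ys - cy)
        reach = obstacle_scale * radius
        obstacle_field -= np.clip(1.0 - dist / reach, 0.0, 1.0)

    # Repulsive field near the boundary: penalty grows toward the edges.
    edge_dist = np.minimum.reduce([xs, ys, m - 1 - xs, m - 1 - ys])
    boundary_field = -np.clip(1.0 - edge_dist / boundary_margin, 0.0, 1.0)

    # Attractive field: penalty decreases as the ship approaches the arrival point.
    goal_field = -np.hypot(xs - goal[0], ys - goal[1])

    return normalized(obstacle_field) + normalized(boundary_field) + normalized(goal_field)

plane = potential_plane(160, obstacles=[((40, 60), 8), ((110, 100), 12)], goal=(150, 20))
print(plane.shape, plane.min().round(2), plane.max().round(2))
```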
4. Beta Policy-Based Distributed Sampling PPO
When applying the policy gradient method to continuous-domain path planning, the DDPG [18] algorithm is usually used to solve complex tasks with high-dimensional data in the continuous domain. However, since DDPG follows a deterministic policy, exploration noise must be added in the early stage of training to help the agent explore the environment. If the agent cannot obtain enough mainline reward during exploration, then, under the combined influence of sparse rewards and the deterministic policy, DDPG quickly converges to a poor policy [19], driving the actor into a deadlock state from which it cannot escape even if more mainline reward becomes available later.
DDPG thus suffers from long training times, low exploration efficiency, and unstable learning in path planning tasks, which leads to poor reproducibility and makes it unsuitable for path planning in complex environments.
4.1. PPO Algorithm Applied to Global Path Planning Task and Optimization
As an on-policy algorithm, PPO uses a surrogate objective to optimize the policy, making it applicable to both discrete and continuous action spaces. PPO performs action sampling with the latest version of its stochastic policy, which helps the algorithm explore the environment in the early training stage. As training progresses, the policy gradually shifts from high entropy to low entropy; the stochastic policy is therefore also beneficial in complex stochastic environments.
4.1.1. PPO Algorithm Operation
The PPO algorithm is a variant of the policy gradient method, which typically computes a policy gradient estimator and optimizes it with stochastic gradient ascent. PPO clips the surrogate objective to limit the deviation of the new policy from the old one:
$$L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta), 1-\epsilon, 1+\epsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}.$$
Above, $\pi_{\theta}$ and $\pi_{\theta_{\text{old}}}$ denote the new and old policies, and the hyperparameter $\epsilon$ limits the difference between them. Suppose that, in state $s$, $\hat{A}_t > 0$; this indicates that the action value of the selected action is greater than the average in the current state, and the update in the gradient-ascent direction increases the probability of selecting such actions. Conversely, $\hat{A}_t < 0$ indicates that the action value of the selected action is lower than the average in the current state, and the update moves in the gradient-descent direction to decrease the probability of selecting such actions. As the Actor and Critic are trained, the Critic's evaluation becomes more accurate, and the entropy of the policy network's distribution decreases as training approaches convergence.
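A minimal PyTorch sketch of the clipped surrogate objective above, with an optional entropy bonus; the entropy coefficient and tensor shapes are assumptions.

```python
import torch

def ppo_clip_loss(new_log_prob, old_log_prob, advantage,
                  clip_eps=0.2, entropy=None, ent_coef=0.01):
    """PPO clipped surrogate objective (to be maximized; returned as a loss to minimize)."""
    ratio = torch.exp(new_log_prob - old_log_prob)          # pi_theta / pi_theta_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    surrogate = torch.min(unclipped, clipped).mean()
    loss = -surrogate
    if entropy is not None:                                  # optional entropy reward
        loss = loss - ent_coef * entropy.mean()
    return loss
```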
4.1.2. Actor–Critic Network Structure
The Actor network uses the same feature extraction structure as the Critic network. However, the feature fusion differs due to the different output dimensions; the Actor and Critic networks vary slightly depending on the state-space.
The network for processing the data sequence state-space is a four-layer fully connected structure with hidden layers [400, 300, 300]; the network for processing the image state-space adds feature extraction layers to extract information from the image state. The structure of the Actor for the image state-space is shown in Figure 5.
The image feature extraction structure consists of four convolutional layers. The first three layers use the same convolutional architecture as DQN for processing Atari images [9]. The fourth layer is a convolutional layer with 256 output channels, a 3×3 kernel, a stride of 1, and a padding of 1. The extracted features are concatenated with the current ship's navigation information and then fed into a fully connected network with hidden layers [1124, 500, 300]. Finally, the mean and variance of the Gaussian distribution are output.
Since the current model is used to solve the global path planning task, the model can be made to learn the spatio-temporal relationship of object motion on consecutive time frames through data frame superposition. To match the continuous data frame superposition structure used in the two state-space designs, we consider multi-frame superposition in the design of both model network structures.
4.2. Problems of PPO Algorithm in Global Path Planning
When training PPO with the image state-space, the varying start areas may prevent the environment from being fully explored: randomly generated positions can cause the agent to collide with an object and enter the termination state prematurely. This leads to a severe sample shortage, making training too difficult at some starting positions owing to insufficient positive feedback, so the agent learns only the policies for which data are easily obtained during training.
Figure 6a shows the test results of six starting positions selected at equal spacing. As seen in Figure 6a, owing to uneven sample acquisition, the agent learned an adequate policy only at the starting position corresponding to subgraph a, while the policies adopted at the related starting points in subgraphs b, c, d, e, and f are merely similar to the one acquired in subgraph a, indicating that uneven sample acquisition during the agent's interaction with the environment impairs policy learning.
4.3. Distributed Sample Extraction PPO
To address the problems described in Section 4.2, a distributed sampling strategy based on region partitioning was designed, which separates execution from training: each executor explores from a different region and interacts with the environment [20].
The algorithm consists of N sub-processes that interact with the environment and collect samples into their respective buffers. Each process initializes the ship's position within its own sub-region (Figure 7) and interacts with the environment. After collecting T transitions in the buffer, the worker calculates the advantage function and discounted reward from the collected data and sends them to the main process as a package. When the main process completes training, the workers pull the updated parameters back from the global network and use the new policy to continue interacting.
After the global network collects the data sent back by the workers, the N × T samples are shuffled and cut into mini-batches for training. This differs from the A3C algorithm [21], in which each sub-process computes gradients and uploads them to the global network, which accumulates the gradients and optimizes the policy; the distributed sample extraction PPO instead uses sub-processes only to collect data and sends the data to the main process for optimization. With this sampling method, data from different starting regions can be further integrated, sharing data among workers and increasing sample diversity. The experimental results in Section 5.1 show that this improvement significantly enhances the robustness of the agent's policy, indicating that the distributed sampling policy based on region division makes meaningful progress on complex exploration problems.
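A minimal sketch of the global-side sample handling just described: the buffers returned by the N workers are merged, the N × T samples are shuffled, and mini-batches are produced for the PPO update; the buffer keys and sizes are assumptions.

```python
import numpy as np

def make_minibatches(worker_buffers, batch_size, seed=0):
    """Merge rollout buffers from N workers, shuffle the N*T samples,
    and yield mini-batches for the PPO update."""
    merged = {k: np.concatenate([buf[k] for buf in worker_buffers])
              for k in worker_buffers[0]}
    n_samples = len(next(iter(merged.values())))
    order = np.random.default_rng(seed).permutation(n_samples)
    for start in range(0, n_samples, batch_size):
        idx = order[start:start + batch_size]
        yield {k: v[idx] for k, v in merged.items()}

# Example: 4 workers, 128 transitions each, 2-D actions.
buffers = [{"obs": np.random.randn(128, 30), "act": np.random.rand(128, 2),
            "adv": np.random.randn(128), "ret": np.random.randn(128)} for _ in range(4)]
for batch in make_minibatches(buffers, batch_size=64):
    pass  # feed `batch` to the PPO loss
print(sum(1 for _ in make_minibatches(buffers, batch_size=64)), "mini-batches")  # 8
```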
Figure 8 shows the distributed sampling policy based on region division. On the one hand, the number of samples at each initial position is balanced by region interaction. On the other hand, the distributed sampling policy obtains several times the data of the PPO algorithm over the same duration, thus reducing the convergence time. The robustness of the algorithm is significantly improved due to the rising number of samples.
Figure 6b shows the test results. Compared with the PPO baseline (
Figure 6a), the distributed sampling policy with area division can achieve better performance in the test of starting positions at equal intervals.
4.4. Improvement of Action Policy Based on Beta Distribution in Continuous Action Space
In stochastic-policy continuous action control problems, the action is typically sampled from a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$ whose mean $\mu$ and variance $\sigma^2$ are predicted by the neural network, and the parameters are optimized by computing the gradient of the policy with respect to $\mu$ and $\sigma$. However, the Gaussian distribution has infinite support. When the mean and variance predicted by the network are used in the Gaussian distribution, the action space imposes limits, so any action sampled from the distribution that exceeds the action boundary must be forcibly mapped back into the action space. As shown in Figure 9, the Actor computes the rudder angle for a given state and outputs the distribution parameters $\mu$ and $\sigma$, while the rudder angle is limited to a bounded interval $[-a_{\max}, a_{\max}]$; yet the probability density $\pi_{\theta}(a \mid s)$ of an action $a$ at any position of the Gaussian distribution is greater than 0. To analyze this process, assume that the rudder-angle action space is $[-a_{\max}, a_{\max}]$, that no clipping is applied when sampling from the distribution, and that clipping operates only in the environment. The gradient calculation uses the unclipped action when computing the surrogate objective in PPO, so the policy gradient estimator becomes $\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, \hat{A}(s, \bar{a})$, where $\bar{a} = \operatorname{clip}(a, -a_{\max}, a_{\max})$ denotes the clipped action and $\hat{A}(s, \bar{a})$ represents the advantage function generated by the clipped action. Calculating the gap between this gradient and the true gradient, the expected policy gradient is as follows:
$$\mathbb{E}_{a \sim \pi_{\theta}}\!\left[\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, \hat{A}(s, \bar{a})\right], \qquad \bar{a} = \operatorname{clip}(a, -a_{\max}, a_{\max}).$$
It can be seen from the formula that the calculation is unbiased only within the action space $[-a_{\max}, a_{\max}]$, where $\bar{a} = a$. This boundary effect of the Gaussian distribution produces a bias in action sampling, which propagates into the policy gradient calculation and slows training.
Chou et al. used the Beta distribution to replace the Gaussian distribution in continuous control tasks and trained faster than with infinite-support distributions in policy gradient algorithms (TRPO, ACER) [22]:
$$f(x; \alpha, \beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}, \qquad x \in [0, 1],$$
where $B(\alpha, \beta)$ represents the Beta function, which ensures that $f(x; \alpha, \beta)$ integrates to 1 over the interval $[0, 1]$. The Beta distribution takes values in the range $[0, 1]$, and the shape of the distribution is determined by its shape parameters $\alpha$ and $\beta$; as a finite-support distribution, it does not produce Gaussian-like boundary effects. Therefore, it is unbiased in the gradient computation and converges faster than infinite-support distributions.
In this paper, to ensure that the action outputs conform to the different limits of the action space, the range of the Beta distribution must be remapped. The Beta policy is therefore defined by sampling $x \sim \mathrm{Beta}(\alpha, \beta)$ and outputting the action $a = c \cdot x$, where the value of the parameter $c$ is determined by the action range in each dimension of the action space, thereby avoiding the issue of sampling actions beyond the action boundaries.
The Beta distribution is generated by using the policy network to estimate the parameters $\alpha$ and $\beta$. Ensuring $\alpha, \beta > 1$ makes the probability density a concave, single-peaked curve on the interval $[0, 1]$. In the experiments, when the sampling distribution was changed to Beta with the same parameter settings, training was faster than with the Gaussian, and the accumulated reward was higher than that of the Gaussian policy for the same training time.
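A PyTorch sketch of the Beta policy head: the network outputs are shifted with a softplus so that α, β > 1, an action is sampled from Beta(α, β) on [0, 1], and the sample is rescaled by c to match each action dimension's range (360 for the angle, 70 for the propulsion power, following Section 3.2); the softplus parameterization and the [0, 1] normalization are assumptions.

```python
import torch
from torch.distributions import Beta

def beta_action(alpha_logits, beta_logits, c=torch.tensor([360.0, 70.0])):
    """Sample an action from a Beta policy and rescale it to the action range.
    softplus(.) + 1 guarantees alpha, beta > 1, i.e. a unimodal density on [0, 1]."""
    alpha = torch.nn.functional.softplus(alpha_logits) + 1.0
    beta = torch.nn.functional.softplus(beta_logits) + 1.0
    dist = Beta(alpha, beta)
    x = dist.sample()                 # x in (0, 1): no boundary clipping is needed
    log_prob = dist.log_prob(x).sum(dim=-1)
    action = c * x                    # rescale each dimension to its own range
    return action, log_prob

a, logp = beta_action(torch.zeros(1, 2), torch.zeros(1, 2))
print(a, logp)
```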
Algorithm 1 Beta policy-based distributed sampling PPO (Worker)
1: Set the training episode E and the max step T
2: Initialize the actor network π_θ and the critic network V_φ with random parameters θ, φ
3: Receive the event flag from the main process
4: for e = 1, …, E do
5:   Randomly generate a start point within the worker's assigned sub-region
6:   for t = 1, …, T do
7:     Run a_t according to the Beta policy and collect (s_t, a_t, r_t, s_{t+1})
8:     Calculate the decayed reward
9:     Calculate the TD error δ_t
10:    Calculate the GAE advantage Â_t
11:  end for
12:  Send the T-length trajectory buffer to the global process through the pipe
13:  if not event.is_set() then
14:    event.wait()
15:  end if
16:  Pull the global parameters to the local networks
17: end for
Algorithm 2 Beta policy-based distributed sampling PPO (Global)
1: Set the training episode E, the number of workers N, and the training iterations K
2: Initialize the actor network π_θ and the critic network V_φ with random parameters θ, φ
3: Initialize the event flag
4: for e = 1, …, E do
5:   Initialize the global buffer G
6:   event.clear()
7:   for n = 1, …, N do
8:     Receive the worker buffer g from the pipe and collect it into the global buffer G
9:   end for
10:  Shuffle the samples and divide the N worker buffers into K mini-batches
11:  for k = 1, …, K do
12:    Calculate the clipped surrogate objective using the k-th mini-batch:
13:    L^CLIP(θ) = Ê_t[min(r_t(θ)Â_t, clip(r_t(θ), 1 − ε, 1 + ε)Â_t)]
14:    Update the actor parameters θ by computing the gradient w.r.t. L^CLIP(θ)
15:    L_V(φ) = Ê_t[(V_φ(s_t) − R_t)²]
16:    Update the critic parameters φ by computing the gradient w.r.t. L_V(φ)
17:  end for
18:  event.set()
19: end for
5. Experiments
The environment building parameters are shown in Table 2, the training hyperparameters are set as shown in Table 3, and the training setup is shown in Table 4. We set the training and testing frame size according to the plane frame structure size $n$, the rendering mapping size to $m \times m$, and the rendering screen size according to the render scale $s$. Testing found that remapping the potential field onto a larger plane improved the algorithm's performance as the mapping size increased; however, when $m$ was set above 160, the improvement weakened noticeably, so we set the potential-field remapping parameter to $m = 160$.
5.1. PPO Baseline Compared with Distributed Sample Extraction PPO
To compare the performance of distributed sample extraction PPO with PPO baseline in the path planning task, we compared the accumulated rewards of the training phase, as shown in
Figure 10.
Analyzing the training results, the accumulated reward of the PPO baseline converged to a value of −210.5 at 230 episodes, but, as training proceeded, the reward showed a dropping trend, indicating policy degradation that could not be remedied even with additional training. The distributed sample extraction PPO algorithm uses multiple workers to collect samples from different regions. On the one hand, this sampling method increases the number of samples; on the other hand, it ensures that samples are collected from each region in a balanced manner, hence increasing training stability. Under the improved sampling policy, the accumulated reward of the several workers was significantly higher than that obtained by random position generation in the initial region of the PPO baseline.
5.2. Comparison of Reinforcement Learning and Traditional Path Planning Algorithms for Planning Paths
To compare the difference between RL and traditional path planning methods in the planned paths, we selected two scenarios. We set the same initial conditions in each scenario and compared the A* algorithm with the RL method. The scenario settings and the planned paths of the two methods are shown in Figure 11.
Figure 11a,b shows the paths searched by A* and RL in the two scenarios; the top figure shows the A*-searched trajectory, and the bottom shows the RL-searched trajectory. As seen in Figure 11a,b, the trajectory searched by RL is smoother than that of A*, and the planned paths do not suffer from passing too close to obstacles. Because of the large number of reefs near small islands, vessels usually decelerate and detour when passing through reef areas to ensure navigation safety. The A* paths do not take navigational safety into account, whereas RL plans routes that are both economical and safe.
5.3. Beta Policy and Gaussian Policy
To compare the performance of the Beta and Gaussian policies in the path planning task, we trained both policies in the two state-spaces of different dimensions and compared the accumulated reward of each worker, as shown in Figure 12. Meanwhile, path planning tests were conducted for both state-spaces, comparing the planned path length distribution and the planning success rate of the different policies, as shown in Figure 13.
The Beta and Gaussian policies were trained for five rounds.
Figure 12 shows the reward curves with the corresponding means and error intervals. In the worker0 and worker3 cases, the Beta policy achieved a higher reward than the Gaussian policy during training for both the data sequence and the image state-spaces. Meanwhile, the Gaussian policy oscillated in the late training period in the worker1 and worker2 cases, i.e., policy decline occurred. This happens because the certainty of the Gaussian policy gradually increases during training, and the gradual decrease in $\sigma$ leads to high variance in the policy gradient estimate, which creates heavy oscillation near 600 episodes. In worker3, the Gaussian policy based on both the data sequence and the image decayed noticeably after 100 iterations. Conversely, the Beta policy recovered markedly after 400 episodes, and the reward of the Beta policy was always greater than that of the Gaussian at the same iteration. Since the area corresponding to worker4 is closer to the endpoint, the Gaussian and Beta policies showed little difference in accumulated reward;
Comparing the reward convergence of the two state-spaces during training, the number of iterations required for the image state-space to converge was significantly lower than for the data sequence. This is mainly because the image provides more comprehensive global information, giving the agent more features to consider when acting and allowing it to plan a more complete path since the global information is observable. In contrast, the data sequence state-space cannot obtain complete environmental information and can only sense through the radar data, so it requires longer interaction to acquire informative samples; hence, the image state-space converges faster. In summary, the policy learned by each worker under the Beta policy improved significantly compared to the Gaussian, and the Beta policy was also superior in stability and robustness;
Next, we took the Beta and Gaussian policy models with the same number of training iterations and tested them on the data sequence and image state-spaces. In testing, we used the same test positions as in Section 4, and each configuration was run for 200 iterations of path planning. We drew a box plot of the planned path length for each starting position and compared the interquartile ranges and the spread of the extreme points to assess the stability of the planned path length.
Figure 13 shows the test results. The Beta policy was significantly more stable than the Gaussian in path planning across the different state-spaces. This advantage appears when testing from the same initial positions: the median planned path length in Area1 was nearly 50% lower for the Beta policy (data sequence 171, image 180) than for the Gaussian policy (data sequence 329, image 370). The interquartile ranges (Q3 − Q1) of the Beta policy in Area2, Area4, and Area5 were smaller than those of the Gaussian (more obviously in the data sequence case), and the distribution of planned path lengths was relatively concentrated. In addition, the median was significantly lower than that of the Gaussian policy, indicating that the Beta policy was more stable during testing;
According to the shortest path length analysis based on the above data, as shown in Table 5 and Figure 13, the path lengths found by the Gaussian policy in regions 1, 4, and 5 were 4.1%, 6.5%, and 43.9% longer than those of the Beta policy, while the path lengths found by the Gaussian policy in regions 2, 3, and 6 differed little from those of Beta;
As shown in Table 6, the planning success rates of the Gaussian policy in regions 3, 4, and 5 fell short of expectations, whereas the Beta policy achieved higher success rates in all areas, exceeding 75% in each region.
5.4. Comparison of Distributed Sampling PPO Based on Beta Policy and Traditional Path Planning Algorithm
In this model, reinforcement learning (RL) used continuous action space exploration for global path planning. It leveraged Box2D for continuous angle manipulation and simulated ship engine propulsion effects in the action space framework. This approach resulted in smoother trajectories compared to traditional algorithms.
To verify whether the smoother RL trajectories came at the cost of performance in other respects, we designed experiments comparing the Beta policy-based distributed sampling PPO with traditional algorithms. We reused the obstacle heatmap code to generate a rasterized map: the map was discretized to a size of 160 × 160, and the obstacles were marked so that the grid corresponded to the simulation scenario. The traditional algorithms searched for tracks on the rasterized map, and the searched nodes were mapped back to the simulation dimensions to determine distances. The comparison between reinforcement learning and traditional path planning algorithms was assessed in three aspects: planning time, the number of planning loop operations, and path length.
The time required for the algorithm to perform path planning in different areas (
Table 7) and the number of planning loop operations (
Table 8), as well as the planned path length (
Table 9), were considered separately during the test. We compared the traditional algorithms IDA*, Dijkstra, A*, and bidirectional A* with the reinforcement learning algorithm TD3 [
23], as well as the optimized Gaussian policy PPO algorithm (Normal-PPO) and the Beta policy optimized PPO algorithm (Beta-PPO).
From
Table 7,
Table 8 and
Table 9, Dijkstra took the longest time among the traditional algorithms and required the most iterations, but obtained the best solution for regions 3 and 4. IDA* took significantly longer to plan than the other algorithms in regions 1, 3, and 5; however, its searched paths were the shortest in each region compared with the other algorithms of the same type. Bidirectional A* required significantly less planning time than the other algorithms in every area and had the lowest number of calls among the traditional algorithms; however, bidirectional A* does not guarantee that the searched path is the best solution.
During the comparison of reinforcement learning algorithms, the training occurred under the same hyperparameters, including interaction environment parameters, batch size, and number of iterations. The best policy model from the training process was then selected for testing. Upon analyzing the data in
Table 7,
Table 8 and
Table 9, we observe that the TD3 algorithm performs markedly worse than the other algorithms in terms of planned path length in regions 1–6. Additionally, it requires the highest number of algorithm calls among the reinforcement learning algorithms. Based on this analysis, we conclude that the TD3 algorithm is notably less effective than the optimized PPO algorithms under randomly generated initial regions.
In the comparison between the optimized Gaussian policy PPO algorithm and the Beta policy optimized PPO algorithm, the Gaussian policy requires fewer algorithm calls than the Beta policy, resulting in better planning time in testing. However, the experimental results show that the Beta policy has a better planning success rate and path length than the Gaussian policy. This is because the Beta policy employs more algorithm calls in the planning process, enabling finer maneuvers during path search that lead to shorter paths in global path planning. When comparing the differences between RL and traditional path planning algorithms in terms of planning time, number of algorithm calls, and path length, as shown in
Table 7,
Table 8 and
Table 9, we see that RL considers the propulsion characteristics of the ship for continuous control in path planning. This approach yields smoother trajectories than traditional algorithms, and RL's planning performance and path length are comparable to those of conventional algorithms.
6. Conclusions
This study addresses the limitations of both traditional planning algorithms and reinforcement learning algorithms in path planning tasks. Traditional planning algorithms require precise environment modeling, while most reinforcement learning algorithms can only perform path planning in discrete action spaces. To this end, this paper designs a generic environment framework for path planning tasks, builds the environment structure with the Box2D physics engine, and implements continuous state-space and action space designs for reinforcement learning. On this basis, deep reinforcement learning path planning that enables dynamic decision-making without precise modeling is realized by designing two state-spaces based on sensor data and image data.
In addition, this study addresses the problems that reinforcement learning faces in path planning tasks, such as unreasonable reward function design, reliance on deterministic policies, and incomplete consideration of ship propulsion characteristics, as well as the facts that traditional planning algorithms require more computation as the scenario grows and cannot handle high-dimensional data. A reward function matched to the state-space is designed, which takes the distance between the agent and the arrival point as the main part and incorporates the potential-based superposition of the arrival point, obstacles, and boundary control as the auxiliary part. Moreover, the stochastic policy algorithm PPO is employed as the baseline for path planning, enabling the reinforcement learning algorithm to learn from the continuous angle control and engine propulsion features simulated by Box2D and thus effectively incorporating the ship's propulsion characteristics into the path planning process.
Building upon this, we propose a distributed sampling PPO algorithm based on the Beta policy. This algorithm addresses the issue of sample collection imbalance in global path planning tasks and effectively improves the performance and robustness of the algorithm. By resolving the action boundary problem associated with Gaussian policy, the proposed approach enhances the algorithm’s performance in terms of accumulated reward and planning success rates across different sub-regions.
However, this study still has certain limitations. Specifically, the absence of testing in real marine environments during the experiments may lead to potential performance degradation in practical applications. To address this limitation, we are currently engaged in the deployment of the algorithm on hardware to assess the performance of the deep reinforcement learning global path planning algorithm in the context of sim2real. Furthermore, this study did not include a comparison with genetic algorithms for path planning. As another essential algorithm in path planning, genetic algorithms deserve investigation regarding their performance within the environment framework designed in this study. Thus, we will introduce genetic algorithms into the path planning environment designed in this study to evaluate their performance.
Author Contributions
Conceptualization, Q.X.; methodology, Q.X.; software, Q.X.; validation, Q.X., L.J. and M.W.; formal analysis, Q.X.; investigation, Q.X.; resources, L.J. and X.Z.; data curation, Q.X.; writing—original draft preparation, Q.X.; writing—review and editing, Q.X. and X.Z.; visualization, Q.X.; supervision, L.J., X.Z. and M.W.; project administration, Q.X.; funding acquisition, L.J. and X.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- EMSA. Annual Overview of Marine Casualties and Incidents. 2019. Available online: https://emsa.europa.eu/we-do/safety/accident-investigation/items.html?cid=141&id=3734 (accessed on 23 October 2022).
- Felski, A.; Zwolak, K. The ocean-going autonomous ship—Challenges and threats. J. Mar. Sci. Eng. 2020, 8, 41.
- Chen, P.; Huang, Y.; Papadimitriou, E.; Mou, J.; van Gelder, P. Global path planning for autonomous ship: A hybrid approach of Fast Marching Square and velocity obstacles methods. Ocean. Eng. 2020, 214, 107793.
- Souissi, O.; Benatitallah, R.; Duvivier, D.; Artiba, A.; Belanger, N.; Feyzeau, P. Path planning: A 2013 survey. In Proceedings of the 2013 International Conference on Industrial Engineering and Systems Management (IESM), Rabat, Morocco, 28–30 October 2013; pp. 1–8.
- Sun, L. A real-time collision-free path planning of a rust removal robot using an improved neural network. J. Shanghai Jiaotong Univ. (Sci.) 2017, 22, 633–640.
- Sheng, H.; Guo, C.; Li, T.; Yu, Y. Study navigation method for unmanned ship intelligent avoidance. Harbin Eng. Univ. Acad. Rep. 2018, 39, 998–1005.
- Wang, C.; Zhang, X.; Cong, L.; Li, J.; Zhang, J. Research on intelligent collision avoidance decision-making of unmanned ship in unknown environments. Evol. Syst. 2019, 10, 649–658.
- Kuczkowski, Ł.; Śmierzchalski, R. Path planning algorithm for ship collisions avoidance in environment with changing strategy of dynamic obstacles. In Proceedings of the Polish Control Conference, Krakow, Poland, 18–21 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 641–650.
- Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with deep reinforcement learning. arXiv 2013, arXiv:1312.5602.
- Van Hasselt, H.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
- Sutton, R.S.; McAllester, D.; Singh, S.; Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. Adv. Neural Inf. Process. Syst. 1999, 12, 1057–1063.
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347.
- Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; Moritz, P. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 1889–1897.
- Schulman, J.; Moritz, P.; Levine, S.; Jordan, M.; Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv 2015, arXiv:1506.02438.
- Engstrom, L.; Ilyas, A.; Santurkar, S.; Tsipras, D.; Janoos, F.; Rudolph, L.; Madry, A. Implementation matters in deep policy gradients: A case study on PPO and TRPO. arXiv 2020, arXiv:2005.12729.
- Ng, A.Y.; Harada, D.; Russell, S. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML), Bled, Slovenia, 27–30 June 1999; Volume 99, pp. 278–287.
- Dong, P.; Zhang, Z.; Mei, X.; Zhu, S. Reinforcement learning path planning algorithm introducing potential field and trap search. Comput. Eng. Appl. 2018, 1, 129–134.
- Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971.
- Matheron, G.; Perrin, N.; Sigaud, O. The problem with DDPG: Understanding failures in deterministic environments with sparse rewards. arXiv 2019, arXiv:1911.11679.
- Heess, N.; TB, D.; Sriram, S.; Lemmon, J.; Merel, J.; Wayne, G.; Tassa, Y.; Erez, T.; Wang, Z.; Eslami, S.; et al. Emergence of locomotion behaviours in rich environments. arXiv 2017, arXiv:1707.02286.
- Mnih, V.; Badia, A.P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 20–22 June 2016; pp. 1928–1937.
- Chou, P.W.; Maturana, D.; Scherer, S. Improving stochastic policy gradients in continuous control with deep reinforcement learning using the beta distribution. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 834–843.
- Fujimoto, S.; Hoof, H.; Meger, D. Addressing function approximation error in actor-critic methods. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 1587–1596.
Figure 1. Location of simulation environment generation.
Figure 2. Data sequence state-space.
Figure 3. Potential-based reward: (a) Obstacle-based potential plane; (b) Arrival point-based potential plane; (c) Boundary control-based potential plane.
Figure 4. Potential plane based on obstacle, arrival point, and boundary control (superimposed).
Figure 5. Actor network structure.
Figure 6. Testing results: (a) PPO baseline planning test; (b) Distributed fixed starting position test.
Figure 7. Refine sub-regions.
Figure 8. Distributed PPO algorithm.
Figure 9. Gaussian boundary effects.
Figure 10. Accumulated rewards of PPO baseline compared with distributed sample extraction PPO.
Figure 11. Planning path comparison: (a) A* and RL search path in scenario1; (b) A* and RL search path in scenario2.
Figure 12. Sub-region accumulated rewards: (a) Data sequence state-space; (b) Image state-space.
Figure 13. Comparison of planning path length: (a) Data sequence state-space; (b) Image state-space.
Table 1. Related work.

Algorithm | Deterministic/Stochastic Policy | Action Space | Advantages | Disadvantages
---|---|---|---|---
Traditional path planning | Deterministic | Discrete | High computational efficiency; easy to adjust algorithm parameters. | Unable to directly process high-dimensional data; computation increases as the scene map grows; unable to provide recommended sailing information during planning; requires accurate environmental modeling; the ship kinematic model is not considered.
Sun proposed | Deterministic | Discrete | Considers the ship's dynamics model. |
DCQN + A* | Deterministic | Discrete | Shorter convergence time by combining Q-learning and A*. |
DRLOAD | Deterministic | Discrete | Proposes safe obstacle avoidance and target approach (reach point approach) reward functions. |
Kuczkowski proposed | Deterministic | Continuous | Designs reward functions for different sub-objectives. |
Table 2. Experiment environment parameters.

Parameter | Symbol | Value
---|---|---
Plane frame structure size | n | 16
Render scale | s | 30
Obstacle grid size | k | 5
Obstacle generation ratio | |
Ship radius | b |
Laser horizontal field angle | | 360
Laser effective reflection range | h |
Laser beam | l | 24
Reward function mapping size | m | 160
Obstacle-based offset | |
Arrival point-based offset | |
Boundary-based offset | |
Table 3. Hyperparameters.

Parameter | Value
---|---
Actor training episode |
Actor learning rate |
Actor learning rate decay |
Actor gradient clipping (L2) |
Critic training episode |
Critic learning rate |
Critic learning rate decay |
Critic gradient clipping (L2) |
Discount factor |
Clip ratio |
Episode training iter |
GAE lambda |
Adapt KL target |
Batch size |
Max step |
Table 4. Training setup.

Parameter | Value
---|---
CPU | Intel Xeon E5 2666v (Boost 3.5 GHz)
Memory | Quad Channel Team DDR4-2133
Motherboard | MSI X99A SLI
Storage | SAMSUNG 840 EVO 1T
GPU | 2× Gainward DuraPro GTX 1080 Ti Hurricane
Table 5. Shortest planning path.

Policy | Area1 | Area2 | Area3 | Area4 | Area5 | Area6
---|---|---|---|---|---|---
Beta | 129.83 | 335.52 | 444.53 | 103.06 | 56.28 | 324.59
Normal | 135.50 | 338.99 | 444.58 | 110.27 | 100.39 | 327.88
Disparity | 4.1% | 1.0% | 0.0% | 6.5% | 43.9% | 1.0%
Table 6. Planning success rate.

Policy | Area1 | Area2 | Area3 | Area4 | Area5 | Area6
---|---|---|---|---|---|---
Beta | 0.87 | 0.76 | 0.88 | 0.92 | 0.89 | 0.79
Normal | 0.85 | 0.76 | 0.31 | 0.36 | 0.15 | 0.71
Table 7. Algorithm planning time comparison (in seconds).

Policy | Area1 | Area2 | Area3 | Area4 | Area5 | Area6
---|---|---|---|---|---|---
IDA* | 13.799 | 0.051 | 86.633 | 0.119 | 9.287 | 0.051
Dijkstra | 1.371 | 5.015 | 5.302 | 0.79 | 0.237 | 3.111
A* | 0.057 | 7.118 | 2.231 | 0.057 | 0.037 | 1.961
Bi-A* | 0.083 | 0.509 | 0.975 | 0.058 | 0.048 | 0.182
TD3 | 0.046 | 0.102 | 0.134 | 0.038 | 0.024 | 0.095
Beta | 0.038 | 0.074 | 0.093 | 0.029 | 0.029 | 0.071
Normal | 0.025 | 0.049 | 0.072 | 0.029 | 0.022 | 0.047
Table 8. Comparison of planning operations (iterative loop operations).

Policy | Area1 | Area2 | Area3 | Area4 | Area5 | Area6
---|---|---|---|---|---|---
IDA* | 1.02 × 10^6 | 385 | 6.58 × 10^6 | 9.14 × 10^3 | 7.09 × 10^5 | 359
Dijkstra | 6.47 × 10^3 | 1.96 × 10^4 | 2.07 × 10^4 | 4.12 × 10^3 | 1.85 × 10^3 | 1.44 × 10^4
A* | 433 | 7.82 × 10^3 | 3.77 × 10^3 | 262 | 236 | 2.31 × 10^3
Bi-A* | 452 | 1.85 × 10^3 | 2.31 × 10^3 | 364 | 382 | 923
TD3 | 15 | 36 | 45 | 12 | 8 | 33
Beta | 10 | 21 | 26 | 8 | 7 | 20
Normal | 6 | 12 | 17 | 7 | 6 | 13
Table 9. Planning path length comparison (×0.1 km).

Policy | Area1 | Area2 | Area3 | Area4 | Area5 | Area6
---|---|---|---|---|---|---
IDA* | 134.26 | 349.66 | 428.67 | 97.32 | 55.32 | 333.50
Dijkstra | 137.27 | 349.66 | 427.43 | 96.08 | 58.32 | 333.51
A* | 140.5 | 353.26 | 434.76 | 99.31 | 59.53 | 335.43
Bi-A* | 141.76 | 355.39 | 440.11 | 105.05 | 61.42 | 333.15
TD3 | 159.29 | 369.10 | 467.07 | 132.75 | 87.45 | 351.83
Beta | 129.83 | 335.52 | 444.53 | 103.06 | 56.28 | 324.59
Normal | 135.50 | 338.99 | 444.58 | 110.27 | 100.39 | 327.88
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).