Article

EAOA: Energy-Aware Grid-Based 3D-Obstacle Avoidance in Coverage Path Planning for UAVs

Department of Computer Science, International University of Beirut, Beirut P.O. Box 146404, Lebanon
* Author to whom correspondence should be addressed.
Future Internet 2020, 12(2), 29; https://doi.org/10.3390/fi12020029
Submission received: 31 December 2019 / Revised: 24 January 2020 / Accepted: 31 January 2020 / Published: 8 February 2020
(This article belongs to the Special Issue The Internet of Things for Smart Environments)

Abstract

The presence of obstacles such as trees, buildings, or birds along the path of a drone can endanger the UAV's flight mission. Avoiding obstacles is one of the critical challenges for successfully achieving a UAV's mission, and path planning needs to be adapted to perform intelligent and accurate avoidance online and in time. In this paper, we propose an energy-aware grid-based solution for obstacle avoidance (EAOA). Our work is based on two phases: in the first, a trajectory path is generated offline using the area top-view. The second phase depends on the path obtained in the first phase: a camera captures a frontal view of the scene that contains the obstacle, and the algorithm then determines the new position the drone has to move to in order to bypass the obstacle. In this paper, the obstacles are static. The results show a gain in energy and completion time when using 3D scene information compared to 2D scene information.

1. Introduction

The use of Unmanned Aerial Vehicles (UAVs), also known as drones, has grown rapidly in recent years [1]. They pervade many application domains and achieve different missions such as terrain monitoring and smart agriculture [2,3], photogrammetry [4], harsh and disaster environment management [5], smart policing, wildfire tracking [6], and many more. One of the main active research areas for UAVs is Coverage Path Planning (CPP). It aims to build a path for the drone to explore every location in an Area of Interest (AoI). Often, the main concerns in CPP are the efficiency and completeness of the coverage. The effectiveness of a CPP approach is mainly assessed through performance indices such as the path length [7,8], the mission completion time [9,10], and the number of turns. These metrics are related to energy consumption [11,12], and most approaches try to reduce them in order to save energy.
Another key challenge in CPP is to build paths in scenes with obstacles or pop-up threats. Areas can contain obstacles and no-fly zones. Obstacles can be static (such as trees) or mobile (such as birds). Pop-up threats are obstacles that are not known a priori by the drone. Different path planning methods rely on two-dimensional or three-dimensional information about the scene to build the trajectory path [13].
Recent approaches in 3D path planning concentrate on detecting and avoiding obstacles in a 3D view [13,14,15,16,17,18,19]. They have not prioritized the trade-off between energy efficiency and completeness of the area coverage [20] in the presence of obstacles. In our work, we propose an energy-aware 3D obstacle avoidance (EAOA) approach that plans a full coverage path for an area of interest with the ability to avoid obstacles in a 3D view using a grid-based technique.
3D trajectory planning shows great benefits over 2D path planning, which suffers from geometric limitations. Solutions are constrained not only by UAV size, weight, and flying mechanisms: there should also be a compromise between the trajectory length, the total coverage rate of the area, and the energy consumption of the drone. The number of turns a drone makes during the coverage mission is another issue and needs to be minimized, as more turning points mean more energy consumption [21].
In this paper, we present a grid-based path planning and 3D obstacle avoidance technique. The planned path relies on our previous work [14]. It combines the top view and the frontal view of the area that includes the obstacle. The algorithm generates the first path offline using the top view, then re-plans it using the front view of the scene. We deal with static obstacles whose locations are known a priori to the UAV. We aim to avoid each obstacle with the shortest path and the least number of turns possible, taking into consideration two main factors: energy consumption and completion time.
The remainder of this paper is organized as follows: Section 2 discusses related works; Section 3 provides details about our work; Section 4 presents the evaluation metrics and results; and Section 5 concludes the paper.

2. Related Works

During a path planning mission, avoiding obstacles or no-fly zones is one of the critical challenges for UAVs. The trajectory not only needs to bypass obstacles but also has to be efficient in terms of energy consumption and completion time. Obstacles can be mobile, such as a pigeon or a waving flag, or fixed, such as a street lighting column or a building.
Different research works [13,15,16] rely on either two-dimensional or three-dimensional methods for obstacle detection and avoidance. In [17], a 3D-SWAP technique for 3D collision avoidance with multiple UAVs is proposed. UAVs rely on local data from sensors and on the positions of neighboring UAVs. The approach is decentralized and requires a low computational load. Another approach is presented in [18], where the authors propose a Model Predictive Control (MPC) scheme that aims to find an optimal path for a UAV in a changing environment. The work aims to avoid collisions in the presence of no-fly zones and pop-up threats as well as fixed obstacles.
The authors of [19] propose a Fast Geometric Avoidance algorithm (FGA) for 3D collision avoidance. The algorithm takes into account the avoidance start time based on kinematic considerations, collision probability, and navigational constraints. The efficiency of the work was evaluated with Monte Carlo simulations and scenarios in an aircraft simulator. To adapt real-time path planning to variations and fluctuations in UAV performance and capability, the authors of [22] propose an adaptive path planning method for UAVs.
In [23], a method to detect changes in image patches is proposed in order to understand variations in the environment and obstacle sizes. In [24], the authors propose an approach for obstacle detection and avoidance that uses a monocular camera on a UAV together with motion control measurements to imitate human behavior in detecting objects in the environment.
The authors of [25] developed an algorithm to build a map of the environment and to generate a collision-free path for autonomous UAVs. They used an RGB-D camera with sensors and tested their work in an indoor environment in the presence of obstacles. A fixed-wing UAV is used in [26] in a two-phase solution for obstacle avoidance: an offline path is generated first and then relied upon to generate an intelligent real-time pop-up threat avoidance path.
The work in [27] was inspired by the phenomenon of fluid disturbance to re-plan and tune the trajectory for a 3D collision-free path; the authors use the interfered fluid dynamical system (IFDS) algorithm for the UAV. The approach in [28] does not take a noisy environment into consideration. It relies on sensor readings to localize the obstacle and understand its motion, in order to determine its velocity and autonomously plan a collision-free path.
Another algorithm for autonomous obstacle avoidance is proposed in [29]. It relies on radars and image processing of video frames to detect and avoid obstacles, and it aims to reduce the computational load since UAVs are energy- and memory-constrained. In [30], the authors use onboard passive sensors, such as vision systems, on fixed-wing drones; the UAV flies at a low altitude for aerial surveillance. Determining the position of the obstacle relies on a local map of the environment together with measurements from sensors aboard the drone. The authors of [31] provide a 2D solution to avoid obstacles using a grid-based technique. They propose an algorithm that plans a coverage path for UAVs over irregularly shaped areas using a grid-based method with minimum energy consumption. They enhanced the algorithm proposed by Valente et al. [10] by substituting its cost function with a novel energy-cost function. The new algorithm achieves a 17% energy gain compared to the algorithm in [10]. They also included two techniques that significantly decrease the computation time. The authors choose different starting mission points in order to decide on the best path in an offline mode. For each planned path, they rely on the cost function to move from one cell to another unvisited neighboring cell. This produces changes in direction and consequently increases the number of turns. In our work, the UAV movement is parallel to the longest side of the grid (from the top view), which reduces the number of turns and the path length. As in [14], we take into consideration the percentage of the area in the cell to be covered, in order to increase the coverage rate. The work in [31] drops whole cells that contain obstacles. Hence, if the obstacle occupies a small portion of a cell, the whole cell will not be covered, and the area coverage rate decreases. Additionally, avoiding the whole cell increases the path length and the turning angles, and consequently the completion time and energy consumption.
The authors of [14] propose an energy-aware coverage path planning method. They use 2D grid partitioning to scan an area of interest. Scenarios with one drone and different regular and irregular area shapes were considered. The algorithm determines the turning positions as well as the start point and the scan direction of the area. The path is planned in an offline mode and showed efficient results in terms of energy saving, area coverage, and completion time. However, the areas of interest were without obstacles and no-fly zones. In this work, we develop an extension of that algorithm. The objective is to cover an area of interest in the presence of obstacles. Our proposed algorithm re-plans the original path by scanning the facing area that contains obstacles. We try to avoid obstacles with the shortest path possible while keeping the completion time and energy consumption to a minimum.

3. Energy-Aware Grid-Based 3D-Obstacle Avoidance (EAOA)

3.1. Basic Approach

Different grid-based related works avoid whole cells that contain obstacles by moving immediately to another grid cell [10,31]. In Figure 1, the obstacle Ob_1 in the second column is avoided by moving along the edge of the cell. However, in the third column, another way to avoid the obstacle Ob_2 is by making a detour (the basic approach). We noticed that, by making a detour, the path has fewer turns and a shorter length. In Section 4.2, we present the results of adopting this approach (Scenario 1). As relying on the 2D top view does not always give a shorter path or ensure optimal coverage, we extend the basic approach by including 3D information. We rely on the area's frontal view to decide on the obstacle's shape and location and on the detour to make. In the next section, we present preliminary concepts and definitions. Results of this approach are presented in Section 4.2 (Scenario 2).

3.2. Preliminaries

In our previous work [14], the area is divided into grid-cells. Each grid-cell has a rectangular shape and covers a portion of the area. The UAV footprint has a circular shape and is projected to the area portion below the drone. Hence, each grid-cell represents the inscribed rectangle in the UAV projected footprint. The original planned path starts with the longest grid-side and crosses the center of the cells (see Figure 2a). It is plotted using the 2D top-view area scanning and is generated offline.
We denote by C_χ^t a grid cell in this figure and by P_χ the plane perpendicular to C_χ^t that passes through the center of the cell. Hence, two consecutive planes P_χ and P_{χ+1} on the same grid column are h units apart, where h is the length of C_χ^t. In this work, we assume the obstacle positions are fixed and known, and we want to avoid them using 3D information. We lean on the original path in Figure 2a. The drone follows it and, right before the cell that contains the obstacle, captures an image of the front area in order to decide later on the next drone position. The captured image frame is defined in the coming section.
To illustrate, in Figure 2b, we assume an obstacle in plane P_2. The red line represents the original path in Figure 2a. The drone re-plans the path and changes it dynamically according to the locations of the obstacles. The green line represents the new trajectory. The goal of this work is to determine the new position of the drone while keeping the total trajectory path and energy consumption to a minimum.
The main phases in our work are: (1) determining the frame dimensions in the front view, and (2) locating the next drone position.

3.3. Frame Dimensions

Figure 3a represents the 3D view. The (i, j) plane represents the area top view, and the (i, k) plane is parallel to the facing view.
After locating the drone in the (i, j) plane, the planner sets the minimum and maximum allowed flight heights (denoted h_min and h_max, respectively). This range specifies the frame dimensions of the facing area. We denote by F the scanned front frame. We assume that the drone is located at point A, at height h_ijk.
In Figure 3b, the frame is represented by the rectangle BCDE of area a × b, where a and b represent the length and the width, respectively. We aim to determine the frame dimensions. The length is a = (h_max − h_min). We assume the facing view is captured with a camera angle ϕ = 62°, similarly to the work of Al-Kaff et al. [24], who found that any object detected outside this view angle does not cause any danger to the UAV. Point M is the projection of A; both points actually represent the centers of two consecutive grid cells in the plane (i, j). To calculate the length (BE) and the width (BC) of the scanned frame, we have to find the dimensions of the rectangular pyramid (ABCDE). Consider the isosceles triangle (AFG) with apex angle ϕ and height h = AM. The side AF (denoted R) can be computed from Equation (1), since AM is known and equal to the length of the top-view grid cell:
$R = \frac{h}{\cos(\phi/2)}$  (1)
Hence, the width b of the frame F can be computed as follows:
$b = 2R\sin(\phi/2)$  (2)
Now that the frame dimensions are known, the next step is to transform F into a grid-based form denoted Γ. The dimensions of each cell in Γ fit the height and width of the drone. Hence, we denote by v_ij a cell located in row i and column j of the grid Γ, with dimensions L × W (L for length and W for width).
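To make Equations (1) and (2) concrete, the following is a minimal Python sketch; the function name and the numeric values in the usage example are illustrative assumptions, not taken from the paper.

import math

def frame_dimensions(h_min, h_max, h, phi_deg=62.0):
    """Frontal frame F (a x b) from Equations (1) and (2).

    h_min, h_max: minimum/maximum allowed flight heights (m)
    h:            distance AM, i.e., the length of a top-view grid cell (m)
    phi_deg:      camera view angle in degrees (62 degrees, as in [24])
    """
    phi = math.radians(phi_deg)
    a = h_max - h_min              # frame length: a = (h_max - h_min)
    R = h / math.cos(phi / 2)      # Equation (1): R = h / cos(phi / 2)
    b = 2 * R * math.sin(phi / 2)  # Equation (2): b = 2 R sin(phi / 2)
    return a, b

# Illustrative values only: a 12.5 m cell length and a 10-25 m height band
a, b = frame_dimensions(h_min=10.0, h_max=25.0, h=12.5)
print(round(a, 2), round(b, 2))    # 15.0 and roughly 15.02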

3.4. Determining the Next Position of the Drone

In this section, we aim to determine the closest location where the drone has to move to, in order to avoid an obstacle on its way. The main steps are described below:
  • Step 1: A heatmap of the frame F is generated. To exemplify, Figure 4 shows a 24 × 40 heatmap of a tree in the frame:
    $\Gamma = \{ v_{ij} \mid v_{ij} \in [0,1],\ i \in [1,b],\ j \in [1,a] \}$  (3)
    where v_ij represents a grid-cell element in Γ. We denote by Γ_w the set of zero cells in the grid:
    $\Gamma_w = \{ v_{ij} \mid v_{ij} = 0 \}$  (4)
    We denote by Γ_c the set of non-zero cells in the grid (Γ_c = Γ \ Γ_w):
    $\Gamma_c = \{ v_{ij} \mid v_{ij} \in\, ]0,1] \}$  (5)
  • Step 2: To recall, M is the projection of A in the frame. We check its position in the grid (see the sketch after these steps). If M is located in a cell of Γ_w, then no obstacle is on the drone's way and the current path is not modified. However, if M is located in a non-zero cell (i.e., a cell of Γ_c), then an obstacle is on the drone's way and the path must be modified. For this purpose, we investigate the list of cells surrounding M to find a loophole for the drone and make a detour.
    We denote by N_ij the neighbors of a grid cell v_ij, i.e., the list of cells that directly surround v_ij:
    $N_{ij} = \{ v_{pq} \mid p \in [i-1, i+1],\ q \in [j-1, j+1] \} \setminus \{ v_{ij} \}$  (6)
    We denote by B_m the non-zero cells of Γ_c nearest to M (see Equation (7)); we call them boundary cells. We denote by d(v_1, v_2) the Euclidean distance between two points v_1 and v_2:
    $B_m = \{ v_{ij} \in \Gamma_c \mid d(v_{ij}, M) = \min_{v_{pq} \in \Gamma_c} d(v_{pq}, M) \ \wedge\ \exists\, v \in N_{ij},\ v \in \Gamma_w \}$  (7)
    We aim to find the cell v_ij ∈ Γ_w nearest to M and B_m. We denote by S_n the set of solutions, where n represents the number of white cells that directly surround B_m:
    $S_n = \{ v_{ij} \in N_{B_m} \cap \Gamma_w \mid d(v_{ij}, M) = \min_{v_{pq} \in N_{B_m} \cap \Gamma_w} d(v_{pq}, M) \}$  (8)
    To find the list of solutions S_n, we loop in a reverse spiral mode from M (see Figure 5) until we reach a shell that contains cells with v_ij = 0.
    To illustrate, in Figure 4, the projection of the drone is located at cell (12, 13) of the grid. The set of boundary cells B_m is represented by the red boxes (|B_m| = 3). The set of possible solutions S_n is represented by the purple boxes (|S_n| = 3). It should be noted that, if S_n = ∅, the planner has to increase the frame dimensions.
  • Step 3: Choose from S_n the most convenient solution. We denote G = B_m ∪ S_n (see Figure 6), then divide each cell into four sub-cells (see Figure 7a). The reason behind this is to gain a better understanding of the obstacle's location in order to determine the nearest next position of the drone. After applying the subdivision, a new solution set (S_n)′ is generated (see Figure 7b). The dashed red box represents the cell the drone has to move to in order to avoid the obstacle.
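As referenced in Step 2, the following is a minimal NumPy sketch of the cell classification and the projection check; it assumes the heatmap Γ is stored as a 2D array with values in [0, 1], and all names are illustrative rather than from the authors' implementation.

import numpy as np

def classify_cells(gamma):
    # Step 1: split the heatmap grid into Gamma_w (zero cells, free space)
    # and Gamma_c (non-zero cells touched by the obstacle).
    gamma_w = np.argwhere(gamma == 0)
    gamma_c = np.argwhere(gamma > 0)
    return gamma_w, gamma_c

def path_blocked(gamma, M):
    # Step 2: the path must be re-planned iff the projection M of the drone
    # falls in a non-zero cell of the grid.
    return gamma[M] > 0

# Toy 4 x 6 frame with a small obstacle; values are occupancy ratios in [0, 1]
gamma = np.zeros((4, 6))
gamma[1:3, 2:4] = 0.8
print(path_blocked(gamma, (1, 3)))   # True: a detour is needed
print(path_blocked(gamma, (0, 0)))   # False: the original path is kept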
The flowchart in Figure 8 summarizes the main steps of our work. Algorithm 1 checks the drone's current cell position (named M) in the facing area. If M is located in a zero-value cell, it becomes the new drone position. Otherwise, the algorithm checks the neighboring cells to find the nearest zero-value cell.
Algorithm 1 getDronePosition
Input: M: Cell, F: Frame
Output: new drone position, denoted S_position
1: S_n : Cell[]
2: if M.getValue() == 0 then
3:   S_position ← M
4: else
5:   shellNumber ← 1
6:   CheckNeighborCells(M, shellNumber, S_n, F)
7:   PossibleDronePos ← FilterSolution(M, S_n)
8:   B_m ← FindBoundaryCells(PossibleDronePos, F)
9:   G ← B_m ∪ S_n
10:  {Perform subdivision on G}
11:  {Group every 4 subcells after subdivision into one vertex (denote it v′)}
12:  {We denote (S_n)′ the list of vertices v′}
13:  S_position ← FilterSolution(M, (S_n)′)
14: end if
15: return S_position
Algorithm 2 (CheckNeighborCells) looks for the list of possible solutions S_n around M in the surrounding shells, in a spiral mode starting from M. The list of solutions is filtered by Algorithm 3 (FilterSolution) to choose the solution cell S nearest to M, which could be the next drone position.
Algorithm 2 CheckNeighborCells
Input: M: Cell, shellNumber: int, S_n: Cell[], F: Frame
Output: list of possible solutions S_n
1: n1 ← (M.getI() − shellNumber)
2: n2 ← (M.getI() + shellNumber)
3: m1 ← (M.getK() − shellNumber)
4: m2 ← (M.getK() + shellNumber)
5: Cell temp = null
6: a = F.getLength()
7: b = F.getWidth()
8: if (n2 ≤ b ∧ m2 ≤ a) then
9:   {look into the current shell for a solution (check each cell v_ik)}
10:  for i ← n1 to n2 do
11:    for k ← m1 to m2 do
12:      if (i != M.getI() ∨ k != M.getK()) then
13:        if (v_ik == 0) then
14:          S_n.add(v_ik)
15:        end if
16:      end if
17:    end for
18:    if (S_n.isEmpty()) then
19:      CheckNeighborCells(M, ++shellNumber, S_n, F)
20:    end if
21:  end for
22: end if
Algorithm 3 FilterSolution
Input: M: Cell, S_n: Cell[]
Output: the solution nearest to cell M, denoted S
1: x2 ← M.getI()
2: y2 ← M.getK()
3: for index ← 0 to S_n.size() do
4:   x1 ← S_n.get(index).getI()
5:   y1 ← S_n.get(index).getK()
6:   {Find distance to M}
7:   distance ← sqrt(((x2 − x1) · W)² + ((y2 − y1) · L)²)
8:   if (index == 0) then
9:     old ← distance
10:    minIndex ← index
11:  end if
12:  if (old > distance) then
13:    old ← distance
14:    minIndex ← index
15:  end if
16: end for
17: S ← S_n.get(minIndex)
18: return S
In order to locate the boundaries of the obstacle around S, Algorithm 4 (FindBoundaryCells) finds the list of boundary cells (denoted B_m). We combine S_n and B_m in a group box G and, to decide whether the drone will pass safely through the solution cell, we perform a subdivision on the cells. This helps with finding a solution nearer to M than the one found before the subdivision. We group every four zero-value subcells into one vertex (denoted v′). The list of these vertices is denoted (S_n)′ and is filtered later using Algorithm 3 to find the exact drone position (denoted S_position). A consolidated sketch of this pipeline follows Algorithm 4. In the next section, we evaluate the performance of our approach.
Algorithm 4 FindBoundaryCells
Input: S: Cell, F: Frame
Output: list of boundary cells B_m around S
1: B_m : Cell[]
2: n1 ← (S.getI() − 1)
3: n2 ← (S.getI() + 1)
4: m1 ← (S.getK() − 1)
5: m2 ← (S.getK() + 1)
6: for i ← n1 to n2 do
7:   for k ← m1 to m2 do
8:     if (i != S.getI() ∨ k != S.getK()) then
9:       if (v_ik > 0 ∧ v_ik ≤ 1) then
10:        B_m.add(v_ik)
11:      end if
12:    end if
13:  end for
14: end for
15: return B_m
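To complement the listings above, here is a compact Python sketch of Algorithms 1-4 operating on a NumPy heatmap; the subdivision refinement of Algorithm 1 is omitted and all names are ours, so this is an illustrative sketch rather than the authors' exact implementation.

import numpy as np

def filter_solution(M, cells, L=1.0, W=1.0):
    # Algorithm 3: return the candidate cell closest to M, weighting row and
    # column offsets by the cell dimensions W and L.
    mi, mk = M
    return min(cells, key=lambda c: ((mi - c[0]) * W) ** 2 + ((mk - c[1]) * L) ** 2)

def check_neighbor_cells(gamma, M, shell=1):
    # Algorithm 2: scan square shells around M, spiralling outwards, until a
    # shell containing free (zero-value) cells is found.
    rows, cols = gamma.shape
    mi, mk = M
    while shell < max(rows, cols):
        ring = [(i, k)
                for i in range(max(0, mi - shell), min(rows, mi + shell + 1))
                for k in range(max(0, mk - shell), min(cols, mk + shell + 1))
                if (abs(i - mi) == shell or abs(k - mk) == shell)
                and gamma[i, k] == 0]
        if ring:
            return ring        # S_n: free cells on the first non-blocked shell
        shell += 1
    return []                  # S_n empty: the planner must enlarge the frame

def find_boundary_cells(gamma, S):
    # Algorithm 4: non-zero cells directly surrounding the solution cell S;
    # these feed the subdivision step omitted from this sketch.
    rows, cols = gamma.shape
    si, sk = S
    return [(i, k)
            for i in range(max(0, si - 1), min(rows, si + 2))
            for k in range(max(0, sk - 1), min(cols, sk + 2))
            if (i, k) != (si, sk) and 0 < gamma[i, k] <= 1]

def get_drone_position(gamma, M):
    # Algorithm 1 (sketch): next drone position in the frame.
    if gamma[M] == 0:
        return M               # no obstacle on the drone's way
    candidates = check_neighbor_cells(gamma, M)
    if not candidates:
        raise ValueError("S_n is empty: enlarge the frame dimensions")
    return filter_solution(M, candidates)

For instance, if the Figure 4 heatmap were stored in gamma with the projection at M = (12, 13), get_drone_position(gamma, (12, 13)) would return the free cell nearest to M on the first shell that clears the obstacle.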

4. Evaluation Metrics and Results

In this section, we present different scenarios with different obstacle shapes along the drone’s path. To evaluate the performance of the work, we rely on the following metrics: the energy consumption, completion time, path length, and turning angles.

4.1. Evaluation Metrics

The cells in the grid can be seen as vertices in a graph-like structure G(V, E), where V is the set of vertices and E is the set of edges. We denote by v the center of a grid cell before subdivision and consider it a vertex in the graph (refer to Figure 9). Each vertex has coordinates in a Cartesian plane.

4.1.1. Turning Angle

The power consumption is affected by the turn rate [32], so the number of turns needed during the whole flight mission is a main concern. Assume that the drone comes from vertex v_1 to v_2 and is going to make a turn at v_2 to later head to v_3 (see Figure 9). The turning angle is computed as follows [14]:
$\Phi_{123} = \pi - \widehat{v_1 v_2 v_3}$  (9)
where $\widehat{v_1 v_2 v_3}$, the interior angle at v_2, is computed from the law of cosines in the triangle v_1 v_2 v_3. Hence, the angle can be found from the Euclidean distances between the vertices as follows:
$\Phi_{123} = \pi - \cos^{-1}\!\left( \frac{e_{12}^2 + e_{23}^2 - e_{31}^2}{2\, e_{12}\, e_{23}} \right)$  (10)
where e_ij = d(v_i, v_j) is the Euclidean distance between two vertices v_i and v_j.
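As a concrete rendering of Equation (10), here is a minimal Python helper; the function name and the test points are ours.

import math

def turning_angle(v1, v2, v3):
    # Equation (10): turning angle at v2, via the law of cosines on the
    # Euclidean distances e_ij between the vertices.
    e12, e23, e31 = math.dist(v1, v2), math.dist(v2, v3), math.dist(v3, v1)
    interior = math.acos((e12 ** 2 + e23 ** 2 - e31 ** 2) / (2 * e12 * e23))
    return math.pi - interior    # 0 for a straight segment, pi for a U-turn

print(turning_angle((0, 0), (1, 0), (2, 0)))   # 0.0: no turn
print(turning_angle((0, 0), (1, 0), (1, 1)))   # ~1.5708: a 90-degree turn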

4.1.2. Completion Time

Optimizing the trajectory length and the mission completion time is essential in energy-efficient path planning techniques for UAVs [7,10,33,34]. The mission completion time includes flying time and hovering time. We lean on the following formula from [4] to find the completion time, denoted τ:
$\tau = \frac{\top}{\upsilon} + \sum_{\vartheta=1}^{\mho} \frac{\Phi_\vartheta}{\varpi}$  (11)
where ⊤ is the path length, υ is the UAV speed, ℧ is the number of turns, Φ_ϑ is the angle of the ϑ-th turn, and ϖ is the UAV rotation rate.
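Equation (11) maps directly to a one-line helper; the following minimal sketch (parameter names are ours) is checked against the Table 1 basic-approach figures.

def completion_time(path_length_m, speed_mps, turn_angles_deg, rotation_rate_dps):
    # Equation (11): flight time plus the rotation time spent at each turn.
    return path_length_m / speed_mps + sum(turn_angles_deg) / rotation_rate_dps

# Check against Table 1 (basic approach): 507.003 m at 8 m/s with 1476 degrees
# of total turning at 30 degree/s gives about 112.575 s.
print(completion_time(507.003, 8, [1476], 30))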

4.1.3. Energy Consumption

The total energy cost of a UAV (denoted E_total) is the energy consumed to travel the total distance ⊤_total (denoted E(⊤_total)) plus the energy consumed to perform the turns (denoted E(Φ)) [14]:
$E_{total} = E(\top_{total}) + E(\Phi) = \sum_{i \in V} \sum_{j \in V} \lambda\, e_{ij} + \sum_{i \in V} \sum_{j \in V} \sum_{k \in V} \gamma\, \frac{180}{\pi}\, \Phi_{ijk}$  (12)
where λ is the energy consumed per unit distance and γ is the energy consumed per degree of turning. In this work, we assume that λ = 0.1164 KJ/m and γ = 0.0173 KJ/degree.
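Similarly, Equation (12) reduces to a weighted sum once the turn angles are expressed in degrees; a minimal sketch with the paper's λ and γ values follows (names are ours).

LAMBDA = 0.1164   # KJ consumed per metre travelled
GAMMA = 0.0173    # KJ consumed per degree of turning

def total_energy(path_length_m, total_turn_deg, lam=LAMBDA, gamma=GAMMA):
    # Equation (12), with the turn angles already summed in degrees.
    return lam * path_length_m + gamma * total_turn_deg

# Check against Table 1 (basic approach): about 84.55 KJ
print(round(total_energy(507.003, 1476), 2))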

4.2. Results

  • Scenario 1: The area of interest is 100 m × 115 m. The top-view grid is 10 rows × 6 columns (see Figure 10). The marked red cells contain obstacles. The total trajectory path will be compared to the one obtained with the basic approach. For the dimensions of the frame, we assume F is 16.66 m × 15 m. Each grid cell in F is 0.69 m × 0.5 m, and Γ is 40 rows × 24 columns. We assume that the drone is located at a height of 10.5 m and faces cell (12, 13) in Γ. Below, we present the heatmaps and the drone's position at each obstacle.
    - First Obstacle: We assume a palm tree of height 15 m. The heatmap of the front area is shown in Figure 4. The drone's new position is the center of the dashed group in Figure 7b. The drone thus chooses to go down through the solution cell by performing a horizontal left turn of 3° and a vertical downward turn of 3°, then covers a distance of 11.5744 m between its initial position and the new position (S_position). To return to the initial height behind the tree, it turns 6° vertically back up, crossing the same distance.
    - Second Obstacle: We assume a signboard of height 13 m. The heatmap of the front area is shown in Figure 11. The projection of the drone is M(12, 13). Figure 12 shows the heatmap after the subdivision. The nearest border cell is (12, 5). The drone passes through the cell by performing a vertical upward turn of 16°, then covers a distance of 11.985 m followed by a vertical downward turn of 33°. It then returns to the initial height behind the obstacle. The red cell represents the solution cell (i.e., the final drone position S_position).
    - Third Obstacle: We assume a tree of height 11.6 m. The heatmap is shown in Figure 13. The projection of the drone position is the red cell (12, 13). The nearest border and solution cells are shown in Figure 14. The dashed red box represents the solution cell (i.e., the final drone position S_position).
      The drone passes through the solution cell. It performs a horizontal turn of 2°, then turns upward vertically by 2°, covering a distance of 11.506 m. It then turns vertically down by 3° to return to the initial height behind the obstacle and makes another turn to get back to the initial direction.
    - Fourth Obstacle: Assuming the obstacle is part of a building, the heatmap is shown in Figure 15. The projection of the drone position is cell (12, 13). After the subdivision, cell (3, 13) is selected as the solution (S_position). The red cells represent the boundary set B_m, and the blue cells represent the solution cells S_n. The final drone position is represented in Figure 16 (the red box). The drone passes through the cell. It performs a horizontal turn of 28°, covering a distance of 13.07 m, then performs a horizontal turn of 57°. After that, it returns to the initial path direction.
    At the end of the mission, the path resulting from the basic approach is presented in Figure 17a. The total path obtained by our work is shown in Figure 17b.
    The paths are seen from the top view. Figure 18 shows the total trajectory path of our work seen from a different view. We compared our trajectory plan to the basic approach. The UAV speed is set to 8 m/s and the UAV rotation rate to 30 degree/s. Table 1 shows a 13.20% gain in completion time and a 10.37% gain in energy for EAOA over the basic approach. It should be noted that the 3D path is 4.04% shorter than the 2D path of the basic approach, with a reduction of 25% in the total turn angles (these figures can be reproduced from Equations (11) and (12); see the sketch after this list).
    In order to understand the behavior of the work, we varied the speed and the rotation rate. The UAV speed varies between 4 and 14 m/s, and the UAV rotation rate varies between 10 and 60 degree/s. Figure 19a shows the gain in completion time while increasing both the speed and rotation rate. Figure 19b shows the gain in completion time in our 3D work vs. a 2D path while increasing the speed and decreasing the rotation rate.
    Results show that the mission completion time is reduced by an average of 14.46% with the 3D path, with an energy saving of 10.37% compared to the results obtained with the basic approach. It should be noted that the variation of the speed and the rotation rate influences the completion time but not the energy consumption. The gain in energy remains 10.37% because λ and γ are the average energy consumed per unit distance and per degree of turn, respectively, as obtained in [35].
  • Scenario 2: In this scenario, we adopt the area used in [31] (see Figure 20). The area is 202 m × 172.2 m and is divided into a grid of 8 rows × 10 columns (top-view grid). Figure 20 shows the area of interest in the presence of obstacles.
    Figure 21a shows a Google Maps 3D view of the cell containing the obstacles: two trees with a maximum height of 4 m. Figure 21b shows the Google Maps street view image of the facing area. We compared our work to [31], in which the drone flies at a height of 10 m. We assume that the UAV speed varies between 4 and 14 m/s and the UAV rotation rate varies between 10 and 60 degree/s.
    The green line in Figure 22a represents the path generated by [31]. The red path in Figure 22b is generated in our work.
    Table 2 shows that our path is 3.89% shorter than the path in [31]. It also shows a reduction of 37.77% in the total turn angles. This means better energy saving. Results show that the mission completion time is reduced by an average of 15.22% in our work, with energy saving of 11.30% compared to the results obtained in [31].
    It should be noted that the area is not totally covered by the work in [31]: a cell that contains an obstacle (e.g., cell (4,4)) is discarded. However, if the obstacle occupies only a small part of the cell, it is better not to discard the whole cell when there is a possibility to avoid the obstacle while maintaining higher coverage. In our work, by tackling the frontal view, we gain a better understanding of the obstacle's location, which ensures better coverage of the area.
    The authors of [31] state that the cost function presented in the original approach in [10] was not able to achieve an optimal energy solution. Therefore, they revised the cost-effectiveness and updated the cost function. They proposed an approach based on an accurate energy model that correctly estimates the energy needed to complete the coverage mission, and they showed that their work achieves minimum energy consumption. Our work achieves an 11.3% energy gain over the work in [31], which in turn achieves a gain of up to 17% over the approach in [10]. Moreover, our work achieves complete area coverage with a shorter path than the work in [31], as they avoid the whole obstacle cell, which results in more turns and a lower coverage rate.
    Figure 23a shows the gain in completion time in our 3D work vs. 2D work while increasing both the speed and rotation rate. Figure 23b shows the gain in completion time while increasing the speed and decreasing the rotation rate. Results show that the mission completion time is reduced by an average of 15.95% in EAOA compared to the results obtained in [31].
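As a sanity check for Tables 1 and 2 (referenced in Scenario 1 above), the completion times and energies can be recomputed from Equations (11) and (12); the following sketch uses the route lengths and total turn angles from the tables, with our own helper names.

def tau(length_m, turns_deg, v=8.0, w=30.0):
    # Equation (11) with a single aggregate turn term (v in m/s, w in degree/s)
    return length_m / v + turns_deg / w

def energy(length_m, turns_deg, lam=0.1164, gamma=0.0173):
    # Equation (12) with the total turn angle already in degrees
    return lam * length_m + gamma * turns_deg

rows = {
    "Scenario 1, basic": (507.003, 1476),
    "Scenario 1, EAOA": (486.514, 1107),
    "Scenario 2, Ref. [31]": (1075.5555, 2025),
    "Scenario 2, EAOA": (1033.702, 1260),
}
for name, (length_m, turns_deg) in rows.items():
    print(f"{name}: {tau(length_m, turns_deg):.3f} s, "
          f"{energy(length_m, turns_deg):.3f} KJ")
# Matches Tables 1 and 2 up to rounding, e.g. 112.575 s / 84.550 KJ for the
# basic approach, yielding the reported 13.20% time and 10.37% energy gains.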

5. Conclusions

We proposed an energy-aware grid-based 3D obstacle avoidance approach for path planning missions. The algorithm, named EAOA, has two phases. It first generates an original path offline using the area top-view grid, then scans the frontal view of the area to detect an obstacle along the drone's path. The algorithm re-plans the path online to find a detour and bypass the obstacle. The results show a gain in completion time and energy saving. In future work, we aim to test the algorithm in an actual flight and to study its behavior in a noisy environment and in the presence of dynamic obstacles or pop-up threats. We also intend to boost the performance of the algorithm by applying further optimization using dynamic programming.

Author Contributions

Conceptualization, A.M.; Investigation, A.G. and A.M.; Supervision, A.G.; Validation, A.M.; Writing—original draft, A.G. and A.M.; Writing—review and editing, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their valuable comments, which improved the clarity and quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fotouhi, A.; Qiang, H.; Ding, M.; Hassan, M.; Giordano, L.G.; García-Rodríguez, A.; Yuan, J. Survey on UAV Cellular Communications: Practical Aspects, Standardization Advancements, Regulation, and Security Challenges. IEEE Commun. Surv. Tutor. 2019, 21, 3417–3442. [Google Scholar] [CrossRef] [Green Version]
  2. Basilico, N.; Carpin, S. Deploying teams of heterogeneous UAVs in cooperative two-level surveillance missions. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 610–615. [Google Scholar] [CrossRef]
  3. Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-Based Crop and Weed Classification for Smart Farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar] [CrossRef]
  4. Nam, L.; Huang, L.; Li, X.; Xu, J. An approach for coverage path planning for UAVs. In Proceedings of the 2016 IEEE 14th International Workshop on Advanced Motion Control (AMC), Auckland, New Zealand, 22–24 April 2016; pp. 411–416. [Google Scholar] [CrossRef] [Green Version]
  5. Luo, C.; Miao, W.; Ullah, H.; McClean, S.; Parr, G.; Min, G. Unmanned Aerial Vehicles for Disaster Management. In Geological Disaster Monitoring Based on Sensor Networks; Durrani, T., Wang, W., Forbes, S., Eds.; Springer: Singapore, 2019; pp. 83–107. [Google Scholar] [CrossRef]
  6. Pham, H.; La, H.; Feil-Seifer, D.; Deans, M. A Distributed Control Framework for a Team of Unmanned Aerial Vehicles for Dynamic Wildfire Tracking. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017. [Google Scholar] [CrossRef] [Green Version]
  7. Artemenko, O.; Dominic, O.J.; Andryeyev, O.; Mitschele-Thiel, A. Energy-Aware Trajectory Planning for the Localization of Mobile Devices Using an Unmanned Aerial Vehicle. In Proceedings of the 2016 25th International Conference on Computer Communication and Networks (ICCCN), Waikoloa, HI, USA, 1–4 August 2016. [Google Scholar] [CrossRef]
  8. Xu, A.; Viriyasuthee, C.; Rekleitis, I. Complete Optimal Terrain Coverage using an Unmanned Aerial Vehicle. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2513–2519. [Google Scholar] [CrossRef] [Green Version]
  9. Ost, G. Search Path Generation with UAV Applications Using Approximate Convex Decomposition. Master’s Thesis, Automatic Control, Linköping University, Linköping, Sweden, 2012. [Google Scholar]
  10. Valente, J.; Sanz, D.; Cerro, J.; Barrientos, A.; de Frutos, M. Near-optimal coverage trajectories for image mosaicing using a mini quad-rotor over irregular-shaped fields. Precis. Agric. 2013, 14, 115–132. [Google Scholar] [CrossRef]
  11. Maza, I.; Ollero, A. Multiple UAV cooperative searching operation using polygon area decomposition and efficient coverage algorithms. In Distributed Autonomous Robotic Systems 6; Alami, R., Chatila, R., Asama, H., Eds.; Springer: Tokyo, Japan, 2007; Volume 6, pp. 221–230. [Google Scholar] [CrossRef]
  12. Torres Anaya, M.; Pelta, D.; Verdegay, J.; Torres, J. Coverage path planning with unmanned aerial vehicles for 3D terrain reconstruction. Expert Syst. Appl. 2016, 55, 441–451. [Google Scholar] [CrossRef]
  13. Radmanesh, M.; Kumar, M.; Guentert, P.H.; Sarim, M. Overview of Path-Planning and Obstacle Avoidance Algorithms for UAVs: A Comparative Study. Unmanned Syst. 2018, 6, 95–118. [Google Scholar] [CrossRef]
  14. Ghaddar, A.; Merei, A. Energy-Aware Grid Based Coverage Path Planning for UAVs. In Proceedings of the SENSORCOMM 2019: The Thirteenth International Conference on Sensor Technologies and Applications, Nice, France, 27–31 October 2019; pp. 34–45. Available online: https://www.researchgate.net/profile/Ahmad_Merei/publication/335570774_Energy-Aware_Grid_Based_Coverage_Path_Planning_for_UAVs/links/5e00e2e14585159aa4959742/Energy-Aware-Grid-Based-Coverage-Path-Planning-for-UAVs.pdf (accessed on 3 February 2020).
  15. Stefansson, T. 3D Obstacle Avoidance for Drones Using a Realistic Sensor Setup. Ph.D. Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2018. [Google Scholar]
  16. Zhao, L. 3D Obstacle Avoidance for Unmanned Autonomous System (UAS). Ph.D. Thesis, University of Nevada, Las Vegas, NV, USA, 2015. [Google Scholar]
  17. Ferrera, E.; Alcántara, A.; Capitán, J.; Castaño, Á.R.; Marrón, P.J.; Ollero, A. Decentralized 3D Collision Avoidance for Multiple UAVs in Outdoor Environments. Sensors 2018, 18, 4101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Lee, J.W.; Walker, B.; Cohen, K. Path Planning of Unmanned Aerial Vehicles in a Dynamic Environment. Infotech@Aerospace. 2011. Available online: https://arc.aiaa.org/doi/abs/10.2514/6.2011-1654 (accessed on 3 February 2020).
  19. Lin, Z.; Castano, L.; Mortimer, E.; Xu, H. Fast 3D Collision Avoidance Algorithm for Fixed Wing UAS. J. Intell. Robot. Syst. 2019. Available online: https://doi.org/10.1007/s10846-019-01037-7 (accessed on 3 February 2020).
  20. Cabreira, M.; Brisolara, L.; Ferreira, P., Jr. Survey on Coverage Path Planning with Unmanned Aerial Vehicles. Drones 2019, 3, 4. [Google Scholar] [CrossRef] [Green Version]
  21. Xiong, R.; Shan, F. DroneTank: Planning UAVs’ Flights and Sensors’ Data Transmission under Energy Constraints. Sensors 2018, 18, 2913. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Liu, W.; Zheng, Z.; Cai, K. Adaptive path planning for unmanned aerial vehicles based on bi-level programming and variable planning time interval. Chin. J. Aeronaut. 2013, 26, 646–660. [Google Scholar] [CrossRef] [Green Version]
  23. Mori, T.; Scherer, S. First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1750–1757. [Google Scholar] [CrossRef] [Green Version]
  24. Al-Kaff, A.; Garcia, F.; Martin, D.; de la Escalera, A.; Armingol, J.M. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs. Sensors 2017, 17, 1061. [Google Scholar] [CrossRef] [PubMed]
  25. Iacono, M.; Sgorbissa, A. Path following and obstacle avoidance for an autonomous UAV using a depth camera. Robot. Auton. Syst. 2018, 106, 38–46. [Google Scholar] [CrossRef]
  26. Esposito, J.F. Real-Time Obstacle and Collision Avoidance System for Fixed Wing Unmanned Aerial Systems. Ph.D. Thesis, University of Kansas, Lawrence, KS, USA, 2013. [Google Scholar]
  27. Liu, C.; Wang, H.; Yao, P. UAV Autonomous Collision Avoidance Method Based on Three-dimensional Dynamic Collision Region Model and Interfered Fluid Dynamical System. DEStech Trans. Eng. Technol. Res. 2017. [Google Scholar] [CrossRef] [Green Version]
  28. Choi, M.; Rubenecia, A.; Shon, T.; Choi, H.H. Velocity Obstacle Based 3D Collision Avoidance Scheme for Low-Cost Micro UAVs. Sustainability 2017, 9, 1174. [Google Scholar] [CrossRef] [Green Version]
  29. Parappat, P.; Kumar, A.; Mittal, R.; Khan, S. Obstacle Avoidance by Unmanned Aerial Vehicles Using Image Recognition Techniques. In Proceedings of the International Conference on Circuits, Systems, Communications and Computers, 2014; Volume 1, pp. 378–381. Available online: https://www.researchgate.net/profile/Suhel_Khan2/publication/264238395_Obstacle_avoidance_by_unmanned_aerial_vehicles_using_image_recognition_techniques/links/545bedd70cf2f1dbcbcb085b/Obstacle-avoidance-by-unmanned-aerial-vehicles-using-image-recognition-techniques.pdf (accessed on 3 February 2020).
  30. Cruz, G.; Encarnacao, P. Obstacle Avoidance for Unmanned Aerial Vehicles. J. Intell. Robot. Syst. 2012, 65, 203–217. [Google Scholar] [CrossRef]
  31. Cabreira, T.M.; Ferreira, P.R.; Di Franco, C.; Buttazzo, G.C. Grid-Based Coverage Path Planning with Minimum Energy over Irregular-Shaped Areas with UAVS. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019. [Google Scholar] [CrossRef]
  32. Gramajo, G.; Shankar, P. An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage. Int. J. Aerosp. Eng. 2017, 2017, 8085623. [Google Scholar] [CrossRef]
  33. Li, D.; Wang, X.; Sun, T. Energy-optimal coverage path planning on topographic map for environment survey with unmanned aerial vehicles. Electron. Lett. 2016, 52, 699–701. [Google Scholar] [CrossRef]
  34. Recchiuto, C.; Nattero, C.; Sgorbissa, A.; Zaccaria, R. Coverage Algorithms for Search and Rescue with UAV Drones. In Proceedings of the Workshop of the XIII AIIA Symposium on Artificial Intelligence, Pisa, Italy, 10–12 December 2014. [Google Scholar]
  35. Modares, J.; Ghanei, F.; Mastronarde, N.; Dantu, K. UB-ANC planner: Energy efficient coverage path planning with multiple drones. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 6182–6189. [Google Scholar] [CrossRef]
Figure 1. Different ways of avoiding obstacles.
Figure 2. Illustration of the top and frontal views. (a) top view: a binary image representing an area of interest [14]: black cells are obstacles, grey cells are free areas; (b) example of a frontal view with an obstacle. The red line is the original path and the green one is the new path.
Figure 3. 3D view.
Figure 4. Example of a tree heatmap in the frame. The drone position is (12, 13). The red boxes represent boundary cells, and the purple boxes represent solution cells.
Figure 5. Determining the set of solutions S_n.
Figure 6. Group of obtained cells G.
Figure 7. Finding the new position of the drone. (a) the heatmap after subdivision: set of new solutions after subdivision; (b) the dashed box is the best solution.
Figure 8. Flowchart of the obstacle detection and avoidance.
Figure 9. Graph representation: red points are vertices and blue lines represent edges.
Figure 10. Area of interest in [4].
Figure 11. Heatmap with a signboard as obstacle (second obstacle).
Figure 12. Subdivision of the grid box G for the second obstacle.
Figure 13. Tree heatmap (third obstacle).
Figure 14. Subdivision of the grid box G for the third obstacle.
Figure 15. Heatmap of the frame containing part of a building.
Figure 16. Subdivision of the grid box G for the fourth obstacle.
Figure 17. Total trajectory paths. (a) total path generated by our 2D basic obstacle avoidance algorithm; (b) total path generated by our 3D obstacle avoidance algorithm.
Figure 18. Total trajectory path at the end of the mission.
Figure 19. Gain in completion time for EAOA compared to the basic approach. (a) gain in completion time while increasing both the speed and the rotation rate; (b) gain in completion time while increasing the speed and decreasing the rotation rate.
Figure 20. Scenario 2: area of interest [31].
Figure 21. The facing area in cell (4,4). (a) Google Maps 3D image of the cell containing the obstacle; (b) Google Maps street view image of the facing area.
Figure 22. Trajectory planning in our work versus the work in [31]. (a) trajectory planning by Di Franco et al. [31]; (b) trajectory plan by our work.
Figure 23. Gain in completion time. (a) gain in completion time in our work compared to [31] while increasing both the speed and rotation rate; (b) gain in completion time in our work compared to [31] while increasing the speed and decreasing the rotation rate.
Table 1. Our basic approach vs. EAOA.

| Metric | (a) Basic Approach | (b) Our Work (EAOA) |
| Route length ⊤ (m) | 507.003 | 486.514 |
| UAV speed υ (m/s) | 8 | 8 |
| Total turns (degrees) | 1476 | 1107 |
| Rotation rate ϖ (degree/s) | 30 | 30 |
| Completion time τ (s) | 112.575 | 97.714 |
| Energy consumption E_total (KJ) | 84.551 | 75.782 |
Table 2. Results obtained by the CPP in Ref. [31] (Figure 22a) vs. EAOA (Figure 22b).

| Metric | (a) Ref. [31] | (b) Our Work (EAOA) |
| Route length ⊤ (m) | 1075.5555 | 1033.702 |
| UAV speed υ (m/s) | 8 | 8 |
| Total turns (degrees) | 2025 | 1260 |
| Rotation rate ϖ (degree/s) | 30 | 30 |
| Completion time τ (s) | 201.944 | 171.212 |
| Energy consumption E_total (KJ) | 160.229 | 142.122 |
