Article

Multi-Camera Networks for Coverage Control of Drones

by Sunan Huang *, Rodney Swee Huat Teo and William Wai Lun Leong
Temasek Laboratories, National University of Singapore, T-Lab Building, 5A, Engineering Drive 1, Unit 09-02, Singapore 117411, Singapore
* Author to whom correspondence should be addressed.
Drones 2022, 6(3), 67; https://doi.org/10.3390/drones6030067
Submission received: 27 January 2022 / Revised: 25 February 2022 / Accepted: 25 February 2022 / Published: 3 March 2022
(This article belongs to the Special Issue Unconventional Drone-Based Surveying)

Abstract: Multiple unmanned multirotor (MUM) systems are becoming a reality. They have a wide range of applications, such as surveillance, search and rescue, monitoring operations in hazardous environments and providing communication coverage services. Currently, an important issue in MUM is coverage control. In this paper, an existing coverage control algorithm is extended to incorporate a new sensor model, which is downward facing and allows pan-tilt-zoom (PTZ). Two new constraints, namely view angle and collision avoidance, are also included. Mobile network coverage among the MUMs is studied. Finally, the proposed scheme is tested in computer simulations.

1. Introduction

Unmanned aerial vehicle (UAV) technology is a growing area. It offers many potential civil applications, inspiring scientists to develop new algorithms to automate UAV systems. Employing multiple unmanned multirotors (MUMs) [1,2,3,4,5,6,7,8] is rapidly becoming possible thanks to advances in computer hardware and communication technology. Using multiple UAVs is advantageous compared to using a single one: when a task is very difficult, a single UAV may take a long time or may not be able to accomplish it effectively. The research challenge is then to develop the appropriate cooperation logic so that the UAVs work together to complete missions effectively and efficiently.
Coverage control is attracting research interest in MUMs [9]. The basic principle is to drive the MUMs to an optimal coverage of a given environment. Early works on coverage address the visibility problem [10] or the Watchmen Tour Problem (WTP) [11], which determine the optimal number of guards and their routes, respectively, to observe a given area. These approaches form the primary research in this field, but they are not suitable for real applications; real-world constraints must be incorporated when developing coverage algorithms for MUMs. Cortés et al. [12,13] proposed coverage control approaches for MUMs based on the Voronoi algorithm, a decentralized iterative scheme that partitions a 2D plane into several cells. Thereafter, the coverage problem received greater attention and many methods [4,14,15,16] have been proposed. We give several examples. Schwager et al. [14,17] presented a distributed coverage algorithm that controls MUMs by configuring their positions and pan and tilt parameters. Piciarelli et al. [15] considered a fixed camera network and proposed a PTZ reconfiguration for coverage control that accounts for a relevance parameter, assuming an elliptical sensing field of view (FOV). Parapari et al. [4] presented a distributed collision avoidance control for MUMs, but without PTZ configuration. Wang and Guo [16] handled the coverage problem by deriving a distributed control from a potential function, again without PTZ configuration. The result of Schwager et al. [17] is significant because the sensor is a downward facing camera, which is a practical configuration; the difficulty in this approach is that the sensing FOV is an arbitrary four-sided convex polygon. Most results do not consider MUMs with downward facing cameras and arbitrary trapezoidal FOVs flying in the sky. However, the camera focal parameter was not configured in that paper. In order to minimize mobility and keep the MUMs hovering over the monitored area, simultaneous configuration of the PTZ parameters is an important topic in coverage control. In addition, a sensor may sometimes face an obstacle (e.g., a tree); without considering this situation in the coverage control, the sensor view may be occluded by dynamic or static obstacles. Thus, the view angle parameter is also important and should be considered. In a later work, Arslan et al. [18] considered a circular image sensor and developed an algorithm for configuring PTZ parameters. Unfortunately, their result is based on a circular FOV, whereas the actual FOV is trapezoidal. In addition, Arslan et al. [18] use a fixed camera network and configure the PTZ parameters without tuning the cameras’ positions.
In this paper, we develop a distributed coverage control approach for MUMs with downward facing cameras. The proposed approach is based on the result of Schwager et al. [17], with a modified camera model. Based on this model, we extend Schwager et al.’s method to include more control variables in the coverage control. Compared with Schwager et al.’s method, the following features are improved: (1) a camera sensor model closer to the actual one; (2) simultaneous configuration of the rotation and PTZ parameters; (3) view angle constraints; (4) a controller that avoids collisions among the UAVs; (5) network convergence with more control variables and guaranteed collision avoidance.
A preliminary version of this paper was published in ICUAS 2018 [19]. We extend [19] in three ways: (1) a stability analysis is given for the view angle control; (2) the coverage algorithm is extended to incorporate collision avoidance, together with its stability analysis; (3) more examples are tested in the simulation.
The paper is structured as follows: in Section 2, problem formulation and our research objectives are briefly described. In Section 3, the solution of a distributed coverage control with consideration of the unmanned aerial vehicle (UAV) position, rotation and PTZ parameters, and view angle constraint, is given. The simulation study is given in Section 4. Finally, the conclusion is given in Section 5.

2. Problem Statements

In this section, we describe the problem of coverage control of MUMs with downward facing cameras. We first define the environment and the camera model and then give the coverage control objective.

2.1. Environment

Let $Q \subset \mathbb{R}^2$ be a bounded environment. The degree of interest of the area in its interior is represented by a density function $\varphi(q)$, where $q \in Q$.

2.2. Camera Sensor Model

We consider n UAVs in an environment Q. Each UAV, with a downward facing camera, is located at position:
$$ p_i = [x_i, y_i, z_i]^T. $$
Figure 1 shows the camera model, where the center of the camera is at the center of the lens; $f$ is the focal length of the lens; $L_l$ is the length of the camera image; $L_w$ is the width of the camera image. Each camera has a rectangular field of view (FOV), which is the intersection of the cone of the camera lens with the environment Q. There are two coordinate systems in the coverage problem: the camera coordinates (CC) of UAV $i$ and the global coordinates (GC). The CC frame is centered at the camera lens, with its z-axis pointing downward through the lens toward the ground, while the GC frame is fixed on the ground, with its z-axis pointing upward, normal to the ground. In CC, the camera view is a pyramid whose base is a rectangle and whose sides are triangles meeting at a common apex, the center of the camera lens. The sides of the pyramid have outward normal vectors $e_1, e_2, e_3, e_4$. We first derive the outward normal vector $e_1$. The side view of the camera with the pyramid FOV is shown in Figure 2. From the figure, the two triangles $\triangle v_1 o v_2$ and $\triangle v_3 o v_4$ are similar, since the vector $e_1$ and the line $o v_3$ are perpendicular. Thus, the projection of $e_1$ on the y-axis is $o v_2$, with value $f$, while the rejection of $e_1$ from the y-axis is $v_1 v_2$, with value $L_l/2$. Therefore, the (unnormalized) outward normal $e_1$ in the CC system is $[0, f, -L_l/2]^T$. Since we are interested in the unit normal vector, it follows that:
$$ e_1 = \left[\, 0, \;\; \frac{2f}{\sqrt{L_l^2 + 4f^2}}, \;\; -\frac{L_l}{\sqrt{L_l^2 + 4f^2}} \,\right]^T $$
In a similar way, we obtain the remaining three outward normal vectors:
$$ e_2 = \left[\, \frac{2f}{\sqrt{L_w^2 + 4f^2}}, \;\; 0, \;\; -\frac{L_w}{\sqrt{L_w^2 + 4f^2}} \,\right]^T $$
$$ e_3 = \left[\, 0, \;\; -\frac{2f}{\sqrt{L_l^2 + 4f^2}}, \;\; -\frac{L_l}{\sqrt{L_l^2 + 4f^2}} \,\right]^T $$
$$ e_4 = \left[\, -\frac{2f}{\sqrt{L_w^2 + 4f^2}}, \;\; 0, \;\; -\frac{L_w}{\sqrt{L_w^2 + 4f^2}} \,\right]^T $$
We now relate the GC system to the CC system through a 3D rotation matrix R. This matrix can be composed of two elementary rotations. The first rotates about the z-axis by $\pi/2$:
$$ R_i\!\left(\tfrac{\pi}{2}\right) = \begin{bmatrix} \cos(\pi/2) & \sin(\pi/2) & 0 \\ -\sin(\pi/2) & \cos(\pi/2) & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$
and the second turns the x-axis over by $\pi$:
$$ R_i(\pi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\pi) & \sin(\pi) \\ 0 & -\sin(\pi) & \cos(\pi) \end{bmatrix}. $$
The combined transformation is:
$$ R_i^{\frac{\pi}{2}\pi} = R_i(\pi)\, R_i\!\left(\tfrac{\pi}{2}\right) = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}. $$
Finally, for a 3D point $v$ in GC, the vector $(v - p_i)$ is transformed into the CC system by $R_i^{\frac{\pi}{2}\pi}$, that is, $R_i^{\frac{\pi}{2}\pi}(v - p_i)$. For a point $q \in Q$, the vector $I_{32} q - p_i$ (where $I_{32} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$) expressed in CC is given by $R_i^{\frac{\pi}{2}\pi}(I_{32} q - p_i)$.
Now, consider the pyramid FOV shown in Figure 1. The pyramid has four sides, and their intersections with the ground give the four edges of the camera FOV. Each edge is denoted $l_k$, $k = 1, 2, 3, 4$. If the point $q$ lies on the edge $l_k$ of the camera FOV, then:
$$ e_k^T R_i^{\frac{\pi}{2}\pi} (I_{32} q - p_i) = 0, \quad k = 1, 2, 3, 4, $$
since the outward normal vector $e_k$ and the vector $R_i^{\frac{\pi}{2}\pi}(I_{32} q - p_i)$ are perpendicular. Thus, for a given $q \in Q$, the camera FOV is represented by:
$$ B_i = \left\{ q : e_{k_i}^T R_i^{\frac{\pi}{2}\pi} (I_{32} q - p_i) \le 0, \;\; k = 1, 2, 3, 4 \right\}. $$
The area A observed by the camera, which is rectangular, is given by:
$$ A = \frac{z^2 L_l L_w}{f^2}, $$
which follows directly from Figure 1. The numbers of pixels along the image length and width are $N_l$ and $N_w$, respectively. The area per pixel is therefore:
$$ \frac{\mathrm{area}}{\mathrm{pixel}} = \frac{L_l L_w z^2}{N_l N_w f^2}. $$
Thus, we have a function:
$$ g(p_i, q) = \begin{cases} \dfrac{L_{l_i} L_{w_i} z_i^2}{N_{l_i} N_{w_i} f_i^2}, & \text{for } q \in B_i, \\[2mm] \infty, & \text{otherwise}. \end{cases} $$
Remark 1.
It should be noted that the area/pixel expression differs from that of Schwager et al. [17], who used a magnification factor to represent the area observed by the camera; our expression is more practical because it uses the image area and the pixel resolution. The proposed sensor model also differs from Schwager et al.’s in where the camera center is placed: in our model, the center of the UAV (or camera) is at the center of the camera lens, while in Schwager et al.’s work it is at the camera focal point. If the zoom parameter f were configured based on Schwager et al.’s model, it would change the UAV altitude (making the zoom adjustment redundant), whereas a configuration based on our sensor model does not affect the UAV altitude. In addition, Schwager et al.’s work does not consider the size of the camera image, which affects the FOV.
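To make the sensor model concrete, the following Python sketch (our illustration, not the authors' implementation) builds the four unit outward normals, transforms a ground point into camera coordinates with $R_i^{\frac{\pi}{2}\pi}$, tests membership in the FOV $B_i$, and evaluates the area/pixel. The function names and the example numbers (an 8 mm lens at 35 m altitude, a 3 mm by 5 mm image at 640 by 480 pixels) are our own choices.

```python
import numpy as np

def outward_normals(f, L_l, L_w):
    """Unit outward normals e_1..e_4 of the FOV pyramid; opposite faces
    differ in the sign of their lateral component."""
    nl = np.sqrt(L_l ** 2 + 4.0 * f ** 2)
    nw = np.sqrt(L_w ** 2 + 4.0 * f ** 2)
    return [np.array([0.0,  2 * f / nl, -L_l / nl]),
            np.array([ 2 * f / nw, 0.0, -L_w / nw]),
            np.array([0.0, -2 * f / nl, -L_l / nl]),
            np.array([-2 * f / nw, 0.0, -L_w / nw])]

# Fixed transform from global (GC) to camera (CC) axes.
R_HALF_PI_PI = np.array([[0.0, 1.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 0.0, -1.0]])
I32 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # lifts a 2D ground point to 3D

def in_fov(q, p, f, L_l, L_w):
    """Membership test for B_i: the ground point q is inside the FOV of the
    downward camera at p = [x, y, z] iff e_k^T R (I_32 q - p) <= 0 for all k."""
    v = R_HALF_PI_PI @ (I32 @ np.asarray(q) - np.asarray(p))
    return all(e @ v <= 1e-12 for e in outward_normals(f, L_l, L_w))

def area_per_pixel(z, f, L_l, L_w, N_l, N_w):
    """Area/pixel: ground footprint z^2 L_l L_w / f^2 divided by the pixel count."""
    return (L_l * L_w * z ** 2) / (N_l * N_w * f ** 2)

if __name__ == "__main__":
    f, L_l, L_w = 8e-3, 3e-3, 5e-3        # 8 mm lens, 3 mm x 5 mm image (metres)
    p = np.array([0.0, 0.0, 35.0])        # hovering at 35 m
    print(in_fov([2.0, 3.0], p, f, L_l, L_w))            # True: near the nadir
    print(in_fov([100.0, 0.0], p, f, L_l, L_w))          # False: outside the footprint
    print(area_per_pixel(35.0, f, L_l, L_w, 640, 480))   # about 9.3e-4 m^2 per pixel
```

With these example numbers the ground footprint is roughly 13 m by 22 m, so each pixel of the 640 by 480 image observes a little under 10 cm² of ground.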

2.3. Wireless Camera Networks

In this paper, the coverage algorithm applies to wireless camera networks formed by cameras with PTZ capability. This implies that each UAV communicates with the other UAVs. The main assumptions of this network are described below.
  • Each camera can obtain its position and velocity from the GPS built into its UAV.
  • Each camera can obtain its FOV information from its camera image.
  • The camera network is connected; each camera therefore has sensing and communication capability, or multi-hop transmission capacity, to transmit the relevant information to neighboring cameras.
    It should be noted that in the present wireless camera network, (a) each camera only needs to transmit its FOV information to adjacent cameras in order to find FOV overlaps; it is not necessary to transmit the FOVs of all cameras in the environment. Therefore, the information broadcast can be limited to two or three hops before rebroadcasting stops (the UDP protocol can be used). (b) Each camera communicates with all of the other cameras only to obtain their position information.
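As a minimal sketch of the information exchanged under these assumptions (ours, not the authors' implementation; all field names and the two-hop budget are hypothetical), each UAV could broadcast a small state message and relay its neighbours' messages a limited number of times:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CameraState:
    """State a UAV broadcasts so that neighbours can detect FOV overlap."""
    uav_id: int
    position: Tuple[float, float, float]       # (x, y, z) from the onboard GPS
    velocity: Tuple[float, float, float]
    fov_corners: List[Tuple[float, float]]     # FOV footprint polygon on the ground
    hops_remaining: int = 2                    # stop rebroadcasting after a few hops

def relay(msg: CameraState) -> Optional[CameraState]:
    """Forward a received message once more, or drop it when the hop budget is spent."""
    if msg.hops_remaining <= 0:
        return None
    return CameraState(msg.uav_id, msg.position, msg.velocity,
                       msg.fov_corners, msg.hops_remaining - 1)
```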

2.4. Optimization Coverage Problem

For multi-UAV coverage control, we need a cost function that evaluates the coverage performance, and it is natural to use the information discussed in the previous section. Following [17], the cost function used as our control objective is:
$$ J = \int_Q \left( \sum_{i=1}^n g(p_i, q)^{-1} + \Upsilon^{-1} \right)^{-1} \varphi(q)\, dq $$
where $\Upsilon > 0$ is a positive constant. This constant keeps the integrand well defined when no camera covers the point $q$ (i.e., when $\sum_i g(p_i, q)^{-1} = 0$), and it may be interpreted as baseline information about the environment. The optimization coverage problem for the multi-UAVs is to minimize the objective function (11).

3. Distributed Coverage with Configuration of Position, Yaw and PTZ

Our goal is to develop a distributed control for the coverage problem, in which each UAV exchanges information only with its neighboring UAVs. Thus, the per-point cost can be represented by:
$$ J_{N_q} = \left( \sum_{i \in N_q} g(p_i, q)^{-1} + \Upsilon^{-1} \right)^{-1} $$
where $N_q$ is the set of cameras covering $q$:
$$ N_q = \{\, i : q \in B_i,\; i = 1, 2, \ldots, n \,\}. $$
The objective function becomes:
$$ J = \int_Q J_{N_q}\, \varphi(q)\, dq. $$
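As an illustration of how this objective can be evaluated numerically (our sketch, not the authors' code; the helper $g$ and the grid are placeholders, and the default Υ = 1000 matches the simulation section), the cost can be approximated on a discretised grid over Q:

```python
import numpy as np

def coverage_cost(cameras, g, phi, xs, ys, upsilon=1000.0):
    """Approximate J = int_Q (sum_i g(p_i, q)^-1 + Upsilon^-1)^-1 phi(q) dq on a grid.
    `g(cam, q)` returns the area/pixel of a camera at ground point q and np.inf
    outside its FOV, so its reciprocal contributes nothing there."""
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    J = 0.0
    for x in xs:
        for y in ys:
            q = np.array([x, y])
            info = sum(1.0 / g(cam, q) for cam in cameras) + 1.0 / upsilon
            J += (1.0 / info) * phi(q) * dx * dy
    return J
```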
When the UAV heading (yaw) and pan-tilt-zoom (PTZ) cameras are considered in UAV camera networks, many problems arise. One of them is how to achieve coverage by adjusting the position, yaw (rotation) and PTZ parameters in a MUM system. Since the camera FOV affects coverage, any coverage control strategy must be associated with the cameras’ PTZ parameters. In this section, we develop a control law for this general case, i.e., the simultaneous configuration of position, yaw (rotation) and PTZ parameters. Consider the state of UAV i with a gimbal camera to be given by:
$$ \chi_i = [x_i, y_i, z_i, \psi_i^r, \psi_i^p, \psi_i^t, f_i]^T, $$
where $\psi_i^r$, $\psi_i^p$ and $\psi_i^t$ are the rotation (yaw), pan and tilt angles, respectively, and $f_i$ is the focal length of camera i. Usually, the rotation (yaw) is controlled by the UAV, while the PTZ parameters are controlled by a gimbal camera system. We treat the UAV and its camera as a single particle whose position is the camera lens; as explained in Section 2, the center of the UAV position is the camera lens. Consider the state equation of camera i:
$$ \dot{\chi}_i = u_i $$
where:
$$ u_i = [u_{x_i}, u_{y_i}, u_{z_i}, u_{\psi_i^r}, u_{\psi_i^p}, u_{\psi_i^t}, u_{f_i}]^T. $$
Let $p_i = [x_i, y_i, z_i]^T$ be the position of camera i at the center of the lens (it is also the position of the UAV). Note that the dynamics of each UAV are independent. For neighboring UAVs, the wireless communication network is assumed to remain connected during the mission.
As shown in Section 2, a rectangular FOV in the GC coordinates is converted into the CC system by the rotation matrix $R_i^{\frac{\pi}{2}\pi}$. Sometimes the yaw angle must also be configured so that coverage is achieved optimally. In this situation, a rotation matrix for the yaw (rotation) angle is introduced as described below:
Rotation about the z-axis (yaw angle):
$$ R_{yaw}(\psi_i^r) = \begin{bmatrix} \cos\psi_i^r & \sin\psi_i^r & 0 \\ -\sin\psi_i^r & \cos\psi_i^r & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
Similarly, we can tune pan and tilt angles to control the camera to get a better FOV on the ground. The rotation matrices corresponding to the pan and tilt transformation are given by:
Rotation about the x-axis (pan angle):
$$ R_{pan}(\psi_i^p) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi_i^p & \sin\psi_i^p \\ 0 & -\sin\psi_i^p & \cos\psi_i^p \end{bmatrix}, $$
and:
Rotation about the y-axis (tilt angle):
$$ R_{tilt}(\psi_i^t) = \begin{bmatrix} \cos\psi_i^t & 0 & -\sin\psi_i^t \\ 0 & 1 & 0 \\ \sin\psi_i^t & 0 & \cos\psi_i^t \end{bmatrix}, $$
respectively. Finally, for a 3D point v in a GC system, we can transform the vector ( v p i ) into the CC system by rotating R i , which is given by:
$$ R_i = R_{tilt}(\psi_i^t)\, R_{pan}(\psi_i^p)\, R_{yaw}(\psi_i^r)\, R_i^{\frac{\pi}{2}\pi}. $$
For a point q inside the FOV, the vector $(I_{32} q - p_i)$ is expressed in the CC system as $R_i (I_{32} q - p_i)$. Next, we consider the FOV pyramid shown in Figure 1. Under this transformation, the camera FOV given in (8) for the CC system becomes:
$$ B_i = \left\{ q : e_{k_i}^T R_i (I_{32} q - p_i) \le 0, \;\; k = 1, 2, 3, 4 \right\}. $$
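The composition of these rotations can be written compactly in code. The sketch below is ours; the sign conventions follow the matrices as reconstructed above, and the membership test reuses the outward normals from the camera-model sketch in Section 2.2.

```python
import numpy as np

def rot_yaw(a):   # rotation about the z-axis (yaw)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_pan(a):   # rotation about the x-axis (pan)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_tilt(a):  # rotation about the y-axis (tilt)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

R_HALF_PI_PI = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
I32 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

def camera_rotation(yaw, pan, tilt):
    """Full GC-to-CC rotation R_i = R_tilt R_pan R_yaw R^{pi/2 pi}."""
    return rot_tilt(tilt) @ rot_pan(pan) @ rot_yaw(yaw) @ R_HALF_PI_PI

def in_fov_ptz(q, p, yaw, pan, tilt, normals):
    """Membership test for the rotated/tilted FOV B_i, with `normals` the list
    of outward normals e_1..e_4 of the camera."""
    v = camera_rotation(yaw, pan, tilt) @ (I32 @ np.asarray(q) - np.asarray(p))
    return all(e @ v <= 1e-12 for e in normals)
```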
It should be noted that when the yaw or PTZ parameters are tuned, the FOV is no longer the rectangle of Section 2 but a trapezium (trapezoid) under the transformation $R_i$. The area of this FOV shape can be viewed as a mapping $C_{fov}$ from the image area $L_l L_w$ to the area of the trapezoidal FOV on the ground, expressed as $C_{fov}: L_l L_w \rightarrow FOV$. Therefore, the area/pixel in (9) should be changed to:
$$ \frac{\mathrm{area}}{\mathrm{pixel}} = \frac{C_{fov}\, \| I_{32} q - p_i \|^2}{N_l N_w f^2}, $$
where $\| I_{32} q - p_i \|$ is the distance between the camera and the point q inside the FOV. Now, due to the use of yaw and PTZ cameras, the function $g(p_i, q)$ is changed to:
$$ g(p_i, q) = \begin{cases} \dfrac{C_{fov}(L_{l_i} L_{w_i})\, \| I_{32} q - p_i \|^2}{N_{l_i} N_{w_i} f_i^2}, & \text{for } q \in B_i, \\[2mm] \infty, & \text{otherwise}. \end{cases} $$
Consider the state equation of UAV i with the PTZ camera, that is:
$$ \dot{\chi}_i = u_i $$
$$ u_i = -\rho\, \frac{\partial J}{\partial \chi_i}, \qquad \chi_i = [p_i, \psi_i^r, \psi_i^p, \psi_i^t, f_i] $$
where $\rho > 0$ is a factor, and the gradient $\partial J / \partial \chi_i$ (see Appendix A) is given by:
$$ \frac{\partial J}{\partial p_i} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{R_i^T e_{k_i}}{\| I_{23} R_i^T e_{k_i} \|}\, dq \;-\; \int_{Q \cap B_i} \frac{2 N_{l_i} N_{w_i} f_i^2\, J_{N_q}^2\, (I_{32} q - p_i)}{L_{l_i} L_{w_i}\, \| I_{32} q - p_i \|^4}\, \varphi(q)\, dq, $$
$$ \frac{\partial J}{\partial \psi_i^w} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{e_{k_i}^T \frac{\partial R_i}{\partial \psi_i^w} (p_i - I_{32} q)}{\| I_{23} R_i^T e_{k_i} \|}\, dq, \qquad w \in \{r, p, t\}, $$
where $\partial R_i / \partial \psi_i^w$ is obtained by differentiating $R_i$ with respect to $\psi_i^w$ directly, and:
$$ \frac{\partial J}{\partial f_i} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{\frac{\partial e_{k_i}^T}{\partial f_i} R_i (p_i - I_{32} q)}{\| I_{23} R_i^T e_{k_i} \|}\, dq \;-\; \int_{Q \cap B_i} \frac{2 J_{N_q}^2 f_i N_{l_i} N_{w_i}}{L_{l_i} L_{w_i}\, \| I_{32} q - p_i \|^2}\, \varphi(q)\, dq. $$
It can be observed from (26) that if the parameters x, y, z are fixed, updating the parameter f does not affect the position of the UAV. This implies that a UAV can hover at a certain position while its PTZ parameters are tuned to obtain optimal coverage. The factor ρ affects the speed of convergence to the coverage: larger values of ρ speed up convergence, but ρ cannot be made very large because of the control limits.
The following theorem is given for configuring the parameters $x_i$, $y_i$, $z_i$, $\psi_i^r$, $\psi_i^p$, $\psi_i^t$ and $f_i$ simultaneously.
Theorem 1.
In a network of n UAVs governed by dynamics (22) with downward facing cameras, the gradient of the cost function J with respect to the state variables $\chi_i = [x_i, y_i, z_i, \psi_i^r, \psi_i^p, \psi_i^t, f_i]^T$ is given by (24)–(26).
The proposed coverage control is decentralized. Based on gradient descent, the configuration of the control variables is an iterative process, as shown in Figure 3. The iteration terminates when the desired coverage rate is reached, where the coverage rate is defined as the ratio of the area of interest covered by the multiple UAVs to the total area of interest.
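A compact sketch of this iterative loop is given below (ours, not the authors' implementation). For readability it evaluates the cost centrally and uses finite-difference gradients in place of the analytic, per-UAV gradients (24)–(26); the stopping criterion is the coverage rate defined above, and the callables `cost` and `coverage_rate` are assumed to be supplied by the user.

```python
import numpy as np

def coverage_descent(states, cost, coverage_rate, rho=0.5, target=0.9,
                     max_iter=500, eps=1e-3):
    """Iterative configuration of chi_i = [x, y, z, yaw, pan, tilt, f] for every UAV,
    following the gradient-descent loop of Figure 3."""
    states = [np.asarray(s, dtype=float) for s in states]
    for _ in range(max_iter):
        if coverage_rate(states) >= target:        # stop at the desired coverage rate
            break
        for i, chi in enumerate(states):
            base = cost(states)
            grad = np.zeros_like(chi)
            for k in range(chi.size):              # finite-difference gradient of J
                bumped = [s.copy() for s in states]
                bumped[i][k] += eps
                grad[k] = (cost(bumped) - base) / eps
            states[i] = chi - rho * grad           # u_i = -rho dJ/dchi_i
        # a full implementation would add the view-angle and collision-avoidance terms
    return states
```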
Remark 2.
In [17], the configuration of the focal parameter f is not considered. In the proposed algorithm, we can configure the position ( x , y , z ) , yaw and PTZ parameters. This improves the results of [17].

3.1. Incorporating View Angle

In a practical situation, a sensor (e.g., a solid-state LiDAR) sometimes faces an obstacle (e.g., a tree) lying between the sensor and the target, so that it cannot capture the object image even if the target is within the view angle coverage of the sensor; this is known as an occlusion. If the view angle is treated as a target feature, the coverage must include this feature; without considering it in the coverage control, the target may be occluded by dynamic or static objects. For example, we may want camera i to rotate its angle to 30 degrees so that the observed region is seen clearly. This is a reference-following control problem.
Let us consider this situation in the coverage control. Assume that the UAV is required to follow a desired view angle $\psi_i^{wD}$, $w \in \{r, p, t\}$. The controller should be designed to achieve this objective, i.e., $\psi_i^w$ approaches the desired $\psi_i^{wD}$ as quickly as possible. Let $\tilde{\psi}_i^w = \psi_i^{wD} - \psi_i^w$. We propose the following controller:
$$ u_{\psi_i^w} = \dot{\psi}_i^{wD} + k_{p_i}^w\, \tilde{\psi}_i^w $$
where $k_{p_i}^w > 0$ is a constant control gain chosen by the user.
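A minimal sketch of this tracking law and its effect (ours; the gain, time step and 90-degree set point are illustrative):

```python
import math

def view_angle_control(psi, psi_des, psi_des_rate, kp):
    """Reference-tracking law: u_psi = d(psi^D)/dt + k_p (psi^D - psi).
    With psi_dot = u_psi the tracking error obeys d(psi~)/dt = -k_p psi~."""
    return psi_des_rate + kp * (psi_des - psi)

# Example: hold a constant 90-degree desired rotation (so its rate is zero).
psi, dt = 0.0, 0.05
for _ in range(200):                      # simple Euler integration of psi_dot = u_psi
    psi += dt * view_angle_control(psi, math.pi / 2, 0.0, kp=1.0)
print(round(math.degrees(psi), 1))        # approaches 90.0
```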
Assumption A1.
The desired angle $\psi_i^{wD}$, $w \in \{r, p, t\}$, is smooth and bounded, and its derivative $\dot{\psi}_i^{wD}$ is also bounded.
Rearranging the state equations of camera i, we get:
$$ \dot{\eta}_i = u_{\eta_i} $$
$$ \dot{\psi}_i^w = u_{\psi_i^w}, \qquad w \in \{r, p, t\}, $$
where $\eta_i = [x_i, y_i, z_i, f_i]^T$, $u_{\eta_i} = [u_{x_i}, u_{y_i}, u_{z_i}, u_{f_i}]^T$, $\psi_i^w = [\psi_i^r, \psi_i^p, \psi_i^t]^T$ and $u_{\psi_i^w} = [u_{\psi_i^r}, u_{\psi_i^p}, u_{\psi_i^t}]^T$.
We have the following theorem to ensure the stability.
Theorem 2.
In a network of n UAVs governed by dynamics (22) with downward facing cameras, let the control laws $u_{x_i}, u_{y_i}, u_{z_i}, u_{f_i}$ be designed by (23) and the control laws $u_{\psi_i^w}$, $w \in \{r, p, t\}$, be designed by (27). If Assumption 1 holds, then:
(1) the proposed control laws lead to a bounded coverage cost J;
(2) $\lim_{t \to \infty} \tilde{\psi}_i^w = 0$.
Proof. 
See Appendix A. □
Remark 3.
For a camera network, it is possible that some UAVs use the position and PTZ control with view angle consideration and others without. In this situation, the UAVs without view angle consideration are referred to as having zero view angle control, i.e., their desired view angle is set to $\psi_i^{wD} = 0$. Thus, a result similar to Theorem 2 holds.
Remark 4.
The stability result in Theorem 2 ensures that (1) the coverage cost is bounded when the proposed coverage control with view angle is applied, and (2) the view angle converges to its desired value.

3.2. Incorporating Collision Avoidance

Collision avoidance is a practical problem in multi-UAV networks. So far, the control design has not considered collisions with other UAVs while pursuing the coverage objective. In what follows, we incorporate a potential function so that the coverage control avoids collisions autonomously.
Define the distance $d_{ij} = \| p_i - p_j \|$, $i, j = 1, 2, \ldots, n$, $i \ne j$. Let $\Gamma_i$ be the set of indices of cameras whose distance $d_{ij}$ from the ith UAV lies within a certain region, that is, $\Gamma_i = \{ j : r < d_{ij} \le R,\; j = 1, 2, \ldots, n,\; j \ne i \}$. The region $r < d_{ij} \le R$ is shown in Figure 4, where r and R ($R > r > 0$) are the radii of the avoidance and detection regions, respectively. The potential function [20] over the n UAVs is given by:
$$ V_{P_i} = \sum_{j=1, j \ne i}^{n} v_{P_{ij}} $$
where:
$$ v_{P_{ij}} = \left( \min\left\{ 0,\; \frac{d_{ij}^2 - R^2}{d_{ij}^2 - r^2} \right\} \right)^2. $$
This is a repulsive potential function. The partial derivative of $v_{P_{ij}}$ with respect to $p_i = [x_i, y_i, z_i]^T$ is given by:
$$ \frac{\partial v_{P_{ij}}^T}{\partial p_i} = \begin{cases} 0, & \text{if } d_{ij} \ge R, \\[2mm] \dfrac{4 (R^2 - r^2)(d_{ij}^2 - R^2)}{(d_{ij}^2 - r^2)^3}\, (p_i - p_j)^T, & \text{if } r < d_{ij} < R, \\[2mm] 0, & \text{if } d_{ij} < r. \end{cases} $$
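A small sketch of this potential and its gradient (ours; variable names are illustrative):

```python
import numpy as np

def potential(p_i, p_j, r, R):
    """Repulsive potential v_Pij = (min{0, (d^2 - R^2) / (d^2 - r^2)})^2."""
    d2 = float(np.dot(p_i - p_j, p_i - p_j))
    return min(0.0, (d2 - R ** 2) / (d2 - r ** 2)) ** 2

def potential_gradient(p_i, p_j, r, R):
    """Piecewise gradient of v_Pij with respect to p_i; zero outside (r, R)."""
    d2 = float(np.dot(p_i - p_j, p_i - p_j))
    if d2 >= R ** 2 or d2 <= r ** 2:
        return np.zeros(3)
    return 4.0 * (R ** 2 - r ** 2) * (d2 - R ** 2) / (d2 - r ** 2) ** 3 * (p_i - p_j)
```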
Rearrange the state equations of the UAV i as:
$$ \dot{p}_i = u_{p_i} $$
$$ \dot{\beta}_i = u_{\beta_i}, $$
where $\beta_i = [\psi_i^r, \psi_i^p, \psi_i^t, f_i]^T$.
Consider the following performance criterion:
$$ V = J + \frac{1}{2} \sum_{i=1}^{n} V_{P_i}. $$
The following theorem is given to establish a stability result.
Theorem 3.
In a network of n UAVs governed by dynamics (22) with downward facing cameras, the control laws are given by:
$$ u_{p_i} = -\rho \left( \frac{\partial J}{\partial p_i} + \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}}{\partial p_i} \right) $$
$$ u_{\beta_i} = -\rho\, \frac{\partial J}{\partial \beta_i}, $$
together with (32), (A9), (A17) and (A18). If the initial configuration satisfies $d_{ij}(0) > r$, then:
(1) The proposed control laws lead to a bounded coverage cost J;
(2) Collisions among the UAVs are avoided.
Proof. 
See Appendix A. □
Remark 5.
If the view angle control is also incorporated, a conclusion similar to Theorem 3 holds.
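As a sketch of how the two gradient terms of the position law combine (ours; the gain value is illustrative), the repulsive gradients of all UAVs inside the detection region are simply added to the coverage gradient before scaling:

```python
import numpy as np

def position_control(grad_J_p, repulsion_grads, rho=0.5):
    """Combined position law: -rho * (dJ/dp_i + sum_j dv_Pij/dp_i)."""
    return -rho * (np.asarray(grad_J_p, dtype=float) + sum(repulsion_grads, np.zeros(3)))
```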

4. Simulation Studies

Simulation tests of the proposed coverage controls are given in this section. An HP computer with an Intel(R) Core(TM) i7-4770 CPU running Windows 10 was used, and the simulations were implemented in MATLAB. In all case studies, the environment Q was the same rectangle $[-200, 200] \times [-200, 200] \subset \mathbb{R}^2$. All UAVs were identical, with the camera model shown in Figure 1. The parameters of the camera image sensor were $L_l = 3$ mm, $L_w = 5$ mm, $N_l = 640$ pixels and $N_w = 480$ pixels, and $\Upsilon$ was chosen as 1000.
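For reference, the environment, the Case 1 density function and a simple coverage-rate measure can be set up as follows (our sketch; the simulations in the paper were run in MATLAB, and the grid resolution and the 0.1 threshold on φ are our choices):

```python
import numpy as np

# Environment Q = [-200, 200] x [-200, 200] discretised on a grid.
xs = np.linspace(-200.0, 200.0, 81)
ys = np.linspace(-200.0, 200.0, 81)
X, Y = np.meshgrid(xs, ys)

# Density function of Case 1 (the area of interest), as read from the text.
phi = np.exp(-((X - 20.0) / 20.0) ** 2 - ((Y - 40.0) / 40.0) ** 2)

def coverage_rate(covered_mask, phi, threshold=0.1):
    """Fraction of the area of interest (phi above a threshold) lying inside at
    least one camera FOV; `covered_mask` is a boolean grid from the FOV tests."""
    interest = phi > threshold
    return covered_mask[interest].sum() / max(interest.sum(), 1)
```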

4.1. Case 1

We considered 2 UAVs for coverage control. The sensor model of Section 2.2 with a rectangular FOV was used. The configuration of the camera network included the UAV positions only, without rotation or PTZ tuning. The proposed algorithm (23) was used to control the UAVs so that the coverage cost function was minimized. We considered the density function $\varphi(q) = e^{-\left(\frac{x-20}{20}\right)^2 - \left(\frac{y-40}{40}\right)^2}$, which defines a convex area of interest. The two UAVs try to cover the area of interest by tuning their position (x, y, z) parameters. Simulation results are illustrated in three parts: initial stage, after the initial stage and final stage. Initially, the two UAVs started as seen in Figure 5. Figure 6 shows the coverage improvement after the initial stage. The final stage is shown in Figure 7. It was observed that the camera network coverage increased significantly through multiple rounds and the area was almost fully covered in the final stage.
4.2. Case 2

We considered configuring more parameters, namely the UAV position (x, y, z), rotation, pan and tilt angles, using the proposed controls (23). In addition, one UAV was required to configure its view angle (rotation) using the proposed control (27). The density function $\varphi(q) = e^{-\left(\frac{x-20}{20}\right)^2 - \left(\frac{y-20}{40}\right)^2} + e^{-\left(\frac{x-50}{50}\right)^2 - \left(\frac{y-50}{20}\right)^2}$ defines a non-convex region of interest, as shown in Figure 8a. Three UAVs try to cover the area of interest as quickly as possible. Simulation results are illustrated in three parts: initial stage, after the initial stage and final stage. The first UAV, marked in blue, configures the (x, y, z), rotation, pan and tilt parameters, but its rotation angle is required to follow a desired rotation angle of 90 degrees, measured from the x-axis of the (right-handed) CC system. The other two UAVs use the coverage control with configuration of the camera position (x, y, z), rotation, pan and tilt parameters. Initially, the three UAVs start grouped, as seen in Figure 8, and the coverage is poor. Figure 9 then shows the coverage improvement after the initial stage, when about half of the area of interest is covered. It is observed from Figure 8 and Figure 9 that the view angle of the first UAV (marked in blue) is held at 90 degrees. The final stage is shown in Figure 10, where the rotation angle of the first UAV is still held at 90 degrees. It is observed from Figure 8, Figure 9 and Figure 10 that the camera network coverage increases significantly through multiple rounds, and about 93% of the area of interest is covered in the final stage.

4.3. Case 3

We configured the camera focal length (zoom parameter), which was not controlled in Cases 1–2. The altitude (z) was fixed at 35 m so that the change of the camera FOV could be observed as the zoom parameter was updated. Thus, the configuration parameters included the camera position (x, y), rotation and PTZ. We used the coverage control laws proposed in (23). The density function $\varphi(q)$ was the same as in Case 2. Four UAVs were controlled by the coverage laws to cover the area of interest automatically. Simulation results are illustrated in three parts: initial stage, after the initial stage and final stage. Initially, the four UAVs were grouped, as seen in Figure 11, where the initial focal length was 8 for all cameras. At the initial stage, the multi-UAV coverage was poor. After that, the coverage control drove all UAVs to improve the coverage by configuring (x, y), rotation and PTZ. Figure 12 shows the coverage improvement after the initial stage, when almost half of the area of interest was covered. It is observed that the FOV of each UAV changes, especially in size, such that the coverage performance improves; this is because the zoom parameter was configured. The final stage is shown in Figure 13. Even though the altitude of all UAVs was fixed, a camera network coverage of about 97 percent was achieved by configuring the zoom and the other parameters of the four UAVs.
4.4. Case 4

The previous results did not consider collision avoidance when configuring the camera parameters. In a practical situation, controlling UAVs without a collision avoidance mechanism is quite dangerous. In this case, we show the configuration of the parameters (x, y, z), rotation, pan and tilt with the collision avoidance proposed in Section 3.2. The density function $\varphi(q)$ was the same as in Case 2. Four UAVs try to cover the area of interest as quickly as possible, and a coverage rate of 85 percent is required. The radii of the avoidance and detection regions were assumed to be r = 30 and R = 35, respectively. Figure 14 shows the coverage without considering collision avoidance, while Figure 15 shows the coverage with the collision avoidance function. The horizontal distance profile between the ith UAV and the jth UAV (i ≠ j) is shown in Figure 16. It is observed from Figure 16 that the minimum separation distance was about 25 m (less than the avoidance radius) when collision avoidance was not considered, whereas it remained greater than 30 m (greater than the avoidance radius) when collision avoidance was considered. This verifies that the proposed collision avoidance works well. Nearly 85 percent of the camera network coverage rate was achieved. In this case, the coverage rate was lower than in Cases 1–3, since we intended to demonstrate the collision avoidance and used a large avoidance radius, i.e., r = 30.
The proposed decentralized control requires knowing the position and FOV information of the neighboring UAVs. As discussed in Section 2.3, a communication network should be built for exchanging this information among them. The proposed coverage control is an iterative process based on gradient descent. The simulation results have shown that the desired coverage rate can be reached by multi-UAV configuration. Even if the UAVs start with poor coverage in the initial stage (see Cases 2 and 3), the desired coverage rate can still be achieved by configuring the variables. Moreover, since a collision avoidance term is added to the control, collision avoidance does not need to be handled separately when conducting coverage control, as observed in Case 4. A drawback of the proposed algorithm is that the FOV of a UAV may lie partly outside the area of interest, as observed in Figure 13.

5. Discussion and Conclusions

The development of coverage strategies is a main topic in multi-UAV systems. Technical considerations in camera networks consisting of multiple UAVs are still a challenge. Little work has been done on coverage control designs that involve the configuration of multiple parameters together with view angle and collision avoidance constraints.
This paper has presented a solution for coverage control of a multi-UAV system with downward facing cameras. We have extended the result of [19] in several directions: more parameters are considered for tuning, which enhances the control freedom; the view angle is addressed in the coverage design; and the proposed coverage control incorporates collision avoidance, which is quite important when multiple UAVs configure their variables to achieve the coverage objective. The theoretical analysis is also given. The simulation tests have shown that the multiple UAVs with cameras can cover the whole area of interest automatically, and they have verified that the proposed coverage control scheme is successful. A drawback of the proposed algorithm is that the FOV of a UAV may lie partly outside the area of interest. The current version also cannot handle invisible and obscured areas, which is a 3D coverage issue.
In future research we will consider the deformation of reality that may arise in the image due to the differentiation of the tilt angle or rotation. Another objective is to implement the proposed method on a real multi-UAV system.

Author Contributions

Conceptualization, S.H. and R.S.H.T.; methodology, S.H. and R.S.H.T.; software, W.W.L.L.; formal analysis, W.W.L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This paper was approved by Temasek Laboratories on 15 February 2022 (approval number 2720).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Symbols
q: a point in the environment
Q: the environment
φ(q): the density function of the area of interest
p_i: the ith UAV’s position
e_k: the kth outward normal vector of the FOV pyramid
f: the focal length of the lens
L_l: the length of the camera image
L_w: the width of the camera image
R_i: the rotation matrix of the ith UAV
A: the area observed by the camera
B_i: the field of view of the ith camera
g(p_i, q): the area/pixel information
n: the total number of UAVs
J_{N_q}: the cost index
ψ_i^w: the angle of the ith camera, w = {r, p, t}
u_i: the controlled variable of the ith UAV
N_l: the pixel number along the length of the image
N_w: the pixel number along the width of the image
List of abbreviations
MUM: Multiple unmanned multirotor
PTZ: Pan-tilt-zoom
UAV: Unmanned aerial vehicle
WTP: Watchmen Tour Problem
FOV: Field of view
CC: Camera coordinates
GC: Global coordinates
GPS: Global positioning system
LiDAR: Light detection and ranging

Appendix A

Proof of Theorem 1. 
Taking a procedure similar to that in [17] (see Equation (37) of [17]), we have:
$$ \frac{\partial J}{\partial p_i} = \int_{\partial (Q \cap B_i)} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{\partial q_{\partial (Q \cap B_i)}^T}{\partial p_i}\, n_{\partial (Q \cap B_i)}\, dq + \int_{Q \cap B_i} \frac{\partial J_{N_q}}{\partial p_i}\, \varphi(q)\, dq $$
where $q_{\partial (Q \cap B_i)}$ and $n_{\partial (Q \cap B_i)}$ denote a point $q$ and the outward normal vector $n$ along the boundary $\partial(Q \cap B_i)$ of UAV i, respectively. We have to calculate:
$$ \frac{\partial q_{\partial (Q \cap B_i)}^T}{\partial p_i}\, n_{\partial (Q \cap B_i)} $$
and:
$$ \frac{\partial J_{N_q}}{\partial p_i}. $$
Since $\partial(Q \cap B_i)$ is composed of the four edges of the camera FOV on the ground, $n_{\partial (Q \cap B_i)}$ is decomposed into $n_{1_i}$, $n_{2_i}$, $n_{3_i}$ and $n_{4_i}$ along the edges $l_{1_i}$, $l_{2_i}$, $l_{3_i}$ and $l_{4_i}$, respectively. It should be noted that each $n_{k_i}$ is expressed in the GC system and can therefore be obtained from $e_{k_i}$ through the inverse transformation of $R_i$, that is, $I_{23} R_i^{-1} e_{k_i} = I_{23} R_i^T e_{k_i}$. Normalizing to a unit vector, we have:
$$ n_{k_i} = \frac{I_{23} R_i^T e_{k_i}}{\| I_{23} R_i^T e_{k_i} \|}. $$
When q lies on the boundary $\partial(Q \cap B_i)$, the condition $e_{k_i}^T R_i (I_{32} q - p_i) = 0$ is satisfied. Differentiating this condition, we obtain:
$$ \frac{\partial q_{\partial (Q \cap B_i)}^T}{\partial p_i}\, n_{k_i} = \frac{R_i^T e_{k_i}}{\| I_{23} R_i^T e_{k_i} \|}. $$
The term $\partial J_{N_q} / \partial p_i$ can be calculated as:
$$ \frac{\partial J_{N_q}}{\partial p_i} = \frac{\partial}{\partial g_i} \left( \sum_{j \in N_q} g_j^{-1} + \Upsilon^{-1} \right)^{-1} \frac{\partial g_i}{\partial p_i} = J_{N_q}^2\, \frac{1}{g_i^2}\, \frac{\partial g_i}{\partial p_i} = -\frac{2 N_{l_i} N_{w_i} f_i^2\, J_{N_q}^2\, (I_{32} q - p_i)}{L_{l_i} L_{w_i}\, \| I_{32} q - p_i \|^4}. $$
Thus, we have:
$$ \frac{\partial J}{\partial p_i} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{R_i^T e_{k_i}}{\| I_{23} R_i^T e_{k_i} \|}\, dq - \int_{Q \cap B_i} \frac{2 N_{l_i} N_{w_i} f_i^2\, J_{N_q}^2\, (I_{32} q - p_i)}{L_{l_i} L_{w_i}\, \| I_{32} q - p_i \|^4}\, \varphi(q)\, dq. $$
For the gradient of J with respect to $f_i$, we have:
$$ \frac{\partial J}{\partial f_i} = \int_{\partial (Q \cap B_i)} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{\partial q_{\partial (Q \cap B_i)}^T}{\partial f_i}\, n_{\partial (Q \cap B_i)}\, dq + \int_{Q \cap B_i} \frac{\partial J_{N_q}}{\partial f_i}\, \varphi(q)\, dq. $$
Differentiating the boundary condition $e_{k_i}^T R_i (I_{32} q - p_i) = 0$ with respect to $f_i$, we have:
$$ \frac{\partial e_{k_i}^T}{\partial f_i} R_i (I_{32} q - p_i) + e_{k_i}^T R_i I_{32} \frac{\partial q}{\partial f_i} = 0 $$
and:
$$ \frac{\partial q^T}{\partial f_i}\, n_{k_i} = \frac{\frac{\partial e_{k_i}^T}{\partial f_i} R_i (p_i - I_{32} q)}{\| I_{23} R_i^T e_{k_i} \|}. $$
For the gradients $\partial e_{k_i}^T / \partial f_i$, from Equations (1)–(4) we have:
$$ \frac{\partial e_{1_i}^T}{\partial f_i} = \left[\, 0, \;\; 2 L_{l_i}^2, \;\; 4 f_i L_{l_i} \,\right] / \left( 4 f_i^2 + L_{l_i}^2 \right)^{3/2} $$
$$ \frac{\partial e_{2_i}^T}{\partial f_i} = \left[\, 2 L_{w_i}^2, \;\; 0, \;\; 4 f_i L_{w_i} \,\right] / \left( 4 f_i^2 + L_{w_i}^2 \right)^{3/2} $$
$$ \frac{\partial e_{3_i}^T}{\partial f_i} = \left[\, 0, \;\; -2 L_{l_i}^2, \;\; 4 f_i L_{l_i} \,\right] / \left( 4 f_i^2 + L_{l_i}^2 \right)^{3/2} $$
$$ \frac{\partial e_{4_i}^T}{\partial f_i} = \left[\, -2 L_{w_i}^2, \;\; 0, \;\; 4 f_i L_{w_i} \,\right] / \left( 4 f_i^2 + L_{w_i}^2 \right)^{3/2} $$
Thus, it follows that:
$$ \frac{\partial J}{\partial f_i} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{\frac{\partial e_{k_i}^T}{\partial f_i} R_i (p_i - I_{32} q)}{\| I_{23} R_i^T e_{k_i} \|}\, dq - \int_{Q \cap B_i} \frac{2 J_{N_q}^2 f_i N_{l_i} N_{w_i}}{L_{l_i} L_{w_i}\, \| I_{32} q - p_i \|^2}\, \varphi(q)\, dq. $$
Similarly, we have:
$$ \frac{\partial J}{\partial \psi_i^w} = \sum_{k=1}^{4} \int_{Q \cap l_{k_i}} (J_{N_q} - J_{N_q \setminus \{i\}})\, \varphi(q)\, \frac{e_{k_i}^T \frac{\partial R_i}{\partial \psi_i^w} (p_i - I_{32} q)}{\| I_{23} R_i^T e_{k_i} \|}\, dq, \qquad w \in \{r, p, t\}, $$
where $\partial R_i / \partial \psi_i^w$ is obtained by differentiating $R_i$ with respect to $\psi_i^w$ directly. □
Proof of Theorem 2. 
Consider the Lyapunov-like function:
$$ V = J + \sum_{i=1}^{n} \sum_{w \in \{r,p,t\}} \frac{1}{2} (\tilde{\psi}_i^w)^2. $$
Its time derivative is given by:
$$ \dot{V} = \sum_{i=1}^{n} \left[ \frac{\partial J^T}{\partial \eta_i} \dot{\eta}_i + \sum_{w \in \{r,p,t\}} \tilde{\psi}_i^w \dot{\tilde{\psi}}_i^w \right] = \sum_{i=1}^{n} \left[ \frac{\partial J^T}{\partial \eta_i} \dot{\eta}_i + \sum_{w \in \{r,p,t\}} \tilde{\psi}_i^w \left( \dot{\psi}_i^{wD} - \dot{\psi}_i^w \right) \right]. $$
Substituting the state Equations (28) and (29) with the control laws (23) and (27) yields:
$$ \dot{V} = \sum_{i=1}^{n} \left[ -\rho\, \frac{\partial J^T}{\partial \eta_i} \frac{\partial J}{\partial \eta_i} - \sum_{w \in \{r,p,t\}} k_{p_i}^w (\tilde{\psi}_i^w)^2 \right] \le 0. $$
Since $\dot{V} \le 0$, the camera network cost is bounded, i.e.,
$$ J + \sum_{i=1}^{n} \sum_{w \in \{r,p,t\}} \frac{1}{2} (\tilde{\psi}_i^w)^2 \le J(0) + \sum_{i=1}^{n} \sum_{w \in \{r,p,t\}} \frac{1}{2} \left( \tilde{\psi}_i^w(0) \right)^2. $$
This implies that $J$ and $\tilde{\psi}_i^w$ are bounded, which proves conclusion (1).
Next, we prove that conclusion (2) holds. The view angle control gives $\dot{\tilde{\psi}}_i^w = -k_{p_i}^w \tilde{\psi}_i^w$. Since $\tilde{\psi}_i^w$ is bounded, $\dot{\tilde{\psi}}_i^w$ is also bounded. Integrating $\dot{V}$ and using the positive definiteness of V, we obtain:
$$ V(t) - V(0) = -\int_0^t \sum_{i=1}^{n} \left[ \rho\, \frac{\partial J^T}{\partial \eta_i} \frac{\partial J}{\partial \eta_i} + \sum_{w \in \{r,p,t\}} k_{p_i}^w (\tilde{\psi}_i^w)^2 \right] d\tau. $$
This implies that:
$$ \int_0^t k_{p_i}^w (\tilde{\psi}_i^w)^2\, d\tau \le V(0) - V(t) - \int_0^t \sum_{i=1}^{n} \rho\, \frac{\partial J^T}{\partial \eta_i} \frac{\partial J}{\partial \eta_i}\, d\tau \le V(0), $$
where we have used the fact that $V(t) > 0$. Thus, $\int_0^t k_{p_i}^w (\tilde{\psi}_i^w)^2\, d\tau$ is bounded. By virtue of Barbalat’s lemma, we have:
$$ \lim_{t \to \infty} \tilde{\psi}_i^w = 0. $$
This completes the proof. □
Proof of Theorem 3. 
We consider the Lyapunov-like function (35); its time derivative is given by:
$$ \dot{V} = \sum_{i=1}^{n} \left[ \frac{\partial J^T}{\partial p_i} \dot{p}_i + \frac{\partial J^T}{\partial \beta_i} \dot{\beta}_i + \frac{1}{2} \sum_{j=1, j \ne i}^{n} \left( \frac{\partial v_{P_{ij}}^T}{\partial p_i} \dot{p}_i + \frac{\partial v_{P_{ij}}^T}{\partial p_j} \dot{p}_j \right) \right] = \sum_{i=1}^{n} \left( \frac{\partial J^T}{\partial p_i} \dot{p}_i + \frac{\partial J^T}{\partial \beta_i} \dot{\beta}_i \right) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_i} \dot{p}_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_j} \dot{p}_j. $$
Exchanging i and j in the last term yields:
$$ \dot{V} = \sum_{i=1}^{n} \left( \frac{\partial J^T}{\partial p_i} \dot{p}_i + \frac{\partial J^T}{\partial \beta_i} \dot{\beta}_i \right) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_i} \dot{p}_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ji}}^T}{\partial p_i} \dot{p}_i. $$
Notice that $\partial v_{P_{ji}}^T / \partial p_i = \partial v_{P_{ij}}^T / \partial p_i$. Thus, we have:
$$ \dot{V} = \sum_{i=1}^{n} \left( \frac{\partial J^T}{\partial p_i} \dot{p}_i + \frac{\partial J^T}{\partial \beta_i} \dot{\beta}_i \right) + \sum_{i=1}^{n} \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_i} \dot{p}_i = \sum_{i=1}^{n} \left[ \left( \frac{\partial J^T}{\partial p_i} + \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_i} \right) \dot{p}_i + \frac{\partial J^T}{\partial \beta_i} \dot{\beta}_i \right]. $$
Substituting the control laws (36) and (37) into the above equation yields:
$$ \dot{V} = \sum_{i=1}^{n} \left[ -\rho \left( \frac{\partial J^T}{\partial p_i} + \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}^T}{\partial p_i} \right) \left( \frac{\partial J}{\partial p_i} + \sum_{j=1, j \ne i}^{n} \frac{\partial v_{P_{ij}}}{\partial p_i} \right) - \rho\, \frac{\partial J^T}{\partial \beta_i} \frac{\partial J}{\partial \beta_i} \right] \le 0. $$
Since $V > 0$ and $\dot{V} \le 0$, V is bounded, and hence J is bounded.
Next, we prove the collision avoidance property of the proposed control. The initial configuration satisfies $d_{ij}(0) > r$. Suppose the safe distance were violated, i.e., $d_{ij}(t) \to r^+$ for some pair (i, j); then, according to (31),
$$ \lim_{d_{ij} \to r^+} v_{P_{ij}} = +\infty, $$
which would make $V \to \infty$. However, from $V > 0$ and $\dot{V} \le 0$ it is known that V is bounded, which is a contradiction. Therefore, the safe distance $d_{ij} > r$ holds throughout the coverage control, and collisions among the UAVs are avoided. □

References

  1. Huang, S.; Teo, R.S.H.; Tan, K.K. Collision avoidance of multi unmanned aerial vehicles: A review. Annu. Rev. Control. 2019, 48, 147–164. [Google Scholar] [CrossRef]
  2. Huang, S.; Teo, R.S.H.; Leong, W.L.; Martinel, N.; Forest, G.L.; Micheloni, C. Coverage Control of Multiple Unmanned Aerial Vehicles: A Short Review. Unmanned Syst. 2018, 6, 131–144. [Google Scholar] [CrossRef]
  3. Tan, C.Y.; Huang, S.; Tan, K.K.; Teo, R.S.H.; Liu, W.; Lin, F. Collision Avoidance Design on Unmanned Aerial Vehicle in 3D Space. Unmanned Syst. 2018, 6, 277–295. [Google Scholar] [CrossRef]
  4. Parapari, H.F.; Abdollahi, F.; Menhaj, M.B. Distributed coverage control for mobile robots with limited-range sector sensors. In Proceedings of the 2016 IEEE International Conference on Advanced Intelligent Mechatronics, Banff, AB, Canada, 12–15 July 2016; pp. 1079–1085. [Google Scholar]
  5. Bhattacharya, S.; Ghrist, R.; Kumar, V. Multi-robot coverage and exploration in non-Euclidean metric spaces. In Algorithmic Foundations of Robotics X; Frazzoli, E., Lozano-Perez, T., Roy, N., Rus, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 245–262. [Google Scholar]
  6. Bullo, F.; Carli, R.; Frasca, P. Gossip coverage control for robotic networks: Dynamical systems on the space of partitions. SIAM J. Control. Optim. 2012, 50, 419–447. [Google Scholar] [CrossRef]
  7. Daingade, S.; Sinha, A.; Borkar, A.; Arya, H. Multi UAV Formation Control for Target Monitoring. In Proceedings of the 2015 Indian Control Conference Indian Institute of Technology Madras, Chennai, India, 5–7 January 2015; pp. 25–30. [Google Scholar]
  8. Smith, S.L.; Broucke, M.E.; Francis, B.A. Stabilizing a multi-agent system to an equilibrium polygon formation. In Proceedings of the 17th International Symposium on Mathematical Theory of Networks and Systems, Kyoto, Japan, 24–28 July 2006; pp. 2415–2424. [Google Scholar]
  9. Huang, S.; Teo, R.; Leong, W.L. Review of coverage control of multi unmanned aerial vehicles. In Proceedings of the 2017 11th Asian Control Conference (ASCC), Gold Coast, Australia, 17–20 December 2017; pp. 228–232. [Google Scholar]
  10. Chvatal, V. A combinatorial theorem in plane geometry. J. Comb. Theory (B) 1975, 18, 39–41. [Google Scholar] [CrossRef] [Green Version]
  11. Carlsson, S.; Nilsson, B.J.; Ntafos, S.C. Optimum guard covers and m-watchmen routes for restricted polygons. Int. J. Comput. Geom. Appl. 1993, 3, 85–105. [Google Scholar] [CrossRef]
  12. Cortés, J.; Martínez, S.; Karatas, T.; Bullo, F. Coverage control for mobile sensing networks. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; pp. 1327–1332. [Google Scholar]
  13. Cortés, J.; Martínez, S.; Karatas, T.; Bullo, F. Coverage control for mobile sensing networks. IEEE Trans. Robot. Autom. 2004, 20, 243–255. [Google Scholar]
  14. Schwager, M.; Julian, B.J.; Rus, D. Optimal coverage for multiple hovering robots with downward facing cameras. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3515–3522. [Google Scholar]
  15. Piciarelli, C.; Micheloni, C.; Foresti, G.L. PTZ Camera Network Reconfiguration. In Proceedings of the IEEE/ACM Intl. Conf. on Distributed Smart Cameras, Como, Italy, 30 August–2 September 2009; pp. 1–8. [Google Scholar]
  16. Wang, H.; Guo, Y. A decentralized control for mobile sensor network effective coverage. In Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 473–478. [Google Scholar]
  17. Schwager, M.; Julian, B.J.; Angermann, M.; Rus, D. Eyes in the sky: Decentralized control for the deployment of robotic camera networks. Proc. IEEE 2011, 99, 1541–1561. [Google Scholar] [CrossRef] [Green Version]
  18. Arslan, O.; Min, H.; Koditschek, D.E. Voronoi-based coverage control of pan/tilt/zoom camera networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018. [Google Scholar]
  19. Huang, S.; Teo, R.S.H.; Leong, W.L. Distributed Coverage Control for Multiple Unmanned Multirotors with Downward Facing Pan-Tilt-Zoom-Cameras. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; pp. 744–751. [Google Scholar]
  20. Mastellone, S.; Stipanović, D.M.; Graunke, C.R.; Intlekofer, K.A.; Spong, M.W. Formation Control and Collision Avoidance for Multi-agent Non-holonomic Systems: Theory and Experiments. Int. J. Robot. Res. 2008, 27, 107–126. [Google Scholar] [CrossRef]
Figure 1. Camera model.
Figure 2. Side view of camera with pyramid FOV.
Figure 3. Block diagram of the decentralized coverage control scheme.
Figure 4. 3D view of the avoidance function. Avoidance region with radius r and detection region with radius R.
Figure 5. Test 1. Camera movement and sensing FOV (initial configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 6. Test 1. Camera movement and sensing FOV (middle configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 7. Test 1. Camera movement and sensing FOV (final configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 8. Test 2. Camera movement and sensing FOV (initial configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 9. Test 2. Camera movement and sensing FOV (middle configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 10. Test 2. Camera movement and sensing FOV (final configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 11. Test 3. Camera movement and sensing FOV (initial configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 12. Test 3. Camera movement and sensing FOV (middle configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 13. Test 3. Camera movement and sensing FOV (final configuration): (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 14. Test 4. Camera movement and sensing FOV without considering collision avoidance: (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 15. Test 4. Camera movement and sensing FOV with considering collision avoidance: (a) 2D coverage, (b) cameras’ locations and their corresponding sensing regions.
Figure 16. Test 4. Separation distance along the horizontal direction: (a) configuration without considering collision avoidance, (b) configuration with considering collision avoidance.