Article

Ship Defense Strategy Using a Planar Grid Formation of Multiple Drones

System Engineering Department, Sejong University, Seoul 2639, Republic of Korea
Appl. Sci. 2023, 13(7), 4397; https://doi.org/10.3390/app13074397
Submission received: 7 March 2023 / Revised: 23 March 2023 / Accepted: 28 March 2023 / Published: 30 March 2023
(This article belongs to the Special Issue Advances in Robot Path Planning, Volume II)

Abstract

This article introduces a ship defense strategy using a planar grid formation of multiple drones. We handle a scenario where a high-speed target with variable velocity heads towards the ship. The ship measures the position of the target in real time. Based on the measured target position, the drones' guidance laws are calculated by the ship's on-board computer and are sent to every drone in real time. The drones form a planar grid formation, whose center blocks the Line-Of-Sight (LOS) line connecting the target and the ship. Since the target is guided to hit its goal (the ship), the drones can effectively block the target by blocking the LOS line. We enable slow drones to capture a fast target by making the drones stay close to the ship while blocking the LOS at all times. By using a grid formation of drones, we can increase the capture rate, even when there exists error in the prediction of the target's position. To the best of our knowledge, this article is unique in using a formation of multiple drones to intercept a fast target with variable velocity. Through MATLAB simulations, the effectiveness of our multi-agent guidance law is verified by comparing it with other state-of-the-art guidance controls.

1. Introduction

This article introduces a ship defense strategy using a clustered formation of multiple drones. We handle a scenario where a high-speed target with variable velocity heads towards the ship. The role of the drone team is to protect the ship from the incoming target.
We propose a ship defense approach with multiple drones, such that each drone is not equipped with powerful sensors or an on-board computer. Thus, a drone cannot measure the target; instead, each drone moves based on the commands sent by the ship. In this way, we can decrease the cost of a drone, which may be destroyed once it intercepts the target. This enables us to develop relatively cheap drones.
Instead, the ship measures the position of the target in real time. Position measurements can be provided by various sensors, such as the radar, IR, or laser sensors of the ship. Each drone's guidance law is calculated by the ship's on-board computer and is transmitted to each drone in real time. (This approach relies on the communication between the ship and a drone. Since the signal speed is sufficiently fast (approximately $3 \times 10^8$ m/s) in the air, we argue that the signal delay is negligible in our ship defense scenario.)
Consider a high-speed target whose goal is to hit a ship. The target heads towards its goal (ship) at least in the terminal phase. Otherwise, it is impossible to make a target hit the ship.
Therefore, this paper lets multiple drones form a planar grid formation, whose center lies on the line segment connecting the target and the ship. Moreover, the planar grid formation is generated to be perpendicular to the line segment connecting the target and the formation center. The grid formation can be considered as a “net” structure for capturing the incoming target. The target may perform elusive maneuvers, and there may be measurement noise in measuring the target position. By maximizing the grid formation size, we can increase the capture rate, even when there exists error in the prediction of the target’s position. Since the target is guided to hit its goal (ship), the drones can effectively block the target using this grid formation.
As an interceptor, we consider a highly maneuverable drone, such as a quadrotor drone [1,2,3], which is much slower than the incoming target. This article addresses a high-level path planner, which generates reference position signals for a low-level controller [1,2,3].
To the best of our knowledge, our paper is novel in developing a ship defense approach using clustered multiple drones, and in addressing a 3D formation of multiple drones for intercepting a high-speed target with variable speed. We show that in the case where the drones block the LOS line between the target and the ship, the target cannot reach the ship without being captured by the drones. Since the target is guided to hit its goal (the ship), the drones can effectively block the target using this strategy. We further let the slow drones stay close to the ship in order to protect the ship from the fast target. As far as we know, our paper is the first to show that slow drones can capture a fast target by staying close to the ship while blocking the LOS at all times.
Our paper is unique in capturing a maneuvering target with variable speed, which can be faster than the interceptors. In order to estimate the pose of a maneuvering high-speed target, we applied the target-tracking filters of [4]. We control the drone formation based on the prediction of the target's position one sample-index in the future. This prediction may be erroneous due to the target's elusive maneuvers or measurement noise, which is the motivation for utilizing a grid formation of drones instead of a single drone: by maximizing the grid formation size, we can increase the capture rate even when the prediction of the target's position is erroneous.
To the best of our knowledge, our paper is novel in the following aspects:
  • We develop a ship defense approach using clustered multiple drones;
  • We use a 3D formation of multiple drones for intercepting a high-speed target with variable speeds;
  • We let the drones stay close to the ship while blocking the LOS between the ship and the target. Thus, we enable slow drones to capture a fast target.
Through MATLAB simulations, the effectiveness of our multi-agent guidance law is verified by comparing it with other state-of-the-art guidance controls.
We organize this article as follows. Section 2 addresses the literature review of this paper. Section 3 addresses the preliminary information of this paper. Section 4 discusses several definitions and assumptions in this article. Section 5 introduces our multi-agent guidance law. Section 6 shows simulation results to present the effectiveness of the proposed guidance law. Section 7 provides a conclusion.

2. Literature Review

There are many papers on interceptors’ guidance laws [5,6,7,8]. The authors of [9,10,11,12,13,14,15,16] applied motion camouflage to develop the guidance law of an interceptor. Here, we say that the interceptor is in the motion camouflage state if an interceptor moves in the presence of a target while appearing stationary at a focal point.
References [17,18] developed a motion camouflage guidance law so that the interceptor approaches the target while appearing stationary at a focal point that is infinitely far from the interceptor. The authors of [19] used a neural network architecture to perform motion camouflage in 2D environments. The authors of [12] developed an optimal control approach to derive a 2D motion camouflage position for an interceptor, assuming there is a constant velocity (speed and heading) target. However, assuming a target with constant velocity is not realistic since a maneuvering target could escape from the interceptor. Our paper thus handles a target with variable velocity.
Proportional Navigation Guidance (PNG) laws have been widely applied to let an interceptor hit the target [20,21,22,23]. PNG laws are designed considering an interceptor that can measure the bearing of the target by utilizing on-board sensors. PNG laws are based on the fact that two vehicles are on a collision course when their direct Line-Of-Sight (LOS) does not change direction as they get closer to each other. PNG laws are designed so that the interceptor velocity vector rotates at a rate proportional to the rotation rate of the line-of-sight and in the same direction.
Multi-agent systems can be applied for many tasks, such as monitoring environments [24,25], multi-agent herding [26], and sensor deployment [27,28,29,30]. References [31,32] controlled multiple mobile sensors to estimate the target position in real time. References [33,34] considered the case where two interceptors, which measure bearings of a target, track the target in two dimensions. The formulation of the homing problem of multiple missiles against a single target, subject to constraints on the impact time, was discussed in [35]. In [36], a fully distributed adaptive method was proposed to solve the simultaneous attack problem with multiple missiles against maneuvering targets. The authors of [37] considered the relative interception angle constraints of multiple interceptors, which is intended to enhance the survivability of multiple interceptors against a defense system with a high value target and also to maximize the collateral target damage. The authors of [38] addressed simultaneous cooperative interception for a scenario where the successful handover cannot be guaranteed by a single interceptor due to the target maneuver and movement information errors at the handover moment.
As far as we know, other guidance laws in the literature make one or more interceptors continue to chase the target. Our paper is unique in making slow interceptors (drones in our paper) stay close to the ship, so they can block a fast target from reaching the ship. This blocking strategy is desirable considering the energy consumption of an interceptor since an interceptor does not have to move far from the ship. Through MATLAB simulations, the effectiveness of this blocking strategy is verified by comparing it with other state-of-the-art guidance controls.

3. Preliminaries

This article utilizes two frames: an inertial reference frame { I } and a body-fixed frame { B } [39]. We address several definitions in rigid-body dynamics [39].
The origin of { I } is a point with three axes pointing North, East, and Down, respectively. We use the virtual agent for drone controls. The virtual agent is a virtual drone located at the center of the grid formation. { B } is fixed to the virtual agent, such that the origin of { B } is at the virtual agent’s center.
The virtual agent changes its yaw and pitch while not rotating its body. In rigid-body dynamics [39], θ and ψ define pitch and yaw, respectively. For convenience, let c ( η ) define cos ( η ) . In addition, let s ( η ) define sin ( η ) . Let t ( η ) define tan ( η ) .
The rotation matrix indicating the counterclockwise (CC) rotation of an angle $\psi$ about the z-axis in $\{B\}$ is
$$M_R(\psi) = \begin{bmatrix} c(\psi) & -s(\psi) & 0 \\ s(\psi) & c(\psi) & 0 \\ 0 & 0 & 1 \end{bmatrix}. \quad (1)$$
The rotation matrix representing the CC rotation of an angle $\theta$ about the y-axis in $\{B\}$ is
$$M_R(\theta) = \begin{bmatrix} c(\theta) & 0 & s(\theta) \\ 0 & 1 & 0 \\ -s(\theta) & 0 & c(\theta) \end{bmatrix}. \quad (2)$$
The combined rotation matrix is built by multiplying Equations (1) and (2) to obtain
$$M_R(\psi, \theta) = M_R(\psi) M_R(\theta). \quad (3)$$
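As an illustrative sketch (the paper's simulations used MATLAB; this NumPy rendering and the helper names `rot_z`, `rot_y`, and `rot` are our own), the rotation matrices of Equations (1)-(3) can be assembled as follows:

```python
import numpy as np

def rot_z(psi):
    """CC rotation by yaw psi about the z-axis (Equation (1))."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """CC rotation by pitch theta about the y-axis (Equation (2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rot(psi, theta):
    """Combined rotation M_R(psi, theta) = M_R(psi) M_R(theta) (Equation (3))."""
    return rot_z(psi) @ rot_y(theta)
```

For example, `rot(np.pi/2, 0.0)` maps the x-axis unit vector to the y-axis, matching the CC yaw convention above.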

4. Assumptions and Definitions

This section discusses assumptions and definitions in our paper. $max(a, b)$ returns the bigger of the two values a and b. In addition, $min(a, b)$ returns the smaller of the two values a and b. In our paper, bold characters are used to denote vectors and matrices. $\angle(v_1, v_2)$ is the angle formed by two vectors $v_1$ and $v_2$. Mathematically, $\angle(v_1, v_2) = \arccos\left(\frac{v_1 \cdot v_2}{\|v_1\| \|v_2\|}\right)$. Here, $0 \le \angle(v_1, v_2) \le \pi$. $l(A, B)$ is the line segment connecting two locations $A$ and $B$. Furthermore, $\|l(A, B)\|$ indicates the length of $l(A, B)$.
This article uses the discrete-time system, where T denotes the sample duration. In this article, all drones make a planar grid formation to protect against the incoming target. The grid formation can be considered as a “net” structure for capturing the incoming target.
Let M indicate the total number of drones. M is selected such that
$$M = G^2, \quad (4)$$
where $G \ge 1$ is a positive integer.
In the case where G = 1 , we use only one drone. In this case, the grid formation cannot be used, and the waypoint of the drone is set as the virtual agent.
In the inertial reference frame, let $r_{0,k}$ define the 3D Cartesian coordinates of the virtual agent. In the inertial reference frame, let $r_{i,k}$ ($i \in \{1, 2, \ldots, M\}$) denote the 3D Cartesian coordinates of the i-th drone at sample-index k. Note that the subscript k indicates the sample-index k.
In the inertial reference frame, let r k t denote the target’s 3D Cartesian coordinates at sample-index k. In the inertial reference frame, let r k s denote the ship’s 3D Cartesian coordinates at sample-index k.
Let v k t denote the target’s speed at sample-index k. Let v i , k denote the speed of the i-th drone at sample-index k. Let v m a x indicate the maximum speed of a drone or the virtual agent. Note that v k t , v i , k , and v m a x are scalar values.
We say that the target is captured when the relative distance between the target and any drone is less than a constant, say $\Delta$. The motion model of the i-th drone ($i \in \{1, 2, \ldots, M\}$) is
$$r_{i,k+1} = r_{i,k} + T v_{i,k} u_{i,k}. \quad (5)$$
Here, u i , k indicates the i-th drone’s heading vector at sample-index k. Note that u i , k is a unit vector presenting the i-th drone’s heading direction. The motion model in Equation (5) is commonly used in multi-drone systems [40,41,42,43,44,45,46].
In Equation (5), r i , k + 1 generates the high-level reference position signal at every sample-index k. For letting the i-th drone move towards r i , k + 1 at every sample-index k, one utilizes low-level controls in [1,2,3].
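For illustration, one discrete-time update of the motion model in Equation (5) can be sketched in NumPy (the function name `step` is ours, and we normalize the heading input so that it is always a unit vector):

```python
import numpy as np

def step(r, v, u, T):
    """One discrete-time update r_{i,k+1} = r_{i,k} + T * v_{i,k} * u_{i,k}
    (Equation (5)); u is normalized to a unit heading vector."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    return np.asarray(r, dtype=float) + T * v * u
```

For example, a drone at the origin with speed 2 m/s heading along the x-axis moves 0.2 m in one sample duration T = 0.1 s.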
The motion model of the virtual agent is
$$r_{0,k+1} = r_{0,k} + T v_{0,k} u_{0,k}. \quad (6)$$
Recall that the virtual agent is at the center of the grid formation. We say that the virtual agent is in the lineState at sample-index k if the line segment l ( r k t , r k s ) meets the virtual agent position r 0 , k . At every sample-index k, v 0 , k and u 0 , k are set so that the virtual agent is in the lineState.
Let L k denote the infinite line crossing both r k s and r k t at sample-index k. Let L ¯ k + 1 denote the infinite line crossing both r k + 1 s and r k + 1 t . c k is the point on L ¯ k + 1 , which is the closest to r 0 , k .
Figure 1 depicts the case where the ship moves as the time index changes from k to k + 1 . The ship positions are indicated by crosses. In this figure, L k , L ¯ k + 1 , and c k are depicted.

Assumptions

This article assumes that both the ship and a drone’s 3D Cartesian coordinates are measured in real time. Global Positioning Systems (GPSs) and Inertial Measurement Units (IMUs) can be used for this localization.
Furthermore, the ship can measure the target’s 3D Cartesian coordinates at every sample-index. Position measurements can be provided by various sensors, such as radar sensors or laser sensors. Therefore, the ship at sample-index k can derive L k .
Furthermore, based on the target’s recent trajectory, the ship at sample-index k can predict r k + 1 t , the target’s 3D Cartesian coordinates, after one sample-index in the future. Section 5.1 shows how to predict the target’s 3D Cartesian coordinates after one sample-index in the future.
The ship can also predict r k + 1 s , the ship’s 3D Cartesian coordinates, after one sample-index in the future. This is feasible because the ship has GPS and IMU. Therefore, the ship can predict L ¯ k + 1 , which crosses both r k + 1 t and r k + 1 s .
It is desirable that the target, when caught, is sufficiently far from the ship. Otherwise, the ship may be damaged by the debris of the target. Let $\beta > 0$ define the safety distance, which is set by the operator of the drones. When the target is caught, it is desirable that its distance from the ship is bigger than the safety distance $\beta$. In this way, we can assure the safety of the ship.

5. Multi-Drone Guidance Law

We consider a high-speed target whose goal is to hit a ship. The target heads towards its goal (ship) at least in the terminal phase. Otherwise, it is impossible to make a target hit the ship.
Therefore, we let multiple drones form a planar grid formation, whose center lies on the line segment connecting the target and the ship. Moreover, the planar grid formation is generated to be perpendicular to the line segment connecting the target and the formation center. The grid formation can be considered as a “net” structure for capturing the incoming target. By maximizing the grid formation size, we can increase the capture rate, even when there exists error in the prediction of the target’s position. Since the target is guided to hit its goal (the ship), the drones can effectively block the target using this grid formation.
The proposed multi-drone guidance law is summarized as follows. At every sample-index, the ship measures the 3D Cartesian coordinates of the incoming target. Thereafter, we run the Kalman filter to predict the target’s 3D Cartesian coordinates after one sample-index in the future. See Section 5.1 for the prediction of the target’s position after one sample-index.
Based on the predicted target pose, the virtual agent is guided to remain in the lineState. See Section 5.2 for the guidance law of the virtual agent. In addition, each drone is guided to generate a grid formation centered at the virtual agent. See Section 5.3 for the guidance law of a drone. Figure 2 shows the block diagram of the proposed multi-drone guidance law.

5.1. Prediction of the Target’s Position after One Sample-Index

The ship can measure the target's 3D Cartesian coordinates at every sample-index. To track a target with variable velocity, we present how to predict the target's position one sample-index forward in time. In order to track a maneuvering target, we applied the target-tracking filters of [4].
In the inertial reference frame, let $[x_k^t, y_k^t, z_k^t]^T$ indicate the vector presenting the 3D coordinates of the target at sample-index k. In addition, $[\dot{x}_k^t, \dot{y}_k^t, \dot{z}_k^t]^T$ denotes the vector presenting the target velocity at sample-index k. Furthermore, $[\ddot{x}_k^t, \ddot{y}_k^t, \ddot{z}_k^t]^T$ defines the vector presenting the target acceleration at sample-index k. Let $X_k = [x_k^t, \dot{x}_k^t, \ddot{x}_k^t, y_k^t, \dot{y}_k^t, \ddot{y}_k^t, z_k^t, \dot{z}_k^t, \ddot{z}_k^t]^T$ define the vector presenting the target state. Based on [4], the target's process model is set as
$$X_{k+1} = F X_k + w_k, \quad (7)$$
where $w_k$ is the process noise with the following properties: $w_k \sim N(0, Q)$. Here, $N(0, \alpha)$ denotes a Gaussian distribution with a mean of 0 and a covariance matrix $\alpha$. Furthermore, $F$ in Equation (7) is
$$F = \begin{bmatrix} M_F & 0 & 0 \\ 0 & M_F & 0 \\ 0 & 0 & M_F \end{bmatrix}, \quad (8)$$
where we use
$$M_F = \begin{bmatrix} 1 & T & \frac{aT - 1 + e^{-aT}}{a^2} \\ 0 & 1 & \frac{1 - e^{-aT}}{a} \\ 0 & 0 & e^{-aT} \end{bmatrix}. \quad (9)$$
In $w_k$, $Q$ is set as
$$Q = 2 a \sigma_m^2 \begin{bmatrix} M_Q & 0 & 0 \\ 0 & M_Q & 0 \\ 0 & 0 & M_Q \end{bmatrix}, \quad (10)$$
where
$$M_Q = \begin{bmatrix} T^5/20 & T^4/8 & T^3/6 \\ T^4/8 & T^3/3 & T^2/2 \\ T^3/6 & T^2/2 & T \end{bmatrix}. \quad (11)$$
a and σ m in Equation (10) are tuning parameters for tracking a maneuvering target. Detailed derivations of Equation (7) appear in [4].
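To make the process model concrete, the per-axis blocks $M_F$ and $M_Q$ and the full block-diagonal $F$ and $Q$ of Equations (7) and (10) can be assembled as follows (an illustrative NumPy sketch of this Singer-type maneuvering-target model; the function names are ours):

```python
import numpy as np

def singer_MF(a, T):
    """Per-axis transition block M_F of the maneuvering-target model."""
    e = np.exp(-a * T)
    return np.array([[1.0, T, (a * T - 1.0 + e) / a**2],
                     [0.0, 1.0, (1.0 - e) / a],
                     [0.0, 0.0, e]])

def singer_MQ(T):
    """Per-axis block M_Q of the process noise covariance."""
    return np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                     [T**4 / 8,  T**3 / 3, T**2 / 2],
                     [T**3 / 6,  T**2 / 2, T]])

def process_model(a, sigma_m, T):
    """Full 9x9 F and Q = 2*a*sigma_m^2 * blkdiag(M_Q, M_Q, M_Q) for the
    state X_k = [x, xdot, xddot, y, ydot, yddot, z, zdot, zddot]^T."""
    F = np.kron(np.eye(3), singer_MF(a, T))
    Q = 2.0 * a * sigma_m**2 * np.kron(np.eye(3), singer_MQ(T))
    return F, Q
```

`np.kron(np.eye(3), ...)` replicates the 3x3 block along the diagonal, one copy per Cartesian axis, so $F$ and $Q$ are 9x9 and $Q$ is symmetric by construction.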
At every sample-index k, the ship measures the target's 3D position, say $m_k$. See the first block of Figure 2. The target measurement model is
$$m_k = H X_k + v_k, \quad (12)$$
where $H$ is
$$H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}. \quad (13)$$
Furthermore, $v_k$ is the measurement noise, such that $v_k \sim N(0, R_k)$. We assume that $R_k$ is known a priori.
The Kalman filter (KF) [47] is applied to obtain the estimate vector and its covariance at every sample-index. The KF is composed of the prediction step and the measurement update step. In the KF, the prediction step uses Equation (7), and the measurement update step uses Equation (12).
Let X ^ k | k define the estimation of X k derived using all measurements up to sample-index k. Let P k | k define the error covariance matrix of X ^ k | k .
In the prediction step of the KF, we derive the predicted state vector as
$$\hat{X}_{k+1|k} = F \hat{X}_{k|k}, \quad (14)$$
where Equation (7) is used. Utilizing Equations (7) and (10), the covariance matrix is predicted as
$$P_{k+1|k} = F P_{k|k} F^T + Q. \quad (15)$$
The measurement update step is
$$\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + W_k (m_k - H \hat{X}_{k+1|k}), \quad (16)$$
where
$$W_k = P_{k+1|k} H^T S^{-1}. \quad (17)$$
Here, we use
$$S = H P_{k+1|k} H^T + R_k. \quad (18)$$
In addition, the covariance matrix is updated using
$$P_{k+1|k+1} = P_{k+1|k} - W_k S W_k^T. \quad (19)$$
The ship at sample-index k predicts the target state one sample-index forward in time using Equation (14). Let $\hat{r}_{k+1}^t$ denote the target position at sample-index $k+1$, which is predicted using all measurements up to sample-index k. Using Equation (14), $\hat{r}_{k+1}^t$ is predicted as
$$\hat{r}_{k+1}^t = H \hat{X}_{k+1|k}. \quad (20)$$
Here, recall that H was defined in Equation (13).
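A minimal sketch of one prediction-plus-update cycle of this KF, including the predicted target position fed to the guidance law, might look as follows (NumPy; the paper's simulations were in MATLAB, and the function names are ours):

```python
import numpy as np

def H_matrix():
    """Position-only measurement matrix H (Equation (13))."""
    H = np.zeros((3, 9))
    H[0, 0] = H[1, 3] = H[2, 6] = 1.0
    return H

def kf_step(x, P, F, Q, R, m):
    """One Kalman filter cycle: the prediction step (Equation (14) plus the
    covariance prediction), then the measurement update step under the model
    of Equation (12). Returns the updated state, the updated covariance, and
    the predicted target position r_hat used by the guidance law."""
    H = H_matrix()
    # Prediction step.
    x_pred = F @ x                           # Equation (14)
    P_pred = F @ P @ F.T + Q
    # Measurement update step.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + W @ (m - H @ x_pred)
    P_new = P_pred - W @ S @ W.T
    r_hat = H @ x_pred                       # predicted target position
    return x_new, P_new, r_hat
```

A usage note: the returned `r_hat` is the one-sample-ahead position prediction, which the next section's guidance point construction consumes.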
We acknowledge that the prediction of the target's position may not be accurate due to the target's elusive maneuvers or measurement noise. This is the motivation for using a formation of drones instead of a single drone: the formation increases the capture rate even when the target prediction is erroneous. Through MATLAB simulations, the effectiveness of our formation-based guidance law is verified by comparing it with other state-of-the-art guidance controls.

5.2. Guidance Law of the Virtual Agent

Using the predicted 3D coordinates of the target, the virtual agent is guided to remain in the lineState. At every sample-index k, the virtual agent is guided to head towards the guidance point g k , which is defined as follows:
1. Suppose that $\|r_k^s - r_{0,k}\| < \beta$. This implies that the ship needs to expel the virtual agent away from the ship. Moreover, suppose that
$$\|r_{0,k} - c_k\| \le v_{max} T \quad (21)$$
holds. Then, we set the guidance point as
$$g_k = c_k + \frac{r_{k+1}^t - c_k}{\|r_{k+1}^t - c_k\|} \delta_k. \quad (22)$$
Here, $\delta_k > 0$ is defined as
$$\delta_k = \sqrt{(v_{max} T)^2 - \|r_{0,k} - c_k\|^2}. \quad (23)$$
This implies that $\|g_k - r_{0,k}\| = v_{max} T$. See Figure 1 for an illustration of this case.
2. Otherwise, we set the guidance point as
$$g_k = c_k. \quad (24)$$
At every sample-index k, the virtual agent moves to reach $g_k$ if possible. Suppose that $\|r_k^s - r_{0,k}\| < \beta$ and that Equation (21) holds, as depicted in Figure 1. Consider a sphere centered at $r_{0,k}$ whose radius is $v_{max} T$. Under Equation (21), $\bar{L}_{k+1}$ meets this sphere at two points; between these two points, $g_k$ is the point that is closer to $r_{k+1}^t$. In this way, the virtual agent can approach the target while staying on the line segment that connects the target and the ship.
The direction command $u_{0,k}$ is selected to make the virtual agent move towards $g_k$ at sample-index $k+1$. At every sample-index k, the virtual agent sets the new direction command $u_{0,k}$ as
$$u_{0,k} = \frac{g_k - r_{0,k}}{\|g_k - r_{0,k}\|}. \quad (25)$$
Note that the direction command is a unit vector.
In addition, at every sample-index k, the virtual agent sets the new speed command $v_{0,k}$ as
$$v_{0,k} = \min\left(\frac{\|g_k - r_{0,k}\|}{T}, v_{max}\right). \quad (26)$$
This implies that the virtual agent moves with the maximum speed v m a x when it is too far from the guidance point g k .
Consider the situation in which $\|r_{0,k} - r_{k+1}^t\| < v_{max} T$ holds. In this situation, the virtual agent heads towards $r_{k+1}^t$ directly instead of using the direction command in Equation (25). In this way, the target is caught at sample-index $k+1$.
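The guidance-point selection and the virtual agent's heading and speed commands described above can be sketched as follows (illustrative NumPy code; the function names are ours, and the branches follow Equations (21), (22), and (24)-(26)):

```python
import numpy as np

def guidance_point(r0, ck, rt_next, rs, beta, vmax, T):
    """Guidance point g_k: Equation (22) when the virtual agent is within the
    safety distance beta of the ship and Equation (21) holds; otherwise
    g_k = c_k (Equation (24))."""
    d = np.linalg.norm(r0 - ck)
    if np.linalg.norm(rs - r0) < beta and d <= vmax * T:    # Equation (21)
        delta = np.sqrt((vmax * T)**2 - d**2)               # delta_k
        dirv = (rt_next - ck) / np.linalg.norm(rt_next - ck)
        return ck + delta * dirv                            # Equation (22)
    return ck                                               # Equation (24)

def virtual_agent_command(r0, gk, vmax, T):
    """Heading command (Equation (25)) and speed command (Equation (26))."""
    diff = gk - r0
    u = diff / np.linalg.norm(diff)
    v = min(np.linalg.norm(diff) / T, vmax)
    return u, v
```

Because $c_k$ is the point of $\bar{L}_{k+1}$ closest to $r_{0,k}$, the offset $\delta_k$ along the line makes $\|g_k - r_{0,k}\|$ exactly $v_{max} T$, so the agent reaches $g_k$ in one sample at full speed.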
According to the definitions of the guidance point g k (see Equations (22) and (24)), g k lies on l ( r k + 1 t , r k + 1 s ) .
In the case where $\|g_k - r_{0,k}\| \le v_{max} T$, the heading command (Equation (25)) and speed command (Equation (26)) lead to
$$r_{0,k+1} = g_k. \quad (27)$$
We thus have the following theorem.
Theorem 1. 
Suppose that $\|c_k - r_{0,k}\| \le v_{max} T$. The heading command (Equation (25)) and speed command (Equation (26)) make $r_{0,k+1}$ exist on $l(r_{k+1}^t, r_{k+1}^s)$.
Theorem 1 implies that in the case where $\|c_k - r_{0,k}\| \le v_{max} T$, the virtual agent position $r_{0,k+1}$ is on the line segment $l(r_{k+1}^t, r_{k+1}^s)$. Since the target's goal is reaching the ship, the target must eventually be hit by the virtual agent.

L k and L ¯ k + 1 Meet at a Point

Next, we consider a special case where L k and L ¯ k + 1 meet at a point. Here, recall that L ¯ k + 1 denotes the infinite line crossing both r k + 1 s and r k + 1 t . In the inertial reference frame, let I L denote the 3D coordinates of the intersection between L k and L ¯ k + 1 . For instance, if the ship is static, then L k and L ¯ k + 1 meet at the ship position r k s = r k + 1 s .
Suppose that $r_{0,k}$ lies on $l(r_k^t, r_k^s)$. Let $c_k^t$ denote the point on $\bar{L}_{k+1}$ that is closest to $r_k^t$. Let $l_k = \|c_k - r_{0,k}\|$ for convenience.
Figure 3 depicts the case where $L_k$ and $\bar{L}_{k+1}$ meet at $I_L$. Using the geometry in this figure, we have
$$\|c_k^t - r_k^t\| = l_k \frac{\|I_L - r_k^t\|}{\|I_L - r_{0,k}\|}. \quad (28)$$
Since the target speed is $v_k^t$, we have
$$v_k^t T \ge \|c_k^t - r_k^t\|. \quad (29)$$
Using Equations (28) and (29), we have
$$v_k^t T \frac{\|I_L - r_{0,k}\|}{\|I_L - r_k^t\|} \ge l_k. \quad (30)$$
Suppose that the drone's maximum speed $v_{max}$ satisfies
$$v_{max} > v_k^t \frac{\|I_L - r_{0,k}\|}{\|I_L - r_k^t\|}. \quad (31)$$
The next theorem addresses the condition for remaining in the lineState at every sample-index.
Theorem 2. 
Suppose that L k and L ¯ k + 1 meet at a point, say I L . Suppose that r 0 , k lies on l ( r k t , r k s ) . The target speed is v k t . If the drone’s maximum speed satisfies Equation (31), then r 0 , k + 1 lies on l ( r k + 1 t , r k + 1 s ) .
Proof. 
Suppose the drone's maximum speed satisfies Equation (31). Then, Equations (30) and (31) lead to
$$v_{max} T > l_k. \quad (32)$$
This implies that Equation (21) is met. In the case where Equation (21) is met, Theorem 1 makes r 0 , k + 1 exist on l ( r k + 1 t , r k + 1 s ) . The proof is complete. □
In practice, the ship moves much slower than the target. Thus, the ship position is close to $I_L$, as plotted in Figure 3. Consider the case where the virtual agent is sufficiently close to the ship. In this case, $\|I_L - r_{0,k}\|$ is small enough to satisfy Equation (31). Then, by Theorem 2, $r_{0,k+1}$ lies on $l(r_{k+1}^t, r_{k+1}^s)$. Thus, the virtual agent remains in the lineState at sample-index $k+1$.
In the case where the virtual agent stays in the lineState at all times, the target cannot reach the ship without being captured by the virtual agent. In order to stay in the lineState at each sample-index, it is desirable that the virtual agent does not become too far from the ship.
In the case where $\|r_k^s - r_{0,k}\| \ge \beta$, the guidance point is set using Equation (24) instead of Equation (22). In this way, the drones stay close to the ship while staying in the lineState at all times.
Note that Equation (31) can be satisfied even when $v_{max} < v_k^t$. This implies that even slow drones can capture a fast target when the drones stay close to the ship while staying in the lineState at all times.
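Equation (31) can be checked numerically. The sketch below (our own illustration; all values are hypothetical) shows that a drone whose maximum speed is well below the target speed still satisfies the condition when the virtual agent stays near $I_L$:

```python
import numpy as np

def speed_condition_ok(vmax, vt, IL, r0, rt):
    """Sufficient condition of Theorem 2 (Equation (31)):
    vmax > vt * ||I_L - r_{0,k}|| / ||I_L - r_k^t||."""
    return vmax > vt * np.linalg.norm(IL - r0) / np.linalg.norm(IL - rt)

# A slow drone (vmax = 10) versus a fast target (vt = 100): the condition
# holds when the virtual agent is near I_L (here, near the ship) but fails
# when the agent has strayed far from it.
IL = np.zeros(3)
near = speed_condition_ok(10.0, 100.0, IL,
                          np.array([5.0, 0.0, 0.0]),
                          np.array([1000.0, 0.0, 0.0]))
far = speed_condition_ok(10.0, 100.0, IL,
                         np.array([500.0, 0.0, 0.0]),
                         np.array([1000.0, 0.0, 0.0]))
```

With these numbers, `near` evaluates to true (required speed 0.5 < 10) and `far` to false (required speed 50 > 10), mirroring the claim that staying close to the ship is what lets slow drones block a fast target.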

5.3. Guidance Law of Every Drone

We let multiple drones form a planar grid formation, such that the formation is perpendicular to the line segment connecting the target and the formation center. The grid formation can be considered a “net” structure for capturing the incoming target. By maximizing the grid formation size, we can increase the capture rate, even when there exists error in the prediction of the target’s position. Since the target is guided to hit its goal (ship), the drones can effectively block the target using this grid formation.
We next handle the guidance law of a drone for the generation of the grid formation. At sample-index k, let grid formation denote a planar formation composed of G × G cells, each with side length  s k . The grid formation is centered at the virtual agent, and we adjust the grid formation so that it is normal to l ( r k t , r 0 , k ) at each sample-index k. We change the pitch and yaw of the virtual agent, but we do not change the roll of the virtual agent. Thus, the grid formation does not roll either.
Let $r_k^R = r_k^t - r_{0,k}$ denote the relative position of the target with respect to the virtual agent at sample-index k. At each sample-index k, the planar grid formation is oriented such that the formation plane is perpendicular to $r_k^R$.
For $i \in \{0, 1, \ldots, G-1\}$ and $j \in \{0, 1, \ldots, G-1\}$, let $n[i,j]$ be defined as
$$n[i,j] = 1 + i + G j. \quad (33)$$
Since $i \in \{0, 1, \ldots, G-1\}$ and $j \in \{0, 1, \ldots, G-1\}$, we have
$$n[i,j] \in \{1, 2, \ldots, M = G^2\}. \quad (34)$$
Here, Equation (4) is used. Using Equations (33) and (34), each drone index $n \in \{1, 2, \ldots, M = G^2\}$ has its associated $n[i,j]$.
For $i \in \{0, 1, \ldots, G-1\}$ and $j \in \{0, 1, \ldots, G-1\}$, let
$$w_{n[i,j],k}^B = \left(0, \; -\frac{s_k G}{2} + i s_k, \; -\frac{s_k G}{2} + j s_k\right)^T \quad (35)$$
denote the n-th drone's waypoint in the body-fixed frame at sample-index k. See that these $G \times G$ waypoints are generated on the y-z plane in the body-fixed frame. From now on, n denotes $n[i,j]$ for notation simplicity.
In the case where we set G = 1 , we use only one drone. In this case, the grid formation cannot be generated. If we use only one drone, then the drone’s waypoint in the body-fixed frame is set as
$$w_{1,k}^B = (0, 0, 0)^T, \quad (36)$$
instead of Equation (35). Equation (36) is used to make the single drone move towards the virtual agent. In other words, the waypoint of the drone is set as the virtual agent.
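The indexing of Equation (33) and the body-frame waypoints of Equations (35) and (36) can be sketched as follows (illustrative NumPy code; the function name is ours, and the grid offsets follow our reading of Equation (35)):

```python
import numpy as np

def body_waypoints(G, s):
    """Body-frame waypoints of the G x G grid with cell side length s.
    Drone n[i,j] = 1 + i + G*j (Equation (33)) sits in the y-z plane of the
    virtual agent (Equation (35)); G = 1 collapses to the virtual agent's
    center (Equation (36))."""
    if G == 1:
        return {1: np.zeros(3)}
    wps = {}
    for j in range(G):
        for i in range(G):
            n = 1 + i + G * j                      # Equation (33)
            wps[n] = np.array([0.0,
                               -s * G / 2 + i * s,
                               -s * G / 2 + j * s])
    return wps
```

For example, with G = 2 and s = 1 the four waypoints occupy the corners of one grid cell in the y-z plane, and drone indices run from 1 to M = G^2 = 4.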
At sample-index 0, all drones are located inside the grid cell of the ship. At sample-index 0, the planar grid formation’s orientation (normal vector) is represented as initial yaw ψ 0 = 0 and initial pitch θ 0 = π / 2 , respectively. Under Equation (3), the n-th drone’s initial position is located at w n , 0 , which is given as
$$w_{n,0} = M_R(0, \pi/2) w_{n,0}^B. \quad (37)$$
Equation (37) implies that all drones are initially located to form the grid formation with side length s 0 .
Once the drones are launched from the ship, the planar grid formation at each sample-index k is oriented such that it becomes perpendicular to $r_k^R = r_k^t - r_{0,k}$. The unit vector associated with $r_k^R$ is
$$u^R = (u^R(1), u^R(2), u^R(3))^T, \quad (38)$$
where $u^R = \frac{r_k^R}{\|r_k^R\|}$.
At each sample-index k, the planar grid formation’s orientation (normal vector) is represented as ψ k and θ k , respectively. Since the virtual agent does not rotate, the formation’s orientation does not include rolling motions. Under Equation (3), the n-th drone’s waypoint in the inertial frame is
$$w_{n,k} = r_{0,k} + M_R(\psi_k, \theta_k) w_{n,k}^B. \quad (39)$$
The axis of the grid formation is oriented towards the target $r_k^t$ since $u^R = \frac{r_k^R}{\|r_k^R\|}$, where $r_k^R = r_k^t - r_{0,k}$. In other words, the planar grid formation is oriented such that it is perpendicular to $r_k^R$.
We next calculate the planar grid formation’s orientation (normal vector), ψ k , and θ k in Equation (39), associated with u R in Equation (38). We apply u R ( 1 ) = c ( ψ k ) c ( θ k ) , u R ( 2 ) = s ( ψ k ) c ( θ k ) , and u R ( 3 ) = s ( θ k ) . Here, u R ( j ) indicates the j-th element of u R .
Under Equation (38), we obtain
$$\theta_k = \mathrm{atan2}\left(u^R(3), \sqrt{u^R(1)^2 + u^R(2)^2}\right). \quad (40)$$
Under Equation (38), we derive ψ_k as follows. If c(θ_k) ≥ 0, then we use
ψ_k = atan2( u^R(2), u^R(1) ).
If c(θ_k) < 0, then we use
ψ_k = atan2( −u^R(2), −u^R(1) ).
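The angle extraction above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; in particular, `M_R(ψ, θ)` is assumed here to be the yaw–pitch rotation whose first column is (c(ψ)c(θ), s(ψ)c(θ), s(θ))^T, which is consistent with the element equations above (Equation (3) itself is not reproduced in this section).

```python
import numpy as np

def orientation_angles(r_t, r_0):
    """Yaw psi_k and pitch theta_k of the planar grid formation so that
    its normal points along r_k^R = r_k^t - r_{0,k} (Equations (38)-(42))."""
    r_R = np.asarray(r_t, float) - np.asarray(r_0, float)
    u = r_R / np.linalg.norm(r_R)                   # unit vector u^R
    theta = np.arctan2(u[2], np.hypot(u[0], u[1]))  # Equation (40)
    if np.cos(theta) >= 0:
        psi = np.arctan2(u[1], u[0])                # Equation (41)
    else:
        psi = np.arctan2(-u[1], -u[0])              # Equation (42)
    return psi, theta

def M_R(psi, theta):
    """Assumed yaw-pitch rotation: first column (c(psi)c(theta),
    s(psi)c(theta), s(theta))^T, matching the element equations above."""
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                   [np.sin(psi),  np.cos(psi), 0.0],
                   [0.0,          0.0,         1.0]])
    Ry = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    return Rz @ Ry.T   # Ry.T = R_y(-theta)
```

For the initial orientation ψ_0 = 0, θ_0 = π/2, the first column of M_R is (0, 0, 1)^T, i.e., the formation normal initially points straight up.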
Recall that w_{n,k} defines the waypoint assigned to the n-th drone at sample-index k. The heading command of the n-th drone is set towards w_{n,k} at every sample-index k. The heading vector u_{n,k} from Equation (5) is set as follows:
u_{n,k} = (w_{n,k} − r_{n,k}) / ‖w_{n,k} − r_{n,k}‖.
At every sample-index k, the n-th drone sets its speed command v_{n,k} from Equation (5) as
v_{n,k} = min( ‖w_{n,k} − r_{n,k}‖ / T, v_max ).
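The two commands above amount to one line each. A minimal sketch, assuming the parameter values used later in the simulations (T = 0.5 s, v_max = 20 m/s):

```python
import numpy as np

def drone_command(w_nk, r_nk, T=0.5, v_max=20.0):
    """Heading u_{n,k} (Equation (43)) and speed v_{n,k} (Equation (44))
    steering the n-th drone toward its waypoint w_{n,k}."""
    d = np.asarray(w_nk, float) - np.asarray(r_nk, float)
    dist = np.linalg.norm(d)
    u = d / dist                # unit heading toward the waypoint
    v = min(dist / T, v_max)    # saturated so the drone never overshoots
    return u, v
```

Whenever ‖w_{n,k} − r_{n,k}‖ ≤ v_max T, one noiseless step r_{n,k} + T v_{n,k} u_{n,k} lands exactly on the waypoint, which is Equation (45).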
The control commands, Equations (43) and (44), are used to satisfy
r_{n,k+1} = w_{n,k},
if possible. Once Equation (45) is met for all n ∈ {1, 2, …, M}, all drones form a grid formation at sample-index k + 1.
Consider the case where ‖r_{n,k} − w_{n,k}‖ < v_max T at every sample-index k. In this case, Equation (45) is met at every sample-index k.
At sample-index 0, all drones are located inside the ship. At sample-index 0, all drones form the initial grid formation, as presented in Equation (37). Note that each drone is assigned to a waypoint that is in the body-fixed frame of the virtual agent. See Equation (39) for waypoint assignments.
Moreover, [48] can be used to assign a drone to each waypoint, such that the makespan (time for all robots to reach their waypoints) is minimized while also preventing collisions among drones. The authors of [48] mentioned that their assignment algorithm scales well, such that it can compute the mapping for 1000 robots in less than half a second.
In the worst case, a drone may encounter another drone while moving toward its assigned waypoint. This can happen due to localization errors or environmental disturbances, such as wind. The drone then uses reactive collision avoidance controls, such as [49,50,51,52], to avoid an abrupt collision with another drone. Under reactive collision avoidance controls, the drone can change its speed and heading to avoid a sudden collision. We acknowledge that a drone may not reach its waypoint while it performs evasive maneuvers. For instance, suppose that a drone slows down to avoid a sudden collision at sample-index k. At the next sample-index, k + 1, the drone needs to speed up to reach its new waypoint.

Control of the Side Length in the Grid Formation

Considering the uncertainty in the target position, it is desirable to make the formation cover as large of an area as possible. However, in the case where the formation is too sparse, the target can get through the formation without being captured by a drone. Furthermore, in the case where the formation is too dense, a large number of drones must be used to cover a large area.
Recall that s_k denotes the side length at sample-index k. Initially, all drones are stored in the grid cells of the ship, such that s_0 = 1 meter in Equation (37).
Let s^u denote the upper bound for the side length. In order to capture a target, the side length is increased gradually using
s_k = η s_{k−1} + (1 − η) s^u.
Here, 0 < η < 1 is a constant tuning parameter that sets the sensitivity of the side-length update. In the MATLAB simulations, η = 0.9 is used. As time elapses, s_k monotonically increases towards s^u; that is, the formation size grows over time.
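Equation (46) is a first-order low-pass update, and its behavior can be sketched directly; the function below is illustrative, using the simulation values s_0 = 1 m, s^u = 10 m, η = 0.9:

```python
def side_lengths(s0=1.0, s_u=10.0, eta=0.9, steps=100):
    """Side-length sequence of Equation (46): s_k = eta*s_{k-1} + (1-eta)*s^u.
    Closed form: s_k = s^u - (s^u - s0) * eta**k, so s_k -> s^u as k grows."""
    seq = [s0]
    for _ in range(steps):
        seq.append(eta * seq[-1] + (1.0 - eta) * s_u)
    return seq
```

With η = 0.9, the gap to s^u shrinks by 10% per sample, so it takes roughly 22 samples (about 11 s at T = 0.5 s) for s_k to close 90% of the initial gap.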
Recall that a target is captured when the relative distance between the target and a drone is less than Δ. In the simulation section, Δ = 10 m is used. If s^u ≤ Δ, then the target cannot pass through the grid “net” generated by the drones. Thus, s^u = Δ is set in our simulations.

6. MATLAB Simulation

This section demonstrates the effectiveness of our multi-agent guidance law through MATLAB simulations. The sample interval is T = 0.5 s. The safety distance β is set as 100 m.
In Equation (12), the measurement noise v_k is generated with v_k ∼ N(0, R_k), where R_k is the identity matrix, i.e., every diagonal element is 1. This implies that the standard deviation of the measurement noise is 1 meter.
Considering the process noise, the motion model of the i-th drone (i ∈ {1, 2, …, M}) is
r_{i,k+1} = r_{i,k} + T v_{i,k} u_{i,k} + n_i.
Here, n_i indicates the process noise of the i-th drone, such that each term in n_i has a Gaussian distribution with a mean of 0 and a standard deviation of 1 meter. Because Equation (47) contains the process noise term n_i, Equation (47) is distinct from Equation (5).
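A one-step sketch of the noisy motion model in Equation (47), with per-axis unit-variance Gaussian process noise:

```python
import numpy as np

def drone_step(r, v, u, T=0.5, rng=None):
    """r_{i,k+1} = r_{i,k} + T * v_{i,k} * u_{i,k} + n_i  (Equation (47)),
    where each component of n_i is N(0, 1) in meters."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.normal(0.0, 1.0, size=3)   # process noise n_i
    return np.asarray(r, float) + T * v * np.asarray(u, float) + n
```

Averaged over many realizations, the step reduces to the noiseless model of Equation (5), since the noise has zero mean.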
At sample-index 0, the target is at (0, 10,000, 5000) in meters, and the virtual agent is at the origin. The ship is located at (0, 0, 0) at sample-index 0. Furthermore, the maximum speed of a drone is v_max = 20 meters per second. At every sample-index k, the target's speed, v_k^t, is 200 meters per second. See that the target moves much faster than a drone.
We say that a target is captured when the distance between a drone and the target is less than Δ = 10 meters. Moreover, a target is captured at sample-index k when l(r_k^t, r_{k−1}^t) crosses the grid formation between sample-indexes k − 1 and k. Because s^u = Δ, the target is considered to be captured between sample-indexes k − 1 and k.
Furthermore, a simulation ends when the distance between the ship and the target is less than Δ . This implies that the ship is hit by the target.

6.1. Monte-Carlo Simulations

Multiple Monte-Carlo (MC) simulations are needed to rigorously demonstrate the effectiveness of the proposed method, since the measurements are noisy. We run m_C = 50 MC simulations while changing the generated noise.
At the end of each MC simulation, the target either hits the ship or is captured by a drone. We use three metrics (captureRate, endDist, and simTime) to analyze the proposed controls.
Let captureN denote the number of MC simulations in which the target is captured by a drone. Considering the analysis of m_C MC simulations, captureRate = (captureN / m_C) × 100 presents the capture rate (in percent). It is desirable that captureRate be as large as possible.
Let endDist (m) represent the average distance between the ship and the drones' center position when an MC simulation ends. In the case where a single drone is considered, endDist represents the average distance between the ship and the drone when an MC simulation ends. A large endDist implies that, at the end of a simulation, a drone is far from the ship.
Let simTime denote the average time (in seconds) spent until each MC simulation ends. Note that an MC simulation ends when the target hits the ship or is captured by a drone.
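The three metrics can be aggregated as follows; the per-run field names (`captured`, `end_dist`, `sim_time`) are hypothetical, introduced only for this sketch:

```python
def mc_metrics(runs):
    """Aggregate m_C Monte-Carlo runs into captureRate (%), endDist (m),
    and simTime (s), as defined in Section 6.1."""
    m_c = len(runs)
    capture_rate = 100.0 * sum(r["captured"] for r in runs) / m_c
    end_dist = sum(r["end_dist"] for r in runs) / m_c
    sim_time = sum(r["sim_time"] for r in runs) / m_c
    return capture_rate, end_dist, sim_time
```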

6.2. 3D PNG Law

The target applies the PNG laws in [20] to approach the ship as time elapses. We briefly introduce the 3D PNG law in [20]. The rotation vector of the line-of-sight at sample-index k is
Ω_{st,k} = (R_{st,k} × V_{st,k}) / r²,
where V_{st,k} is the relative velocity of the ship with respect to the target. Furthermore, R_{st,k} = r_k^s − r_k^t, and r = ‖R_{st,k}‖. The PNG law is set as
c_{png,k} = N_p V_{st,k} × Ω_{st,k},
where N_p = 3 is a constant. The target's velocity is updated as
v_{k+1}^t = v_k^t + T c_{png,k}.
Then, the target's position is updated using
r_{k+1}^t = r_k^t + T v_k^t ( v_k^t / ‖v_k^t‖ ),
where the scalar v_k^t is the target's speed; the velocity from Equation (50) sets the heading, while the speed itself is held at v_k^t.
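A minimal sketch of one PNG update step for the target; the cross products in Equations (48) and (49) are inferred from the standard 3D pure PNG form of [20] and should be treated as an assumption of this sketch:

```python
import numpy as np

def png_target_step(r_t, v_t, r_s, v_s, speed, T=0.5, N_p=3.0):
    """One sample of the target's PNG guidance (Equations (48)-(51)):
    LOS rotation, acceleration command, velocity update, and a position
    update that holds the target's speed at `speed`."""
    R = np.asarray(r_s, float) - np.asarray(r_t, float)   # R_{st,k}
    V = np.asarray(v_s, float) - np.asarray(v_t, float)   # V_{st,k}
    r = np.linalg.norm(R)
    omega = np.cross(R, V) / r**2                 # Equation (48)
    c = N_p * np.cross(V, omega)                  # Equation (49)
    r_next = r_t + T * speed * np.asarray(v_t) / np.linalg.norm(v_t)  # Eq. (51)
    v_next = np.asarray(v_t) + T * c              # Equation (50)
    return r_next, v_next
```

In a head-on geometry (target velocity already aligned with the line of sight), the LOS rotation vector vanishes and the step reduces to pure closing at the commanded speed.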

6.3. Scenario 1

In this scenario, the ship moves at a velocity of (−5,0,0) in m/s.
We set G = 3 in Equation (4). This implies that we use M = G² = 9 drones in total. Figure 4 shows the result of one MC simulation using nine drones. In Figure 4, the target's position at every 3 s is marked by blue circles. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The position of the ship at every 3 s is indicated by red diamonds.
Figure 5 is the enlarged figure of Figure 4. The position of every drone at every 3 s is depicted as circles with distinct colors. The drones generate a grid formation for protection against the incoming target.
Considering the scenario in Figure 5, Figure 6 removes the plots for the target’s positions in order to see the drones’ maneuvers clearly. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The position of the ship at every 3 s is indicated by red diamonds. A black asterisk presents the target’s position when the target is hit by a drone.
Figure 7a plots the distance between the virtual agent and the target as time elapses. The relative distance keeps decreasing as time elapses. Figure 7b depicts the distance between the virtual agent and the ship as time elapses. See that the virtual agent stays close to the safety distance β while protecting the ship. Figure 7c depicts the side length as time elapses. As time elapses, the side length increases to Δ.

6.3.1. The Effect of Changing System Parameters (Number of Drones and Noise Strength)

We discuss the effect of the number of drones. We also present the effect of noise on the control performance. Let N_s denote the standard deviation of the measurement noise. In Equation (12), the measurement noise v_k is generated with v_k ∼ N(0, R_k), where R_k is the diagonal matrix whose every diagonal element is N_s². This implies that the standard deviation of the measurement noise is N_s in meters.
Table 1 summarizes the MC simulation results representing the effect of the system parameters. We apply three metrics (captureRate (cR), endDist (eD), and simTime (sT)) to analyze the proposed controls. As the number of drones increases, captureRate increases in general. This is because, as we deploy more drones, the area covered by the drones increases. Furthermore, as the measurement noise N_s increases to 10 meters, the captureRate decreases. Note that an MC simulation ends when the distance between a drone and the target is less than Δ = 10 m.

6.3.2. Comparison with Other State-of-the-Art Guidance Laws (Scenario 1)

To the best of our knowledge, this article is novel in developing a ship defense approach using clustered multiple drones. For comparison, we consider the case where only one drone is used, and the virtual agent applies the 3D PNG law in Section 6.2 to capture the target. We also consider the case where only one drone is used, and the drone applies the 3D Motion Camouflage Guidance (MCG) law in [18] to chase the target. We further consider the case where only one drone is used, and the drone applies the 3D Command to Line-Of-Sight (CLOS) guidance law in [53].
Table 2 shows the MC simulation comparison results of Scenario 1. We run m_C MC simulations for each control law. Let PRO_q indicate the proposed guidance law using G = q in Equation (4). Note that only one drone is used in PRO_1. Moreover, nine drones are used in PRO_3.
In Table 2, MCG presents the case where the 3D MCG law is used. PNG presents the case where the 3D PNG law is used. CLOS presents the case where the 3D CLOS law is used. See that the proposed control outperforms all other state-of-the-art controls.
Note that Equation (31) can be satisfied even when v_max < v_k^t. This implies that even slow drones can capture a fast target when the drones stay close to the ship while staying in the lineState at all times. However, other state-of-the-art guidance laws (MCG, PNG, and CLOS) make a drone continue to chase the target. This maneuver makes the drone move away from the ship, which is not desirable for capturing a fast target using a slow interceptor.
Other state-of-the-art guidance laws (MCG, PNG, and CLOS) make a single drone chase the target. In our paper, we let multiple drones form a planar grid formation, which can be considered a “net” structure for capturing the incoming target. The target may perform evasive maneuvers, and there may be noise in measuring the target position. By maximizing the grid formation size, the capture rate is 100%, even when there exists error in the prediction of the target's position. Since the target is guided to hit its goal (the ship), the drones can effectively block the target using this grid formation.

6.4. Scenario 2

We introduce Scenario 2 in Figure 8. The distinctions of Figure 8 from Figure 4 are as follows. Once the relative distance between the target and the ship becomes less than 3000 m, the target moves directly towards the ship while increasing its speed to 240 m/s.
Figure 8 depicts the result of one MC simulation. We set G = 5 in Equation (4), i.e., we use G² = 25 drones in total. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. Figure 8 shows the case where the target is captured by a drone.
Figure 9 is the enlarged figure of Figure 8. The position of every drone at every 3 s is depicted as circles with distinct colors. See that the grid formation is generated to protect the ship from the incoming target.
Considering the scenario in Figure 9, Figure 10 removes the plots for the target’s positions in order to see the drones’ maneuvers clearly. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The position of the ship at every 3 s is indicated by red diamonds. A black asterisk presents the target’s position when the target is hit by a drone.
Figure 11a depicts the distance between the virtual agent and the target as time elapses. See that the relative distance continuously decreases over time. Figure 11b depicts the distance between the virtual agent and the ship as time elapses. See that the virtual agent stays close to the safety distance β while protecting the ship. Figure 11c depicts the side length as time elapses. As time elapses, the side length increases to Δ .

Comparison with Other State-of-the-Art Guidance Laws (Scenario 2)

Table 3 shows the MC comparison results of Scenario 2. In this table, N_s denotes the standard deviation of the measurement noise. We run m_C MC simulations for each control law.
In Table 3, PRO_q indicates the proposed guidance law using G = q in Equation (4). Note that only one drone is used in PRO_1. Moreover, 25 drones are used in PRO_5.
Table 3 shows that the proposed control outperforms all other controls. MCG, PNG, and CLOS make a drone keep chasing the target. This maneuver makes the drone move away from the ship, which is not desirable for capturing a fast target using a slow interceptor.
Other state-of-the-art guidance laws (MCG, PNG, and CLOS) make a single drone chase the target. Our strategy is to let multiple drones form a planar grid formation, which can be considered a “net” for capturing the incoming target. By maximizing the grid formation size, the captureRate is 100% even when there exists error in the prediction of the target's position.

7. Conclusions

This article introduces a multi-agent guidance law so that a formation of drones protects the ship from an incoming high-speed target. The drones generate a planar grid formation, whose center is guided to remain on the line connecting the target and the ship. Moreover, the planar formation is generated to be perpendicular to the line segment connecting the target and the formation center. Since a target heads towards its goal at least in the terminal phase, maintaining a position on this line segment is effective in protecting the ship.
We enable slow drones to capture a fast target by letting the drones stay close to the ship while staying in the lineState at all times. This blocking strategy is desirable considering the energy consumption of the interceptor since an interceptor does not have to move far from the ship.
We control the drone formation based on the prediction of the target’s position after one sample-index in the future. Since we use a grid formation of drones, we can increase the capture rate even when the target prediction is erroneous.
As far as we know, this article is novel in developing a ship defense approach using multiple clustered drones. In addition, our paper is novel in addressing the 3D formation control that can handle uncertainty in the target prediction. The effectiveness of our multi-agent guidance law is shown by comparing it with other state-of-the-art guidance laws in MATLAB simulations. We verify that the proposed multi-drone guidance scheme increases the capture probability significantly compared to the case where a single interceptor is used. In the future, we will conduct experiments to verify our multi-agent guidance law using real drones.
In practice, the presence of wind can affect and modify a drone's path. Many papers have addressed how to control a drone under the effect of wind [54,55,56,57,58]. The authors of [54] improved the accelerated A-star algorithm with a converted wind vector, and [55] addressed path planning for a drone operating in complex four-dimensional (time and spatially varying) wind-fields. Additionally, refs. [56,57,58] handled adaptive path planning in windy conditions. The authors of [58] added a wind model to an existing path planning algorithm and combined it with a drone's control systems. In the future, we will combine the proposed guidance scheme with a wind model so that multiple drones can safely maneuver in time-varying wind-fields.
Note that the proposed guidance scheme can be applied to protect an entity other than a ship, as long as the goal of the target is known a priori. For instance, the proposed multi-agent guidance law can be generalized to protect any vehicles, such as tanks or ground stations.

Funding

This research was supported by the faculty research fund of Sejong University in 2023. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (Grant Number: 2022R1A2C1091682).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Kim, J.; Gadsden, S.A.; Wilkerson, S.A. A Comprehensive Survey of Control Strategies for Autonomous Quadrotors. Can. J. Electr. Comput. Eng. 2020, 43, 3–16. [Google Scholar] [CrossRef]
  2. Santana, L.V.; Brandao, A.S.; Sarcinelli-Filho, M. Outdoor waypoint navigation with the AR. Drone quadrotor. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 303–311. [Google Scholar]
  3. Mellinger, D.; Michael, N.; Kumar, V. Trajectory generation and control for precise aggressive maneuvers with quadrotors. Int. J. Robot. Res. 2012, 31, 664–674. [Google Scholar] [CrossRef]
  4. Bar-Shalom, Y.; Fortmann, T.E. Tracking and Data Association; Academic Press: Orlando, FL, USA, 1988. [Google Scholar]
  5. Minaeian, S.; Liu, J.; Son, Y. Vision-Based Target Detection and Localization via a Team of Cooperative UAV and UGVs. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 1005–1016. [Google Scholar] [CrossRef]
  6. Svec, P.; Thakur, A.; Raboin, E.; Shah, B.C.; Gupta, S.K. Target following with motion prediction for unmanned surface vehicle operating in cluttered environments. Auton. Robot. 2014, 36, 383–405. [Google Scholar] [CrossRef]
  7. Kim, J.; Kim, S.; Choo, Y. Stealth Path Planning for a High Speed Torpedo-Shaped Autonomous Underwater Vehicle to Approach a Target Ship. Cyber Phys. Syst. 2018, 4, 1–16. [Google Scholar] [CrossRef]
  8. Kim, J. Target Following and Close Monitoring Using an Unmanned Surface Vehicle. IEEE Trans. Syst. Man, Cybern. Syst. 2018, 50, 4233–4242. [Google Scholar] [CrossRef]
  9. Xu, Y.; Basset, G. Sequential virtual motion camouflage method for nonlinear constrained optimal trajectory control. Automatica 2012, 48, 1273–1285. [Google Scholar] [CrossRef]
  10. Srinivasan, M.V.; Davey, M. Strategies for Active Camouflage of Motion. Proc. R. Soc. B Biol. Sci. 1995, 259, 19–25. [Google Scholar]
  11. Mizutani, A.; Chahl, J.; Srinivasan, M. Insect behaviour: Motion camouflage in dragonflies. Nature 2003, 423, 604. [Google Scholar] [CrossRef]
  12. Rano, I. Direct collocation for two dimensional motion camouflage with non-holonomic, velocity and acceleration constraints. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 13–14 December 2013; pp. 109–114. [Google Scholar] [CrossRef]
  13. Justh, E.; Krishnaprasad, P. Steering laws for motion camouflage. Proc. R. Soc. A 2006, 462, 3629–3643. [Google Scholar] [CrossRef] [Green Version]
  14. Ghose, K.; Horiuchi, T.K.; Krishnaparasad, P.S.; Moss, C.F. Ecolocating bats use a nearly time-optimal strategy to intercept prey. PLoS Biol. 2006, 4, 865–873. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Anderson, A.J.; McOwan, P.W. Model of a predatory stealth behaviour camouflaging motion. Proc. R. Soc. B 2003, 270, 489–495. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Kim, J. Controllers to Chase a High-Speed Evader Using a Pursuer with Variable Speed. Appl. Sci. 2018, 8, 1976. [Google Scholar] [CrossRef] [Green Version]
  17. Galloway, K.S.; Justh, E.W.; Krishnaprasad, P.S. Motion camouflage in a stochastic setting. In Proceedings of the IEEE International Conference on Decision and Control (CDC), New Orleans, LA, USA, 12–14 December 2007; pp. 1652–1659. [Google Scholar]
  18. Reddy, P.V.; Justh, E.W.; Krishnaprasad, P.S. Motion camouflage in three dimensions. In Proceedings of the IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 3327–3332. [Google Scholar]
  19. Halder, U.; Dey, B. Biomimetic Algorithms for Coordinated Motion: Theory and Implementation. In Proceedings of the International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5426–5432. [Google Scholar]
  20. Oh, J.; Ha, I. Capturability of the 3-dimensional pure PNG law. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 491–503. [Google Scholar]
  21. Song, S.; Ha, I. A lyapunov-like approach to performance analysis of 3-dimensional pure PNG laws. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 238–248. [Google Scholar] [CrossRef]
  22. Liu, D.; Lee, M.C.; Pun, C.M.; Liu, H. Analysis of Wireless Localization in Non-Line-of-Sight Conditions. IEEE Trans. Veh. Technol. 2013, 62, 1484–1492. [Google Scholar] [CrossRef]
  23. Prasanna, H.M.; Ghose, D. Retro-Proportional-Navigation: A new guidance law for interception of high-speed targets. J. Guid. Control Dyn. 2012, 35, 377–386. [Google Scholar] [CrossRef]
  24. Mischiati, M.; Krishnaprasad, P.S. Mutual motion camouflage in 3D. In Proceedings of the 18th IFAC World Congress, Milan, Italy, 28 August–2 September 2011; pp. 4483–4488. [Google Scholar]
  25. Alonso-Mora, J.; Montijano, E.; Nägeli, T.; Hilliges, O.; Schwager, M.; Rus, D. Distributed multi-robot formation control in dynamic environments. Auton. Robot. 2019, 43, 1079–1100. [Google Scholar] [CrossRef] [Green Version]
  26. Ji, M.; Muhammad, A.; Egerstedt, M. Leader-based multi-agent coordination: Controllability and optimal control. In Proceedings of the American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; pp. 1358–1363. [Google Scholar]
  27. Kim, J. Cooperative Exploration and Networking While Preserving Collision Avoidance. IEEE Trans. Cybern. 2017, 47, 4038–4048. [Google Scholar] [CrossRef]
  28. Kim, J. Capturing intruders based on Voronoi diagrams assisted by information networks. Int. J. Adv. Robot. Syst. 2017, 14, 1729881416682693. [Google Scholar] [CrossRef]
  29. Kim, J. Cooperative Exploration and Protection of a Workspace Assisted by Information Networks. Ann. Math. Artif. Intell. 2014, 70, 203–220. [Google Scholar] [CrossRef]
  30. Parker, L.E.; Kannan, B.; Fu, X.; Tang, Y. Heterogeneous mobile sensor net deployment using robot herding and line-of-sight formations. In Proceedings of the Intelligent Robots and Systems, Las Vegas, NV, USA, 27 October–1 November 2003; Volume 3, pp. 2488–2493. [Google Scholar]
  31. Mirzaei, F.M.; Mourikis, A.I.; Roumeliotis, S.I. On the Performance of Multi-robot Target Tracking. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3482–3489. [Google Scholar] [CrossRef]
  32. Hausman, K.; Muller, J.; Hariharan, A.; Ayanian, N.; Sukhatme, G.S. Cooperative multi-robot control for target tracking with onboard sensing. Int. J. Robot. Res. 2015, 34, 1660–1677. [Google Scholar] [CrossRef]
  33. Fonod, R.; Shima, T. Wingman-based Estimation and Guidance for a Sensorless PN-Guided Pursuer. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 1754–1766. [Google Scholar] [CrossRef] [Green Version]
  34. Fonod, R.; Shima, T. Blinding Guidance Against Missiles Sharing Bearings-Only Measurements. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 205–216. [Google Scholar] [CrossRef]
  35. Jeon, I.; Lee, J.; Tahk, M. Homing guidance law for cooperative attack of multiple missiles. J. Guid. Control Dyn. 2010, 33, 275–280. [Google Scholar] [CrossRef]
  36. Zhang, T.; Yang, J. Guidance law of multiple missiles for cooperative simultaneous attack against maneuvering target. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 4536–4541. [Google Scholar] [CrossRef]
  37. Lee, C.H.; Tsourdos, A. Cooperative Control for Multiple Interceptors to Maximize Collateral Damage. IFAC-PapersOnLine 2018, 51, 56–61. [Google Scholar] [CrossRef]
  38. Wang, L.; Liu, K.; Yao, Y.; He, F. A Design Approach for Simultaneous Cooperative Interception Based on Area Coverage Optimization. Drones 2022, 6, 156. [Google Scholar] [CrossRef]
  39. Fossen, T.I. Guidance and Control of OCEAN Vehicles; John Wiley and Sons: Hoboken, NJ, USA, 1994. [Google Scholar]
  40. Garcia de Marina, H.; Cao, M.; Jayawardhana, B. Controlling Rigid Formations of Mobile Agents Under Inconsistent Measurements. IEEE Trans. Robot. 2015, 31, 31–39. [Google Scholar] [CrossRef] [Green Version]
  41. Krick, L.; Broucke, M.E.; Francis, B.A. Stabilization of infinitesimally rigid formations of multi-robot networks. In Proceedings of the 2008 47th IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 477–482. [Google Scholar]
  42. Paley, D.A.; Zhang, F.; Leonard, N.E. Cooperative Control for Ocean Sampling: The Glider Coordinated Control System. IEEE Trans. Control Syst. Technol. 2008, 16, 735–744. [Google Scholar] [CrossRef]
  43. Ji, M.; Egerstedt, M. Distributed Coordination Control of Multiagent Systems While Preserving Connectedness. IEEE Trans. Robot. 2007, 23, 693–703. [Google Scholar] [CrossRef]
  44. Kim, J. Constructing 3D Underwater Sensor Networks without Sensing Holes Utilizing Heterogeneous Underwater Robots. Appl. Sci. 2021, 11, 4293. [Google Scholar] [CrossRef]
  45. Kim, J.; Kim, S. Motion control of multiple autonomous ships to approach a target without being detected. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418763184. [Google Scholar] [CrossRef]
  46. Luo, S.; Kim, J.; Parasuraman, R.; Bae, J.H.; Matson, E.T.; Min, B.C. Multi-robot rendezvous based on bearing-aided hierarchical tracking of network topology. Ad Hoc Netw. 2019, 86, 131–143. [Google Scholar] [CrossRef]
  47. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter: Particle Filters for Tracking Applications; Artech House Radar Library: Boston, MA, USA, 2004. [Google Scholar]
  48. MacAlpine, P.; Price, E.; Stone, P. SCRAM: Scalable Collision-avoiding Role Assignment with Minimal-Makespan for Formational Positioning. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
  49. Chakravarthy, A.; Ghose, D. Obstacle avoidance in a dynamic environment: A collision cone approach. IEEE Trans. Syst. Man Cybern. 1998, 28, 562–574. [Google Scholar] [CrossRef] [Green Version]
  50. Svec, P.; Shah, B.C.; Bertaska, I.R.; Alvarez, J.; Sinisterra, A.J.; von Ellenrieder, K.; Dhanak, M.; Gupta, S.K. Dynamics-aware target following for an autonomous surface vehicle operating under COLREGs in civilian traffic. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 3871–3878. [Google Scholar]
  51. Chang, D.E.; Shadden, S.C.; Marsden, J.E.; Olfati-Saber, R. Collision Avoidance for Multiple Agent Systems. In Proceedings of the IEEE International Conference on Decision and Control, Maui, HI, USA, 9–12 December 2003; pp. 539–543. [Google Scholar]
  52. Lalish, E.; Morgansen, K. Distributed reactive collision avoidance. Auton. Robot 2012, 32, 207–226. [Google Scholar] [CrossRef]
  53. Zahra, B.F.T.; Shah, S.I.A. Integrated CLOS and PN Guidance for Increased Effectiveness of Surface to Air Missiles. INCAS Bull. 2017, 9, 141–156. [Google Scholar]
  54. Selecký, M.; Váňa, P.; Rollo, M.; Meiser, T. Wind Corrections in Flight Path Planning. Int. J. Adv. Robot. Syst. 2013, 10, 248. [Google Scholar] [CrossRef] [Green Version]
  55. Chakrabarty, A.; Langelaan, J. UAV flight path planning in time varying complex wind-fields. In Proceedings of the 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013; pp. 2568–2574. [Google Scholar]
  56. Coombes, M.; Chen, W.H.; Liu, C. Boustrophedon coverage path planning for UAV aerial surveys in wind. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1563–1571. [Google Scholar]
  57. Coombes, M.; Fletcher, T.; Chen, W.H.; Liu, C. Optimal Polygon Decomposition for UAV Survey Coverage Path Planning in Wind. Sensors 2018, 18, 2132. [Google Scholar] [CrossRef] [Green Version]
  58. McGee, T.; Hedrick, J. Path planning and control for multiple point surveillance by an unmanned aircraft in wind. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; pp. 4261–4266. [Google Scholar]
Figure 1. The case where the ship moves as the time index changes from k to k + 1 . The ship positions are indicated by crosses. In this figure, L k , L ¯ k + 1 , and c k are depicted.
Figure 2. Block diagram of the proposed multi-drone guidance law.
Figure 3. The case where L k and L ¯ k + 1 meet at I L .
Figure 4. The result of one MC simulation using the proposed multi-agent guidance law. The proposed multi-agent guidance law is applied to hit the target. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The target’s position at every 3 s is marked as blue circles. The position of the ship at every 3 s is indicated by red diamonds.
Figure 5. An enlarged view of Figure 4. The position of every drone at every 3 s is depicted as circles with distinct colors. The drones generate a grid formation for protection against the incoming target.
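Figure 5 shows the drones holding a planar grid centered on the virtual agent. As an illustrative sketch only (the paper's exact formation law is not reproduced here; the function and parameter names are hypothetical), grid waypoints in the plane perpendicular to the LOS direction could be generated as:

```python
import numpy as np

def grid_waypoints(center, los_dir, n, spacing):
    """Hypothetical sketch: n x n waypoints forming a planar grid centered
    at `center`, lying in the plane perpendicular to the LOS direction
    `los_dir`. This is not the paper's exact formation law."""
    d = np.asarray(los_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Build an orthonormal basis (u, v) spanning the plane perpendicular to d.
    ref = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(d, ref)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    offsets = (np.arange(n) - (n - 1) / 2.0) * spacing
    return [np.asarray(center, dtype=float) + a * u + b * v
            for a in offsets for b in offsets]
```

With n = 3 this yields nine waypoints whose centroid is the grid center, so the formation's center blocks the LOS while the surrounding drones widen the intercept footprint against target-prediction error.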
Figure 6. The same scenario as Figure 5, with the plots of the target's positions removed so that the drones' maneuvers can be seen clearly. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The position of the ship at every 3 s is indicated by red diamonds. A black asterisk marks the target's position when the target is hit by a drone.
Figure 7. The result of one MC simulation using grid formation. The proposed guidance law is applied to hit the target. (a) depicts the distance between the virtual agent and the target as time elapses. The relative distance keeps decreasing as time elapses. (b) plots the distance between the virtual agent and the ship as time elapses. See that the virtual agent stays close to the safety distance β while protecting the ship. (c) depicts the side length as time elapses.
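Figure 7b shows the virtual agent holding roughly the safety distance β from the ship while it blocks the LOS line connecting the ship and the target. A minimal sketch of such a blocking point, assuming positions are given as 3D vectors (the function name and interface are assumptions, not the paper's guidance law):

```python
import numpy as np

def blocking_point(ship, target, beta):
    """Hypothetical sketch: the point on the ship-to-target LOS line
    at safety distance beta from the ship."""
    ship = np.asarray(ship, dtype=float)
    los = np.asarray(target, dtype=float) - ship
    return ship + beta * los / np.linalg.norm(los)
```

Because this point sits on the LOS at a fixed range from the ship, even slow drones can keep blocking a fast target: the blocking point moves only as fast as the LOS direction rotates, not as fast as the target itself.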
Figure 8. The result of one MC simulation using the proposed grid formation. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s.
Figure 9. An enlarged view of Figure 8. The position of every drone at every 3 s is depicted as circles with distinct colors. See that the grid formation is generated to protect the ship from the incoming target.
Figure 10. The same scenario as Figure 9, with the plots of the target's positions removed so that the drones' maneuvers can be seen clearly. The position of every drone at every 3 s is depicted as circles with distinct colors. A red asterisk depicts the position of the virtual agent at every 3 s. The position of the ship at every 3 s is indicated by red diamonds. A black asterisk marks the target's position when the target is hit by a drone.
Figure 11. (a) depicts the distance between the virtual agent and the target as time elapses. See that the relative distance continuously decreases over time. (b) plots the distance between the virtual agent and the ship as time elapses. See that the virtual agent stays close to the safety distance β while protecting the ship. (c) depicts the side length as time elapses.
Table 1. MC simulation results. The effect of changing the system parameters G and N_s.

G    N_s   cR    eD    sT
3    1     100   95    57
1    1     100   108   57
3    10    100   94    57
1    10    60    105   57
Table 2. Comparison with other state-of-the-art controls (Scenario 1).

Control   N_s   cR    eD    sT
PRO_3     1     100   95    57
PRO_1     1     100   108   57
PNG       1     0     280   57
MCG       1     0     280   57
CLOS      1     20    276   56
Table 3. Comparison with other state-of-the-art controls (Scenario 2).

Control   N_s   cR    eD    sT
PRO_5     10    100   247   67
PRO_1     10    95    260   67
PNG       10    0     339   68
MCG       10    0     339   68
CLOS      10    0     339   68
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kim, J. Ship Defense Strategy Using a Planar Grid Formation of Multiple Drones. Appl. Sci. 2023, 13, 4397. https://doi.org/10.3390/app13074397
