Article

Drone Swarm Robust Cooperative Formation Pursuit through Relative Positioning in a Location Denial Environment

by Huanli Gao 1,2,3, Aixin Zhang 1, Wei Li 1 and He Cai 1,2,3,*

1 School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
2 Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Guangzhou 510640, China
3 Guangdong Engineering Technology Research Center of Unmanned Aerial Vehicle Systems, Guangzhou 510640, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(9), 455; https://doi.org/10.3390/drones8090455
Submission received: 4 August 2024 / Revised: 30 August 2024 / Accepted: 1 September 2024 / Published: 2 September 2024

Abstract: This paper considers the pursuit problem of a moving target by a swarm of drones through a flexible-configuration formation. The drones are modeled by second-order systems subject to uncertain damping ratios, whereas the moving target follows a polynomial-type trajectory whose coefficient vectors are fully unknown. Due to location denial, drones cannot obtain their absolute positions, but they can obtain their positions relative to other neighboring drones and the target. To achieve robust formation pursuit, a robust cooperative control protocol is synthesized, which comprises three key components, namely, the pseudo drone position estimator, the pseudo target position estimator, and the local internal model control (IMC) law. The pseudo drone position estimator and the pseudo target position estimator recover, in a distributed manner, each drone's own position and the target's position, respectively, both subject to a common unknown constant bias. By subtracting the pseudo target position from the pseudo drone position, each drone can acquire its position relative to the target, which facilitates the design of a local IMC law to fulfill formation pursuit in the presence of system parametric uncertainties. Both pure numerical simulation and hardware-in-the-loop (HIL) simulation are performed to verify the effectiveness of the proposed control protocol.

1. Introduction

The cooperative control of multi-agent systems has attracted significant interest from scholars because of its broad applications in both civil and military fields, such as cooperative load transportation [1], flocking [2], formation [3], cooperative hunting [4], and so on.
In recent years, the cooperative pursuit problem has gained prominence as a popular research direction. The goal of the cooperative pursuit problem is to guide a group of pursuers in a cooperative way to capture an evader or a group of evaders. Benda [5] proposed the pursuit problem for the first time, where multiple pursuers cooperate to round up a single evader in a grid discrete environment. Isaacs [6] extended game theory to the study of pursuit–evasion problems by introducing differential games, addressing mathematical challenges in dynamic systems. Vidal et al. [7] explored UAV and UGV coordination for evader pursuit and mapping in unknown environments using a probabilistic game-theoretical framework. Xu et al. [8] examined multiplayer pursuit-evasion games with malicious pursuers, analyzed their effects on capture time and equilibrium within an N-player nonzero-sum game framework, and quantified these effects using a pursuit coefficient derived from Riccati differential equations. Kobayashi et al. [9] divided the target capture task into two parts: the encircling behavior and the grasping behavior. Moreover, based on gradient descent, a decentralized control method for multiple mobile robots to capture target objects is proposed. Kim and Sugie [10] studied a distributed control method for multi-agent systems in three-dimensional space by adopting a cyclic pursuit strategy, addressing a target-capturing task without inter-agent communication. Mas et al. [11] utilized a multi-robot cluster space control method to address tasks such as trapping, escorting, and patrolling around an autonomous target. Deghat et al. [12] proposed an estimator and controller to address the circular motion of a non-holonomic agent around a target with an unknown position. Hafez et al. [13] used model predictive control to address the dynamic encirclement issue of a team of UAVs. In [14], a control framework was first proposed, followed by three variations of the fundamental control strategy to solve different versions of the problem of a multi-robot system encircling a moving target in three-dimensional space. Chen [15] introduced an autonomous controller with attraction, repulsion, and rotation components to solve the target-fencing problem, enabling vehicles to asymptotically surround a target without collision or fixed formation. Hu et al. [16] proposed a distributed estimator and control law for cooperative equal-distance encirclement and collision avoidance in second-order nonlinear multi-agent systems. Based on distance formation control and collision-avoidance potential function, Ringbäck et al. [17] designed a formation control protocol for surface vehicles to track an underwater target. Yang et al. [18] developed an estimator-controller framework for target capture using only bearing measurements, extending it to multiple robots and moving targets. Shao et al. [19] introduced a robust control method for quadrotor encircling, using event-triggered observers and cooperative guidance to achieve precise encircling without target speed or acceleration data. Zheng et al. [20] solved the multi-agent circle formation problem with restricted communication scopes and time-varying delays by implementing an innovative gradient-descent-based controller. Lu et al. [21] developed an innovative cooperative constrained control scheme that enables nonholonomic multi-robot systems to effectively track and surround a moving target in an obstacle environment. Pei et al. 
[22] introduced a distributed approach to multi-flocking and circle formation control to address the multi-target consensus pursuit problem in multi-agent systems. The studies [23,24] explored multi-target localization and encirclement, addressing these issues for a single agent and a multi-agent system, respectively. Liu et al. [25,26,27] all focused on developing advanced control strategies to solve vibration and tracking problems in flexible mechanical systems. Numerous studies have explored solutions to the cooperative pursuit problem for drone systems, and some classic drone operation algorithms are summarized in Table 1.
The cooperative output regulation method serves as an effective approach to resolving cooperative control problems [51]. In the cooperative output regulation problem, the goal is to ensure that, on the condition that the stability of the closed-loop system is maintained, the output of each follower can suppress external disturbances and asymptotically track the trajectory of the external leader. To date, several approaches have explored applying the theory of cooperative output regulation to tackle the distributed formation challenge. Wang [52] developed a generalized internal model-based controller and realized the formation of uncertain linear heterogeneous multi-agent systems under varying topologies. Li et al. [53] developed a nonlinear distributed dynamic feedback controller to realize the formation of affine nonlinear dynamic multi-agent systems. Haghshenas et al. [54] treated a leader as an exosystem and developed a distributed full-information controller with a compensator to address the containment control problem in linear heterogeneous multi-agent systems. Lu and Liu [55] considered the leader-following consensus issue in linear multi-agent systems, accounting for connectivity changes in switching networks and nonuniform time-varying communication delays. Seyboth et al. [56] presented an innovative distributed controller for both homogeneous and heterogeneous linear agents to enhance cooperative behavior and transient synchronization in multi-agent systems, addressing coordination tasks such as reference tracking, disturbance rejection, and output synchronization. Li et al. [57] considered both scenarios where all agents have access to the leader or only some followers have such access, proposing adaptive fuzzy control laws in both cases. Cai et al. [58] proposed a distributed control strategy based on an adaptive distributed observer to address the situation where not every follower is aware of the leader system’s system matrix. In [59], a distributed feedforward control scheme is proposed for heterogeneous linear multi-agent systems, introducing an innovative distributed dynamic compensator that reduces communication load by eliminating the need for state information exchange. Lu and Liu [60,61] each proposed a novel adaptive distributed control scheme to address the leader–follower consensus issue in multiple uncertain Euler–Lagrange systems. Specifically, Lu and Liu [60] addressed switching networks and communication delays, while Lu and Liu [61] concentrated on an unknown dynamic leader. Zhang et al. [62] proposed a controller utilizing a distributed adaptive observer, which can surround the targets with heterogeneous dynamics and unknown inputs in a certain formation. Yan et al. [63] proposed a robust distributed control approach to heterogeneous multi-agent systems to attain time-varying formations under both switching and static topologies, addressing disturbances and uncertainties through a distributed observer and robust controllers based on internal models. Yuan et al. [64] proposed distributed event-triggered control protocols to achieve time-varying formations around a virtual leader without continuous communication in linear heterogeneous multi-agent systems. Huang et al. 
[65] proposed a distributed dynamic observer utilizing the measurable output of the leader to approximate a convex combination of the leaders’ states and designed an innovative distributed dynamic output feedback control law to tackle the robust output containment problem in linear heterogeneous multi-agent systems with fixed directed networks. This method was applicable not only to followers with identical state dimensions but also to those with different state dimensions. Duan et al. [66] proposed a fixed-time time-varying output formation–containment control method for heterogeneous multi-agent systems, ensuring leader formation and follower convergence within a convex hull. Wu et al. [67] introduced a distributed observer leveraging the relative information from neighborhood agents and proposed distributed fault-tolerant formation tracking strategies using an adaptive updating mechanism, which solved the problem of followers achieving the desired spatial formation and tracking reference signals generated from one or more leaders when actual state information is not available. Lu et al. [68] addressed the issue of distributed formation tracking in discrete-time linear heterogeneous multi-agent systems with directed communication networks by integrating the state augmentation technique with a feedforward-based output regulation approach. Under the signed communication topology, Han and Zheng [69], Jiang et al. [70], Li et al. [71], Zhang et al. [72] all studied bipartite formation control.
This paper explores a swarm of second-order drones with uncertain parameters that cooperate in a flexible-configuration formation to pursue a moving target that follows a polynomial-type uncertain trajectory. The drone swarm does not have access to its own absolute position but can obtain the relative positions between neighboring drones through communication. In addition, there is no communication between the target and the drone swarm, and only some drones can acquire their positions relative to the target. Through the application of the cooperative output regulation approach, an innovative robust cooperative control protocol is presented to address the formation pursuit problem. First, a pseudo drone position estimator is designed, which aims to recover the position of each drone for itself but is subject to some common unknown constant bias. Then, based on the estimated pseudo drone position, a pseudo target position estimator is proposed, which is capable of recovering the position of the target for each drone but is subject to the same unknown constant bias. In this way, by subtracting the pseudo target position from the pseudo drone position, each drone can obtain its position relative to the target, which facilitates the design of a local internal model control (IMC) law to fulfill formation pursuit despite system parametric uncertainties.
The innovative aspects of this paper are summarized as follows.
  • Unlike most existing results such as [54,55,58,59,64,66,67,69,70,71], the innovation of the robust cooperative control protocol proposed in this paper is that, by taking advantage of robust output regulation theory, it allows for parametric uncertainty in the drone model, which enhances robustness to system parameter uncertainty and prevents performance degradation or instability.
  • Unlike the studies [11,12,23] that require knowledge of the agents’ absolute positions, the innovation of the robust cooperative control protocol proposed in this paper is its use of relative position measurements between neighboring drones, thereby eliminating the need for global positioning, which can be costly or even infeasible in practical applications, to achieve effective target pursuit.
  • Unlike [11,14,17,19,21], the innovation of the robust cooperative control protocol proposed in this paper is that even in scenarios where there is no communication between the target and the drones, an effective target pursuit can be achieved by relying only on some drones to obtain their position information relative to the target.
  • Unlike [10,15,18,20,21,22,23,24,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72], which involve only numerical simulations, this study considers a more practical scenario and demonstrates the effectiveness of the proposed control protocol through experiments on the Links-RT UAV flight control hardware-in-the-loop (HIL) simulation platform, which adopts real control implementations.
Notation: $\mathbb{R}$ represents the set of real numbers, while $\mathbb{C}$ represents the set of complex numbers. The Kronecker product of matrices is denoted by $\otimes$. $\mathbf{1}_N$ signifies a column vector of dimension $N$, where each element is 1. For matrices $A_i \in \mathbb{R}^{n_i \times n_i}$, where $i = 1, \ldots, K$, the matrix $\mathcal{D}(A_1, \ldots, A_K)$ is a block diagonal matrix, i.e., $\operatorname{block\,diag}\{A_1, \ldots, A_K\}$. For vectors $x_i \in \mathbb{R}^{m_i}$, where $i = 1, \ldots, m$, the notation $\operatorname{col}(x_1, \ldots, x_m)$ denotes the column vector obtained by stacking the vectors $x_i$, i.e., $[x_1^T, \ldots, x_m^T]^T$. $\sigma(A)$ indicates the eigenvalue spectrum of a square matrix $A$, while $\Re(\sigma(A))$ refers to the real parts of these eigenvalues. Define $\bar{\delta}_A$ as the maximum real part of the eigenvalues of $A$, and $\underline{\delta}_A$ as the minimum real part. A directed graph (digraph) $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is defined by a set of nodes $\mathcal{V} = \{1, \ldots, N\}$ and a set of directed edges $\mathcal{E} \subseteq \{(i, j) \mid i, j \in \mathcal{V}, i \neq j\}$. An edge $(i, j)$ represents a directed connection from node $i$ to node $j$, where node $i$ is considered a neighbor of node $j$. A path from node $i_1$ to node $i_{k+1}$ is present if there is a sequence of edges $(i_1, i_2), (i_2, i_3), \ldots, (i_k, i_{k+1})$. In such a case, node $i_{k+1}$ is deemed reachable from node $i_1$. A digraph has a spanning tree if there is a node $i$ from which all other nodes are reachable. This node $i$ is known as the root of the spanning tree. In digraph $\mathcal{G}$, when the diagonal elements $a_{ii} = 0$ and the off-diagonal elements $a_{ij} > 0$ if and only if $(j, i) \in \mathcal{E}$, the non-negative matrix $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is referred to as the weighted adjacency matrix. The Laplacian matrix $\mathcal{L}$ is represented as $\mathcal{L} = [l_{ij}] \in \mathbb{R}^{N \times N}$, with diagonal elements $l_{ii}$ equal to the sum of all elements in the $i$th row of the weighted adjacency matrix $\mathcal{A}$, i.e., $l_{ii} = \sum_{j=1}^{N} a_{ij}$, and off-diagonal elements $l_{ij} = -a_{ij}$ for $i \neq j$.
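To fix this convention concretely, the following is a minimal sketch (in Python with numpy, an illustrative choice and not part of the paper) of how the Laplacian could be built from a weighted adjacency matrix:

```python
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Laplacian under the convention above: l_ii = sum_j a_ij, l_ij = -a_ij for i != j."""
    A = np.asarray(adjacency, dtype=float)
    return np.diag(A.sum(axis=1)) - A
```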

2. Problem Formulation

This paper considers the problem of a swarm of $N$ drones pursuing a moving target in a flexible-configuration formation. As verified by real experiments such as those in Dong et al. [73,74], the drone model can be approximated by the following second-order system:
$$\dot{p}_i(t) = v_i(t), \qquad \dot{v}_i(t) = -k_i v_i(t) + u_i(t), \qquad i = 1, \ldots, N,$$
where $p_i(t), v_i(t), u_i(t) \in \mathbb{R}^3$ represent the $i$th drone's position, velocity, and control input, respectively. $k_i \in \mathbb{R}$ represents the uncertain damping ratio and is expressed as $k_i = k_i^* + w_i$, where $k_i^* \in \mathbb{R}$ is the nominal value and $w_i \in \mathbb{R}$ is the uncertainty. The uncertain part $w_i$ arises from environmental uncertainties, such as variations in wind resistance effects. For $i = 1, \ldots, N$, let $x_i(t) = \operatorname{col}(p_i(t), v_i(t))$. Thus, (1) can be reformulated as
$$\dot{x}_i(t) = \left( \begin{bmatrix} 0 & 1 \\ 0 & -k_i \end{bmatrix} \otimes I_3 \right) x_i(t) + \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes I_3 \right) u_i(t) \triangleq A_i x_i(t) + B_i u_i(t), \qquad p_i(t) = \left( \begin{bmatrix} 1 & 0 \end{bmatrix} \otimes I_3 \right) x_i(t) \triangleq C_i x_i(t).$$
Additionally, let the nominal component of the system matrix $A_i$ be defined as $A_i^o = \begin{bmatrix} 0 & 1 \\ 0 & -k_i^* \end{bmatrix} \otimes I_3$.
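For concreteness, the following is a minimal numpy sketch (an illustrative aid, not from the paper) of how the matrices in (2) could be assembled for a single drone, using the damping sign convention of (1) as reconstructed above; the numerical values are only examples:

```python
import numpy as np

def drone_matrices(k: float):
    """System matrices of the second-order drone model (2) with damping ratio k."""
    A = np.kron(np.array([[0.0, 1.0], [0.0, -k]]), np.eye(3))  # state x = col(p, v) in R^6
    B = np.kron(np.array([[0.0], [1.0]]), np.eye(3))           # input drives the velocity channel
    C = np.kron(np.array([[1.0, 0.0]]), np.eye(3))             # measured output is the position p
    return A, B, C

A1o, B1, C1 = drone_matrices(1.0)   # nominal model with k_1* = 1
A1, _, _ = drone_matrices(1.5)      # perturbed model with k_1 = k_1* + w_1, e.g., w_1 = 0.5
```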
The position of the moving target $r_0(t) \in \mathbb{R}^3$ is described by
$$r_0(t) = \alpha_n t^n + \cdots + \alpha_1 t + \alpha_0,$$
where $\alpha_i \in \mathbb{R}^3$, $i = 0, 1, \ldots, n$, are the coefficient vectors. This paper considers the case where the coefficient vectors $\alpha_i$ are fully unknown, whereas the order $n$ is known. On the one hand, polynomial functions can approximate most common trajectories of moving targets in the real world; on the other hand, even with parametric uncertainties, they can be generated in a neat way by the following deterministic linear time-invariant system:
$$\dot{v}_0(t) = \Upsilon v_0(t), \qquad r_0(t) = \Pi v_0(t),$$
where $v_0(t) \in \mathbb{R}^{3(n+1)}$, and
$$\Upsilon = \begin{bmatrix} 0_{n \times 1} & I_n \\ 0 & 0_{1 \times n} \end{bmatrix} \otimes I_3, \qquad \Pi = \begin{bmatrix} 1 & 0_{1 \times n} \end{bmatrix} \otimes I_3.$$
Clearly, $(\Pi, \Upsilon)$ is observable, and thus, the following algebraic Riccati equation
$$\Theta \Upsilon^T + \Upsilon \Theta - \Theta \Pi^T \Pi \Theta + I_{3(n+1)} = 0$$
admits a unique positive definite solution $\Theta > 0$.
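As a numerical sketch (assuming numpy/scipy; not part of the paper), the generator matrices and the Riccati solution $\Theta$, and hence the gain $F = \Theta \Pi^T$ used later in (18), could be computed as follows. Note that scipy's `solve_continuous_are` solves $A^T X + X A - X B R^{-1} B^T X + Q = 0$, which reduces to the equation above with $A = \Upsilon^T$, $B = \Pi^T$, $Q = I$, $R = I$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def target_generator(n: int):
    """Generator matrices Upsilon and Pi for a polynomial target trajectory of order n."""
    base = np.vstack([np.hstack([np.zeros((n, 1)), np.eye(n)]),
                      np.zeros((1, n + 1))])
    Upsilon = np.kron(base, np.eye(3))
    Pi = np.kron(np.hstack([np.ones((1, 1)), np.zeros((1, n))]), np.eye(3))
    return Upsilon, Pi

n = 1                                  # e.g., a constant-velocity target
Upsilon, Pi = target_generator(n)
Theta = solve_continuous_are(Upsilon.T, Pi.T, np.eye(3 * (n + 1)), np.eye(3))
F = Theta @ Pi.T                       # gain used by the pseudo target position estimator (18)
```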
The communication network for all the drones of (1) is described by a digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \ldots, N\}$, with node $i$ standing for the $i$th drone, and $\mathcal{E} \subseteq \{(i, j) \mid i, j \in \mathcal{V}, i \neq j\}$, where a directed edge from the $i$th drone to the $j$th drone indicates that the $j$th drone can receive information from the $i$th drone. Let the weighted adjacency matrix of $\mathcal{G}$ be $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$, and the Laplacian of $\mathcal{G}$ be $\mathcal{L}$. In this paper, drones cannot obtain their absolute positions due to location denial but can obtain their positions relative to their neighboring drones and the target. Depending on whether the drones can acquire their position relative to the target, they are categorized, without loss of generality, into two groups: $\mathcal{M} = \{1, 2, \ldots, M\}$ and $\mathcal{N} = \{M+1, M+2, \ldots, N\}$. Only the drones in the set $\mathcal{N}$ can acquire their position relative to the target, while the others, belonging to the set $\mathcal{M}$, cannot. To illustrate this situation, an example is provided in Figure 1. In this example, $\mathcal{M} = \{1, 2, 3, 4\}$ and $\mathcal{N} = \{5\}$. To achieve target pursuit, it must hold that $\mathcal{N} \neq \emptyset$. Regarding the communication network, the following two standard and necessary assumptions are made in this paper.
Assumption 1.
The digraph $\mathcal{G}$ contains a spanning tree.
Assumption 2.
For every $j \in \mathcal{M}$, there exists at least one node $i \in \mathcal{N}$ such that node $j$ is reachable from node $i$.
Assumption 1 is made so that the whole drone swarm can achieve relative positioning (subject to a common unknown bias, as will be seen shortly), and Assumption 2 guarantees that the drones that cannot directly acquire their position relative to the target can eventually obtain it through dynamic cooperative estimation over the communication network. Partition the Laplacian matrix $\mathcal{L}$ as follows:
$$\mathcal{L} = \begin{bmatrix} L_1 & L_2 \\ L_3 & L_4 \end{bmatrix},$$
where $L_1 \in \mathbb{R}^{M \times M}$, $L_2 \in \mathbb{R}^{M \times (N-M)}$, $L_3 \in \mathbb{R}^{(N-M) \times M}$, and $L_4 \in \mathbb{R}^{(N-M) \times (N-M)}$. Then, under Assumption 2, the lemma stated below is valid.
Lemma 1
(Lemma 1 of [75]). Under Assumption 2, the real parts of all eigenvalues of $L_1$ are positive.
For $i = 1, \ldots, N$, the flexible-configuration formation is specified by the local formation vector $r_{fi}(t)$, which is produced by the local bias system described below:
$$\dot{h}_i(t) = \Phi_i h_i(t), \qquad r_{fi}(t) = \phi_i h_i(t),$$
where $h_i(t) \in \mathbb{R}^{n_i}$, $r_{fi}(t) \in \mathbb{R}^3$, and $\Phi_i$, $\phi_i$ are constant matrices. Without loss of generality, the eigenvalues of $\Phi_i$ are assumed to have non-negative real parts.
Then, for each drone, the desired trajectory $p_{ri}(t) \in \mathbb{R}^3$ is defined as follows:
$$p_{ri}(t) = r_0(t) + r_{fi}(t).$$
Consequently, the trajectory tracking error $e_i(t) \in \mathbb{R}^3$ would be
$$e_i(t) = p_i(t) - r_0(t) - r_{fi}(t).$$
Next, the robust cooperative formation pursuit problem addressed in this paper can be stated below.
Problem 1.
Given the uncertain second-order drone system (2), the moving target system (4), the local bias system (6), and the communication digraph $\mathcal{G}$, design a local IMC law $u_i(t)$ that utilizes only relative positions, such that there exists $W \subseteq \mathbb{R}$ where, for any $w_i \in W$, the closed-loop system is asymptotically stable regardless of the initial condition, and for each drone,
$$\lim_{t \to \infty} e_i(t) = 0.$$

3. Main Results

Here, a robust cooperative control protocol is synthesized to achieve formation pursuit, which comprises three key components, namely, the pseudo drone position estimator, the pseudo target position estimator, and the local IMC law. The detailed design and analysis are given as follows.

3.1. Pseudo Drone Position Estimator

For each drone, the pseudo drone position estimator is specified below:
$$\dot{\eta}_i(t) = v_i(t) + \mu_\eta \sum_{j=1}^{N} a_{ij} \big( \eta_j(t) - \eta_i(t) - p_{ji}(t) \big), \qquad p_{ji}(t) = p_j(t) - p_i(t), \qquad i = 1, \ldots, N,$$
where $\mu_\eta > 0$. Note that the pseudo drone position estimator (10) depends only on the relative position for each neighboring pair of drones.
Let $\bar{\eta}_i(t) = \eta_i(t) - p_i(t)$, and then it follows that
$$\dot{\bar{\eta}}_i(t) = \dot{\eta}_i(t) - \dot{p}_i(t) = v_i(t) + \mu_\eta \sum_{j=1}^{N} a_{ij} \big( \eta_j(t) - \eta_i(t) - p_{ji}(t) \big) - v_i(t) = \mu_\eta \sum_{j=1}^{N} a_{ij} \big( \bar{\eta}_j(t) - \bar{\eta}_i(t) \big).$$
Furthermore, define $\bar{\eta}(t) = \operatorname{col}(\bar{\eta}_1(t), \ldots, \bar{\eta}_N(t))$, and it can be deduced that
$$\dot{\bar{\eta}}(t) = -\mu_\eta ( \mathcal{L} \otimes I_3 ) \bar{\eta}(t).$$
Then, by Example 2.1 of [51], the subsequent result is obtained.
Lemma 2.
Given system (12), assuming that Assumption 1 is met, an unknown constant vector $\eta_c^* \in \mathbb{R}^3$ can be found such that for $i = 1, \ldots, N$,
$$\lim_{t \to \infty} \bar{\eta}_i(t) = \eta_c^*.$$
For $i = 1, \ldots, N$, let $\hat{p}_i(t) = p_i(t) + \eta_c^*$ and $\tilde{\eta}_i(t) = \eta_i(t) - \hat{p}_i(t)$. Thus, the result of (13) indicates that $\lim_{t \to \infty} \tilde{\eta}_i(t) = \lim_{t \to \infty} ( \eta_i(t) - \hat{p}_i(t) ) = 0$. Note that $\hat{p}_i(t)$ is made up of two parts, namely, the absolute position of the $i$th drone $p_i(t)$ and some unknown constant vector $\eta_c^*$, which is the same for all the drones. In this sense, $\hat{p}_i(t)$ is referred to as the pseudo drone position, while $\eta_i(t)$ is called the pseudo drone position estimator for the $i$th drone.
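The following is a minimal numerical sketch of the estimator (10), using forward-Euler integration and a small hand-made undirected graph (both illustrative choices, not taken from the paper), confirming that the offsets $\eta_i(t) - p_i(t)$ converge to a common constant:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps, mu_eta = 4, 1e-3, 20000, 3.0
# illustrative adjacency with a spanning tree (a path graph), not the paper's digraph
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

p = 10.0 * rng.normal(size=(N, 3))   # true positions (unknown to the drones themselves)
v = rng.normal(size=(N, 3))          # velocities, held constant here for simplicity
eta = np.zeros((N, 3))               # pseudo drone position estimates of (10)

for _ in range(steps):
    d_eta = v.copy()
    for i in range(N):
        for j in range(N):
            if Adj[i, j] > 0:        # only relative positions p_j - p_i are used
                d_eta[i] += mu_eta * Adj[i, j] * (eta[j] - eta[i] - (p[j] - p[i]))
    eta += dt * d_eta
    p += dt * v                      # drones move with their velocities

offsets = eta - p                    # each row should approach the same constant eta_c*
print(np.max(np.abs(offsets - offsets.mean(axis=0))))   # close to zero
```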

3.2. Pseudo Target Position Estimator

Next, we define $\hat{r}_0(t) = r_0(t) + \eta_c^*$, i.e.,
$$\hat{r}_0(t) = \alpha_n t^n + \cdots + \alpha_1 t + \alpha_0 + \eta_c^*.$$
Note that $\hat{r}_0(t)$ has the same structure as $r_0(t)$. Therefore, $\hat{r}_0(t)$ can be generated through
$$\dot{\hat{v}}_0(t) = \Upsilon \hat{v}_0(t), \qquad \hat{r}_0(t) = \Pi \hat{v}_0(t),$$
where $\hat{v}_0(t) \in \mathbb{R}^{3(n+1)}$ is the state variable of the system (15). In what follows, we call $\hat{r}_0(t)$ the pseudo target position, and it is shown below in two steps how to recover $\hat{r}_0(t)$ for each drone.
Step 1: Note that for $i \in \mathcal{N}$, $p_i(t) - r_0(t)$ is accessible. Then, for $i \in \mathcal{N}$, let
$$\hat{r}_i(t) = \eta_i(t) - \big( p_i(t) - r_0(t) \big) = \eta_i(t) - p_i(t) + r_0(t),$$
which is available. Letting $\tilde{r}_i(t) = \hat{r}_i(t) - \hat{r}_0(t)$ for $i \in \mathcal{N}$, by Lemma 2, it is concluded that
$$\lim_{t \to \infty} \tilde{r}_i(t) = \lim_{t \to \infty} \big( \eta_i(t) - p_i(t) + r_0(t) - r_0(t) - \eta_c^* \big) = \lim_{t \to \infty} \big( \bar{\eta}_i(t) - \eta_c^* \big) = 0.$$
That is, $\hat{r}_i(t)$ defined by (16) functions as the pseudo target position estimator for the drones belonging to the set $\mathcal{N}$.
Step 2: For $i \in \mathcal{M}$, we define
$$\dot{\vartheta}_i(t) = \Upsilon \vartheta_i(t) + \mu_\vartheta F \sum_{j=1}^{N} a_{ij} \big( \hat{r}_j(t) - \hat{r}_i(t) \big), \qquad \hat{r}_i(t) = \Pi \vartheta_i(t),$$
where $\vartheta_i(t) \in \mathbb{R}^{3(n+1)}$, $F = \Theta \Pi^T$, and $\mu_\vartheta > 0$. Letting $\tilde{r}_i(t) = \hat{r}_i(t) - \hat{r}_0(t)$ for $i \in \mathcal{M}$, the subsequent result is obtained.
Lemma 3.
Given systems (15) and (18), assuming that Assumption 2 is met, if $\mu_\vartheta > \underline{\delta}_{L_1}^{-1}$, then, for any system initial condition, it holds that
$$\lim_{t \to \infty} \tilde{r}_i(t) = 0, \qquad i \in \mathcal{M}.$$
Proof. 
For $i \in \mathcal{M}$, define $\tilde{\vartheta}_i(t) = \vartheta_i(t) - \hat{v}_0(t)$. Thus, according to (18), we obtain
$$\begin{aligned}
\dot{\tilde{\vartheta}}_i(t) &= \Upsilon \vartheta_i(t) + \mu_\vartheta F \Pi \sum_{j \in \mathcal{M}} a_{ij} \big( \vartheta_j(t) - \vartheta_i(t) \big) + \mu_\vartheta F \sum_{k \in \mathcal{N}} a_{ik} \big( \hat{r}_k(t) - \Pi \vartheta_i(t) \big) - \Upsilon \hat{v}_0(t) \\
&= \Upsilon \tilde{\vartheta}_i(t) + \mu_\vartheta F \Pi \sum_{j \in \mathcal{M}} a_{ij} \big( \tilde{\vartheta}_j(t) - \tilde{\vartheta}_i(t) \big) + \mu_\vartheta F \sum_{k \in \mathcal{N}} a_{ik} \big( \hat{r}_k(t) - \Pi ( \tilde{\vartheta}_i(t) + \hat{v}_0(t) ) \big) \\
&= \Upsilon \tilde{\vartheta}_i(t) + \mu_\vartheta F \Pi \sum_{j \in \mathcal{M}} a_{ij} \big( \tilde{\vartheta}_j(t) - \tilde{\vartheta}_i(t) \big) - \mu_\vartheta F \Pi \sum_{k \in \mathcal{N}} a_{ik} \tilde{\vartheta}_i(t) + \mu_\vartheta F \sum_{k \in \mathcal{N}} a_{ik} \big( \hat{r}_k(t) - \Pi \hat{v}_0(t) \big) \\
&= \Upsilon \tilde{\vartheta}_i(t) + \mu_\vartheta F \Pi \sum_{j \in \mathcal{M}} a_{ij} \big( \tilde{\vartheta}_j(t) - \tilde{\vartheta}_i(t) \big) - \mu_\vartheta F \Pi \sum_{k \in \mathcal{N}} a_{ik} \tilde{\vartheta}_i(t) + \mu_\vartheta F \sum_{k \in \mathcal{N}} a_{ik} \big( \hat{r}_k(t) - \hat{r}_0(t) \big) \\
&= \Upsilon \tilde{\vartheta}_i(t) + \mu_\vartheta F \Pi \sum_{j \in \mathcal{M}} a_{ij} \big( \tilde{\vartheta}_j(t) - \tilde{\vartheta}_i(t) \big) - \mu_\vartheta F \Pi \sum_{k \in \mathcal{N}} a_{ik} \tilde{\vartheta}_i(t) + \mu_\vartheta F \sum_{k \in \mathcal{N}} a_{ik} \tilde{r}_k(t).
\end{aligned}$$
Let $\tilde{\vartheta}_\mathcal{M}(t) = \operatorname{col}(\tilde{\vartheta}_1(t), \ldots, \tilde{\vartheta}_M(t))$ and $\tilde{r}_\mathcal{N}(t) = \operatorname{col}(\tilde{r}_{M+1}(t), \ldots, \tilde{r}_N(t))$. Thus, we obtain
$$\dot{\tilde{\vartheta}}_\mathcal{M}(t) = \big( I_M \otimes \Upsilon - \mu_\vartheta ( L_1 \otimes F \Pi ) \big) \tilde{\vartheta}_\mathcal{M}(t) - \mu_\vartheta ( L_2 \otimes F ) \tilde{r}_\mathcal{N}(t).$$
By Lemma 1, under Assumption 2, $\underline{\delta}_{L_1} > 0$. Consequently, according to Lemma 2.1 of [76], the system matrix $I_M \otimes \Upsilon - \mu_\vartheta ( L_1 \otimes F \Pi )$ of the closed-loop system is Hurwitz provided $\mu_\vartheta > \underline{\delta}_{L_1}^{-1}$. As stated in (17), $\tilde{r}_\mathcal{N}(t)$ approaches zero asymptotically. Thus, by Lemma 2.7 of [51], $\lim_{t \to \infty} \tilde{\vartheta}_i(t) = 0$ for $i \in \mathcal{M}$, which, in turn, means that $\lim_{t \to \infty} \tilde{r}_i(t) = \lim_{t \to \infty} \Pi \tilde{\vartheta}_i(t) = 0$, thereby completing the proof. □
From Lemma 3, we can see that $\vartheta_i(t)$ is the estimate of $\hat{v}_0(t)$, and $\hat{r}_i(t)$ is the estimate of $\hat{r}_0(t)$. That is, $\hat{r}_i(t)$ defined by (18) functions as the pseudo target position estimator for the drones belonging to the set $\mathcal{M}$.
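A minimal sketch (forward-Euler stepping and the data layout are illustrative assumptions, not from the paper) of how the estimator (18) could be advanced one step for a drone in $\mathcal{M}$, reusing $F = \Theta \Pi^T$ from the Riccati solution above:

```python
import numpy as np

def step_target_estimator(theta_i, r_hat_neighbors, a_row, Upsilon, Pi, F, mu_theta, dt):
    """One forward-Euler step of the pseudo target position estimator (18)
    for a drone i in M (a drone that cannot sense the target directly).

    theta_i          : internal state vartheta_i, shape (3*(n+1),)
    r_hat_neighbors  : dict j -> pseudo target estimate r_hat_j of neighbor j, shape (3,)
    a_row            : dict j -> adjacency weight a_ij (nonzero entries only)
    """
    r_hat_i = Pi @ theta_i                       # own pseudo target estimate
    innovation = np.zeros(3)
    for j, a_ij in a_row.items():                # consensus-style correction from neighbors
        innovation += a_ij * (r_hat_neighbors[j] - r_hat_i)
    d_theta = Upsilon @ theta_i + mu_theta * (F @ innovation)
    return theta_i + dt * d_theta, r_hat_i
```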

3.3. Local IMC Law

First, we define a virtual tracking error $\hat{e}_i(t)$ for each drone as follows:
$$\hat{e}_i(t) = \begin{cases} \eta_i(t) - \hat{r}_i(t) - r_{fi}(t), & i \in \mathcal{M}, \\ e_i(t), & i \in \mathcal{N}. \end{cases}$$
For $i = 1, \ldots, N$, the desired trajectory $p_{ri}(t)$ is obtainable from the equivalent virtual leader model shown below:
$$\dot{\varpi}_i(t) = \Upsilon_i \varpi_i(t), \qquad p_{ri}(t) = \Pi_i \varpi_i(t),$$
where $\varpi_i(t) \in \mathbb{R}^{n_i + 3(n+1)}$, $\varpi_i(0) = \operatorname{col}(h_i(0), v_0(0))$, $\Upsilon_i = \mathcal{D}(\Phi_i, \Upsilon)$, and $\Pi_i = [\phi_i \;\; \Pi]$. Since all elements of $\Re(\sigma(\Upsilon))$ are zero and all elements of $\Re(\sigma(\Phi_i))$ are assumed to be non-negative, none of the elements of $\Re(\sigma(\Upsilon_i))$ are negative.
Assume that the minimal polynomial of $\Upsilon_i$ is as follows:
$$q^m_{\Upsilon_i}(\lambda) = \lambda^{n_{mi}} + \alpha_{i1} \lambda^{n_{mi}-1} + \cdots + \alpha_{i(n_{mi}-1)} \lambda + \alpha_{i n_{mi}}.$$
Let
$$g_{i1} = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ -\alpha_{i n_{mi}} & -\alpha_{i(n_{mi}-1)} & \cdots & -\alpha_{i1} \end{bmatrix}, \qquad g_{i2} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},$$
and
$$G_{i1} = I_3 \otimes g_{i1}, \qquad G_{i2} = I_3 \otimes g_{i2}.$$
Then, $(g_{i1}, g_{i2})$ is controllable. Additionally, the characteristic polynomial of $g_{i1}$ takes the form
$$q^c_{g_{i1}}(\lambda) = \lambda^{n_{mi}} + \alpha_{i1} \lambda^{n_{mi}-1} + \cdots + \alpha_{i(n_{mi}-1)} \lambda + \alpha_{i n_{mi}} = q^m_{\Upsilon_i}(\lambda).$$
Since $q^m_{\Upsilon_i}(\lambda)$ divides $q^c_{g_{i1}}(\lambda)$, according to Remark 1.23 of [77], $(G_{i1}, G_{i2})$ contains an internal model with three copies of $\Upsilon_i$. Additionally, for each $\lambda \in \mathbb{C}$,
$$\operatorname{rank} \begin{bmatrix} A_i^o - \lambda I_6 & B_i \\ C_i & 0 \end{bmatrix} = \operatorname{rank} \left( \begin{bmatrix} -\lambda & 1 & 0 \\ 0 & -k_i^* - \lambda & 1 \\ 1 & 0 & 0 \end{bmatrix} \otimes I_3 \right) = 9.$$
Thus, for $i = 1, \ldots, N$, since $(A_i^o, B_i)$ is controllable, by Lemma 1.26 of [77], there exists $K_i = [K_{i1} \;\; K_{i2}]$ such that $\begin{bmatrix} A_i^o & 0 \\ G_{i2} C_i & G_{i1} \end{bmatrix} + \begin{bmatrix} B_i \\ 0 \end{bmatrix} K_i$ is Hurwitz. On the other hand, since $(C_i, A_i^o)$ is observable, there exists $L_i$ such that $A_i^o - L_i C_i$ is Hurwitz. Now, we design the following local internal model control law:
$$u_i(t) = K_i z_i(t), \qquad \dot{z}_i(t) = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \hat{e}_i(t).$$
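The following is a sketch of (29) as a per-step update routine; forward-Euler stepping is an illustrative choice here (Section 4.2.1 of the paper discretizes by zero-order hold), and the gains $K_i$, $L_i$ and the internal-model pair $(G_{i1}, G_{i2})$ are assumed to have been designed as described above:

```python
import numpy as np

def imc_step(z_i, e_hat_i, Ao, B, C, G1, G2, K, L, dt):
    """One integration step of the local IMC law (29).

    z_i     : controller state, shape (6 + 3*n_mi,)
    e_hat_i : virtual tracking error of (22), shape (3,)
    Returns the control input u_i and the updated controller state.
    """
    K1, K2 = K[:, :6], K[:, 6:]
    Az = np.block([[Ao + B @ K1 - L @ C, B @ K2],
                   [np.zeros((G1.shape[0], 6)), G1]])
    Bz = np.vstack([L, G2])
    u_i = K @ z_i
    z_next = z_i + dt * (Az @ z_i + Bz @ e_hat_i)
    return u_i, z_next
```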

3.4. Stability Analysis

Theorem 1.
Given systems (2), (4), (6), assuming that Assumptions 1 and 2 are met, Problem 1 can be solved using the control law defined by (10), (18), (22), and (29).
Proof. 
Define
$$\bar{A}_{ci}^o \triangleq \begin{bmatrix} A_i^o & B_i K_{i1} & B_i K_{i2} \\ L_i C_i & A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ G_{i2} C_i & 0 & G_{i1} \end{bmatrix}.$$
Let
$$T_i = \begin{bmatrix} I_6 & -I_6 & 0_{6 \times 3 n_{mi}} \\ 0_{6 \times 6} & I_6 & 0_{6 \times 3 n_{mi}} \\ 0_{3 n_{mi} \times 6} & 0_{3 n_{mi} \times 6} & I_{3 n_{mi}} \end{bmatrix}, \qquad T_i^{-1} = \begin{bmatrix} I_6 & I_6 & 0_{6 \times 3 n_{mi}} \\ 0_{6 \times 6} & I_6 & 0_{6 \times 3 n_{mi}} \\ 0_{3 n_{mi} \times 6} & 0_{3 n_{mi} \times 6} & I_{3 n_{mi}} \end{bmatrix};$$
then, we have
$$T_i \bar{A}_{ci}^o T_i^{-1} = \begin{bmatrix} A_i^o - L_i C_i & 0 & 0 \\ L_i C_i & A_i^o + B_i K_{i1} & B_i K_{i2} \\ G_{i2} C_i & G_{i2} C_i & G_{i1} \end{bmatrix}.$$
Since $A_i^o - L_i C_i$ and $\begin{bmatrix} A_i^o + B_i K_{i1} & B_i K_{i2} \\ G_{i2} C_i & G_{i1} \end{bmatrix}$ are Hurwitz, $\bar{A}_{ci}^o$ is also Hurwitz. As a result, there exists $W \subseteq \mathbb{R}$ where, for any $w_i \in W$,
$$\bar{A}_{ci} \triangleq \begin{bmatrix} A_i & B_i K_{i1} & B_i K_{i2} \\ L_i C_i & A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ G_{i2} C_i & 0 & G_{i1} \end{bmatrix}$$
is Hurwitz.
Since $(G_{i1}, G_{i2})$ contains an internal model with three copies of $\Upsilon_i$ and $\underline{\delta}_{\Upsilon_i} \geq 0$, according to Lemma 1.27 of [77], for any $\Pi_i$, the equations
$$X_i \Upsilon_i = A_i X_i + B_i K_i Z_i, \qquad Z_i \Upsilon_i = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} Z_i + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( C_i X_i - \Pi_i \big)$$
admit a unique solution $(X_i, Z_i)$. Additionally, $(X_i, Z_i)$ complies with
$$0 = C_i X_i - \Pi_i.$$
Combining (2) and (29) gives
$$\dot{x}_i(t) = A_i x_i(t) + B_i K_i z_i(t).$$
For $i \in \mathcal{M}$,
$$\begin{aligned}
\dot{z}_i(t) &= \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( \eta_i(t) - \hat{r}_i(t) - r_{fi}(t) \big) \\
&= \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( C_i x_i(t) - \Pi_i \varpi_i(t) \big) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{\eta}_i(t) - \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{r}_i(t).
\end{aligned}$$
For $i \in \mathcal{N}$,
$$\dot{z}_i(t) = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( p_i(t) - r_0(t) - r_{fi}(t) \big) = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( C_i x_i(t) - \Pi_i \varpi_i(t) \big).$$
Let $\bar{x}_i(t) = x_i(t) - X_i \varpi_i(t)$ and $\bar{z}_i(t) = z_i(t) - Z_i \varpi_i(t)$. By (33), it is obtained that
$$\dot{\bar{x}}_i(t) = A_i x_i(t) + B_i K_i z_i(t) - X_i \Upsilon_i \varpi_i(t) = A_i \bar{x}_i(t) + B_i K_i \bar{z}_i(t).$$
For $i \in \mathcal{M}$,
$$\begin{aligned}
\dot{\bar{z}}_i(t) &= \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( C_i x_i(t) - \Pi_i \varpi_i(t) \big) - Z_i \Upsilon_i \varpi_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{\eta}_i(t) - \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{r}_i(t) \\
&= \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} \bar{z}_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} C_i \bar{x}_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{\eta}_i(t) - \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \tilde{r}_i(t).
\end{aligned}$$
Define $\bar{\delta}_i(t) = \operatorname{col}(\bar{x}_i(t), \bar{z}_i(t))$. It follows that
$$\dot{\bar{\delta}}_i(t) = \begin{bmatrix} A_i & B_i K_{i1} & B_i K_{i2} \\ L_i C_i & A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ G_{i2} C_i & 0 & G_{i1} \end{bmatrix} \bar{\delta}_i(t) + \begin{bmatrix} 0 \\ L_i \\ G_{i2} \end{bmatrix} \tilde{\eta}_i(t) - \begin{bmatrix} 0 \\ L_i \\ G_{i2} \end{bmatrix} \tilde{r}_i(t) = \bar{A}_{ci} \bar{\delta}_i(t) + \begin{bmatrix} 0 \\ L_i \\ G_{i2} \end{bmatrix} \tilde{\eta}_i(t) - \begin{bmatrix} 0 \\ L_i \\ G_{i2} \end{bmatrix} \tilde{r}_i(t).$$
Since for all $w_i \in W$ the matrix $\bar{A}_{ci}$ is Hurwitz, and for each $i \in \mathcal{M}$, $\lim_{t \to \infty} \tilde{\eta}_i(t) = 0$ and $\lim_{t \to \infty} \tilde{r}_i(t) = 0$, we can deduce, according to Lemma 2.7 of [51], that $\lim_{t \to \infty} \bar{\delta}_i(t) = 0$. Therefore, $\lim_{t \to \infty} \bar{x}_i(t) = 0$ and $\lim_{t \to \infty} \bar{z}_i(t) = 0$.
For $i \in \mathcal{N}$,
$$\dot{\bar{z}}_i(t) = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} z_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} \big( C_i x_i(t) - \Pi_i \varpi_i(t) \big) - Z_i \Upsilon_i \varpi_i(t) = \begin{bmatrix} A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ 0 & G_{i1} \end{bmatrix} \bar{z}_i(t) + \begin{bmatrix} L_i \\ G_{i2} \end{bmatrix} C_i \bar{x}_i(t).$$
Similarly, with $\bar{\delta}_i(t) = \operatorname{col}(\bar{x}_i(t), \bar{z}_i(t))$, we obtain
$$\dot{\bar{\delta}}_i(t) = \begin{bmatrix} A_i & B_i K_{i1} & B_i K_{i2} \\ L_i C_i & A_i^o + B_i K_{i1} - L_i C_i & B_i K_{i2} \\ G_{i2} C_i & 0 & G_{i1} \end{bmatrix} \bar{\delta}_i(t) = \bar{A}_{ci} \bar{\delta}_i(t).$$
Since for all $w_i \in W$ the matrix $\bar{A}_{ci}$ is Hurwitz, it follows that $\lim_{t \to \infty} \bar{\delta}_i(t) = 0$. Therefore, $\lim_{t \to \infty} \bar{x}_i(t) = 0$ and $\lim_{t \to \infty} \bar{z}_i(t) = 0$.
Moreover, for $i = 1, \ldots, N$, by (34), it is obtained that
$$e_i(t) = p_i(t) - r_0(t) - r_{fi}(t) = C_i x_i(t) - \Pi_i \varpi_i(t) = C_i \bar{x}_i(t) + \big( C_i X_i - \Pi_i \big) \varpi_i(t) = C_i \bar{x}_i(t).$$
This concludes that $\lim_{t \to \infty} e_i(t) = 0$, thereby completing the proof. □

4. Simulations

Here, the effectiveness of the proposed control protocol is demonstrated through numerical simulation and UAV flight control HIL simulation. Consider a drone swarm with $N = 4$. The nominal values and uncertain values of the damping ratios in (1) are set to $k_1^* = 1$, $k_2^* = 2$, $k_3^* = 3$, $k_4^* = 4$, and $w_1 = 0.5$, $w_2 = 0.5$, $w_3 = 0.5$, $w_4 = 0.5$, respectively. Figure 2 illustrates the communication digraph $\mathcal{G}$, with node 0 denoting the target and node $i$ denoting the $i$th drone. It is straightforward to confirm that both Assumptions 1 and 2 are fulfilled.
The position of the moving target is given by
$$r_0(t) = \begin{bmatrix} 100 \\ 10t \\ 300 \end{bmatrix},$$
which can be generated by (4) with
$$\Upsilon = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \otimes I_3, \qquad \Pi = \begin{bmatrix} 1 & 0 \end{bmatrix} \otimes I_3$$
and $v_0(0) = \operatorname{col}(100, 0, 300, 0, 10, 0)$.
The local formation vectors are given by
$$r_{f1}(t) = \begin{bmatrix} 100 \cos(0.5t - \tfrac{3\pi}{2}) \\ 0 \\ 100 \sin(0.5t - \tfrac{3\pi}{2}) \end{bmatrix}, \quad r_{f2}(t) = \begin{bmatrix} 100 \cos(0.5t - \pi) \\ 0 \\ 100 \sin(0.5t - \pi) \end{bmatrix}, \quad r_{f3}(t) = \begin{bmatrix} 100 \cos(0.5t - \tfrac{\pi}{2}) \\ 0 \\ 100 \sin(0.5t - \tfrac{\pi}{2}) \end{bmatrix}, \quad r_{f4}(t) = \begin{bmatrix} 100 \cos(0.5t) \\ 0 \\ 100 \sin(0.5t) \end{bmatrix},$$
which can be generated by the local bias system (6) with
$$\Phi_i = \begin{bmatrix} 0 & 0 & -0.5 \\ 0 & 0 & 0 \\ 0.5 & 0 & 0 \end{bmatrix}, \qquad \phi_i = I_3$$
and $h_1(0) = \operatorname{col}(0, 0, 100)$, $h_2(0) = \operatorname{col}(-100, 0, 0)$, $h_3(0) = \operatorname{col}(0, 0, -100)$, $h_4(0) = \operatorname{col}(100, 0, 0)$.
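As a quick consistency check on the signs reconstructed above (an illustrative verification, not part of the paper), the local bias system (6) with this $\Phi_i$ reproduces the circular formation vectors, since $r_{fi}(t) = e^{\Phi_i t} h_i(0)$ when $\phi_i = I_3$:

```python
import numpy as np
from scipy.linalg import expm

Phi = np.array([[0.0, 0.0, -0.5],
                [0.0, 0.0,  0.0],
                [0.5, 0.0,  0.0]])
h1_0 = np.array([0.0, 0.0, 100.0])            # h_1(0), giving r_f1(0) = (0, 0, 100)

t = 2.0
r_f1 = expm(Phi * t) @ h1_0                    # solution of (6) with phi_i = I_3
analytic = np.array([100 * np.cos(0.5 * t - 1.5 * np.pi),
                     0.0,
                     100 * np.sin(0.5 * t - 1.5 * np.pi)])
print(np.allclose(r_f1, analytic))             # True
```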
Let the gain of the pseudo drone position estimator (10) be $\mu_\eta = 3$. For the communication digraph $\mathcal{G}$, $L_1 = \begin{bmatrix} 2 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}$, and thus, $\underline{\delta}_{L_1}^{-1} = 1$. Therefore, let the gain of the pseudo target position estimator (18) be $\mu_\vartheta = 20$, which satisfies $\mu_\vartheta > \underline{\delta}_{L_1}^{-1}$. For the internal model control law (29), let $K_i$ be such that the poles of $\begin{bmatrix} A_i^o & 0 \\ G_{i2} C_i & G_{i1} \end{bmatrix} + \begin{bmatrix} B_i \\ 0 \end{bmatrix} K_i$ are $[-0.5, -0.6, -0.7, -0.8, -0.9, -1, -1.1, -1.2, -1.3, -1.4, -1.5, -1.6, -1.7, -1.8, -1.9, -2, -2.1, -2.2]$, and let $L_i$ be such that the poles of $A_i^o - L_i C_i$ are $[-1, -2, -2.5, -3, -3.5, -4]$. Moreover, let
$$G_{i1} = I_3 \otimes \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -0.25 & 0 \end{bmatrix}, \qquad G_{i2} = I_3 \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.$$
The initial positions and initial velocities of the drones are set to $p_1(0) = \operatorname{col}(0, 0, 350)$, $p_2(0) = \operatorname{col}(97, 0, 154)$, $p_3(0) = \operatorname{col}(15, 0, 200)$, $p_4(0) = \operatorname{col}(80, 0, 350)$, and $v_1(0) = v_2(0) = v_3(0) = v_4(0) = \operatorname{col}(0, 0, 0)$, respectively. Moreover, it is assumed that, for $i = 1, \ldots, 4$, $\eta_i(0) = 0$ and $z_i(0) = 0$, and for $i = 2, \ldots, 4$, $\vartheta_i(0) = 0$.
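A sketch of how these design steps could be reproduced numerically; the $L_1$ entries and pole signs follow the reconstruction above, and scipy's `place_poles` is used as one possible pole-placement routine (an illustrative choice, not the authors' code):

```python
import numpy as np
from scipy.signal import place_poles

# L_1 block of the Laplacian over the drones that cannot see the target
L1 = np.array([[2.0, 0.0, -1.0],
               [0.0, 1.0,  0.0],
               [0.0, -1.0, 1.0]])
delta_min = min(np.linalg.eigvals(L1).real)            # = 1 here
mu_theta = 20.0
assert mu_theta > 1.0 / delta_min                       # gain condition of Lemma 3

# pole placement for the IMC gains of one drone (k_i* = 1 as an example)
Ao = np.kron(np.array([[0.0, 1.0], [0.0, -1.0]]), np.eye(3))
B = np.kron(np.array([[0.0], [1.0]]), np.eye(3))
C = np.kron(np.array([[1.0, 0.0]]), np.eye(3))
g1 = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, -0.25, 0]], dtype=float)
g2 = np.array([[0.0], [0.0], [0.0], [1.0]])
G1, G2 = np.kron(np.eye(3), g1), np.kron(np.eye(3), g2)

A_aug = np.block([[Ao, np.zeros((6, 12))], [G2 @ C, G1]])
B_aug = np.vstack([B, np.zeros((12, 3))])
poles_K = -np.arange(0.5, 2.25, 0.1)                    # -0.5, -0.6, ..., -2.2 (18 poles)
K = -place_poles(A_aug, B_aug, poles_K).gain_matrix     # so A_aug + B_aug @ K has these poles

poles_L = np.array([-1.0, -2.0, -2.5, -3.0, -3.5, -4.0])
L = place_poles(Ao.T, C.T, poles_L).gain_matrix.T       # so Ao - L @ C has these poles
```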

4.1. Numerical Simulation

We performed the numerical simulation on the MATLAB platform. The results of the simulation are shown in Figure 3, Figure 4, and Figure 5b. Specifically, Figure 3 illustrates the control inputs of the four drones in each dimension at each moment. Figure 4 illustrates the three-dimensional trajectories of the drones and the target. Finally, Figure 5b illustrates the tracking error of each drone in each dimension, demonstrating that the tracking error of each drone eventually converges to zero as required by the control objective (9).
To better demonstrate the superiority of the proposed robust cooperative protocol, we compare a modified feedforward non-robust control method based on [66] with the control protocol proposed in this paper. Figure 2 is still used as the communication digraph $\mathcal{G}$. The simulation results of the modified feedforward non-robust control method based on [66] are shown in Figure 6, where Figure 6a,b depict the tracking error of each drone when the damping ratio in (1) excludes and includes, respectively, the uncertain part $w_i$. It can be seen that this control method is effective when there are no uncertain system parameters, but when there are uncertainties in the system parameters, the tracking error of each drone cannot converge to zero, which shows that the method is not robust. When the robust cooperative control protocol proposed in this paper is adopted, the simulation results are shown in Figure 5. It can be seen that the proposed control protocol achieves the pursuit of the target well regardless of whether there are system parameter uncertainties, which reflects its superiority.
The numerical simulation results show that the proposed robust cooperative control protocol performs excellently in solving the drone swarm pursuit problem for a moving target through the pseudo drone position estimator, the pseudo target position estimator, and the local internal model control law. The control system exhibits rapid response and good error convergence, underscoring its capability in handling uncertainty and adapting to the target trajectory.

4.2. UAV Flight Control HIL Simulation

Besides the pure numerical simulation, we further conducted a simulation on the Links-RT UAV flight control HIL simulation platform, which is an experimental method that integrates the PX4 flight control board into the simulation loop. The UAV flight control HIL simulation platform is capable of simulating the dynamics of the aircraft.

4.2.1. Discretization of the Control Law

As the PX4 flight control board uses discrete-time signals for processing and control, it is necessary to discretize the control law. Let the sampling time be $T = 0.01$ s. Then, for $i = 1, \ldots, N$, the pseudo drone position estimator (10) can be discretized in the following form:
$$\eta_i(k+1) = \eta_i(k) + T v_i(k) + T \mu_\eta \sum_{j=1}^{N} a_{ij} \big( \eta_j(k) - \eta_i(k) - p_{ji}(k) \big), \qquad p_{ji}(k) = p_j(k) - p_i(k).$$
For $i \in \mathcal{M}$, the pseudo target position estimator (18) can be discretized in the following form:
$$\vartheta_i(k+1) = \vartheta_i(k) + T \Upsilon \vartheta_i(k) + T \mu_\vartheta F \sum_{j=1}^{N} a_{ij} \big( \hat{r}_j(k) - \hat{r}_i(k) \big), \qquad \hat{r}_i(k) = \Pi \vartheta_i(k).$$
In addition, for $i = 1, \ldots, N$, the internal model control law (29) can be discretized using the zero-order hold method in the following form:
$$u_i(k) = K_i z_i(k), \qquad z_i(k+1) = G_{i1}^d z_i(k) + G_{i2}^d \hat{e}_i(k),$$
where $G_{i1}^d$ and $G_{i2}^d$ are the constant matrices obtained after discretization.
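Putting the three discretized updates together, one sampling period of the protocol for drone $i$ might look like the following sketch; the data structures, field names, and the membership flag `in_N` are illustrative placeholders, not from the paper:

```python
import numpy as np

def control_cycle(state, meas, params):
    """One sampling period T of the discretized protocol for a single drone.

    state  : dict with keys 'eta', 'theta', 'z' (estimator / controller states)
    meas   : dict with relative positions 'p_ji', own velocity 'v_i', neighbor
             estimates 'eta_j' and 'r_hat_j', and, if the drone is in N,
             its relative position to the target 'p_i_minus_r0'
    params : dict with 'T', 'mu_eta', 'mu_theta', 'a' (neighbor weights a_ij),
             'F', 'Upsilon', 'Pi', 'G1d', 'G2d', 'K', 'r_f_i', 'in_N'
    """
    T, a = params['T'], params['a']

    # discretized pseudo drone position estimator (uses only relative positions)
    corr = np.zeros(3)
    for j, a_ij in a.items():
        corr += a_ij * (meas['eta_j'][j] - state['eta'] - meas['p_ji'][j])
    eta_next = state['eta'] + T * (meas['v_i'] + params['mu_eta'] * corr)

    # pseudo target position estimate: direct for drones in N, (18)-type update otherwise
    if params['in_N']:
        r_hat_i = state['eta'] - meas['p_i_minus_r0']
        theta_next = state['theta']
    else:
        r_hat_i = params['Pi'] @ state['theta']
        innov = np.zeros(3)
        for j, a_ij in a.items():
            innov += a_ij * (meas['r_hat_j'][j] - r_hat_i)
        theta_next = state['theta'] + T * (params['Upsilon'] @ state['theta']
                                           + params['mu_theta'] * (params['F'] @ innov))

    # virtual tracking error (22) and the discretized internal model control law
    if params['in_N']:
        e_hat = meas['p_i_minus_r0'] - params['r_f_i']
    else:
        e_hat = state['eta'] - r_hat_i - params['r_f_i']
    u_i = params['K'] @ state['z']
    z_next = params['G1d'] @ state['z'] + params['G2d'] @ e_hat

    return u_i, {'eta': eta_next, 'theta': theta_next, 'z': z_next}
```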

4.2.2. UAV Flight Control HIL Simulation Setup

The Links-RT UAV flight control HIL simulation platform is shown in Figure 7. Based on this platform, we established the simulation architecture shown in Figure 8. We use τ to represent the system timestamp, which records the sampling time point for each data point. Figure 9 presents the control inputs of the four drones in each dimension at each moment. Figure 10 illustrates the tracking error of each drone in each dimension. Figure 11 illustrates the position information of the drones during the entire simulation process. The simulation results demonstrate that the proposed robust cooperative control protocol effectively addresses the formation pursuit problem.
The results of the HIL simulation show that the proposed control protocol also exhibits good stability and reliability in an HIL environment that is closer to the actual flight control system, further verifying the potential of the proposed control protocol in practical applications.

5. Conclusions

This paper addresses the problem of a swarm of second-order drones with uncertain parameters cooperating to pursue a moving target in a flexible formation. The proposed distributed control protocol synthesizes a pseudo drone position estimator, a pseudo target position estimator, and a local internal model approach, where the pseudo drone position estimator and the pseudo target position estimator are used to estimate the position of each drone and of the target, respectively, both subject to a common unknown constant deviation. By calculating the difference between these estimates, each drone can obtain its relative position with respect to the target. This capability allows for the development of a local IMC strategy that overcomes constraints such as the absence of communication between the drones and the target, the unavailability of absolute positions for the drones, and uncertain system parameters. The numerical simulation and UAV flight control HIL simulation results demonstrate the effectiveness of the proposed control protocol. However, the protocol is currently only applicable to fixed communication network topologies with reliable communication links. Switching communication topologies and unreliable communication will be investigated in future work.

Author Contributions

Conceptualization, H.G. and H.C.; methodology, A.Z., W.L., and H.C.; software, A.Z.; validation, A.Z.; formal analysis, A.Z. and H.C.; investigation, A.Z.; writing—original draft preparation, A.Z.; writing—review and editing, A.Z. and H.C.; visualization, A.Z.; supervision, H.G. and H.C.; funding acquisition, H.G. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grant numbers 62173149, 62276104, and U22A2062; in part by the Guangdong Natural Science Foundation under grant numbers 2021A1515012584 and 2022A1515011262; and in part by the Fundamental Research Funds for the Central Universities.

Data Availability Statement

The data of this paper are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bai, H.; Wen, J.T. Cooperative load transport: A formation-control perspective. IEEE Trans. Robot. 2010, 26, 742–750. [Google Scholar] [CrossRef]
  2. Moshtagh, N.; Jadbabaie, A. Distributed geodesic control laws for flocking of nonholonomic agents. IEEE Trans. Autom. Control 2007, 52, 681–686. [Google Scholar] [CrossRef]
  3. Wen, G.; Chen, C.P.; Feng, J.; Zhou, N. Optimized multi-agent formation control based on an identifier–actor–critic reinforcement learning algorithm. IEEE Trans. Fuzzy Syst. 2017, 26, 2719–2731. [Google Scholar] [CrossRef]
  4. Zheng, R.; Liu, Y.; Sun, D. Enclosing a target by nonholonomic mobile robots with bearing-only measurements. Automatica 2015, 53, 400–407. [Google Scholar] [CrossRef]
  5. Benda, M.; Jagannathan, V.; Dodhiawala, R. On optimal cooperation of knowledge sources: An empirical investigation. In Technical Report BCS-G2010-28; Boeing Advanced Technology Center, Boeing Computing Services: Seattle, WA, USA, 1986. [Google Scholar]
  6. Isaacs, R. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization; Courier Corporation: Chelmsford, MA, USA, 1999. [Google Scholar]
  7. Vidal, R.; Shakernia, O.; Kim, H.J.; Shim, D.H.; Sastry, S. Probabilistic pursuit-evasion games: Theory, implementation, and experimental evaluation. IEEE Trans. Robot. Autom. 2002, 18, 662–669. [Google Scholar] [CrossRef]
  8. Xu, Y.; Yang, H.; Jiang, B.; Polycarpou, M.M. Multiplayer pursuit-evasion differential games with malicious pursuers. IEEE Trans. Autom. Control 2022, 67, 4939–4946. [Google Scholar] [CrossRef]
  9. Kobayashi, Y.; Otsubo, K.; Hosoe, S. Design of decentralized capturing behavior by multiple mobile robots. In Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS’06), Prague, Czech Republic, 15–16 June 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 13–18. [Google Scholar] [CrossRef]
  10. Kim, T.H.; Sugie, T. Cooperative control for target-capturing task based on a cyclic pursuit strategy. Automatica 2007, 43, 1426–1431. [Google Scholar] [CrossRef]
  11. Mas, I.; Li, S.; Acain, J.; Kitts, C. Entrapment/escorting and patrolling missions in multi-robot cluster space control. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 5855–5861. [Google Scholar] [CrossRef]
  12. Deghat, M.; Davis, E.; See, T.; Shames, I.; Anderson, B.D.; Yu, C. Target localization and circumnavigation by a non-holonomic robot. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1227–1232. [Google Scholar] [CrossRef]
  13. Hafez, A.T.; Marasco, A.J.; Givigi, S.N.; Iskandarani, M.; Yousefi, S.; Rabbath, C.A. Solving multi-UAV dynamic encirclement via model predictive control. IEEE Trans. Control Syst. Technol. 2015, 23, 2251–2265. [Google Scholar] [CrossRef]
  14. Franchi, A.; Stegagno, P.; Oriolo, G. Decentralized multi-robot encirclement of a 3D target with guaranteed collision avoidance. Auton. Robot. 2016, 40, 245–265. [Google Scholar] [CrossRef]
  15. Chen, Z. A cooperative target-fencing protocol of multiple vehicles. Automatica 2019, 107, 591–594. [Google Scholar] [CrossRef]
  16. Hu, B.B.; Zhang, H.T.; Wang, J. Multiple-target surrounding and collision avoidance with second-order nonlinear multiagent systems. IEEE Trans. Ind. Electron. 2020, 68, 7454–7463. [Google Scholar] [CrossRef]
  17. Ringbäck, R.; Wei, J.; Erstorp, E.S.; Kuttenkeuler, J.; Johansen, T.A.; Johansson, K.H. Multi-agent formation tracking for autonomous surface vehicles. IEEE Trans. Control Syst. Technol. 2020, 29, 2287–2298. [Google Scholar] [CrossRef]
  18. Yang, Z.; Zhu, S.; Chen, C.; Feng, G.; Guan, X. Entrapping a target in an arbitrarily shaped orbit by a single robot using bearing measurements. Automatica 2020, 113, 108805. [Google Scholar] [CrossRef]
  19. Shao, X.; Yue, X.; Zhang, W. Coordinated moving-target encircling control for networked quadrotors with event-triggered extended state observers. IEEE Syst. J. 2023, 17, 6576–6587. [Google Scholar] [CrossRef]
  20. Zheng, B.; Song, C.; Liu, L. Cyclic-pursuit-based circular formation control of mobile agents with limited communication ranges and communication delays. IEEE/CAA J. Autom. Sin. 2023, 10, 1860–1870. [Google Scholar] [CrossRef]
  21. Lu, K.; Dai, S.L.; Jin, X. Cooperative Constrained Enclosing Control of Multirobot Systems in Obstacle Environments. IEEE Trans. Control Netw. Syst. 2024, 11, 718–730. [Google Scholar] [CrossRef]
  22. Pei, H.; Chen, S.; Lai, Q. Multi-target consensus circle pursuit for multi-agent systems via a distributed multi-flocking method. Int. J. Syst. Sci. 2016, 47, 3741–3748. [Google Scholar] [CrossRef]
  23. Deghat, M.; Xia, L.; Anderson, B.D.; Hong, Y. Multi-target localization and circumnavigation by a single agent using bearing measurements. Int. J. Robust Nonlinear Control 2015, 25, 2362–2374. [Google Scholar] [CrossRef]
  24. Shao, J.; Tian, Y.P. Multi-target localisation and circumnavigation by a multi-agent system with bearing measurements in 2D space. Int. J. Syst. Sci. 2018, 49, 15–26. [Google Scholar] [CrossRef]
  25. Liu, Y.; Fu, Y.; He, W.; Hui, Q. Modeling and observer-based vibration control of a flexible spacecraft with external disturbances. IEEE Trans. Ind. Electron. 2018, 66, 8648–8658. [Google Scholar] [CrossRef]
  26. Liu, Y.; Chen, X.; Mei, Y.; Wu, Y. Observer-based boundary control for an asymmetric output-constrained flexible robotic manipulator. Sci. China. Inf. Sci. 2022, 65, 139203. [Google Scholar] [CrossRef]
  27. Liu, Y.; Yao, X.; Zhao, W. Distributed neural-based fault-tolerant control of multiple flexible manipulators with input saturations. Automatica 2023, 156, 111202. [Google Scholar] [CrossRef]
  28. Pounds, P.E.; Bersak, D.R.; Dollar, A.M. Stability of small-scale UAV helicopters and quadrotors with added payload mass under PID control. Auton. Robot. 2012, 33, 129–142. [Google Scholar] [CrossRef]
  29. Rahardian, A.R.; Nazaruddin, Y.Y.; Nadhira, V.; Bandong, S. Implementation of Parallel Navigation and PID Controller for Drone Swarm Pursuit. IFAC-PapersOnLine 2023, 56, 2513–2518. [Google Scholar] [CrossRef]
  30. Liu, H.; Lu, G.; Zhong, Y. Robust LQR attitude control of a 3-DOF laboratory helicopter for aggressive maneuvers. IEEE Trans. Ind. Electron. 2012, 60, 4627–4636. [Google Scholar] [CrossRef]
  31. Yit, K.K.; Rajendran, P. Enhanced longitudinal motion control of UAV simulation by using P-LQR method. Int. J. Micro Air Veh. 2015, 7, 203–210. [Google Scholar] [CrossRef]
  32. Bouffard, P. On-Board Model Predictive Control of a Quadrotor Helicopter: Design, Implementation, and Experiments. Electrical Engineering and Computer Sciences. Ph.D Thesis, University of California, Berkeley, CA, USA, 2012. [Google Scholar]
  33. Kamel, M.; Stastny, T.; Alexis, K.; Siegwart, R. Model predictive control for trajectory tracking of unmanned aerial vehicles using robot operating system. In Robot Operating System (ROS) the Complete Reference (Volume 2); Springer: Cham, Switzerland, 2017; pp. 3–39. [Google Scholar] [CrossRef]
  34. Romero, A.; Sun, S.; Foehn, P.; Scaramuzza, D. Model predictive contouring control for time-optimal quadrotor flight. IEEE Trans. Robot. 2022, 38, 3340–3356. [Google Scholar] [CrossRef]
  35. Dydek, Z.T.; Annaswamy, A.M.; Lavretsky, E. Adaptive control of quadrotor UAVs: A design trade study with flight evaluations. IEEE Trans. Control Syst. Technol. 2012, 21, 1400–1406. [Google Scholar] [CrossRef]
  36. Jaiton, V.; Rothomphiwat, K.; Phetpoon, T.; Manawakul, M.; Manoonpong, P. An Integrated Adaptive Control System for Obstacle Detection and Online Speed Adaptation of Autonomous Drones. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 8–11 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1355–1356. [Google Scholar] [CrossRef]
  37. Liu, H.; Bai, Y.; Lu, G.; Shi, Z.; Zhong, Y. Robust tracking control of a quadrotor helicopter. J. Intell. Robot. Syst. 2014, 75, 595–608. [Google Scholar] [CrossRef]
  38. Wang, W.; Chen, X.; Jia, J.; Xing, S.; Gao, Y.; Xie, M. Distributed robust formation tracking control for multi-UAV systems. Trans. Inst. Meas. Control. 2022, 01423312221124067. [Google Scholar] [CrossRef]
  39. Bai, A.; Luo, Y.; Zhang, H.; Li, Z. L 2-gain robust trajectory tracking control for quadrotor UAV with unknown disturbance. Asian J. Control 2022, 24, 3043–3055. [Google Scholar] [CrossRef]
  40. Lee, D.; Jin Kim, H.; Sastry, S. Feedback linearization vs. adaptive sliding mode control for a quadrotor helicopter. Int. J. Control. Autom. Syst. 2009, 7, 419–428. [Google Scholar] [CrossRef]
  41. Sadiq, M.; Hayat, R.; Zeb, K.; Al-Durra, A.; Ullah, Z. Robust Feedback Linearization Based Disturbance Observer Control of Quadrotor UAV. IEEE Access 2024. [Google Scholar] [CrossRef]
  42. Doukhi, O.; Lee, D.J. Neural network-based robust adaptive certainty equivalent controller for quadrotor UAV with unknown disturbances. Int. J. Control. Autom. Syst. 2019, 17, 2365–2374. [Google Scholar] [CrossRef]
  43. Jiang, F.; Pourpanah, F.; Hao, Q. Design, implementation, and evaluation of a neural-network-based quadcopter UAV system. IEEE Trans. Ind. Electron. 2019, 67, 2076–2085. [Google Scholar] [CrossRef]
  44. Al-Mahturi, A.; Santoso, F.; Garratt, M.A.; Anavatti, S.G. Self-learning in aerial robotics using type-2 fuzzy systems: Case study in hovering quadrotor flight control. IEEE Access 2021, 9, 119520–119532. [Google Scholar] [CrossRef]
  45. Zhang, Y.; Chen, Z.; Zhang, X.; Sun, Q.; Sun, M. A novel control scheme for quadrotor UAV based upon active disturbance rejection control. Aerosp. Sci. Technol. 2018, 79, 601–609. [Google Scholar] [CrossRef]
  46. Yuan, Y.; Cheng, L.; Wang, Z.; Sun, C. Position tracking and attitude control for quadrotors via active disturbance rejection control method. Sci. China Inf. Sci. 2019, 62, 1–10. [Google Scholar] [CrossRef]
  47. Li, J.; Liu, J.; Huangfu, S.; Cao, G.; Yu, D. Leader-follower formation of light-weight UAVs with novel active disturbance rejection control. Appl. Math. Model. 2023, 117, 577–591. [Google Scholar] [CrossRef]
  48. Zheng, E.H.; Xiong, J.J.; Luo, J.L. Second order sliding mode control for a quadrotor UAV. ISA Trans. 2014, 53, 1350–1356. [Google Scholar] [CrossRef]
  49. Fu, X.; He, J. Robust adaptive sliding mode control based on iterative learning for quadrotor UAV. IETE J. Res. 2023, 69, 5484–5496. [Google Scholar] [CrossRef]
  50. Mughees, A.; Ahmad, I. Multi-optimization of novel conditioned adaptive barrier function integral terminal SMC for trajectory tracking of a quadcopter System. IEEE Access 2023. [Google Scholar] [CrossRef]
  51. Cai, H.; Su, Y.; Huang, J. Cooperative Control of Multi-Agent Systems: Distributed-Observer and Distributed-Internal-Model Approaches; Springer Nature: Berlin, Germany, 2022. [Google Scholar]
  52. Wang, X. Distributed formation output regulation of switching heterogeneous multi-agent systems. Int. J. Syst. Sci. 2013, 44, 2004–2014. [Google Scholar] [CrossRef]
  53. Li, W.; Chen, Z.; Liu, Z. Output regulation distributed formation control for nonlinear multi-agent systems. Nonlinear Dyn. 2014, 78, 1339–1348. [Google Scholar] [CrossRef]
  54. Haghshenas, H.; Badamchizadeh, M.A.; Baradarannia, M. Containment control of heterogeneous linear multi-agent systems. Automatica 2015, 54, 210–216. [Google Scholar] [CrossRef]
  55. Lu, M.; Liu, L. Distributed feedforward approach to cooperative output regulation subject to communication delays and switching networks. IEEE Trans. Autom. Control 2016, 62, 1999–2005. [Google Scholar] [CrossRef]
  56. Seyboth, G.S.; Ren, W.; Allgöwer, F. Cooperative control of linear multi-agent systems via distributed output regulation and transient synchronization. Automatica 2016, 68, 132–139. [Google Scholar] [CrossRef]
  57. Li, S.; Er, M.J.; Zhang, J. Distributed adaptive fuzzy control for output consensus of heterogeneous stochastic nonlinear multiagent systems. IEEE Trans. Fuzzy Syst. 2017, 26, 1138–1152. [Google Scholar] [CrossRef]
  58. Cai, H.; Lewis, F.L.; Hu, G.; Huang, J. The adaptive distributed observer approach to the cooperative output regulation of linear multi-agent systems. Automatica 2017, 75, 299–305. [Google Scholar] [CrossRef]
  59. Lu, M.; Liu, L. Cooperative Output Regulation of Linear Multi-Agent Systems by a Novel Distributed Dynamic Compensator. IEEE Trans. Autom. Control 2017, 62, 6481–6488. [Google Scholar] [CrossRef]
  60. Lu, M.; Liu, L. Leader-following consensus of multiple uncertain Euler–Lagrange systems subject to communication delays and switching networks. IEEE Trans. Autom. Control 2017, 63, 2604–2611. [Google Scholar] [CrossRef]
  61. Lu, M.; Liu, L. Leader–following consensus of multiple uncertain Euler–Lagrange systems with unknown dynamic leader. IEEE Trans. Autom. Control 2019, 64, 4167–4173. [Google Scholar] [CrossRef]
  62. Zhang, Y.; Wen, Y.; Chen, G.; Chen, Y.Y. Distributed adaptive observer-based output formation-containment control for heterogeneous multi-agent systems with unknown inputs. IET Control Theory Appl. 2020, 14, 2205–2212. [Google Scholar] [CrossRef]
  63. Yan, B.; Shi, P.; Lim, C.C.; Wu, C. Robust formation control for multiagent systems based on adaptive observers. IEEE Syst. J. 2021, 16, 3139–3150. [Google Scholar] [CrossRef]
  64. Yuan, C.; Yan, H.; Wang, Y.; Chang, Y.; Zhan, X. Formation-containment control of heterogeneous linear multi-agent systems with adaptive event-triggered strategies. Int. J. Syst. Sci. 2022, 53, 1942–1957. [Google Scholar] [CrossRef]
  65. Huang, W.; Liu, H.; Huang, J. Distributed robust containment control of linear heterogeneous multi-agent systems: An output regulation approach. IEEE/CAA J. Autom. Sin. 2022, 9, 864–877. [Google Scholar] [CrossRef]
  66. Duan, J.; Duan, G.; Cheng, S.; Cao, S.; Wang, G. Fixed-time time-varying output formation–containment control of heterogeneous general multi-agent systems. ISA Trans. 2023, 137, 210–221. [Google Scholar] [CrossRef]
  67. Wu, Y.; Li, J.; Liu, L.; Wu, C. Distributed adaptive practical formation tracking for multi-agent systems with actuator faults. Int. J. Robust Nonlinear Control 2023, 33, 1633–1654. [Google Scholar] [CrossRef]
  68. Lu, Y.; Xu, Z.; Li, L.; Zhang, J.; Chen, W. Formation preview tracking for heterogeneous multi-agent systems: A dynamical feedforward output regulation approach. ISA Trans. 2023, 133, 102–115. [Google Scholar] [CrossRef]
  69. Han, T.; Zheng, W.X. Bipartite output consensus for heterogeneous multi-agent systems via output regulation approach. IEEE Trans. Circuits Syst. II Express Briefs 2020, 68, 281–285. [Google Scholar] [CrossRef]
  70. Jiang, D.; Wen, G.; Peng, Z.; Wang, J.L.; Huang, T. Fully distributed pull-based event-triggered bipartite fixed-time output control of heterogeneous systems with an active leader. IEEE Trans. Cybern. 2022, 53, 3089–3100. [Google Scholar] [CrossRef]
  71. Li, W.; Zhang, H.; Mu, Y.; Wang, Y. Bipartite Time-Varying Output Formation Tracking for Multiagent Systems with Multiple Heterogeneous Leaders under Signed Digraph. IEEE Trans. Ind. Informatics 2023, 19, 11070–11079. [Google Scholar] [CrossRef]
  72. Zhang, J.; Yao, Y.; Wang, J.A.; Li, Z.; Feng, P.; Bai, W. Distributed Bipartite Output Formation Control for Heterogeneous Discrete-Time Linear Multi-Agent Systems. IEEE Access 2024, 12, 18901–18912. [Google Scholar] [CrossRef]
  73. Dong, X.; Yu, B.; Shi, Z.; Zhong, Y. Time-varying formation control for unmanned aerial vehicles: Theories and applications. IEEE Trans. Control Syst. Technol. 2014, 23, 340–348. [Google Scholar] [CrossRef]
  74. Dong, X.; Zhou, Y.; Ren, Z.; Zhong, Y. Time-varying formation tracking for second-order multi-agent systems subjected to switching topologies with application to quadrotor formation flying. IEEE Trans. Ind. Electron. 2016, 64, 5014–5024. [Google Scholar] [CrossRef]
  75. Dong, X.; Meng, F.; Shi, Z.; Lu, G.; Zhong, Y. Output containment control for swarm systems with general linear dynamics: A dynamic output feedback approach. Syst. Control Lett. 2014, 71, 31–37. [Google Scholar] [CrossRef]
  76. Cai, H.; Huang, J. Output based adaptive distributed output observer for leader–follower multiagent systems. Automatica 2021, 125, 109413. [Google Scholar] [CrossRef]
  77. Huang, J. Nonlinear Output Regulation: Theory and Applications; SIAM: Philadelphia, PA, USA, 2004. [Google Scholar]
Figure 1. An example of two types of drones. Node 0 represents the moving target. Node 5 represents the drone that can measure its position relative to the target, and nodes i , i = 1 , , 4 represent the drones that cannot.
Figure 2. The communication digraph G . In this digraph, node 0 denotes the target and node i denotes the ith drone. Only drone #1 can obtain its position relative to the target.
Figure 3. The control input of each drone.
Figure 4. The trajectories of the drones and the target.
Figure 5. Tracking error of each drone using the control protocol proposed in this paper.
Figure 6. Tracking error of each drone using the control method in [66].
Figure 7. The Links-RT UAV flight control HIL simulation platform.
Figure 8. Architecture of the UAV flight control HIL simulation platform.
Figure 9. The control input of each drone.
Figure 10. The tracking error of each drone.
Figure 11. Trajectories of the drones.
Table 1. Classic drone operation algorithms.
Algorithm | Characteristic
PID [28,29] | Simple, widely used, relies on precise tuning
LQR [30,31] | Optimal control, minimizes a quadratic cost function, suitable for linear systems
Model predictive control [32,33,34] | Predictive, optimization-based, robust, real-time, handles constraints
Adaptive control [35,36] | Adjusts parameters in real time, handles system uncertainties
Robust control [37,38,39] | Ensures stability under uncertainties, strong disturbance rejection
Feedback linearization [40,41] | Transforms nonlinear systems into linear ones, precise control, requires an accurate model
Intelligent control [42,43,44] | Uses AI techniques, adaptive, handles complex and uncertain environments
Active disturbance rejection control [45,46,47] | Strong disturbance rejection, model-free, robust to uncertainties
Sliding-mode control [48,49,50] | High robustness, good disturbance rejection, may cause chattering
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
