Article

Robust Distributed Containment Control with Adaptive Performance and Collision Avoidance for Multi-Agent Systems

by Charalampos P. Bechlioulis 1,2

1 Division of Systems and Automatic Control, Department of Electrical and Computer Engineering, University of Patras, 26504 Patras, Greece
2 Athena Research Center, Robotics Institute, Artemidos 6 & Epidavrou, 15125 Maroussi, Greece
Electronics 2024, 13(8), 1439; https://doi.org/10.3390/electronics13081439
Submission received: 15 March 2024 / Revised: 8 April 2024 / Accepted: 9 April 2024 / Published: 11 April 2024
(This article belongs to the Special Issue Control and Applications of Intelligent Unmanned Aerial Vehicle)

Abstract

This paper deals with the containment control problem for multi-agent systems. The objective is to develop a distributed control scheme that leads a sub-group of the agents, called followers, within the convex hull that is formed by the leaders, which operate autonomously. Towards this direction, we propose a twofold approach comprising the following: (i) a cyber layer, where the agents establish, through the communication network, a consensus on a reference trajectory that converges exponentially fast within the convex hull of the leaders and (ii) a physical layer, where each agent tracks the aforementioned trajectory while avoiding collisions with other members of the multi-agent team. The main contributions of this work lie in the robustness of the proposed framework in both the trajectory estimation and the tracking control tasks, as well as the guaranteed collision avoidance, despite the presence of dynamic leaders and bounded but unstructured disturbances. A simulation study of a multi-agent system composed of five followers and four leaders demonstrates the applicability of the proposed scheme and verifies its robustness against both external disturbances that act on the follower model and the dynamic motion of the leaders. A comparison with a related work is also included to outline the strong properties of the proposed approach.

1. Introduction

During the last two decades, the field of multi-agent systems has witnessed a surge in interest and innovation, driven by the pursuit of intelligent, cooperative behavior among autonomous entities. Within this realm, the containment control problem has emerged as a pivotal challenge, captivating researchers across various disciplines due to its broad applicability in real-world scenarios. The multi-agent containment control problem addresses the coordination of autonomous agents with the goal of constraining a subset of agents, called followers, within predefined regions. On the other hand, the leaders are assigned the responsibility of guiding and containing the followers. This problem encapsulates a diverse array of applications, spanning from robotic swarms and environmental monitoring to surveillance missions, where the effective coordination and confinement of agents are paramount.
In particular, research on the containment control of multi-agent systems shows great promise in advancing the concepts of urban air mobility (UAM) in the coming years [1]. Drawing upon principles from this domain, UAM systems can improve their capacity to coordinate and oversee the movement of numerous airborne vehicles within densely populated urban areas and highly cluttered environments, thereby ensuring operational efficiency and safety [2,3,4]. Containment control algorithms play a pivotal role in enabling optimal routing, collision avoidance, and airspace management, facilitating the seamless integration of aerial mobility services into established urban infrastructures. Moreover, these research endeavors can contribute to the development of resilient and scalable control methodologies capable of accommodating the expected surge in air traffic volume accompanying the expansion of UAM services [5,6], ultimately fostering the realization of more sustainable and inclusive urban transportation solutions.

1.1. Related Works

The containment control problem for multi-agent systems has been studied extensively, particularly during the last ten years. A recent comprehensive review [7] provides an in-depth examination of containment control strategies in multi-agent systems, covering the fundamental concepts, communication requirements, dynamics modeling, and control methodologies associated with the problem; it also summarizes existing research, highlights key findings, and identifies challenges and future directions in the field. Within the vast containment control literature, in this work we focus on three main aspects: (a) predefined performance, (b) robustness against exogenous disturbances and dynamic leaders, and (c) inter-agent collision avoidance. In this light, we present in the following an examination of the relevant literature and discuss the open issues that the proposed work aims to address.
The work in [8] addresses distributed fixed-time consensus tracking and containment control for second-order multi-agent systems with a directed topology. It introduces a novel non-singular sliding-mode control method incorporating a time-varying scaling function. Sufficient conditions for fixed-time consensus tracking and containment control are derived and the results demonstrate the independence of the convergence time regarding the initial values. Similarly, the authors in [9] explore fixed-time consensus tracking and containment control for second-order heterogeneous nonlinear multi-agent systems with and without velocity measurements under directed communication topologies. In particular, a distributed protocol is proposed that employs time-varying scaling functions and radial basis function neural networks to approximate unknown dynamics to facilitate fixed-time convergence. Alternatively, the paper [10] explores prescribed-time containment control for multi-agent systems with high-order nonlinear dynamics and directed communication. It introduces a distributed observer to estimate the leaders’ states, enabling a novel control method for followers to converge to the leaders’ convex hull with a preassigned convergence time. The work in [11] introduces a robust prescribed-time containment controller and an extended state observer for high-order multi-agent systems, addressing model uncertainties and external disturbances. It develops distributed observers for the followers to estimate the leaders’ states, generates reference tracking signals, and employs extended state observers to mitigate disturbances. The authors in [12] address the distributed containment control problem for non-strict-feedback switched nonlinear multi-agent systems with time-varying parameters. They employ a novel variable merging scheme, Gaussian basis function neural networks, and a common Lyapunov function for the switched dynamics. The work in [13] addresses the adaptive containment control problem for nonlinear multi-agent systems with unknown disturbances and full-state constraints. It utilizes radial basis function neural networks to approximate the unknown dynamics and dynamic surface control to manage the complexity. Nonlinear disturbance observers estimate the disturbances, while barrier Lyapunov functions and prescribed performance control guarantee the objectives. The theoretical analysis establishes bounded signals and predetermined convergence properties. The prescribed-time containment control problem for second-order multi-agent systems with multiple leaders is also addressed in [14]. It aims to ensure that the followers enter the convex hull formed by the leaders at a specified time. Using distributed observers, it estimates the leader state to enable the design of a containment controller. The results demonstrate the convergence of the tracking error close to zero within a predetermined time frame.
In the same direction, the paper [15] introduces a novel prescribed-time distributed control approach for the consensus and containment of networked multiple systems and offers a pre-specified convergence time, independent of the initial conditions. Specifically, it guarantees prescribed-time consensus and extends the theoretical findings to the containment control problem with multiple leaders. In [16], the limitations of the current prescribed-time control studies are discussed, especially in the presence of nonlinear functions lacking Lipschitz growth conditions, and a novel method using fuzzy logic systems to handle unknown nonlinear functions is proposed to ensure containment control with the prescribed performance. The approach employs appropriate Lyapunov functions to establish convergence within predefined regions for a predetermined transient period. The authors in [17] tackle the containment control of nonlinear multi-agent systems under unknown disturbances and dead-zone nonlinearities. They employ fuzzy logic systems to approximate uncertainties, as well as a nonlinear disturbance observer and a distributed containment control scheme with adaptive compensation to ensure bounded signals and the convergence of the containment errors in a small region of the origin. The work in [18] discusses the challenges of containment control for uncertain nonlinear multi-agent systems with unknown hysteresis. It introduces a novel approach involving prescribed-time convergence techniques, Nussbaum functions, fuzzy logic systems, and backstepping to reduce the containment error so that it converges within a predefined zone. The paper [19] explores adaptive containment funnel control with predefined-time convergence for uncertain nonlinear heterogeneous multi-agent systems with multiple leaders. Addressing simultaneous sensor and actuator failures without imposing any hard assumptions on the system dynamics, it introduces adaptive laws to mitigate faults. The nonlinear filtering of the intermediate control signals prevents the complexity explosion and achieves convergence within a predefined time period. Alternatively, the work in [20] proposes a novel containment control scheme for multi-agent systems using reinforcement learning and neural networks. It addresses unmeasurable states with an adaptive observer and filtered signals. An actor–critic reinforcement learning architecture optimizes the control protocol via a gradient descent policy, whereas appropriately selected prescribed performance functions ensure the predetermined evolution of the containment errors. Finally, the authors in [21] introduce a fixed-time containment control protocol for uncertain nonlinear multi-agent systems (MASs) with unknown leader dynamics under switching communication topologies. They utilize a Markov jumping process and a fixed-time extended state observer (FTESO) to achieve uniform fast estimation without broadcasting the velocity states, thus ensuring robust containment with a reduced communication burden. Nevertheless, none of the aforementioned works has considered the inter-agent collision avoidance issue in the containment control problem, which is of paramount importance for practical applications, since it is in direct conflict with the enforced performance specifications. Only the article [22] has explored a time-varying version of the containment problem with collision avoidance for multi-agent systems. 
Specifically, it introduces a decentralized control strategy where agents lack global knowledge of the goals but track the prescribed trajectories and achieve formations. Collision avoidance is secured by repulsive vector fields, without, however, dealing with any performance specifications or robustness issues.
On the other hand, there exist other works that deal with the robustness as well as the intermittent communication issues under the containment control framework for multi-agent systems. Specifically, the paper [23] introduces a distributed adaptive fault-tolerant formation-containment controller for heterogeneous multi-agent systems involving unmanned aerial vehicles and unmanned ground vehicles. It develops strategies for trajectory tracking and containment control, compensating for actuator faults and unknown parameters. The authors in [24] address the formation-containment tracking control problem for heterogeneous linear multi-agent systems with unbounded distributed transmission delays. They introduce a novel distributed control protocol for both the leaders and the followers, thus achieving the desired trajectory tracking and formation maintenance. Similarly, the work in [25] addresses event-triggered adaptive containment control for nonlinear multi-agent systems with partially measurable states. It employs radial basis function neural networks to approximate uncertain functions and neural-network-based observers for state estimation. It also introduces a switching-threshold-based control strategy to optimize the resource consumption and system performance. The authors in [26] focus on adaptive tracking containment control for nonlinear multi-agent systems with unmodeled dynamics. They employ Gaussian functions and novel dynamic signals to address design challenges, as well as an event-triggered mechanism to reduce the communication burden. The proposed protocol ensures the convergence of the containment errors and avoids Zeno behavior. Furthermore, the work in [27] explores the control of second-order multi-agent systems with intermittent communication. It proposes distributed coordination algorithms for aperiodic communication, achieving formation tracking for the leaders and the convergence of the followers to the convex hull. Convergence conditions are derived using Lyapunov functions, addressing simultaneously time-varying delays. Finally, robust containment control for diverse linear multi-agent systems under structured uncertainties and external disturbances is addressed in [28]. Introducing a novel neighborhood error concept, this work transforms the containment control problem into an output regulation problem, and, via distributed compensator and robust control strategies, it establishes conditions that ensure that the followers converge to the leaders’ convex hull with the specified performance.

1.2. Contributions

In this work, we consider the containment control problem for multiple followers obeying the double integrator model perturbed by bounded, piece-wise continuous but completely unknown external disturbances. The goal of each follower is to converge within a dynamic region that is formed by the convex hull of the leading agents, which operate autonomously, while avoiding imminent collisions with other nearby agents. Each follower tracks at least one of the leaders and cooperates with the other followers via an underlying fixed communication network. In particular, we propose a two-layer approach that enables (a) the high-level distributed estimation of a reference trajectory that converges within the convex hull of the leaders and (b) multiple safety and performance specifications at the low level by employing the notion of adaptive performance control. The contributions of this work are summarized as follows.
  • We propose a novel distributed estimation algorithm for a reference trajectory that converges within the convex hull of the leaders with the prescribed transient and steady-state performance, even though the leaders may be dynamic. To the best of the authors’ knowledge, this is the first time that a distributed dynamic average consensus algorithm is adopted to calculate a reference trajectory that enters the convex hull of the leaders and remains there, despite the state evolution of the leaders, thus yielding a rather robust trajectory estimation scheme.
  • We enforce robust trajectory tracking with adaptive performance despite the presence of bounded, piece-wise continuous but unmodeled disturbances. Notice that since the control action of each follower is rendered local, owing to the distributed estimation of a reference trajectory that lies within the convex hull of the leaders, the application of the adaptive performance tracking control technique reinforces the robustness against model uncertainties.
  • Imminent collisions among the agents are avoided by adopting a novel adaptation of the performance specifications when agents approach each other. As collision avoidance among the agents and the trajectory tracking of a common trajectory become eventually conflicting, we introduce, for the first time, a relaxation of the performance envelope based on the distances of closely related agents.
The rest of the paper is organized as follows. Section 2 formulates the problem and Section 3 presents the distributed reference trajectory estimator and the local trajectory tracking controller with collision avoidance. The simulation results are provided in Section 4, while we conclude in Section 5 along with future research directions.

2. Problem Formulation

Consider a multi-agent system that comprises N followers and M leaders, each of radius R (see Figure 1). The follower agents obey a perturbed double-integrator model:
$\dot{p}_i^f = v_i^f, \qquad \dot{v}_i^f = u_i^f + d_i(t), \qquad i = 1, \ldots, N \qquad (1)$
where $p_i^f \in \mathbb{R}^3$ and $v_i^f \in \mathbb{R}^3$, $i = 1, \ldots, N$ denote the position and the velocity of each agent and are measurable, whereas $u_i^f \in \mathbb{R}^3$, $i = 1, \ldots, N$ denote the acceleration of the agents, which are the control signals to be designed, and $d_i : \mathbb{R}_+ \to \mathbb{R}^3$, $i = 1, \ldots, N$ is an external bounded and piece-wise continuous disturbance. An undirected fixed graph $\mathcal{G}_f = \{\mathcal{V}_f, \mathcal{E}_f\}$ is used to model the communication network established among the followers, where $\mathcal{V}_f = \{v_1^f, \ldots, v_N^f\}$ denotes the set of vertices that correspond to each follower and $\mathcal{E}_f \subseteq \mathcal{V}_f \times \mathcal{V}_f$ denotes the set of edges that correspond to the available communication links between pairs of connected agents, i.e., if $(v_i^f, v_j^f) \in \mathcal{E}_f$, then the agents $i$ and $j$ exchange information with each other via the underlying communication network. Finally, let us define the neighboring set of each follower as $\mathcal{N}_i^f = \{v_j^f : (v_i^f, v_j^f) \in \mathcal{E}_f\}$, $i = 1, \ldots, N$.
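For readers who prefer a computational view, the following is a minimal simulation sketch of the perturbed double-integrator model (1), assuming a simple forward-Euler discretization; the step size, function name, and example disturbance values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def follower_step(p, v, u, d, dt=0.01):
    """One forward-Euler step of the perturbed double integrator (1).

    p, v, u, d are 3-vectors: position, velocity, commanded acceleration,
    and external disturbance d_i(t) of a single follower.
    """
    p_next = p + dt * v
    v_next = v + dt * (u + d)
    return p_next, v_next

# Example: a follower at rest subject to a sampled constant disturbance
p, v = np.zeros(3), np.zeros(3)
u = np.array([0.5, 0.0, 0.0])      # control acceleration u_i^f
d = np.array([0.1, -0.05, 0.0])    # bounded disturbance d_i(t)
p, v = follower_step(p, v, u, d)
```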
In our work, the group of the leaders acts as an exosystem that generates a dynamic reference area within which the follower agents have to converge and remain. The time-varying position and velocity of the leaders are denoted by $p_i^l : \mathbb{R}_+ \to \mathbb{R}^3$, $v_i^l : \mathbb{R}_+ \to \mathbb{R}^3$, $i = 1, \ldots, M$ and the access of the followers to the state of the leaders is modeled by a directed fixed graph $\mathcal{G}_l = \{\mathcal{V}_f \cup \mathcal{V}_l, \mathcal{E}_l\}$, where $\mathcal{V}_l = \{v_1^l, \ldots, v_M^l\}$ denotes the set of vertices that correspond to each leader and $\mathcal{E}_l \subseteq \mathcal{V}_f \times \mathcal{V}_l$ denotes the set of edges that correspond to the access of the followers to the state of the leaders. Finally, we define the corresponding neighboring sets as $\mathcal{N}_i^l = \{v_j^l : (v_i^f, v_j^l) \in \mathcal{E}_l\}$, $i = 1, \ldots, N$.
The goal of this work is to design a distributed control protocol for the followers, based on the information acquired by their neighbors (either followers or leaders), such that they converge within a predetermined transient period and remain in the convex hull that is formed by the leaders’ position, while avoiding inter-agent collisions. It should be noted that the followers exchange information among them based only on the underlying communication network that is described by the graph $\mathcal{G}_f$. On the other hand, they detect imminent collisions by employing appropriate onboard sensors (e.g., lidar, proximity) and they obtain the positions of their neighboring leaders, as dictated by the graph $\mathcal{G}_l$, by observing them via a dedicated target tracking system (e.g., vision).
To solve the aforementioned containment control problem, the following assumptions regarding the motion of the leaders and the underlying graph topologies are required.
Assumption 1. 
The fixed undirected graph $\mathcal{G}_f$ is connected.
Assumption 2. 
For each follower, there exists at least one leader with a directed edge to it in the graph $\mathcal{G}_l$.
Assumption 3. 
The position of the leaders $p_i^l(t)$, $i = 1, \ldots, M$ is bounded with bounded derivatives.
Remark 1. 
Assumption 1 concerns the fact that the followers have a connected communication network through which a consensus may be achieved by the underlying information flow and appropriately designed collaborating protocols. Assumption 2 is very common in the related literature and dictates that every follower tracks the position of at least one leader. Finally, it should be noted that the double integrator dynamics considered for the followers is a common model to study the dynamics of multirotor UAVs. Owing to the fact that the trajectory dynamics have much larger time constants than the attitude dynamics, the control of multirotor UAVs can be implemented with an inner-loop/outer-loop structure [29], i.e., the outer-loop drives the UAV toward the desired position, while the inner-loop tracks the attitude. The containment control problem concerns only position trajectories and thus the dynamics of the followers can be approximately modeled by (1).

3. Control Design

The proposed approach comprises two layers: (i) a cyber layer, where the followers communicate among each other explicitly, through the established network that is modeled by the graph $\mathcal{G}_f$, to estimate a reference trajectory that evolves within the convex hull of the leaders, employing a dynamic average consensus protocol, and (ii) a physical layer, where a local motion controller aims at tracking the aforementioned reference trajectory within the convex hull of the leaders while avoiding imminent collisions with other agents based on the information acquired by the onboard sensors (proximity sensors and localization); see Figure 2.

3.1. Distributed Reference Trajectory Estimation

To estimate a reference trajectory that evolves within the convex hull of the leaders, each follower implements a nonlinear dynamic average consensus estimator,
$\dot{p}_i^d = -k_r \left( p_i^d - \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t) \right) + \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} v_j^l(t) - k \sum_{j \in \mathcal{N}_i^f} \rho_{ij}^{-1}(t)\, J_T\!\left( \rho_{ij}^{-1}(t) \left( p_i^d - p_j^d \right) \right) T\!\left( \rho_{ij}^{-1}(t) \left( p_i^d - p_j^d \right) \right), \qquad i = 1, \ldots, N \qquad (2)$
with positive gains $k_r$, $k$, where $p_i^d \in \mathbb{R}^3$ denotes the local estimate of the average of $\frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t)$, $i = 1, \ldots, N$, which lies strictly within the leaders’ convex hull as a convex combination of their positions, i.e., $\frac{1}{N} \sum_{i=1}^{N} \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t)$. Moreover, $T : (-1, 1)^3 \to \mathbb{R}^3$ is an element-wise smooth bijective mapping, e.g., $T(\xi_1, \xi_2, \xi_3) = \left[ \frac{1}{2} \ln\!\left( \frac{1 + \xi_1}{1 - \xi_1} \right), \frac{1}{2} \ln\!\left( \frac{1 + \xi_2}{1 - \xi_2} \right), \frac{1}{2} \ln\!\left( \frac{1 + \xi_3}{1 - \xi_3} \right) \right]^T$, with $J_T : (-1, 1)^3 \to \mathbb{R}^{3 \times 3}$ denoting its Jacobian (derivative), e.g., $J_T(\xi_1, \xi_2, \xi_3) = \mathrm{diag}\!\left( \frac{1}{1 - \xi_1^2}, \frac{1}{1 - \xi_2^2}, \frac{1}{1 - \xi_3^2} \right)$, and $\rho_{ij}(t) = \mathrm{diag}\!\left( \rho_{ij}^1(t), \rho_{ij}^2(t), \rho_{ij}^3(t) \right)$ are element-wise exponential performance functions (e.g., $\rho_{ij}^m(t) = \left( \rho_{ij}^{0,m} - \rho_\infty \right) \exp(-\lambda t) + \rho_\infty$, $m = 1, 2, 3$) that incorporate transient and steady-state performance specifications on the evolution of the consensus errors $p_i^d - p_j^d$, $j \in \mathcal{N}_i^f$ and $i = 1, \ldots, N$, via the appropriate selection of $\lambda$, $\rho_\infty$, and satisfy $\rho_{ij}^{0,m} > \left| p_i^{d,m}(0) - p_j^{d,m}(0) \right|$ for $m = 1, 2, 3$ and $j \in \mathcal{N}_i^f$, $i = 1, \ldots, N$.
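As a concrete illustration, the sketch below evaluates the right-hand side of the estimator (2) for a single follower, assuming the exponential performance functions and the logarithmic mapping $T$ given above; the default gain values mirror Table 1, and all function and variable names are illustrative rather than part of the original work.

```python
import numpy as np

def perf_rho(t, rho0, rho_inf=0.01, lam=2.0):
    """Element-wise exponential performance function rho_ij(t)."""
    return (rho0 - rho_inf) * np.exp(-lam * t) + rho_inf

def estimator_rhs(t, p_d, p_d_neighbors, rho0_neighbors,
                  leader_pos, leader_vel, k_r=2.0, k=4.0):
    """Right-hand side of the consensus estimator (2) for one follower.

    p_d            : current local estimate p_i^d (3-vector)
    p_d_neighbors  : list of the neighbors' estimates p_j^d (3-vectors)
    rho0_neighbors : list of initial performance values rho_ij^{0,m} (3-vectors)
    leader_pos/vel : arrays with the neighboring leaders' positions/velocities
    """
    z = np.mean(leader_pos, axis=0)        # local average of leader positions
    z_dot = np.mean(leader_vel, axis=0)    # local average of leader velocities
    rhs = -k_r * (p_d - z) + z_dot
    for p_j, rho0 in zip(p_d_neighbors, rho0_neighbors):
        rho = perf_rho(t, rho0)                      # element-wise rho_ij(t)
        xi = (p_d - p_j) / rho                       # normalized error, |xi| < 1 by design
        T = 0.5 * np.log((1.0 + xi) / (1.0 - xi))    # element-wise mapping T
        J = 1.0 / (1.0 - xi**2)                      # diagonal entries of J_T
        rhs -= k * (J * T) / rho                     # rho_ij^{-1} J_T(.) T(.)
    return rhs
```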
Theorem 1. 
The proposed distributed reference trajectory estimator (2) with $k_r > \lambda$ establishes, under Assumptions 1–3, an average consensus with exponential rate $\lambda$ and steady-state error
$\lim_{t \to \infty} \left\| p_i^d(t) - \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t) \right\| \le \frac{\sqrt{3}}{2}\, \rho_\infty\, \mathrm{Diam}(\mathcal{G}_f), \qquad i = 1, \ldots, N$
where Diam ( G f ) denotes the diameter of the graph G f .
Proof. 
The proof proceeds in two steps. First, we establish a consensus with a predefined transient and steady-state response for the reference estimations $p_i^d(t) = \left[ p_i^{d,1}(t), p_i^{d,2}(t), p_i^{d,3}(t) \right]^T$, $i = 1, \ldots, N$ and then we prove that they all converge arbitrarily fast and close to the point $\frac{1}{N} \sum_{i=1}^{N} \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t) \in \mathbb{R}^3$ that lies within the leaders’ convex hull. Moreover, the proof of the aforementioned properties for each element $p_i^{d,m}(t)$, $m = 1, 2, 3$ is identical; hence, we present it only once.
Consequently, let us first denote by $Q$ the overall number of distinct edges of the undirected graph $\mathcal{G}_f$, i.e., $\mathcal{E}_f = \left\{ \epsilon_1^f, \ldots, \epsilon_Q^f \right\}$, and assign arbitrarily a tail and a head to every edge. Let us also define the $N \times Q$ incidence matrix $B_{\mathcal{G}_f} = \left[ b_{iq} \right]$, where
$b_{iq} = \begin{cases} 1, & \text{if } v_i^f \text{ is the tail of } \epsilon_q^f \\ -1, & \text{if } v_i^f \text{ is the head of } \epsilon_q^f \\ 0, & \text{otherwise} \end{cases} \qquad i = 1, \ldots, N \ \text{and} \ q = 1, \ldots, Q$
We now define the stack vector $\bar{\delta}^m = \left[ \delta_q^m \right] \in \mathbb{R}^Q$ of the consensus errors, with $\delta_q^m = p_i^{d,m} - p_j^{d,m}$, $q = 1, \ldots, Q$ and $m = 1, 2, 3$, for all distinct pairs of neighboring agents $\left( v_i^f, v_j^f \right)$, associated with the corresponding performance functions $\rho_q(t) = \rho_{ij}^m(t)$, $q = 1, \ldots, Q$ and $m = 1, 2, 3$. Let us also formulate the normalized error vector
$\bar{\xi}_\delta^m = \left[ \xi_{\delta_1}^m, \ldots, \xi_{\delta_Q}^m \right]^T = \left[ \frac{\delta_1^m}{\rho_1(t)}, \ldots, \frac{\delta_Q^m}{\rho_Q(t)} \right]^T = P^{-1}(t)\, \bar{\delta}^m$
where $P(t) = \mathrm{diag}\!\left( \rho_1(t), \ldots, \rho_Q(t) \right) \in \mathbb{R}^{Q \times Q}$. Employing the error mapping $T(\cdot)$ and its Jacobian $J_T(\cdot)$, we also define the transformed error vector
$\bar{\varepsilon}_\delta^m = \left[ \varepsilon_{\delta_1}^m, \ldots, \varepsilon_{\delta_Q}^m \right]^T = \left[ T(\xi_{\delta_1}^m), \ldots, T(\xi_{\delta_Q}^m) \right]^T \qquad (3)$
as well as the diagonal matrix
$J_T^{\delta^m} = \mathrm{diag}\!\left( J_T(\xi_{\delta_1}^m), \ldots, J_T(\xi_{\delta_Q}^m) \right) \qquad (4)$
In this way, the dynamics (2) of the overall estimate vector $\bar{p}^{d,m} \triangleq \left[ p_1^{d,m}, \ldots, p_N^{d,m} \right]^T \in \mathbb{R}^N$ are written as
$\dot{\bar{p}}^{d,m} = -k_r \left( \bar{p}^{d,m} - \bar{z}^m(t) \right) + \dot{\bar{z}}^m(t) - k\, B\, P^{-1}(t)\, J_T^{\delta^m}\, \bar{\varepsilon}_\delta^m \qquad (5)$
where $\bar{z}^m(t) \triangleq \left[ z_1^m(t), \ldots, z_N^m(t) \right]^T \in \mathbb{R}^N$ denotes the average of the positions of the leaders that are tracked by each follower based on the directed graph $\mathcal{G}_l$, with $z_i^m(t) = \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^{l,m}(t)$, $i = 1, \ldots, N$. Hence, differentiating $\bar{\xi}_\delta^m$ with respect to time, employing $\bar{\delta}^m = B^T \bar{p}^{d,m}$ with $B$ denoting the incidence matrix of the underlying graph $\mathcal{G}_f$, and substituting (5), we obtain
$\dot{\bar{\xi}}_\delta^m = P^{-1}(t) \left( B^T \left[ -k_r \left( \bar{p}^{d,m} - \bar{z}^m(t) \right) + \dot{\bar{z}}^m(t) - k\, B\, P^{-1}(t)\, J_T^{\delta^m}\, \bar{\varepsilon}_\delta^m \right] - \dot{P}(t)\, \bar{\xi}_\delta^m \right) \qquad (6)$
To study the stability properties of the proposed distributed reference trajectory estimator, let us select the following candidate Lyapunov function of the transformed consensus errors:
$V_\varepsilon = \frac{1}{2}\, \bar{\varepsilon}_\delta^{m\,T}\, \bar{\varepsilon}_\delta^m$
Hence, differentiating with respect to time, invoking $\dot{\bar{\varepsilon}}_\delta^m = J_T^{\delta^m}\, \dot{\bar{\xi}}_\delta^m$, and substituting (6), we obtain
$\dot{V} = \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) \left( B^T \left[ -k_r \left( \bar{p}^{d,m} - \bar{z}^m(t) \right) + \dot{\bar{z}}^m(t) - k\, B\, P^{-1}(t)\, J_T^{\delta^m}\, \bar{\varepsilon}_\delta^m \right] - \dot{P}(t)\, \bar{\xi}_\delta^m \right) = -k_r\, \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) B^T \bar{p}^{d,m} - \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) \dot{P}(t)\, \bar{\xi}_\delta^m - k\, \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) B^T B\, P^{-1}(t) J_T^{\delta^m} \bar{\varepsilon}_\delta^m + \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) B^T \left( k_r\, \bar{z}^m(t) + \dot{\bar{z}}^m(t) \right)$
Furthermore, substituting $P^{-1}(t) B^T \bar{p}^{d,m} = \bar{\xi}_\delta^m$ and employing the boundedness of the leaders’ position and velocity in $\bar{z}^m(t)$ and $\dot{\bar{z}}^m(t)$, the sign-preserving property of the mapping $T(\cdot)$ in (3) along with the strict positiveness of its Jacobian $J_T(\cdot)$ in (4), as well as the fact that, for exponential performance functions, it holds that $P^{-1}(t) \dot{P}(t) > -\lambda I_{Q \times Q}$, we obtain
$\dot{V} \le -k_r\, \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} \bar{\xi}_\delta^m + \lambda\, \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} \bar{\xi}_\delta^m - k \left\| \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) B^T \right\|^2 + \left\| \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} P^{-1}(t) B^T \right\| \sup_{t \ge 0} \left\| k_r\, \bar{z}^m(t) + \dot{\bar{z}}^m(t) \right\|$
Thus, applying Young’s inequality, we arrive at
$\dot{V} \le -(k_r - \lambda)\, \bar{\varepsilon}_\delta^{m\,T} J_T^{\delta^m} \bar{\xi}_\delta^m + \frac{\sup_{t \ge 0} \left\| k_r\, \bar{z}^m(t) + \dot{\bar{z}}^m(t) \right\|^2}{4k}$
from which, owing to $k_r > \lambda$, we conclude on the boundedness of $\bar{\varepsilon}_\delta^m(t)$. Finally, invoking the inverse of the error transformation (3) (i.e., the hyperbolic tangent function), we obtain $|\xi_{\delta_q}^m(t)| < 1$, $q = 1, \ldots, Q$, which, by multiplication by $\rho_q(t)$, leads to
$|\delta_q^m(t)| < \rho_q(t), \qquad q = 1, \ldots, Q$
which indicates that all consensus errors $\delta_q^m(t)$, $q = 1, \ldots, Q$ meet the transient and steady-state performance specifications that are encapsulated by the corresponding performance functions $\rho_q(t)$, $q = 1, \ldots, Q$, thus concluding the first part of the proof.
To prove that all estimates $p_i^{d,m}$ with $i = 1, \ldots, N$ and $m = 1, 2, 3$ converge arbitrarily closely to a point within the convex hull of the leaders’ position, we multiply from the left both sides of (5) by $\frac{1}{N} \mathbf{1}^T$, with $\mathbf{1} \in \mathbb{R}^N$ denoting the vector of ones. Invoking the property $\mathbf{1}^T B = \mathbf{0}^T \in \mathbb{R}^Q$ and subtracting $\dot{\tilde{z}}^m(t)$, where $\tilde{z}^m(t) = \frac{1}{N} \sum_{i=1}^{N} z_i^m(t)$, we have $\dot{\tilde{p}}^{d,m} = -k_r\, \tilde{p}^{d,m}$, where $\tilde{p}^{d,m} = \frac{1}{N} \sum_{i=1}^{N} p_i^{d,m} - \tilde{z}^m(t)$, from which we conclude on the exponential convergence of the average tracking error $\tilde{p}^{d,m}$ to zero with rate $k_r$. Moreover, since the underlying graph $\mathcal{G}_f$ is connected, the ultimate bound $\rho_\infty$ of all consensus errors dictates that
$\lim_{t \to \infty} \left| p_i^{d,m}(t) - \tilde{p}^{d,m}(t) - \tilde{z}^m(t) \right| \le \frac{\rho_\infty\, \mathrm{Diam}(\mathcal{G}_f)}{2}, \qquad i = 1, \ldots, N \ \text{and} \ m = 1, 2, 3$
with exponential convergence rate $\lambda$. Finally, owing to the exponential convergence of $\tilde{p}^{d,m}(t)$ to zero with rate $k_r$ and the relationship between the 2-norm and the $\infty$-norm, a steady-state average consensus ultimate bound,
$\lim_{t \to \infty} \left\| p_i^d(t) - \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|\mathcal{N}_i^l|} \sum_{j \in \mathcal{N}_i^l} p_j^l(t) \right\| \le \frac{\sqrt{3}}{2}\, \rho_\infty\, \mathrm{Diam}(\mathcal{G}_f), \qquad i = 1, \ldots, N$
with an exponential convergence rate equal to $\lambda$ is concluded, which completes the proof. □
Remark 2. 
It should be noted that the suggested reference trajectory estimator operates in a distributed manner, relying solely on information from nearby agents (followers as well as leaders), as outlined in (2), whereas only the average estimate needs to be transmitted through the underlying communication network. Moreover, its transient and steady-state performance can be independently and a priori adjusted (without relying on discontinuous laws, which inherently exhibit chattering) by selecting appropriately the design parameters $\rho_\infty$, $\lambda$, $k_r$, and $k$. In particular, increasing $\lambda$ and decreasing $\rho_\infty$ leads to faster convergence and a more accurate consensus, regardless of how large the leaders’ velocity is. Additionally, the gain $k$ plays a role in the consensus performance, as increasing it results in faster convergence. On the other hand, the quality of the average tracking can be improved by increasing $k_r$, without, however, compromising the consensus performance. Finally, in the event that the velocity of the leaders is not available (notice that the dynamic average consensus estimator (2) employs $v_j^l$, $j = 1, \ldots, M$), a tracking differentiator [30] may be employed to reconstruct it.

3.2. Local Motion Control

Based on the previous subsection, a reference trajectory $p_i^d(t)$, $i = 1, \ldots, N$ that lies within the convex hull of the leaders is available for each follower. Therefore, the goal of this subsection is to propose a local motion controller that tracks the aforementioned reference trajectory and simultaneously avoids collisions with other nearby agents. The design of the local motion controller is divided into two steps: first, a reference velocity profile is derived at the kinematic level assuming that the control signals are the linear body velocities; subsequently, the kinematic controller is extended to the dynamic model, considering the actual control signals, which are the accelerations. In particular, the control design will rely on the prescribed performance control technique, which will be equipped with an adaptive mechanism that adjusts the underlying distance performance functions based on Khatib’s repulsive field [31] in order to avoid imminent collisions with nearby agents.
Step 1. Let us denote the distance performance functions $\rho_i^d(t)$, $i = 1, \ldots, N$ that encapsulate the transient and steady-state performance specifications on the local tracking error $e_i^d(t) = \left\| p_i^d(t) - p_i^f(t) \right\|$ of each follower $i = 1, \ldots, N$. Moreover, let us define the set of nearby followers as $\mathcal{N}_i^{d,f}$ and the set of nearby leaders as $\mathcal{N}_i^{d,l}$, which are detected by the onboard sensors. We assume that a nearby agent is detected if it is located within $d_r > 2R$ distance from the robot, where $R$ denotes the radius of the agents, i.e., $j \in \mathcal{N}_i^{d,f}$ if $\left\| p_i^f - p_j^f \right\| \le d_r$ or $j \in \mathcal{N}_i^{d,l}$ if $\left\| p_i^f - p_j^l \right\| \le d_r$.
Step 2. Design the reference velocity profile for each follower as
$v_i^r = k_d \ln\!\left( \frac{1}{1 - e_i^d / \rho_i^d} \right) \hat{e}_i^d + \dot{p}_i^d + \lambda \left( \rho_i^d - \rho_\infty^d \right) \frac{e_i^d}{\rho_i^d}\, \hat{e}_i^d + \eta\, v_i^{col}, \quad \text{with } k_d, \lambda, \rho_\infty^d, \eta > 0 \qquad (7)$
where $\hat{e}_i^d \triangleq \frac{p_i^d - p_i^f}{\left\| p_i^d - p_i^f \right\|}$ denotes the unit vector pointing towards the reference trajectory $p_i^d$ and
$v_i^{col} \triangleq \sum_{j \in \mathcal{N}_i^{d,f}} \left( \frac{1}{\left\| p_i^f - p_j^f \right\| - 2R} - \frac{1}{d_r - 2R} \right) \left( p_i^f - p_j^f \right) + \sum_{j \in \mathcal{N}_i^{d,l}} \left( \frac{1}{\left\| p_i^f - p_j^l \right\| - 2R} - \frac{1}{d_r - 2R} \right) \left( p_i^f - p_j^l \right) \qquad (8)$
denotes the repulsive velocity that keeps nearby agents away, thus avoiding imminent collisions. Moreover, let us equip each follower with the following adaptive law that dictates the evolution of the distance performance function:
$\dot{\rho}_i^d = -\lambda \left( \rho_i^d - \rho_\infty^d \right) - \gamma \min\!\left( 0,\ \ln\!\left( \frac{1}{1 - e_i^d / \rho_i^d} \right) \hat{e}_i^{d\,T} v_i^{col} \right) \qquad (9)$
where $\lambda$, $\rho_\infty^d$ denote the parameters that encapsulate the transient and steady-state performance specifications and $\gamma > 0$.
Step 3. Define the velocity errors
$e_i^v(t) = \left\| v_i^r(t) - v_i^f(t) \right\|, \qquad i = 1, \ldots, N$
and select the corresponding velocity performance functions $\rho_i^v(t) = \left( \rho_i^{v,0} - \rho_\infty^v \right) \exp(-\lambda t) + \rho_\infty^v$, $i = 1, \ldots, N$ such that $e_i^v(0) < \rho_i^v(0) = \rho_i^{v,0}$ and $\lim_{t \to \infty} \rho_i^v(t) = \rho_\infty^v > 0$ for $i = 1, \ldots, N$.
Step 4. Finally, design the control input as
$u_i^f = k_v \ln\!\left( \frac{1}{1 - e_i^v / \rho_i^v} \right) \hat{e}_i^v, \quad \text{with } k_v > 0 \qquad (10)$
where $\hat{e}_i^v \triangleq \frac{v_i^r - v_i^f}{\left\| v_i^r - v_i^f \right\|}$ denotes the unit vector pointing towards the reference velocity $v_i^r$.
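To make the four design steps concrete, the following is a compact sketch of how (7)–(10) could be evaluated for one follower at a given instant, assuming the gains of Table 1 as defaults, an illustrative detection radius $d_r$, and leaving the propagation of $\rho_i^d(t)$ and $\rho_i^v(t)$ to the caller; all names are illustrative and this is not the author's reference implementation.

```python
import numpy as np

def repulsive_velocity(p_i, nearby_positions, R=0.2, d_r=1.0):
    """Khatib-style repulsive velocity (8); d_r is an assumed detection radius."""
    v_col = np.zeros(3)
    for p_j in nearby_positions:
        diff = p_i - p_j
        dist = np.linalg.norm(diff)
        if dist < d_r:                       # only agents detected within d_r contribute
            v_col += (1.0 / (dist - 2 * R) - 1.0 / (d_r - 2 * R)) * diff
    return v_col

def local_controller(p_i, v_i, p_d, p_d_dot, v_col, rho_d, rho_v,
                     k_d=4.0, k_v=4.0, lam=2.0, rho_d_inf=0.01,
                     eta=1.0, gamma=2.0):
    """Reference velocity (7), adaptation law (9), and control input (10).

    The scheme guarantees e_d < rho_d and e_v < rho_v, so the logarithms
    below stay well defined along the closed-loop trajectories.
    """
    # Distance error and unit vector pointing towards the reference trajectory
    e_d_vec = p_d - p_i
    e_d = np.linalg.norm(e_d_vec)
    e_d_hat = e_d_vec / e_d if e_d > 0 else np.zeros(3)
    eps_d = np.log(1.0 / (1.0 - e_d / rho_d))      # transformed distance error

    # Reference velocity (7)
    v_r = (k_d * eps_d * e_d_hat + p_d_dot
           + lam * (rho_d - rho_d_inf) * (e_d / rho_d) * e_d_hat
           + eta * v_col)

    # Adaptation of the distance performance function (9)
    rho_d_dot = (-lam * (rho_d - rho_d_inf)
                 - gamma * min(0.0, eps_d * float(e_d_hat @ v_col)))

    # Velocity error and control input (10)
    e_v_vec = v_r - v_i
    e_v = np.linalg.norm(e_v_vec)
    e_v_hat = e_v_vec / e_v if e_v > 0 else np.zeros(3)
    u = k_v * np.log(1.0 / (1.0 - e_v / rho_v)) * e_v_hat
    return u, v_r, rho_d_dot
```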
Remark 3. 
Initializing from a collision-free configuration, the proposed scheme guarantees that the distance and velocity errors evolve within the corresponding performance functions $\rho_i^d(t)$ and $\rho_i^v(t)$, $i = 1, \ldots, N$ and avoid inter-agent collisions by relaxing online the distance performance functions according to (9) so that safety is guaranteed for all time. More specifically, notice that when the reference tracking task leads to collisions, i.e., when the vectors $\hat{e}_i^d$ and $v_i^{col}$ do not point towards the same direction, namely $\hat{e}_i^{d\,T} v_i^{col} < 0$, the second term in (9) relaxes the distance performance function $\rho_i^d(t)$, allowing the distance error to increase so that a collision is avoided, until both goals become compatible (i.e., $\hat{e}_i^{d\,T} v_i^{col} > 0$), which nullifies the second term and restores the exponential convergence exhibited by the first term in (9).
Remark 4. 
The proposed local motion controller is fully distributed since it requires only the position and the velocity of each follower, as well as the positions of the nearby agents, to avoid collisions. Moreover, although the performance specifications on the velocity errors are not dictated by the problem formulation, we enforce them so that we can compensate for the disturbances acting on model (1) and achieve appropriate velocities to avoid imminent collisions with nearby agents. Finally, although the selection of the control gains/parameters does not affect the achieved performance of the closed-loop system, it should be noted that their actual values affect the evolution of the control signals. Thus, they should be cautiously determined via a trial-and-error procedure to avoid high peaks in the control signals and oscillatory behavior within the prescribed performance functions.
The following theorem summarizes the main results of this subsection.
Theorem 2. 
Consider the perturbed double integrator model (1) of each following agent, along with the results of Theorem 1 on the corresponding reference trajectories (2). The proposed distributed control scheme (7)–(10) guarantees that $\left\| p_i^d(t) - p_i^f(t) \right\| < \rho_i^d(t)$, $i = 1, \ldots, N$, as well as that $\left\| p_i^f(t) - p_j^f(t) \right\| > 2R$, $i, j = 1, \ldots, N$ with $i \ne j$, and $\left\| p_i^f(t) - p_j^l(t) \right\| > 2R$, $i = 1, \ldots, N$ and $j = 1, \ldots, M$ for all $t \ge 0$.
Proof. 
The proof proceeds identically for every agent. Hence, let us define the transformed distance error $\varepsilon_i^d = \ln\!\left( \frac{1}{1 - e_i^d / \rho_i^d} \right)$, as well as the corresponding candidate Lyapunov function $V_i^d = \frac{1}{2} \left( \varepsilon_i^d \right)^2$. Differentiating $V_i^d$ with respect to time and substituting $\dot{e}_i^d = \hat{e}_i^{d\,T} \left( \dot{p}_i^d - v_i^f \right)$ and $v_i^f = v_i^r - \rho_i^v \left( 1 - \exp(-\varepsilon_i^v) \right) \hat{e}_i^v$, where $\varepsilon_i^v = \ln\!\left( \frac{1}{1 - e_i^v / \rho_i^v} \right)$ denotes the transformed velocity error, we obtain
$\dot{V}_i^d = \varepsilon_i^d\, \frac{\exp(\varepsilon_i^d)}{\rho_i^d} \left[ \hat{e}_i^{d\,T} \left( \dot{p}_i^d - v_i^r + \rho_i^v \left( 1 - \exp(-\varepsilon_i^v) \right) \hat{e}_i^v \right) - \frac{e_i^d}{\rho_i^d}\, \dot{\rho}_i^d \right]$
Furthermore, substituting (7) and (9), we obtain
$\dot{V}_i^d = \varepsilon_i^d\, \frac{\exp(\varepsilon_i^d)}{\rho_i^d} \left[ -k_d\, \varepsilon_i^d - \eta\, \hat{e}_i^{d\,T} v_i^{col} + \gamma \left( 1 - \exp(-\varepsilon_i^d) \right) \min\!\left( 0,\ \varepsilon_i^d\, \hat{e}_i^{d\,T} v_i^{col} \right) + \rho_i^v \left( 1 - \exp(-\varepsilon_i^v) \right) \hat{e}_i^{d\,T} \hat{e}_i^v \right]$
Subsequently, it should be noted that the last term is bounded by construction. Moreover, when $\hat{e}_i^{d\,T} v_i^{col} > 0$ (i.e., the trajectory tracking and collision avoidance goals are compatible), the second term is negative, whereas the third term becomes null. On the other hand, when $\hat{e}_i^{d\,T} v_i^{col} < 0$, the third term, which becomes negative, dominates the second one for large $\varepsilon_i^d$. Thus, $\dot{V}_i^d$ is rendered negative for large $\varepsilon_i^d$, which establishes the ultimate boundedness property of $\varepsilon_i^d$.
Moreover, employing a similar candidate Lyapunov function $V_i^v = \frac{1}{2} \left( \varepsilon_i^v \right)^2$ for the transformed velocity error $\varepsilon_i^v = \ln\!\left( \frac{1}{1 - e_i^v / \rho_i^v} \right)$, differentiating with respect to time, and substituting (10), we obtain
$\dot{V}_i^v = \varepsilon_i^v\, \frac{\exp(\varepsilon_i^v)}{\rho_i^v} \left[ -k_v\, \varepsilon_i^v + \hat{e}_i^{v\,T} \left( \dot{v}_i^r - d_i(t) \right) + \lambda \left( 1 - \exp(-\varepsilon_i^v) \right) \left( \rho_i^{v,0} - \rho_\infty^v \right) \exp(-\lambda t) \right]$
Notice that the second and third terms are bounded either by construction or by assumption; thus, $\dot{V}_i^v$ is rendered negative for large $\varepsilon_i^v$, which establishes the ultimate boundedness property of $\varepsilon_i^v$ and consequently of the control signal (10).
Finally, notice that the boundedness of $\varepsilon_i^d$, invoking the inverse of the transformed distance error $\varepsilon_i^d = \ln\!\left( \frac{1}{1 - e_i^d / \rho_i^d} \right)$, leads to $e_i^d(t) < \rho_i^d(t)$, $\forall t \ge 0$. Additionally, from the boundedness of $\varepsilon_i^d$, it should be noted that the collision avoidance term $v_i^{col}$ dominates in (7) when a collision is imminent (i.e., $\left\| p_i^f(t) - p_j^f(t) \right\| \to 2R$ or $\left\| p_i^f(t) - p_j^l(t) \right\| \to 2R$), thus moving the agents away from each other. Consequently, we deduce that $\left\| p_i^f(t) - p_j^f(t) \right\| > 2R$, $i, j = 1, \ldots, N$ with $i \ne j$, and $\left\| p_i^f(t) - p_j^l(t) \right\| > 2R$, $i = 1, \ldots, N$ and $j = 1, \ldots, M$ for all $t \ge 0$, which concludes the proof. □

4. Simulation Results

Consider a multi-agent system comprising $M = 4$ leaders and $N = 5$ followers moving in 3D space. The radius of the agents is $R = 0.2\ \mathrm{m}$. The leaders maintain a regular triangular pyramid formation of edge $3.6\ \mathrm{m}$, with its center following a periodic spline trajectory interpolating 10 points within the set $[-4, 4] \times [-4, 4] \times [-4, 4]$ (see Figure 3). The followers start randomly within the set $[-4, 4] \times [-4, 4] \times [-4, 4]$ and aim at converging within the dynamically moving pyramid with rate $\exp(-2t)$. The underlying communication graphs $\mathcal{G}_f$ and $\mathcal{G}_l$ are defined by $\mathcal{N}_1^f = \{2, 3, 5\}$, $\mathcal{N}_2^f = \{1, 5\}$, $\mathcal{N}_3^f = \{1, 4\}$, $\mathcal{N}_4^f = \{3, 5\}$, $\mathcal{N}_5^f = \{1, 2, 4\}$, $\mathcal{N}_1^l = \{1\}$, $\mathcal{N}_2^l = \{1, 3\}$, $\mathcal{N}_3^l = \{2\}$, $\mathcal{N}_4^l = \{4\}$ and $\mathcal{N}_5^l = \{2, 3, 4\}$. Moreover, sinusoidal disturbances with a randomly selected amplitude, frequency, and phase within $[0.1, 0.5]\ \mathrm{m/s^2}$, $[0.1, 1]\ \mathrm{rad/s}$, and $[0, 2\pi]\ \mathrm{rad}$, respectively, affect the dynamics of the followers, whereas uniform pseudo-random noise within $[-0.1, 0.1]$ is injected in the measurement of the leaders’ position. Finally, the control gains/parameters are given in Table 1.
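For reproducibility, a brief sketch of how the above scenario's graph neighbor sets, disturbance model, and measurement noise could be instantiated is given below; the random seed and helper names are assumptions made for illustration and are not artifacts of the original simulation.

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed

# Neighbor sets of the graphs G_f and G_l (agent indices as listed in the text)
N_f = {1: [2, 3, 5], 2: [1, 5], 3: [1, 4], 4: [3, 5], 5: [1, 2, 4]}
N_l = {1: [1], 2: [1, 3], 3: [2], 4: [4], 5: [2, 3, 4]}

# Sinusoidal disturbances with random amplitude, frequency, and phase per axis
N_followers = 5
amp = rng.uniform(0.1, 0.5, size=(N_followers, 3))          # m/s^2
freq = rng.uniform(0.1, 1.0, size=(N_followers, 3))         # rad/s
phase = rng.uniform(0.0, 2 * np.pi, size=(N_followers, 3))  # rad

def disturbance(t):
    """d_i(t) for all followers at time t (bounded and piecewise continuous)."""
    return amp * np.sin(freq * t + phase)

def noisy_leader_position(p_l):
    """Leader position measurement corrupted by uniform noise in [-0.1, 0.1]."""
    return p_l + rng.uniform(-0.1, 0.1, size=3)
```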
The simulation results are illustrated in Figure 4, Figure 5, Figure 6 and Figure 7. More specifically, snapshots of six consecutive time instants are provided in Figure 4. Apparently, the followers, which initially start outside the convex hull of the leaders, converge within it with exponential rate $\exp(-2t)$ and remain there despite the external disturbances that perturb the agents’ dynamics (see Figure 5). In particular, the evolution of the distance errors is illustrated in Figure 6, where it is verified that the performance specifications that are encapsulated in the performance functions $\rho_i^d(t)$, $i = 1, \ldots, 5$ are relaxed so that safety is guaranteed, especially during the steady state. Moreover, no inter-agent collision occurs, as depicted in Figure 7. Finally, the performance of the distributed estimator is illustrated in Figure 8. Notice that all agents eventually establish a consensus towards a reference trajectory that lies within the convex hull of the leaders.

Comparative Simulation Study

To the best of the author’s knowledge, there are no related papers that achieve results similar to ours in terms of collision avoidance, robustness against external disturbances and dynamic leaders, and adaptive performance (the authors in [22] consider a unicycle kinematic robot model for 2D motion rather than a dynamic model, as we do in this work, and do not study any external disturbances affecting it). Hence, we compare the proposed approach with one of the most cited prescribed performance containment controllers [15], which poses similar assumptions but does not guarantee collision avoidance. It should be noted that neither drift nonlinearities nor Nussbaum functions will be considered; thus, the fuzzy logic approximation structures are not required, rendering the complexity comparable to that of the proposed scheme. Afterwards, safety will be enforced by adopting repulsive vector fields identical to those in our work, but without the adaptation mechanism on the evolution of the prescribed performance function, in order to verify its critical role in the whole process.
The simulation scenario is identical to the one reported in the previous subsection, with the same performance specifications imposed by the parameters given in Table 1. The results are illustrated in Figure 9, Figure 10 and Figure 11. More specifically, the followers, which initially start outside the convex hull of the leaders, converge within it with exponential rate $\exp(-2t)$ and remain there, owing to the prescribed performance controller, despite the external disturbances that perturb the agents’ dynamics (see Figure 9). Additionally, the evolution of the errors along the x-coordinate is illustrated in Figure 10, where it is verified that the performance specifications that are encapsulated in the corresponding performance functions are guaranteed. It should be noted that the agents remain closer to the center of the convex hull with significantly smaller errors during the steady state than in our case. However, such a property is attributed to the absence of collision avoidance. Notice particularly that the agents coincide, ultimately leading to inevitable collisions, as depicted in Figure 11. Finally, to verify that the proposed adaptation of the performance functions, as introduced in (9), is critical for the viability of the whole scheme, since the containment control action and the collision avoidance are conflicting when the agents move towards the convex hull of the leaders, we augment the control scheme of [15] with an identical repulsive potential field to the proposed one. As expected (see Figure 12), owing to the conflicting nature of these two control goals, the aforementioned scheme fails as the control signal becomes singular when two agents approach each other, and, at the same time, the error approaches the corresponding performance function.

5. Conclusions and Future Directions

We present a containment control scheme for multi-agent systems, composed of a distributed reference trajectory estimator and a local trajectory tracking controller with guaranteed collision avoidance. The proposed framework was tested on a multi-agent scenario and exhibited high robustness against bounded external disturbances and dynamically moving leaders. Future research efforts will be devoted towards handling complex and unknown system dynamics, as well as input saturation constraints. Additionally, studying intermittent communication among the followers within the cyber-layer would increase the applicability of the proposed scheme. Finally, another direction that would increase the potential of the proposed framework is to study the design of the control scheme of the leaders as they collaboratively execute a specific task (e.g., motion planning within an obstacle-cluttered environment).

Funding

This work was supported by the project “Applied Research for Autonomous Robotic Systems” (MIS 5200632), which is implemented within the framework of the National Recovery and Resilience Plan “Greece 2.0” (Measure: 16618—Basic and Applied Research) and is funded by the European Union—NextGenerationEU.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Cohen, A.P.; Shaheen, S.A.; Farrar, E.M. Urban Air Mobility: History, Ecosystem, Market Potential, and Challenges. IEEE Trans. Intell. Transp. Syst. 2021, 22, 6074–6087. [Google Scholar] [CrossRef]
  2. Wurman, P.R.; D’Andrea, R.; Mountz, M. Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Mag. 2008, 29, 9–19. [Google Scholar]
  3. Zhou, X.; Wang, Z.; Ye, H.; Xu, C.; Gao, F. EGO-Planner: An ESDF-Free Gradient-Based Local Planner for Quadrotors. IEEE Robot. Autom. Lett. 2021, 6, 478–485. [Google Scholar] [CrossRef]
  4. Zhou, X.; Wen, X.; Wang, Z.; Gao, Y.; Li, H.; Wang, Q.; Yang, T.; Lu, H.; Cao, Y.; Xu, C.; et al. Swarm of micro flying robots in the wild. Sci. Robot. 2022, 7, eabm595. [Google Scholar] [CrossRef]
  5. Quan, L.; Han, L.; Zhou, B.; Shen, S.; Gao, F. Survey of UAV motion planning. IET Cyber-Syst. Robot. 2020, 2, 14–21. [Google Scholar] [CrossRef]
  6. Mueller, M.W.; Lee, S.J.; D’Andrea, R. Design and Control of Drones. Annu. Rev. Control. Robot. Auton. Syst. 2022, 5, 161–177. [Google Scholar] [CrossRef]
  7. Thummalapeta, M.; Liu, Y.C. Survey of containment control in multi-agent systems: Concepts, communication, dynamics, and controller design. Int. J. Syst. Sci. 2023, 54, 2809–2835. [Google Scholar] [CrossRef]
  8. Chen, C.; Han, Y.; Zhu, S.; Zeng, Z. Distributed Fixed-Time Tracking and Containment Control for Second-Order Multi-Agent Systems: A Nonsingular Sliding-Mode Control Approach. IEEE Trans. Netw. Sci. Eng. 2023, 10, 687–697. [Google Scholar] [CrossRef]
  9. Chen, C.; Han, Y.; Zhu, S.; Zeng, Z. Neural Network-Based Fixed-Time Tracking and Containment Control of Second-Order Heterogeneous Nonlinear Multiagent Systems. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15. [Google Scholar] [CrossRef]
  10. Zhao, G.; Liu, Q.; Hua, C. Prescribed-time containment control of high-order nonlinear multi-agent systems based on distributed observer. J. Frankl. Inst. 2023, 360, 6736–6756. [Google Scholar] [CrossRef]
  11. Chang, S.; Wang, Y.; Zuo, Z.; Yang, H.; Luo, X. Robust prescribed-time containment control for high-order uncertain multi-agent systems with extended state observer. Neurocomputing 2023, 559, 126782. [Google Scholar] [CrossRef]
  12. Yi, J.; Li, J. Distributed Containment Control with Prescribed Accuracy for Nonstrict-Feedback Switched Nonlinear Multiagent Systems. IEEE Syst. J. 2023, 17, 5671–5682. [Google Scholar] [CrossRef]
  13. Sui, J.; Liu, C.; Niu, B.; Zhao, X.; Wang, D.; Yan, B. Prescribed Performance Adaptive Containment Control for Full-State Constrained Nonlinear Multiagent Systems: A Disturbance Observer-Based Design Strategy. IEEE Trans. Autom. Sci. Eng. 2024, 1–12. [Google Scholar] [CrossRef]
  14. Lin, Q.; Zhou, Y.; Jiang, G.P.; Ge, S.; Ye, S. Prescribed-time containment control based on distributed observer for multi-agent systems. Neurocomputing 2021, 431, 69–77. [Google Scholar] [CrossRef]
  15. Wang, Y.; Song, Y.; Hill, D.J.; Krstic, M. Prescribed-time consensus and containment control of networked multiagent systems. IEEE Trans. Cybern. 2019, 49, 1138–1147. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, D.; Liu, Z.; Chen, C.P.; Zhang, Y. Prescribed-time containment control with prescribed performance for uncertain nonlinear multi-agent systems. J. Frankl. Inst. 2021, 358, 1782–1811. [Google Scholar] [CrossRef]
  17. Wang, W.; Liang, H.; Pan, Y.; Li, T. Prescribed Performance Adaptive Fuzzy Containment Control for Nonlinear Multiagent Systems Using Disturbance Observer. IEEE Trans. Cybern. 2020, 50, 3879–3891. [Google Scholar] [CrossRef] [PubMed]
  18. Liu, D.; Liu, Z.; Chen, C.L.P.; Zhang, Y. Distributed adaptive fuzzy control approach for prescribed-time containment of uncertain nonlinear multi-agent systems with unknown hysteresis. Nonlinear Dyn. 2021, 105, 257–275. [Google Scholar] [CrossRef]
  19. Yang, T.; Dong, J. Funnel-Based Predefined-Time Containment Control of Heterogeneous Multiagent Systems With Sensor and Actuator Faults. IEEE Trans. Syst. Man Cybern. Syst. 2023, 54, 1903–1913. [Google Scholar] [CrossRef]
  20. Luo, A.; Xiao, W.; Li, X.M.; Yao, D.; Zhou, Q. Performance-guaranteed containment control for pure-feedback multi agent systems via reinforcement learning algorithm. Int. J. Robust Nonlinear Control. 2022, 32, 10180–10200. [Google Scholar] [CrossRef]
  21. Biao, T.; Xingling, S.; Wei, Y.; Wendong, Z. Fixed time output feedback containment for uncertain nonlinear multiagent systems with switching communication topologies. ISA Trans. 2021, 111, 82–95. [Google Scholar] [CrossRef] [PubMed]
  22. Santiaguillo-Salinas, J.; Aranda-Bricaire, E. Containment problem with time-varying formation and collision avoidance for multiagent systems. Int. J. Adv. Robot. Syst. 2017, 14, 1–13. [Google Scholar] [CrossRef]
  23. Gong, J.; Jiang, B.; Ma, Y.; Mao, Z. Distributed Adaptive Fault-Tolerant Formation-Containment Control With Prescribed Performance for Heterogeneous Multiagent Systems. IEEE Trans. Cybern. 2023, 53, 7787–7799. [Google Scholar] [CrossRef]
  24. Bi, C.; Xu, X.; Liu, L.; Feng, G. Formation-Containment Tracking for Heterogeneous Linear Multiagent Systems under Unbounded Distributed Transmission Delays. IEEE Trans. Control. Netw. Syst. 2023, 10, 822–833. [Google Scholar] [CrossRef]
  25. Xu, R.; Wang, X.; Zhou, Y. Observer-based event-triggered adaptive containment control for multiagent systems with prescribed performance. Nonlinear Dyn. 2022, 107, 2345–2362. [Google Scholar] [CrossRef]
  26. Jiang, H.; Wang, X.; Niu, B.; Wang, H.; Liu, X. Event-triggered adaptive tracking containment control of nonlinear multiagent systems with unmodeled dynamics and prescribed performance. Int. J. Robust Nonlinear Control. 2023, 33, 2629–2650. [Google Scholar] [CrossRef]
  27. Xia, M.D.; Liu, C.L.; Liu, F. Formation-Containment Control of Second-Order Multiagent Systems via Intermittent Communication. Complexity 2018, 2018, 2501427. [Google Scholar] [CrossRef]
  28. Atrianfar, H.; Karimi, A. Robust H∞ containment control of heterogeneous multi-agent systems with structured uncertainty and external disturbances. Int. J. Robust Nonlinear Control. 2022, 32, 698–714. [Google Scholar] [CrossRef]
  29. Dong, X.; Yu, B.; Shi, Z.; Zhong, Y. Time-varying formation control for unmanned aerial vehicles: Theories and applications. IEEE Trans. Control. Syst. Technol. 2015, 23, 340–348. [Google Scholar] [CrossRef]
  30. Guo, B.Z.; Zhao, Z.L. On convergence of tracking differentiator. Int. J. Control. 2011, 84, 693–701. [Google Scholar] [CrossRef]
  31. Khatib, O. Real-Time Obstacle Avoidance for Manipulators and Mobile Robots. Int. J. Robot. Res. 1986, 5, 90–98. [Google Scholar] [CrossRef]
Figure 1. The blue nodes correspond to the followers and the red nodes to the leaders. The blue bidirectional arrows correspond to $\mathcal{E}_f$ and the red unidirectional arrows to $\mathcal{E}_l$. The followers should converge and remain within the grey shaded area.
Figure 2. The control architecture of agent i.
Figure 3. The initial configuration of the multi-agent system. The leaders are depicted with black squares and their convex hull is indicated by black solid lines. The followers are depicted with colored circles. The center of the pyramid is depicted with a blue dot that moves along the trajectory illustrated in blue.
Figure 4. The evolution of the proposed containment control scheme for six consecutive instants. The leaders are depicted with black squares and their convex hull is indicated by black solid lines. The followers are depicted with colored circles. The center of the pyramid is depicted with a blue dot that moves along the trajectory illustrated in blue.
Figure 5. The maximum distance from the center of the convex hull is depicted with the black solid line. The black dashed line illustrates a threshold below which the follower lies within the convex hull. Apparently, the agents converge exponentially within the convex hull and remain within it for all time.
Figure 6. The performance functions $\rho_i^d(t)$, $i = 1, \ldots, 5$ and the corresponding distance errors $e_i^d(t)$, $i = 1, \ldots, 5$ are depicted in red and blue solid lines, respectively, for each agent.
Figure 7. The minimum inter-agent distance is depicted with the black solid line. The black dashed line illustrates the ultimate distance $2R$, when two agents meet each other. Apparently, the agents move safely without colliding with each other.
Figure 8. The performance of the distributed estimation algorithm (2). The distance of the reference trajectory estimate from the center of the convex hull is depicted with the colored solid lines. The black dashed line illustrates a threshold below which the reference trajectory lies within the convex hull. Apparently, the agents establish a consensus exponentially fast to a reference trajectory that lies within the convex hull of the leaders.
Figure 9. Comparison with [15]. The maximum distance from the center of the convex hull is depicted with the black solid line. The black dashed line illustrates a threshold below which the follower lies within the convex hull. Apparently, the agents converge exponentially within the convex hull and remain within it for all time.
Figure 10. Comparison with [15]. The performance functions and the corresponding errors in the x-coordinate are depicted in red and blue solid lines, respectively, for each agent.
Figure 11. Comparison with [15]. The minimum inter-agent distance is depicted with the black solid line. The black dashed line illustrates the ultimate distance $2R$, when two agents meet each other. Apparently, the agents collide with each other.
Figure 12. Comparison with [15]. When the repulsive potential field is incorporated into the design of [15] without the adaptation scheme proposed in (9), the closed-loop system is rendered singular as two agents approach each other and the corresponding error approaches the performance function.
Table 1. The values of the control gains/parameters.
$k = 4$, $k_r = 2$, $k_d = 4$, $\eta = 1$, $\gamma = 2$, $k_v = 4$
$\lambda = 2$, $\rho_{ij}^{0,m} = 1.5 \left| p_i^{d,m}(0) - p_j^{d,m}(0) \right| + 0.1$, $\rho_\infty = 0.01$, $\rho_i^{d,0} = 1.5\, e_i^d(0) + 0.1$, $\rho_\infty^d = 0.01$, $\rho_i^{v,0} = 1.5\, e_i^v(0) + 0.1$, $\rho_\infty^v = 0.1$
