Article

Leader–Follower Synchronization of Uncertain Euler–Lagrange Dynamics with Input Constraints

by
Muhammad Ridho Rosa
School of Electrical Engineering, Telkom University, Bandung 40257, Indonesia
Aerospace 2020, 7(9), 127; https://doi.org/10.3390/aerospace7090127
Submission received: 27 July 2020 / Revised: 23 August 2020 / Accepted: 27 August 2020 / Published: 30 August 2020
(This article belongs to the Special Issue Small Satellite Formation Flying Motion Control and Attitude Dynamics)

Abstract

This paper addresses the problem of leader–follower synchronization of uncertain Euler–Lagrange systems under input constraints. The problem is solved in a distributed model reference adaptive control framework that includes a positive μ-modification to address input constraints. The proposed design has the distinguishing features of updating the gains to synchronize the uncertain systems and of providing stable adaptation in the presence of input saturation. By using a matching condition assumption, a distributed inverse dynamics architecture is adopted to guarantee convergence to common dynamics. The design is studied analytically, and its performance is validated in simulation using spacecraft dynamics.

1. Introduction

The main task of synchronization is to achieve coherent collective behavior in a network of agents. This objective can be pursued using a centralized or a distributed approach. In centralized schemes, agents have access to global information, while in distributed schemes, only local information from a few neighboring agents is available [1,2,3]. The synchronization problem is sometimes referred to as the consensus problem when the behavior to be achieved is a constant value [4,5]. The distributed approach offers advantages owing to its applicability in the presence of communication constraints [6,7,8].
A wide range of applications requires distributed synchronization, such as spacecraft formation flying [9], distributed sensor networks [10], cooperative adaptive cruise control [11], power grid synchronization [12], synchronization of multiple unmanned aerial, ground, and underwater robots [13,14,15], and many more. Distributed synchronization plays an important role in cyber-physical systems, which are physically distributed by nature and contain uncertainties. Uncertainties caused by attacks on the network can be handled within an adaptive control framework [16].
The synchronization of homogeneous agents can be achieved by introducing fixed coupling gains [17]. In the synchronization of heterogeneous agents, adaptive coupling gains become necessary when uncertainty is a major concern. Under a matching condition, these adaptive coupling gains can be designed to synchronize the agents using the model reference adaptive control approach [18]. The synchronization of linear heterogeneous uncertain agents via distributed model reference adaptive control has been proposed, leading to asymptotic synchronization without any sliding mode [19]. The distributed model reference adaptive control framework allows the states/outputs and the inputs to be shared between neighbors [20,21]. An extension of the framework to the nonlinear domain has been proposed to synchronize uncertain heterogeneous Euler–Lagrange (EL) agents over directed acyclic networks [22]. In the presence of communication constraints such as time-varying delays and packet dropouts, distributed synchronization algorithms have been designed to synchronize EL agents [23,24,25]. In the presence of cyclic networks, it was shown that distributed model reference adaptive control can still work with suitable modifications [26].
Model reference adaptive control design in the presence of input saturation has attracted many researchers [27,28], since saturation may cause instability. This issue has been addressed by introducing the positive μ-modification, which extends the capability of model reference adaptive control to handle input saturation [29]. In distributed schemes, adaptive mechanisms properly designed against saturation are missing, with the recent exception of [30], which discusses a saturation mechanism tailored to cooperative vehicles.
In this work, we focus on a class of heterogeneous uncertain EL dynamics with input saturation arising from the actuator model. We show that distributed model reference adaptive control with positive μ-modification gives a positive answer to the synchronization of the entire network in the presence of input saturation.
The article is organized as follows: Section 2 introduces preliminary results to support the proposed methodology. Section 3 presents the proposed method for leader-to-reference-model synchronization and follower-to-leader synchronization in the presence of input saturation. Section 4 presents a test case based on the attitude control of a spacecraft. Section 5 presents simulations showing the effectiveness of the proposed solution. Finally, Section 6 provides conclusions and proposes directions for further research.
Notation: The notation in this article is standard. The notation $P = P^{\top} > 0$ indicates a symmetric positive definite matrix. The identity matrix of compatible dimensions is denoted by $\mathbb{1}$, and $\mathrm{diag}\{\cdot\}$ represents a block-diagonal matrix. The set $\mathbb{R}$ represents the set of real numbers, and $x \in \mathbb{R}^n$ represents a vector signal.

2. Preliminary Results

2.1. Euler–Lagrange Systems

The dynamics of the agents are described by Euler–Lagrange (EL) equations defined as
$$M_i(q_i)\ddot{q}_i + C_i(q_i,\dot{q}_i)\dot{q}_i + G_i(q_i) = \tau_i, \qquad i \in \{1,\dots,N\}$$
where $q_i, \dot{q}_i \in \mathbb{R}^n$ are the vector of generalized coordinates and the vector of generalized velocities, respectively; $M_i(q_i)$ is the mass/inertia matrix, $C_i(q_i,\dot{q}_i)$ is the centrifugal/Coriolis matrix, $G_i(q_i)$ is the vector of potential forces, and $\tau_i$ represents the generalized control input. For each EL system defined in (1), the following assumptions are adopted [31]:
Assumption 1.
The system has an independent control input for each degree of freedom.
Assumption 2.
The mass/inertia matrix $M_i(q_i)$ is symmetric positive definite, and both $M_i(q_i)$ and $M_i(q_i)^{-1}$ are uniformly bounded as functions of $q_i \in \mathbb{R}^n$.
Assumption 3.
All parameters, such as link masses, moments of inertia, etc., appear in linear-in-the-parameters form and are constant.
Remark 1.
Assumption 1 implies that the system is fully actuated. Assumptions 2 and 3 hold for most EL systems, such as robotic manipulators and mobile robots. In this work, we focus on the synchronization of fully actuated EL systems, as done in most of the EL synchronization literature [32,33,34,35,36,37,38,39]. For under-actuated systems, a control allocator should be used to transform the control input into the actual input of the system [40].

2.2. Inverse Dynamics Based Control

The objective of inverse dynamics based control is to cancel all the nonlinearities in the system and to introduce a simple PD control so that the closed-loop system is linear. Consider the EL dynamics (1) and the inverse dynamics controller
$$\tau_i = M_i(q_i)a_i + C_i(q_i,\dot{q}_i)\dot{q}_i + G_i(q_i)$$
where a i is defined as
$$a_i = \ddot{q}_d - K_v\dot{e}_i - K_p e_i$$
with $e_i = q_i - q_d$, $\dot{e}_i = \dot{q}_i - \dot{q}_d$, and $K_p$, $K_v$ being the proportional and derivative gains of the PD controller; $q_d$, $\dot{q}_d$, and $\ddot{q}_d$ are the desired trajectories, velocities, and accelerations defined by the user. By substituting (2) into (1), it can be verified that the system becomes linear:
$$M_i(q_i)\left(\ddot{q}_i - \ddot{q}_d + K_v\dot{e}_i + K_p e_i\right) = 0 \;\;\Rightarrow\;\; \ddot{e}_i + K_v\dot{e}_i + K_p e_i = 0,$$
where $\ddot{e}_i = \ddot{q}_i - \ddot{q}_d$. This leads to the second-order error dynamics
$$\begin{bmatrix}\dot{e}_i\\ \ddot{e}_i\end{bmatrix} = \begin{bmatrix}0 & \mathbb{1}\\ -K_p & -K_v\end{bmatrix}\begin{bmatrix}e_i\\ \dot{e}_i\end{bmatrix}$$
or equivalently,
$$\begin{bmatrix}\dot{q}_i\\ \ddot{q}_i\end{bmatrix} = \begin{bmatrix}0 & \mathbb{1}\\ -K_p & -K_v\end{bmatrix}\begin{bmatrix}q_i\\ \dot{q}_i\end{bmatrix} + \begin{bmatrix}0\\ \mathbb{1}\end{bmatrix}\left(\ddot{q}_d + K_v\dot{q}_d + K_p q_d\right)$$
where $\mathbb{1}$ is the identity matrix with the dimension of the generalized coordinate vector. The second-order closed-loop system (6) must be Hurwitz, which can be achieved by selecting appropriate $K_p$ and $K_v$. Note that the control law (2) requires the dynamics of the EL agent to be known. In practice, due to parametric uncertainty, the dynamics are unknown, which may lead to an imperfect inversion in inverse dynamics based control. Moreover, the control law (2) requires agent $i$ to know the desired trajectories $q_d$, $\dot{q}_d$, and $\ddot{q}_d$; in a multi-agent system, these may not be available to all agents. Hence, one cannot implement the controller (2) in a distributed manner and in the presence of uncertainty.
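As an illustration of the inverse dynamics PD law (2)–(3), the following is a minimal sketch for a single agent; the function names (M_fn, C_fn, G_fn) and the array shapes are hypothetical placeholders, assuming the model of the agent is known.

```python
# Minimal sketch (not from the paper) of the inverse dynamics PD law (2)-(3).
# M_fn, C_fn, G_fn are placeholder callables returning the mass matrix, the
# Coriolis matrix, and the potential-force vector of a known model.
import numpy as np

def inverse_dynamics_control(q, dq, q_d, dq_d, ddq_d, M_fn, C_fn, G_fn, Kp, Kv):
    """Compute tau = M(q) a + C(q, dq) dq + G(q) with a = ddq_d - Kv*de - Kp*e."""
    e, de = q - q_d, dq - dq_d           # tracking errors
    a = ddq_d - Kv @ de - Kp @ e         # PD-shaped desired acceleration
    return M_fn(q) @ a + C_fn(q, dq) @ dq + G_fn(q)
```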

2.3. Communication Graph

In this work, we consider a network of EL agents connected via a communication graph that describes the allowed information flow. In the communication graph, agent 0 (the reference) defines the trajectory of the network. In our case, this node sends information (states and reference signals) to its successor node and, at the same time, receives information (control input) from the successor node. To achieve synchronization, only the control input information is sent back to the predecessor node. In the case where state information is also sent back to the predecessor node, the distributed model reference adaptive framework can still work with appropriate modifications using parameter projection [21].
The communication graph describing the information flow is defined by the triple $G = (V, E, T)$, where $V = \{1,\dots,N\}$ is a finite nonempty set of nodes, $E \subseteq V \times V$ is a set of pairs of nodes, called edges, and $T \subseteq V$ is the set of target nodes, which receive information from agent 0. Figure 1 provides a simple communication graph where $V = \{1,2\}$, $E = \{(1,2),(2,1)\}$, and $T = \{1\}$. Note that the target nodes are referred to as leaders in this work because they have access to agent 0, the reference node. In Figure 1, the purpose of agent 1, the leader, is to synchronize its states to the states of agent 0, the reference. Simultaneously, the purpose of agent 2 is to synchronize its states to the states of agent 1, the leader. The control input information $\tau_i$ is sent back to the predecessor agent to handle the input saturation.
Given a network $G$ of heterogeneous uncertain EL agents (1), depicted in Figure 1, we seek a distributed control strategy $\tau_i$ that uses only local measurements (states and control inputs) of the neighbors, without any global knowledge of the EL systems, and that leads to synchronization of the network for every agent $i$ in the presence of input saturation.
In Section 3, we design an adaptive distributed version of the inverse dynamics based control, which can be implemented in the presence of uncertainty and input saturation using only local measurements of the neighbors' inputs and states.
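As a small illustration of the triple $G = (V, E, T)$ in Figure 1, a minimal sketch of the graph data is given below; the variable names are hypothetical.

```python
# Minimal sketch of the communication graph of Figure 1 as plain Python sets:
# node 0 is the reference, node 1 the leader (target node), node 2 the follower.
V = {1, 2}                  # agents
E = {(1, 2), (2, 1)}        # (1, 2): states/reference forward; (2, 1): control input back
T = {1}                     # target nodes receiving information from agent 0
predecessor = {1: 0, 2: 1}  # hierarchically superior neighbor of each agent
```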

3. Adaptive Synchronization with Input Constraint

3.1. System Dynamics

In view of our main objective, we define the modified reference dynamics as
$$\begin{bmatrix}\dot{q}_0\\ \ddot{q}_0\end{bmatrix} = \underbrace{\begin{bmatrix}0 & \mathbb{1}\\ -K_p & -K_v\end{bmatrix}}_{A_m}\underbrace{\begin{bmatrix}q_0\\ \dot{q}_0\end{bmatrix}}_{x_m} + \underbrace{\begin{bmatrix}0\\ \mathbb{1}\end{bmatrix}}_{B_m}\underbrace{\left(r + K_{\tau 1}^{*}\Delta\tau_{1ad}^{*}\right)}_{\tau_0}$$
where $A_m$ is Hurwitz, $q_0, \dot{q}_0 \in \mathbb{R}^n$ are the states of the reference model, $r = \ddot{q}_d + K_v\dot{q}_d + K_p q_d$ is a user-specified reference input, and $K_{\tau 1}^{*}$ is an ideal gain that modifies the reference control input according to the control deficiency of the leader, $\Delta\tau_{1ad}$. Then, consider the leader dynamics, in the form of (2), satisfying
$$\begin{bmatrix}\dot{q}_1\\ \ddot{q}_1\end{bmatrix} = \underbrace{\begin{bmatrix}0 & \mathbb{1}\\ 0 & -M_1^{-1}C_1\end{bmatrix}}_{A_1}\underbrace{\begin{bmatrix}q_1\\ \dot{q}_1\end{bmatrix}}_{x_1} + \begin{bmatrix}0\\ -M_1^{-1}G_1\end{bmatrix} + \underbrace{\begin{bmatrix}0\\ M_1^{-1}\end{bmatrix}}_{B_1}\underbrace{\left(\tau_{1c} + K_{\tau 2}^{*}\Delta\tau_{2ad}^{*}\right)}_{\tau_1}$$
where $A_1$ and $B_1$ are unknown matrices, $q_1, \dot{q}_1$ are the states of the leader, and $K_{\tau 2}^{*}$ is an ideal gain that modifies the leader control input according to the control deficiency of the follower, $\Delta\tau_{2ad}$. Note that the leader has access to the desired trajectories through $r = \ddot{q}_d + K_v\dot{q}_d + K_p q_d$. Next, we define the dynamics of a follower agent, which has no access to the desired trajectories $q_d$, $\dot{q}_d$, and $\ddot{q}_d$ but can still synchronize to the reference model dynamics (7) by exploiting the signals of neighboring agents for adaptation. Referring to Figure 1 and without loss of generality, the follower dynamics are denoted with subscript 2, while the dynamics of the neighboring (hierarchically superior) agent are denoted with subscript 1. The dynamics of any follower in the form (2) can be written in the state-space form
$$\begin{bmatrix}\dot{q}_2\\ \ddot{q}_2\end{bmatrix} = \underbrace{\begin{bmatrix}0 & \mathbb{1}\\ 0 & -M_2^{-1}C_2\end{bmatrix}}_{A_2}\underbrace{\begin{bmatrix}q_2\\ \dot{q}_2\end{bmatrix}}_{x_2} + \begin{bmatrix}0\\ -M_2^{-1}G_2\end{bmatrix} + \underbrace{\begin{bmatrix}0\\ M_2^{-1}\end{bmatrix}}_{B_2}\tau_2$$
where $A_2$ and $B_2$ are unknown matrices and $q_2, \dot{q}_2$ are the states of the follower. Note that, since this follower has no successor agent, its dynamics contain no control deficiency term from a successor. In practical cases, the actuator limits the control inputs $\tau_1, \tau_2$, which leads to control input saturation. To support our main objective, let us define the control input for the reference and the agents using the following actuator model
$$\tau_i = \tau_i^{\max}\,\mathrm{sat}\!\left(\frac{\tau_{ic}}{\tau_i^{\max}}\right) = \begin{cases}\tau_{ic}, & |\tau_{ic}| \le \tau_i^{\max},\\ \tau_i^{\max}\,\mathrm{sgn}(\tau_{ic}), & |\tau_{ic}| > \tau_i^{\max},\end{cases}$$
where $i = 0,1,2$, $\tau_{ic}(t)$ is the commanded control law of agent $i$, and $\tau_i^{\max} > 0$ is the amplitude saturation of the actuator of agent $i$. Due to the actuator model in (10), we define the control deficiency as $\Delta\tau_i = \tau_i - \tau_{ic}$. In the following sections, we design control laws that account for this control deficiency.
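A minimal sketch of the actuator model (10) and of the resulting control deficiency is given below (element-wise for vector inputs); the names are illustrative only.

```python
import numpy as np

def actuator(tau_c, tau_max):
    """Amplitude saturation (10): return the applied input and the deficiency tau - tau_c."""
    tau = np.clip(tau_c, -tau_max, tau_max)
    return tau, tau - tau_c
```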
Remark 2.
When considering input saturation, the dynamics of an agent should be modified in the presence of a successor agent. In our case, only the follower shown in Figure 1 has no successor agent. The modified dynamics are associated with the adaptive control deficiency of the successor.

3.2. Adaptive Synchronization of the Leader to the Reference Model

The main focus of this section is to find the control law $\tau_1(t)$ of the leader that synchronizes its dynamics to the reference. The proposed control law provides stable adaptation in the presence of the actuator saturation defined in (10). The extension to multiple leaders is straightforward. Let us propose the ideal commanded control law $\tau_{1c}^{*}$ that matches the leader dynamics to the reference dynamics
$$\tau_{1c}^{*} = \underbrace{\begin{bmatrix}\bar{F}_1^{*} & \bar{\bar{F}}_1^{*}\end{bmatrix}}_{F_1^{*}}\begin{bmatrix}q_1\\ \dot{q}_1\end{bmatrix} + D_1^{*} + L_1^{*}r + \mu\,\Delta\tau_{1c}^{*} = \tau_{1ad}^{*} + \mu\,\Delta\tau_{1c}^{*}$$
where
$$\Delta\tau_{1c}^{*} = \tau_1^{\max_\delta}\,\mathrm{sat}\!\left(\frac{\tau_{1c}^{*}}{\tau_1^{\max_\delta}}\right) - \tau_{1c}^{*}$$
where $\bar{F}_1^{*}$, $\bar{\bar{F}}_1^{*}$, $D_1^{*}$, $L_1^{*}$ are the ideal gains. The term $\tau_{1ad}^{*}$ is the ideal nonlinear version of the model reference adaptive control law, $\mu$ is a design constant, and $\Delta\tau_{1c}$ denotes the control deficiency due to the virtual bound $\tau_1^{\max_\delta}$. The virtual bound $\tau_1^{\max_\delta}$ satisfies
$$\tau_1^{\max_\delta} = \tau_1^{\max} - \delta, \qquad 0 < \delta < \tau_1^{\max}.$$
By adding and subtracting $B_1\tau_{1c}$ in (8) and then substituting $\tau_{1c}$ from (11), we obtain the following closed-loop leader dynamics
$$\begin{bmatrix}\dot{q}_1\\ \ddot{q}_1\end{bmatrix} = \begin{bmatrix}0 & \mathbb{1}\\ M_1^{-1}\bar{F}_1^{*} & -M_1^{-1}C_1 + M_1^{-1}\bar{\bar{F}}_1^{*}\end{bmatrix}\begin{bmatrix}q_1\\ \dot{q}_1\end{bmatrix} + \begin{bmatrix}0\\ -M_1^{-1}G_1 + M_1^{-1}D_1^{*}\end{bmatrix} + \begin{bmatrix}0\\ M_1^{-1}\end{bmatrix}\left(L_1^{*}r + \Delta\tau_{1ad}^{*} + K_{\tau 2}^{*}\Delta\tau_{2ad}^{*}\right)$$
where the adaptive control deficiencies of the leader and the follower satisfy $\Delta\tau_{1ad}^{*} = \tau_1^{*} - \tau_{1ad}^{*}$ and $\Delta\tau_{2ad}^{*} = \tau_2^{*} - \tau_{2ad}^{*}$, respectively. Note that the purpose of the gain $K_{\tau 2}^{*}$ is to handle the input saturation of the follower, to be defined in the next section. The following proposition shows how to find the matching gains.
Proposition 1.
There exists an ideal commanded control law in the form (11) that matches the leader dynamics (8) to the reference model dynamics (7) and also provides stable adaptation under input constraints, where the ideal gains $\bar{F}_1^{*}$, $\bar{\bar{F}}_1^{*}$, $D_1^{*}$, $L_1^{*}$, and $K_{\tau 1}^{*}$ satisfy
$$\bar{F}_1^{*} = -M_1K_p, \qquad \bar{\bar{F}}_1^{*} = -M_1K_v + C_1, \qquad D_1^{*} = G_1, \qquad L_1^{*} = M_1, \qquad K_{\tau 1}^{*} = M_1^{-1}.$$
We see that Proposition 1 is verified for the ideal commanded control law
$$\tau_{1c}^{*} = -M_1K_pq_1 - M_1K_v\dot{q}_1 + C_1\dot{q}_1 + G_1 + M_1r + \mu\,\Delta\tau_{1c}^{*}.$$
Since the system matrices in (8) are unknown, the controller (11) cannot be implemented, and the synchronization task has to be achieved adaptively. Inspired by the ideal controller (11), we propose the controller
$$\tau_{1c} = \underbrace{\Theta_{M1}\phi_{M1}}_{\hat{M}_1}\left(-K_pq_1 - K_v\dot{q}_1 + r\right) + \underbrace{\Theta_{C1}\phi_{C1}}_{\hat{C}_1}\dot{q}_1 + \underbrace{\Theta_{G1}\phi_{G1}}_{\hat{G}_1} + \mu\,\Delta\tau_{1c} = \tau_{1ad} + \mu\,\Delta\tau_{1c}$$
where the estimates $\hat{M}_1$, $\hat{C}_1$, $\hat{G}_1$ of the ideal matrices have been split in a linear-in-the-parameters form. Clearly, in view of Assumption 3, appropriate linear-in-the-parameters forms $M_1 = \Theta_{M1}^{*}\phi_{M1}$, $C_1 = \Theta_{C1}^{*}\phi_{C1}$, and $G_1 = \Theta_{G1}^{*}\phi_{G1}$ can always be found. For $\mu \geq 0$, combining the commanded control law (17) with the control deficiency (12) gives the commanded control law as a convex combination of $\tau_1^{\max_\delta}\mathrm{sat}\!\left(\tau_{1ad}(t)/\tau_1^{\max_\delta}\right)$ and $\tau_{1ad}$
$$\tau_{1c} = \frac{1}{1+\mu}\left(\tau_{1ad} + \mu\,\tau_1^{\max_\delta}\,\mathrm{sat}\!\left(\frac{\tau_{1ad}(t)}{\tau_1^{\max_\delta}}\right)\right) = \begin{cases}\tau_{1ad}, & |\tau_{1ad}| \le \tau_1^{\max_\delta},\\[4pt] \dfrac{1}{1+\mu}\left(\tau_{1ad} + \mu\,\tau_1^{\max_\delta}\right), & \tau_{1ad} > \tau_1^{\max_\delta},\\[4pt] \dfrac{1}{1+\mu}\left(\tau_{1ad} - \mu\,\tau_1^{\max_\delta}\right), & \tau_{1ad} < -\tau_1^{\max_\delta}.\end{cases}$$
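To make the convex-combination form (18) concrete, the following is a minimal sketch of the positive μ-modification applied to an already computed adaptive law; the scalar treatment and the names are assumptions made for illustration.

```python
import numpy as np

def mu_modified_command(tau_ad, tau_max, delta, mu):
    """Commanded input (18): convex combination of tau_ad and its saturation at the virtual bound."""
    tau_bound = tau_max - delta                        # virtual bound tau_max - delta, as in (13)
    tau_sat = np.clip(tau_ad, -tau_bound, tau_bound)   # saturated adaptive law
    return (tau_ad + mu * tau_sat) / (1.0 + mu)
```

For large μ the commanded input is pulled toward the virtual bound, which is consistent with the observation in Section 5 that μ = 100 keeps the inputs within the actuator limits.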
Let us define the error $e_1 = x_1 - x_m$, whose dynamics are
$$\begin{aligned}\dot{e}_1 &= A_me_1 + B_1\left(\tilde{\bar{F}}_1q_1 + \tilde{\bar{\bar{F}}}_1\dot{q}_1 + \tilde{D}_1 + \tilde{L}_1r\right) - B_m\tilde{K}_{\tau 1}\Delta\tau_{1ad}\\ &= A_me_1 + B_1\left(\tilde{\Theta}_{M1}\phi_{M1}(-K_pq_1 - K_v\dot{q}_1 + r) + \tilde{\Theta}_{C1}\phi_{C1}\dot{q}_1 + \tilde{\Theta}_{G1}\phi_{G1}\right) - B_m\tilde{K}_{\tau 1}\Delta\tau_{1ad},\end{aligned}$$
where $\tilde{\bar{F}}_1 = \bar{F}_1 - \bar{F}_1^{*}$, $\tilde{\bar{\bar{F}}}_1 = \bar{\bar{F}}_1 - \bar{\bar{F}}_1^{*}$, $\tilde{L}_1 = L_1 - L_1^{*}$, $\tilde{\Theta}_{M1} = \Theta_{M1} - \Theta_{M1}^{*}$, $\tilde{\Theta}_{C1} = \Theta_{C1} - \Theta_{C1}^{*}$, $\tilde{\Theta}_{G1} = \Theta_{G1} - \Theta_{G1}^{*}$, and $\tilde{K}_{\tau 1} = K_{\tau 1} - K_{\tau 1}^{*}$.
Theorem 1.
Consider the reference model (7), the unknown leader dynamics (8), and controller (18). Under the assumption that a matrix S 1 exists such that
$$L_1^{*}S_1 = S_1L_1^{*} > 0,$$
then, the adaptive laws
$$\begin{aligned}\dot{\Theta}_{M1} &= -S_1B_m^{\top}Pe_1\left(-K_pq_1 - K_v\dot{q}_1 + r\right)^{\top}\phi_{M1}^{\top}, & \dot{\Theta}_{C1} &= -S_1B_m^{\top}Pe_1\,\dot{q}_1^{\top}\phi_{C1}^{\top},\\ \dot{\Theta}_{G1} &= -S_1B_m^{\top}Pe_1\,\phi_{G1}^{\top}, & \dot{K}_{\tau 1} &= \Lambda_1B_m^{\top}Pe_1\,\Delta\tau_{1ad}^{\top},\end{aligned}$$
where $\Lambda_1$ is any positive diagonal matrix to be defined by the user, and $P = P^{\top} > 0$ satisfies
$$PA_m + A_m^{\top}P = -Q, \qquad Q > 0,$$
guarantee synchronization of the leader dynamics (8) to the reference model (7), i.e., $e_1 \to 0$, in the presence of input saturation.
Proof. 
To show the asymptotic convergence of the synchronization error between the leader and the reference model analytically, and to establish stable adaptation in the presence of input constraints, let us introduce the following Lyapunov function
$$V_1(e_1,\tilde{\Theta}_{M1},\tilde{\Theta}_{C1},\tilde{\Theta}_{G1},\tilde{K}_{\tau 1}) = e_1^{\top}Pe_1 + \mathrm{tr}\!\left(\tilde{\Theta}_{M1}^{\top}S_1^{-1}L_1^{*-1}\tilde{\Theta}_{M1}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{C1}^{\top}S_1^{-1}L_1^{*-1}\tilde{\Theta}_{C1}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{G1}^{\top}S_1^{-1}L_1^{*-1}\tilde{\Theta}_{G1}\right) + \mathrm{tr}\!\left(\tilde{K}_{\tau 1}^{\top}\Lambda_1^{-1}\tilde{K}_{\tau 1}\right).$$
Then it is possible to verify
$$\begin{aligned}\dot{V}_1(e_1,\tilde{\Theta}_{M1},\tilde{\Theta}_{C1},\tilde{\Theta}_{G1},\tilde{K}_{\tau 1}) &= e_1^{\top}\left(PA_m + A_m^{\top}P\right)e_1 + 2e_1^{\top}PB_1\left(\tilde{\Theta}_{M1}\phi_{M1}(-K_pq_1 - K_v\dot{q}_1 + r) + \tilde{\Theta}_{C1}\phi_{C1}\dot{q}_1 + \tilde{\Theta}_{G1}\phi_{G1}\right) - 2e_1^{\top}PB_m\tilde{K}_{\tau 1}\Delta\tau_{1ad}\\ &\quad + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M1}^{\top}S_1^{-1}L_1^{*-1}\dot{\tilde{\Theta}}_{M1}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{C1}^{\top}S_1^{-1}L_1^{*-1}\dot{\tilde{\Theta}}_{C1}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{G1}^{\top}S_1^{-1}L_1^{*-1}\dot{\tilde{\Theta}}_{G1}\right) + 2\,\mathrm{tr}\!\left(\tilde{K}_{\tau 1}^{\top}\Lambda_1^{-1}\dot{\tilde{K}}_{\tau 1}\right)\\ &= -e_1^{\top}Qe_1 + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M1}^{\top}L_1^{*-1}\left(B_m^{\top}Pe_1(-K_pq_1 - K_v\dot{q}_1 + r)^{\top}\phi_{M1}^{\top} + S_1^{-1}\dot{\tilde{\Theta}}_{M1}\right)\right)\\ &\quad + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{C1}^{\top}L_1^{*-1}\left(B_m^{\top}Pe_1\dot{q}_1^{\top}\phi_{C1}^{\top} + S_1^{-1}\dot{\tilde{\Theta}}_{C1}\right)\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{G1}^{\top}L_1^{*-1}\left(B_m^{\top}Pe_1\phi_{G1}^{\top} + S_1^{-1}\dot{\tilde{\Theta}}_{G1}\right)\right)\\ &\quad + 2\,\mathrm{tr}\!\left(\tilde{K}_{\tau 1}^{\top}\left(-B_m^{\top}Pe_1\Delta\tau_{1ad}^{\top} + \Lambda_1^{-1}\dot{\tilde{K}}_{\tau 1}\right)\right)\\ &= -e_1^{\top}Qe_1.\end{aligned}$$
From (24), we obtain that $V_1$ has a finite limit, so $e_1, \tilde{\Theta}_{M1}, \tilde{\Theta}_{C1}, \tilde{\Theta}_{G1}, \tilde{K}_{\tau 1} \in \mathcal{L}_\infty$. However, asymptotic convergence of the tracking error to zero cannot yet be concluded, because of the modification of the reference dynamics. Therefore, we need to show that at least one of the states, $x_1$ or $x_m$, stays bounded under the modified reference dynamics. Note that the matrix $A_m$ in (7) is Hurwitz. Then, let us introduce the following Lyapunov function
$$V_m(x_m) = x_m^{\top}Px_m,$$
where $P = P^{\top} > 0$ is such that (22) holds. When $\Delta\tau_1 \neq 0$, the commanded control law of the leader exceeds the maximum/minimum control input allowed, $|\tau_{1c}| > \tau_1^{\max}$. This may also lead to saturation of the reference input, $|\tau_0| > \tau_0^{\max}$. In the saturated case, $\tau_i = \tau_i^{\max}\mathrm{sgn}(\tau_{ic})$, and the ideal reference dynamics in (7) become
$$\dot{x}_m = A_mx_m + B_m\tau_0^{\max}\,\mathrm{sgn}(\tau_0).$$
To establish boundedness of $x_m$, we evaluate
$$\dot{V}_m(x_m) = -x_m^{\top}Qx_m + 2x_m^{\top}PB_m\tau_0^{\max}\,\mathrm{sgn}(\tau_0) \le -\lambda_{\min}(Q)\|x_m\|^2 + 2\tau_0^{\max}\|x_m\|\,\|PB_m\|,$$
where $\lambda_{\min}(Q)$ is the minimum eigenvalue of $Q$. We obtain $\dot{V}_m(x_m) < 0$ if $\|x_m\| > 2\tau_0^{\max}\|PB_m\|/\lambda_{\min}(Q)$, so $x_m \in \mathcal{L}_\infty$. Because $e_1 \in \mathcal{L}_\infty$ and $x_m \in \mathcal{L}_\infty$, we have $x_1 \in \mathcal{L}_\infty$, and consequently $\tau_{1c} \in \mathcal{L}_\infty$. Therefore, all signals in the closed-loop system are bounded. This concludes the proof of the boundedness of all closed-loop signals and of the convergence $e_1 \to 0$ as $t \to \infty$. □
Remark 3.
In the case of multiple leaders, one can implement, for each leader, a control law of the form (18) to synchronize the leader dynamics to the reference model dynamics (7) in the presence of input saturation.
Remark 4.
In most EL systems of practical interest, the matrix $M_1$ is symmetric. From (15), it follows that the matrix $L_1^{*}$ is also symmetric. Consequently, condition (20) can be satisfied by simply selecting $S_1 = \gamma\mathbb{1}$ for any positive scalar $\gamma$.
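To summarize Section 3.2 computationally, the following is a minimal sketch of one forward-Euler update of the adaptive laws (21) together with the adaptive part of (17); the array shapes, the sign conventions, and the explicit integration scheme are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def leader_adaptation_step(Theta_M1, Theta_C1, Theta_G1, K_tau1,
                           phi_M1, phi_C1, phi_G1,
                           e1, q1, dq1, r, d_tau1_ad,
                           S1, Lam1, P, Bm, Kp, Kv, dt):
    z = -Kp @ q1 - Kv @ dq1 + r                 # signal multiplying the M_1 estimate in (17)
    w = Bm.T @ P @ e1                           # common factor B_m' P e_1 in the laws (21)
    # forward-Euler integration of the adaptive laws (21)
    Theta_M1 = Theta_M1 - dt * S1 @ np.outer(w, z) @ phi_M1.T
    Theta_C1 = Theta_C1 - dt * S1 @ np.outer(w, dq1) @ phi_C1.T
    Theta_G1 = Theta_G1 - dt * S1 @ np.outer(w, phi_G1)
    K_tau1   = K_tau1   + dt * Lam1 @ np.outer(w, d_tau1_ad)
    # adaptive part of the commanded control law (17)
    tau_1ad = Theta_M1 @ phi_M1 @ z + Theta_C1 @ phi_C1 @ dq1 + Theta_G1 @ phi_G1
    return Theta_M1, Theta_C1, Theta_G1, K_tau1, tau_1ad
```

The returned adaptive law would then be passed through the μ-modification of (18) (see the sketch after (18)) before the actuator model (10).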

3.3. Adaptive Synchronization of the Follower to the Leader

The main focus of this section is to find the control law $\tau_2(t)$ of the follower that synchronizes its dynamics to the leader. The proposed control law provides stable adaptation in the presence of control input saturation. Note that the follower has no access to the desired trajectories. Let us propose the ideal commanded control law $\tau_{2c}^{*}$ that matches the follower dynamics (9) to the leader dynamics (8)
$$\tau_{2c}^{*} = \underbrace{\begin{bmatrix}\bar{F}_{21}^{*} & \bar{\bar{F}}_{21}^{*}\end{bmatrix}}_{F_{21}^{*}}\begin{bmatrix}q_1\\ \dot{q}_1\end{bmatrix} + \underbrace{\begin{bmatrix}\bar{F}_{2}^{*} & \bar{\bar{F}}_{2}^{*}\end{bmatrix}}_{F_{2}^{*}}\underbrace{\begin{bmatrix}q_2 - q_1\\ \dot{q}_2 - \dot{q}_1\end{bmatrix}}_{e_{21}} + D_2^{*} + L_{21}^{*}\tau_1 + \mu\,\Delta\tau_{2c}^{*} = \tau_{2ad}^{*} + \mu\,\Delta\tau_{2c}^{*}$$
where
$$\Delta\tau_{2c}^{*} = \tau_2^{\max_\delta}\,\mathrm{sat}\!\left(\frac{\tau_{2c}^{*}}{\tau_2^{\max_\delta}}\right) - \tau_{2c}^{*}$$
where $\bar{F}_{21}^{*}$, $\bar{\bar{F}}_{21}^{*}$, $D_2^{*}$, $L_{21}^{*}$ are the ideal coupling gains. The term $\tau_{2ad}^{*}$ is the ideal nonlinear version of the model reference adaptive control law, $\mu$ is the design constant, and $\Delta\tau_{2c}$ denotes the control deficiency due to the virtual bound $\tau_2^{\max_\delta}$, which satisfies
$$\tau_2^{\max_\delta} = \tau_2^{\max} - \delta, \qquad 0 < \delta < \tau_2^{\max}.$$
By adding and subtracting $B_2\tau_{2c}$ in (9) and then substituting $\tau_{2c}$ from (28), we obtain the following closed-loop follower dynamics
$$\begin{bmatrix}\dot{q}_2\\ \ddot{q}_2\end{bmatrix} = \begin{bmatrix}0 & \mathbb{1}\\ M_2^{-1}\bar{F}_2^{*} & M_2^{-1}\left(\bar{\bar{F}}_2^{*} - C_2\right)\end{bmatrix}\begin{bmatrix}q_2\\ \dot{q}_2\end{bmatrix} + \begin{bmatrix}0 & 0\\ M_2^{-1}\left(\bar{F}_{21}^{*} - \bar{F}_2^{*}\right) & M_2^{-1}\left(\bar{\bar{F}}_{21}^{*} - \bar{\bar{F}}_2^{*}\right)\end{bmatrix}\begin{bmatrix}q_1\\ \dot{q}_1\end{bmatrix} + \begin{bmatrix}0\\ M_2^{-1}\left(-G_2 + D_2^{*}\right)\end{bmatrix} + \begin{bmatrix}0\\ M_2^{-1}\end{bmatrix}\left(L_{21}^{*}\tau_1 + \Delta\tau_{2ad}^{*}\right).$$
The following proposition explains how to find the matching control gains.
Proposition 2.
There exists an ideal control law in the form (28) that matches the follower dynamics (9) to the leader dynamics (8) and also provides stable adaptation under input constraints, whose gains $\bar{F}_2^{*}$, $\bar{\bar{F}}_2^{*}$, $\bar{F}_{21}^{*}$, $\bar{\bar{F}}_{21}^{*}$, $L_{21}^{*}$, $D_2^{*}$, and $K_{\tau 2}^{*}$ are
$$\begin{aligned}&\bar{F}_2^{*} = -M_2K_p, \qquad \bar{F}_{21}^{*} = 0, \qquad D_2^{*} = G_2, \qquad K_{\tau 2}^{*} = M_2^{-1}L_{21}^{*}M_1 = \mathbb{1},\\ &\bar{\bar{F}}_2^{*} = -M_2K_v + C_2, \qquad \bar{\bar{F}}_{21}^{*} = C_2 - M_2M_1^{-1}C_1, \qquad L_{21}^{*} = M_2M_1^{-1}.\end{aligned}$$
It is easy to see that Proposition 2 is verified for the ideal control law
$$\begin{aligned}\tau_{2c}^{*} &= C_2\dot{q}_1 - M_2M_1^{-1}C_1\dot{q}_1 - M_2K_p\bar{e}_{21} - M_2K_v\bar{\bar{e}}_{21} + C_2\bar{\bar{e}}_{21} + G_2 + M_2M_1^{-1}\tau_1 + \mu\,\Delta\tau_{2c}^{*}\\ &= C_2\dot{q}_2 + M_2M_1^{-1}\tau_1 - M_2M_1^{-1}C_1\dot{q}_1 - M_2\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right) + G_2 + \mu\,\Delta\tau_{2c}^{*}\end{aligned}$$
where $\bar{e}_{21} = q_2 - q_1$ and $\bar{\bar{e}}_{21} = \dot{q}_2 - \dot{q}_1$.
Remark 5.
Proposition 2 gives matching conditions among neighboring agents. Equation (33) implies the existence of coupling gains $\bar{F}_{21}^{*}$, $\bar{\bar{F}}_{21}^{*}$, $L_{21}^{*}$ satisfying
$$\bar{F}_{21}^{*} = \bar{F}_2^{*} - L_{21}^{*}\bar{F}_1^{*}, \qquad \bar{\bar{F}}_{21}^{*} = \bar{\bar{F}}_2^{*} - L_{21}^{*}\bar{\bar{F}}_1^{*}, \qquad L_{21}^{*} = L_2^{*}\left(L_1^{*}\right)^{-1},$$
where $L_2^{*} = M_2$. Therefore, Proposition 2 can be interpreted as a distributed matching condition among neighboring agents.
Since the system matrices in (9) are unknown, the controller (28) cannot be implemented, and the synchronization task has to be achieved adaptively. Inspired by the ideal controller (33), we propose the controller
$$\tau_{2c} = -\underbrace{\Theta_{M2}\phi_{M2}}_{\hat{M}_2}\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right) + \underbrace{\Theta_{C2}\phi_{C2}}_{\hat{C}_2}\dot{q}_2 + \underbrace{\Theta_{M_2M_1}\phi_{M_2M_1}}_{\widehat{M_2M_1^{-1}}}\tau_1 - \underbrace{\Theta_{M_2M_1C_1}\phi_{M_2M_1C_1}}_{\widehat{M_2M_1^{-1}C_1}}\dot{q}_1 + \underbrace{\Theta_{G2}\phi_{G2}}_{\hat{G}_2} + \mu\,\Delta\tau_{2c} = \tau_{2ad} + \mu\,\Delta\tau_{2c}$$
where the estimates $\hat{M}_2$, $\hat{C}_2$, $\widehat{M_2M_1^{-1}}$, $\widehat{M_2M_1^{-1}C_1}$, $\hat{G}_2$ of the ideal matrices have been split in a linear-in-the-parameters form. In fact, Assumption 3 guarantees $M_2 = \Theta_{M2}^{*}\phi_{M2}$, $C_2 = \Theta_{C2}^{*}\phi_{C2}$, $G_2 = \Theta_{G2}^{*}\phi_{G2}$, $M_2M_1^{-1} = \Theta_{M_2M_1}^{*}\phi_{M_2M_1}$, and $M_2M_1^{-1}C_1 = \Theta_{M_2M_1C_1}^{*}\phi_{M_2M_1C_1}$. For $\mu \geq 0$, combining the control law (35) with the control deficiency (29) gives the commanded control law as a convex combination of $\tau_2^{\max_\delta}\mathrm{sat}\!\left(\tau_{2ad}(t)/\tau_2^{\max_\delta}\right)$ and $\tau_{2ad}$
$$\tau_{2c} = \frac{1}{1+\mu}\left(\tau_{2ad} + \mu\,\tau_2^{\max_\delta}\,\mathrm{sat}\!\left(\frac{\tau_{2ad}(t)}{\tau_2^{\max_\delta}}\right)\right) = \begin{cases}\tau_{2ad}, & |\tau_{2ad}| \le \tau_2^{\max_\delta},\\[4pt] \dfrac{1}{1+\mu}\left(\tau_{2ad} + \mu\,\tau_2^{\max_\delta}\right), & \tau_{2ad} > \tau_2^{\max_\delta},\\[4pt] \dfrac{1}{1+\mu}\left(\tau_{2ad} - \mu\,\tau_2^{\max_\delta}\right), & \tau_{2ad} < -\tau_2^{\max_\delta}.\end{cases}$$
Then, let us define the error $e_{21} = x_2 - x_1$, whose dynamics are
$$\begin{aligned}\dot{e}_{21} &= A_me_{21} + B_2\left(\tilde{F}_2e_{21} + \tilde{F}_{21}x_1 + \tilde{L}_{21}\tau_1 + \tilde{D}_2\right) - B_1\tilde{K}_{\tau 2}\Delta\tau_{2ad}\\ &= A_me_{21} + B_2\left(\tilde{\bar{F}}_2\bar{e}_{21} + \tilde{\bar{\bar{F}}}_2\bar{\bar{e}}_{21} + \tilde{\bar{F}}_{21}q_1 + \tilde{\bar{\bar{F}}}_{21}\dot{q}_1 + \tilde{L}_{21}\tau_1 + \tilde{D}_2\right) - B_1\tilde{K}_{\tau 2}\Delta\tau_{2ad}\\ &= A_me_{21} + B_2\left(\tilde{\Theta}_{C2}\phi_{C2}\dot{q}_2 + \tilde{\Theta}_{M_2M_1}\phi_{M_2M_1}\tau_1 - \tilde{\Theta}_{M_2M_1C_1}\phi_{M_2M_1C_1}\dot{q}_1 - \tilde{\Theta}_{M2}\phi_{M2}\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right) + \tilde{\Theta}_{G2}\phi_{G2}\right) - B_1\tilde{K}_{\tau 2}\Delta\tau_{2ad},\end{aligned}$$
where $\tilde{F}_2 = F_2 - F_2^{*}$, $\tilde{F}_{21} = F_{21} - F_{21}^{*}$, $\tilde{L}_{21} = L_{21} - L_{21}^{*}$, $\tilde{\Theta}_{M2} = \Theta_{M2} - \Theta_{M2}^{*}$, $\tilde{\Theta}_{C2} = \Theta_{C2} - \Theta_{C2}^{*}$, $\tilde{\Theta}_{G2} = \Theta_{G2} - \Theta_{G2}^{*}$, $\tilde{\Theta}_{M_2M_1} = \Theta_{M_2M_1} - \Theta_{M_2M_1}^{*}$, $\tilde{\Theta}_{M_2M_1C_1} = \Theta_{M_2M_1C_1} - \Theta_{M_2M_1C_1}^{*}$, and $\tilde{K}_{\tau 2} = K_{\tau 2} - K_{\tau 2}^{*}$. The following theorem provides the follower–leader synchronization result.
Theorem 2.
Consider the reference model (7), the unknown leader dynamics (8), the unknown follower dynamics (9), and controller (36). Provided that there exists a matrix S 2 such that
$$L_2^{*}S_2 = S_2L_2^{*} > 0,$$
then, the adaptive laws
$$\begin{aligned}\dot{\Theta}_{C2} &= -S_2B_m^{\top}Pe_{21}\,\dot{q}_2^{\top}\phi_{C2}^{\top}, & \dot{\Theta}_{M2} &= S_2B_m^{\top}Pe_{21}\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right)^{\top}\phi_{M2}^{\top},\\ \dot{\Theta}_{M_2M_1} &= -S_2B_m^{\top}Pe_{21}\,\tau_1^{\top}\phi_{M_2M_1}^{\top}, & \dot{\Theta}_{M_2M_1C_1} &= S_2B_m^{\top}Pe_{21}\,\dot{q}_1^{\top}\phi_{M_2M_1C_1}^{\top},\\ \dot{\Theta}_{G2} &= -S_2B_m^{\top}Pe_{21}\,\phi_{G2}^{\top}, & \dot{K}_{\tau 2} &= \Lambda_2B_m^{\top}Pe_{21}\,\Delta\tau_{2ad}^{\top},\end{aligned}$$
where $\Lambda_2$ is any positive diagonal matrix to be defined by the user and $P = P^{\top} > 0$ is such that (22) holds, guarantee synchronization of the follower dynamics (9) to the leader dynamics (8) in the presence of input constraints, i.e., $e_{21} \to 0$.
Proof. 
To show the asymptotic convergence of the synchronization error between the follower and the leader analytically, and to establish stable adaptation in the presence of input constraints, let us introduce the following Lyapunov function
$$V_2 = e_{21}^{\top}Pe_{21} + \mathrm{tr}\!\left(\tilde{\Theta}_{C2}^{\top}S_2^{-1}L_2^{*-1}\tilde{\Theta}_{C2}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1}^{\top}S_2^{-1}L_2^{*-1}\tilde{\Theta}_{M_2M_1}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1C_1}^{\top}S_2^{-1}L_2^{*-1}\tilde{\Theta}_{M_2M_1C_1}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{M2}^{\top}S_2^{-1}L_2^{*-1}\tilde{\Theta}_{M2}\right) + \mathrm{tr}\!\left(\tilde{\Theta}_{G2}^{\top}S_2^{-1}L_2^{*-1}\tilde{\Theta}_{G2}\right) + \mathrm{tr}\!\left(\tilde{K}_{\tau 2}^{\top}\Lambda_2^{-1}\tilde{K}_{\tau 2}\right).$$
Then it is possible to verify
$$\begin{aligned}\dot{V}_2 &= -e_{21}^{\top}Qe_{21} + 2e_{21}^{\top}PB_2\left(\tilde{\Theta}_{C2}\phi_{C2}\dot{q}_2 + \tilde{\Theta}_{M_2M_1}\phi_{M_2M_1}\tau_1 - \tilde{\Theta}_{M_2M_1C_1}\phi_{M_2M_1C_1}\dot{q}_1 - \tilde{\Theta}_{M2}\phi_{M2}\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right) + \tilde{\Theta}_{G2}\phi_{G2}\right) - 2e_{21}^{\top}PB_1\tilde{K}_{\tau 2}\Delta\tau_{2ad}\\ &\quad + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{C2}^{\top}S_2^{-1}L_2^{*-1}\dot{\tilde{\Theta}}_{C2}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1}^{\top}S_2^{-1}L_2^{*-1}\dot{\tilde{\Theta}}_{M_2M_1}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1C_1}^{\top}S_2^{-1}L_2^{*-1}\dot{\tilde{\Theta}}_{M_2M_1C_1}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M2}^{\top}S_2^{-1}L_2^{*-1}\dot{\tilde{\Theta}}_{M2}\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{G2}^{\top}S_2^{-1}L_2^{*-1}\dot{\tilde{\Theta}}_{G2}\right) + 2\,\mathrm{tr}\!\left(\tilde{K}_{\tau 2}^{\top}\Lambda_2^{-1}\dot{\tilde{K}}_{\tau 2}\right)\\ &= -e_{21}^{\top}Qe_{21} + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{C2}^{\top}L_2^{*-1}\left(B_m^{\top}Pe_{21}\dot{q}_2^{\top}\phi_{C2}^{\top} + S_2^{-1}\dot{\tilde{\Theta}}_{C2}\right)\right) + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1}^{\top}L_2^{*-1}\left(B_m^{\top}Pe_{21}\tau_1^{\top}\phi_{M_2M_1}^{\top} + S_2^{-1}\dot{\tilde{\Theta}}_{M_2M_1}\right)\right)\\ &\quad - 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M_2M_1C_1}^{\top}L_2^{*-1}\left(B_m^{\top}Pe_{21}\dot{q}_1^{\top}\phi_{M_2M_1C_1}^{\top} - S_2^{-1}\dot{\tilde{\Theta}}_{M_2M_1C_1}\right)\right) - 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{M2}^{\top}L_2^{*-1}\left(B_m^{\top}Pe_{21}\left(K_p\bar{e}_{21} + K_v\bar{\bar{e}}_{21}\right)^{\top}\phi_{M2}^{\top} - S_2^{-1}\dot{\tilde{\Theta}}_{M2}\right)\right)\\ &\quad + 2\,\mathrm{tr}\!\left(\tilde{\Theta}_{G2}^{\top}L_2^{*-1}\left(B_m^{\top}Pe_{21}\phi_{G2}^{\top} + S_2^{-1}\dot{\tilde{\Theta}}_{G2}\right)\right) + 2\,\mathrm{tr}\!\left(\tilde{K}_{\tau 2}^{\top}\left(-B_m^{\top}Pe_{21}\Delta\tau_{2ad}^{\top} + \Lambda_2^{-1}\dot{\tilde{K}}_{\tau 2}\right)\right)\\ &= -e_{21}^{\top}Qe_{21}.\end{aligned}$$
Following similar steps as in the proof of Theorem 1, from (41) we obtain that $V_2$ has a finite limit, so $e_{21}, \tilde{\Theta}_{C2}, \tilde{\Theta}_{M_2M_1}, \tilde{\Theta}_{M_2M_1C_1}, \tilde{\Theta}_{M2}, \tilde{\Theta}_{G2}, \tilde{K}_{\tau 2} \in \mathcal{L}_\infty$. Asymptotic convergence of the tracking error to zero can be concluded if at least one of the states, $x_2$ or $x_1$, stays bounded under the modified leader dynamics. Then, let us introduce the following Lyapunov function
$$V_1(x_1) = x_1^{\top}Px_1,$$
where $P = P^{\top} > 0$ is such that (22) holds. When $\Delta\tau_2 \neq 0$, the commanded control law of the follower exceeds the maximum/minimum control input allowed, $|\tau_{2c}| > \tau_2^{\max}$. This may also lead to saturation of the leader control input, $|\tau_1| > \tau_1^{\max}$. In the saturated case, $\tau_i = \tau_i^{\max}\mathrm{sgn}(\tau_{ic})$, and the leader dynamics in (8) become
$$\dot{x}_1 = A_1x_1 + B_1\tau_1^{\max}\,\mathrm{sgn}(\tau_1).$$
To establish boundedness of $x_1$, we evaluate
$$\dot{V}_1(x_1) = -x_1^{\top}Qx_1 + 2x_1^{\top}PB_1\tau_1^{\max}\,\mathrm{sgn}(\tau_1) \le -\lambda_{\min}(Q)\|x_1\|^2 + 2\tau_1^{\max}\|x_1\|\,\|PB_1\|,$$
where $\lambda_{\min}(Q)$ is the minimum eigenvalue of $Q$. We obtain $\dot{V}_1(x_1) < 0$ if $\|x_1\| > 2\tau_1^{\max}\|PB_1\|/\lambda_{\min}(Q)$, so $x_1 \in \mathcal{L}_\infty$. Because $e_{21} = x_2 - x_1 \in \mathcal{L}_\infty$ and $x_1 \in \mathcal{L}_\infty$, we have $x_2 \in \mathcal{L}_\infty$. This implies $x_2, \tilde{\Theta}_{C2}, \tilde{\Theta}_{M_2M_1}, \tilde{\Theta}_{M_2M_1C_1}, \tilde{\Theta}_{M2}, \tilde{\Theta}_{G2} \in \mathcal{L}_\infty$, and consequently $\tau_2 \in \mathcal{L}_\infty$. Therefore, all signals in the closed-loop system are bounded. This concludes the proof of the boundedness of all closed-loop signals and of the convergence $e_{21} \to 0$ as $t \to \infty$. □
Remark 6.
The idea stems from [22] and is the following: in the case of multiple followers, each one can implement a control law of the form (36) to synchronize the follower dynamics (9) to the leader dynamics (8). Due to the distributed matching conditions, the follower dynamics will indirectly match the reference dynamics.
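As a counterpart to the leader-side sketch, the following is a minimal sketch of the follower's adaptive commanded control law (35)–(36): it uses only the neighbor's states and input and the follower's own states, never the desired trajectories. The shapes of the parameter and regressor matrices are hypothetical.

```python
import numpy as np

def follower_command(Theta_M2, Theta_C2, Theta_M2M1, Theta_M2M1C1, Theta_G2,
                     phi_M2, phi_C2, phi_M2M1, phi_M2M1C1, phi_G2,
                     q1, dq1, tau1, q2, dq2, Kp, Kv, tau_max, delta, mu):
    e_bar, de_bar = q2 - q1, dq2 - dq1        # relative errors with respect to the neighbor
    tau_2ad = (-Theta_M2 @ phi_M2 @ (Kp @ e_bar + Kv @ de_bar)   # adaptive part of (35)
               + Theta_C2 @ phi_C2 @ dq2
               + Theta_M2M1 @ phi_M2M1 @ tau1
               - Theta_M2M1C1 @ phi_M2M1C1 @ dq1
               + Theta_G2 @ phi_G2)
    tau_bound = tau_max - delta               # positive mu-modification around the virtual bound, (36)
    return (tau_2ad + mu * np.clip(tau_2ad, -tau_bound, tau_bound)) / (1.0 + mu)
```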

4. Spacecraft Test Case

In this section, we consider the attitude control of a spacecraft (Chapter 5.9 in [41]) as a test case for the proposed adaptive synchronization algorithm. Let us start by introducing the EL dynamics of a spacecraft (satellite) as a rigid body. In this case, we consider the total torque of a rigid body rotating in the space frame
$$\tau = \dot{L} + \omega\times L,$$
where $\tau$ is the torque, $L$ is the angular momentum, and $\omega$ is the angular velocity. The components $\omega_x, \omega_y, \omega_z \in \mathbb{R}$ are the angular velocities along the body frame axes $X_b$, $Y_b$, $Z_b$ shown in Figure 2.
Assuming the spacecraft has two planes of symmetry ($J_{xy} = J_{xz} = J_{yz} = 0$), the spacecraft dynamics can be written as follows
$$\begin{bmatrix}J_x\dot{\omega}_x + \omega_y\omega_zJ_z - \omega_z\omega_yJ_y\\ J_y\dot{\omega}_y - \omega_x\omega_zJ_z + \omega_z\omega_xJ_x\\ J_z\dot{\omega}_z + \omega_x\omega_yJ_y - \omega_y\omega_xJ_x\end{bmatrix} = \begin{bmatrix}\tau_x\\ \tau_y\\ \tau_z\end{bmatrix} \;\;\Rightarrow\;\; \underbrace{\begin{bmatrix}J_x & 0 & 0\\ 0 & J_y & 0\\ 0 & 0 & J_z\end{bmatrix}}_{M}\underbrace{\begin{bmatrix}\dot{\omega}_x\\ \dot{\omega}_y\\ \dot{\omega}_z\end{bmatrix}}_{\ddot{q}} + \underbrace{\begin{bmatrix}0 & \omega_zJ_z & -\omega_yJ_y\\ -\omega_zJ_z & 0 & \omega_xJ_x\\ \omega_yJ_y & -\omega_xJ_x & 0\end{bmatrix}}_{C(\dot{q})}\underbrace{\begin{bmatrix}\omega_x\\ \omega_y\\ \omega_z\end{bmatrix}}_{\dot{q}} = \begin{bmatrix}\tau_x\\ \tau_y\\ \tau_z\end{bmatrix}$$
where $G = 0$ and all generalized coordinates are expressed in the body frame. From (46), it is possible to see that Assumptions 1–3 are verified. Then, let us derive the control law of the leader in the form (17) for a satellite indicated by subscript $i$ and with dynamics as in (46). It is easy to see that the linear-in-the-parameters forms for $M_i$ and $C_i$ are
$$\Theta_{Mi}^{*} = \begin{bmatrix}J_{xi} & 0 & 0\\ 0 & J_{yi} & 0\\ 0 & 0 & J_{zi}\end{bmatrix}, \quad \phi_{Mi} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}, \quad \Theta_{Ci}^{*} = \begin{bmatrix}J_{xi} & J_{yi} & J_{zi} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & J_{xi} & J_{yi} & J_{zi} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & J_{xi} & J_{yi} & J_{zi}\end{bmatrix}, \quad \phi_{Ci} = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & -\omega_{yi}\\ 0 & \omega_{zi} & 0\\ 0 & 0 & \omega_{xi}\\ 0 & 0 & 0\\ -\omega_{zi} & 0 & 0\\ 0 & -\omega_{xi} & 0\\ \omega_{yi} & 0 & 0\\ 0 & 0 & 0\end{bmatrix}$$
Then, we derive the control law of the follower in the form (35), with the follower indicated by subscript $j$ and its neighbor by subscript $i$. The following equations show the linear-in-the-parameters forms of $M_jM_i^{-1}$ and $M_jM_i^{-1}C_i$:
$$\Theta_{M_jM_iC_i}^{*} = \begin{bmatrix}\Gamma_1\mathbb{1}_3 & \Gamma_2\mathbb{1}_3 & \Gamma_3\mathbb{1}_3 & \Gamma_4\mathbb{1}_3 & \Gamma_5\mathbb{1}_3 & \Gamma_6\mathbb{1}_3\end{bmatrix}, \qquad \phi_{M_jM_iC_i} = \begin{bmatrix}0 & \omega_{zi} & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -\omega_{yi}\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ -\omega_{zi} & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & \omega_{xi}\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \omega_{yi} & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & -\omega_{xi} & 0\end{bmatrix}$$
$$\Theta_{M_jM_i}^{*} = \begin{bmatrix}\dfrac{J_{xj}}{J_{xi}} & 0 & 0\\ 0 & \dfrac{J_{yj}}{J_{yi}} & 0\\ 0 & 0 & \dfrac{J_{zj}}{J_{zi}}\end{bmatrix}, \qquad \phi_{M_jM_i} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
where
$$M_jM_i^{-1} = \begin{bmatrix}\dfrac{J_{xj}}{J_{xi}} & 0 & 0\\ 0 & \dfrac{J_{yj}}{J_{yi}} & 0\\ 0 & 0 & \dfrac{J_{zj}}{J_{zi}}\end{bmatrix}, \qquad M_jM_i^{-1}C_i = \begin{bmatrix}0 & \dfrac{J_{xj}J_{zi}}{J_{xi}}\omega_{zi} & -\dfrac{J_{xj}J_{yi}}{J_{xi}}\omega_{yi}\\ -\dfrac{J_{yj}J_{zi}}{J_{yi}}\omega_{zi} & 0 & \dfrac{J_{xi}J_{yj}}{J_{yi}}\omega_{xi}\\ \dfrac{J_{yi}J_{zj}}{J_{zi}}\omega_{yi} & -\dfrac{J_{xi}J_{zj}}{J_{zi}}\omega_{xi} & 0\end{bmatrix}$$
$$\Gamma_1 = \frac{J_{xj}J_{zi}}{J_{xi}}, \quad \Gamma_2 = \frac{J_{xj}J_{yi}}{J_{xi}}, \quad \Gamma_3 = \frac{J_{yj}J_{zi}}{J_{yi}}, \quad \Gamma_4 = \frac{J_{xi}J_{yj}}{J_{yi}}, \quad \Gamma_5 = \frac{J_{yi}J_{zj}}{J_{zi}}, \quad \Gamma_6 = \frac{J_{xi}J_{zj}}{J_{zi}}.$$
Remark 7.
Note that the ideal parameter matrices $\Theta_{Mi}^{*}$, $\Theta_{Ci}^{*}$, $\Theta_{Gi}^{*}$, $\Theta_{M_jM_i}^{*}$, and $\Theta_{M_jM_iC_i}^{*}$ contain unknown parameters, but their structure is known a priori. In the satellite case, one can use this a priori knowledge to create the estimates $\Theta_{Mi}$, $\Theta_{Ci}$, $\Theta_{Gi}$, $\Theta_{M_jM_i}$, and $\Theta_{M_jM_iC_i}$ with the same structure and to project the remaining parameters to zero according to that structure [41,42]. With this approach, the total number of estimated parameters is reduced.
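The structure exploited in Remark 7 can be illustrated with a minimal sketch of the Coriolis factorization in (47); the array layout mirrors the reconstruction above (an assumption on the exact ordering), and the check at the end verifies $C_i = \Theta_{Ci}\phi_{Ci}$ numerically.

```python
import numpy as np

def phi_C(w):
    """Known Coriolis regressor (9 x 3) for body rates w = [wx, wy, wz], layout as in (47)."""
    wx, wy, wz = w
    return np.array([[0, 0, 0], [0, 0, -wy], [0, wz, 0],      # rows paired with row 1 of Theta
                     [0, 0, wx], [0, 0, 0], [-wz, 0, 0],      # rows paired with row 2 of Theta
                     [0, -wx, 0], [wy, 0, 0], [0, 0, 0]])     # rows paired with row 3 of Theta

def theta_C(Jx, Jy, Jz):
    """Unknown parameter block Theta_Ci (3 x 9): each row holds [Jx, Jy, Jz] in its own column block."""
    return np.kron(np.eye(3), np.array([Jx, Jy, Jz]))

# sanity check of the factorization C_i = Theta_Ci @ phi_Ci for arbitrary values
w = np.array([0.1, -0.2, 0.3]); Jx, Jy, Jz = 0.02, 0.04, 0.04
C = np.array([[0, w[2]*Jz, -w[1]*Jy], [-w[2]*Jz, 0, w[0]*Jx], [w[1]*Jy, -w[0]*Jx, 0]])
assert np.allclose(theta_C(Jx, Jy, Jz) @ phi_C(w), C)
```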
Let us now find matrices $S_i$ satisfying condition (20) or (38):
$$L_i^{*} = M_i = \begin{bmatrix}J_{xi} & 0 & 0\\ 0 & J_{yi} & 0\\ 0 & 0 & J_{zi}\end{bmatrix}, \qquad S_i = \begin{bmatrix}S_{i1} & 0 & 0\\ 0 & S_{i2} & 0\\ 0 & 0 & S_{i3}\end{bmatrix}.$$
Therefore, $S_i$ can be chosen as the identity matrix or as any diagonal matrix with positive entries. In the next section, we present the numerical simulations of the satellite attitude control.

5. Numerical Simulations

The simulations are performed on the directed graph shown in Figure 3, where node 0 is the reference. Agents 1, 2, and 3 act as the leaders whose dynamics satisfy (8). Agents 4, 5, and 6 act as the followers whose dynamics satisfy (9).
Let us define the following reference model dynamics and parameters
$$A_m = \begin{bmatrix}0 & \mathbb{1}\\ -k_p\mathbb{1} & -k_v\mathbb{1}\end{bmatrix}, \quad B_m = \begin{bmatrix}0\\ \mathbb{1}\end{bmatrix}, \quad r = \begin{bmatrix}k_p\phi_d + k_v\dot{\phi}_d + \ddot{\phi}_d\\ k_p\theta_d + k_v\dot{\theta}_d + \ddot{\theta}_d\\ k_p\psi_d + k_v\dot{\psi}_d + \ddot{\psi}_d\end{bmatrix}, \quad Q = 10\,\mathbb{1}, \quad k_p = 50, \quad k_v = 10, \quad S_1 = S_2 = \dots = S_6 = 10\,\mathbb{1}, \quad \Lambda_i = \Lambda_j = \mathbb{1},$$
where the state of agent $i$ (a leader) is $x_i = [q_i^{\top}\;\dot{q}_i^{\top}]^{\top}$ and the state of agent $j$ (a follower) is $x_j = [q_j^{\top}\;\dot{q}_j^{\top}]^{\top}$. The states of the leaders and the followers are the generalized satellite coordinates expressed in the body frame. We define the desired Euler angles as $\phi_d = \theta_d = \psi_d = 0.75\sin(0.33\,t)$, with the desired Euler angle rates and accelerations equal to zero, the actuator constraint $\tau_i^{\max} = 1$, and the positive constant $\delta$ set to 10% of the actuator limit. For the simulations, Table 1 lists the unknown parameters. We test the constant $\mu$ equal to 1 and 100.
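For reference, a minimal sketch of the reference model and the simulation constants listed above is given below; only the values quoted in the text are used, and the function name is illustrative.

```python
import numpy as np

n = 3                                   # generalized coordinates (Euler angles)
kp, kv = 50.0, 10.0
I = np.eye(n)
A_m = np.block([[np.zeros((n, n)), I], [-kp * I, -kv * I]])
B_m = np.vstack([np.zeros((n, n)), I])
Q = 10.0 * np.eye(2 * n)
tau_max, delta, mu = 1.0, 0.1, 100.0    # actuator limit, 10% virtual-bound margin, design constant

def reference_input(t):
    """r = kp*q_d + kv*dq_d + ddq_d with q_d = 0.75*sin(0.33 t) per axis and zero desired rate/acceleration."""
    q_d = 0.75 * np.sin(0.33 * t) * np.ones(n)
    return kp * q_d
```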
In the first case ($\mu = 1$), the synchronization of the spacecraft states to the states of the reference is achieved, as depicted in Figure 4. In Figure 5 and Figure 6, it can be seen that the commanded control inputs (black dashed line) exceed the actuator limit, and the actuator limits the actual control input (blue dashed line). The red dashed line is the actuator limit $\tau_i^{\max}$, and the green dashed line is the virtual bound $\tau_i^{\max_\delta}$. The saturation does not cause instability because the spacecraft dynamics, defined in Equations (8) and (9), are marginally stable.
In the second case ($\mu = 100$), the synchronization of the spacecraft states to the states of the reference is again achieved, as depicted in Figure 7. It can be observed that the control inputs do not exceed the actuator limit, as shown in Figure 8 and Figure 9. It can be concluded that, by choosing $\mu$ large enough, the synchronization problem in the presence of input constraints can be solved. Note that a large $\mu$ leads to changes in the reference dynamics while reducing the control deficiency. Using the reference dynamics (7), the control input $\tau_1$ of the leader in (8), and the commanded control input $\tau_{1c}$ in (18), one can verify that $\mu$ is proportional to $\Delta\tau_{1ad}$.

6. Conclusions

This work has shown the possibility of synchronizing uncertain heterogeneous agents with Euler–Lagrange dynamics in the presence of input saturation. The synchronization uses distributed model reference adaptive control, which relies on local state and input information and on the existence of distributed nonlinear matching gains between neighboring agents. We proposed an adaptive control law that estimates these gains. The proposed method was augmented with a distributed positive μ-modification that ensures stability of the adaptation in the presence of input saturation, which requires the successor agent to send its control input information to the predecessor agent. Finally, numerical simulations of attitude control synchronization were provided to validate the proposed method. It was shown that convergence of the dynamics can be achieved in the presence of input saturation.
Future work will consider extending the result to rate-constrained inputs or constrained states [43,44]. Other relevant directions are, in line with [45], fast adaptation using high-gain learning rates, as well as the synchronization of under-actuated Euler–Lagrange systems.

Funding

This research received no external funding.

Acknowledgments

Simone Baldi from School of Mathematics, Southeast University, China, and Delft Center for Systems and Control, TU Delft, is gratefully acknowledged for useful discussions and suggestions on adaptive control.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Seyboth, G.S.; Ren, W.; Allgöwer, F. Cooperative control of linear multi-agent systems via distributed output regulation and transient synchronization. Automatica 2016, 68, 132–139.
2. Das, A.; Lewis, F.L. Distributed adaptive control for synchronization of unknown nonlinear networked systems. Automatica 2010, 46, 2014–2021.
3. Tang, Y.; Gao, H.; Zou, W.; Kurths, J. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings. IEEE Trans. Cybern. 2013, 43, 358–370.
4. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and Cooperation in Networked Multi-Agent Systems. Proc. IEEE 2007, 95, 215–233.
5. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533.
6. Nazari, M.; Butcher, E.A.; Yucelen, T.; Sanyal, A.K. Decentralized Consensus Control of a Rigid-Body Spacecraft Formation with Communication Delay. J. Guid. Control Dyn. 2016, 39, 838–851.
7. Wu, Z.; Shi, P.; Su, H.; Chu, J. Exponential Synchronization of Neural Networks With Discrete and Distributed Delays Under Time-Varying Sampling. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1368–1376.
8. Dong, H.; Wang, Z.; Gao, H. Distributed Filtering for a Class of Time-Varying Systems Over Sensor Networks With Quantization Errors and Successive Packet Dropouts. IEEE Trans. Signal Process. 2012, 60, 3164–3173.
9. Ren, W.; Beard, R.W.; Beard, A.W. Decentralized Scheme for Spacecraft Formation Flying via the Virtual Structure Approach. AIAA J. Guid. Control Dyn. 2003, 27, 73–82.
10. Lesser, V.; Tambe, M.; Ortiz, C.L. (Eds.) Distributed Sensor Networks: A Multiagent Perspective; Kluwer Academic Publishers: Norwell, MA, USA, 2003.
11. Harfouch, Y.A.; Yuan, S.; Baldi, S. Adaptive control of interconnected networked systems with application to heterogeneous platooning. In Proceedings of the 2017 13th IEEE International Conference on Control Automation (ICCA), Ohrid, North Macedonia, 3–6 July 2017; pp. 212–217.
12. Blaabjerg, F.; Teodorescu, R.; Liserre, M.; Timbus, A.V. Overview of Control and Grid Synchronization for Distributed Power Generation Systems. IEEE Trans. Ind. Electron. 2006, 53, 1398–1409.
13. Jun, M.; D'Andrea, R. Path Planning for Unmanned Aerial Vehicles in Uncertain and Adversarial Environments. In Cooperative Control: Models, Applications and Algorithms; Springer: Boston, MA, USA, 2003; pp. 95–110.
14. Yucelen, T.; Johnson, E.N. Control of multivehicle systems in the presence of uncertain dynamics. Int. J. Control 2013, 86, 1540–1553.
15. Popa, D.O.; Sanderson, A.C.; Komerska, R.J.; Mupparapu, S.S.; Blidberg, D.R.; Chappel, S.G. Adaptive sampling algorithms for multiple autonomous underwater vehicles. In Proceedings of the 2004 IEEE/OES Autonomous Underwater Vehicles, Sebasco, ME, USA, 17–18 June 2004; pp. 108–118.
16. Jin, X.; Haddad, W.M.; Yucelen, T. An Adaptive Control Architecture for Mitigating Sensor and Actuator Attacks in Cyber-Physical Systems. IEEE Trans. Autom. Control 2017, 62, 6058–6064.
17. Fax, J.A.; Murray, R.M. Information flow and cooperative control of vehicle formations. IEEE Trans. Autom. Control 2004, 49, 1465–1476.
18. Zhang, D.; Wei, B. A review on model reference adaptive control of robotic manipulators. Annu. Rev. Control 2017, 43, 188–198.
19. Baldi, S.; Frasca, P. Adaptive synchronization of unknown heterogeneous agents: An adaptive virtual model reference approach. J. Frankl. Inst. 2019, 356, 935–955.
20. Rosa, M.R. Adaptive Synchronization for Heterogeneous Multi-Agent Systems with Switching Topologies. Machines 2018, 6, 7.
21. Baldi, S.; Rosa, M.R.; Frasca, P. Adaptive state-feedback synchronization with distributed input: The cyclic case. In Proceedings of the 7th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys18), Groningen, The Netherlands, 27–28 August 2018.
22. Rosa, M.R.; Baldi, S.; Wang, X.; Lv, M.; Yu, W. Adaptive hierarchical formation control for uncertain Euler–Lagrange systems using distributed inverse dynamics. Eur. J. Control 2019, 48, 52–65.
23. Abdessameud, A.; Polushin, I.G.; Tayebi, A. Synchronization of Heterogeneous Euler–Lagrange Systems with Time Delays and Intermittent Information Exchange. IFAC Proc. Vol. 2014, 47, 1971–1976.
24. Abdessameud, A.; Tayebi, A.; Polushin, I.G. On the leader–follower synchronization of Euler–Lagrange systems. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 1054–1059.
25. Abdessameud, A.; Tayebi, A.; Polushin, I.G. Leader–Follower Synchronization of Euler–Lagrange Systems With Time-Varying Leader Trajectory and Constrained Discrete-Time Communication. IEEE Trans. Autom. Control 2017, 62, 2539–2545.
26. Baldi, S.; Rosa, M.R.; Frasca, P.; Kosmatopoulos, E.B. Platooning merging maneuvers in the presence of parametric uncertainty. In Proceedings of the 7th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys18), Groningen, The Netherlands, 27–28 August 2018.
27. Karason, S.P.; Annaswamy, A.M. Adaptive control in the presence of input constraints. IEEE Trans. Autom. Control 1994, 39, 2325–2330.
28. Wen, C.; Zhou, J.; Liu, Z.; Su, H. Robust Adaptive Control of Uncertain Nonlinear Systems in the Presence of Input Saturation and External Disturbance. IEEE Trans. Autom. Control 2011, 56, 1672–1678.
29. Lavretsky, E.; Hovakimyan, N. Stable adaptation in the presence of input constraints. Syst. Control Lett. 2007, 56, 722–729.
30. Baldi, S.; Liu, D.; Jain, V.; Yu, W. Establishing Platoons of Bidirectional Cooperative Vehicles With Engine Limits and Uncertain Dynamics. IEEE Trans. Intell. Transp. Syst. 2020, 1–13.
31. Ortega, R.; Spong, M.W. Adaptive motion control of rigid robots: A tutorial. Automatica 1989, 25, 877–888.
32. Mei, J.; Ren, W.; Ma, G. Distributed Coordinated Tracking With a Dynamic Leader for Multiple Euler-Lagrange Systems. IEEE Trans. Autom. Control 2011, 56, 1415–1421.
33. Nuno, E.; Ortega, R.; Basanez, L.; Hill, D. Synchronization of Networks of Nonidentical Euler-Lagrange Systems With Uncertain Parameters and Communication Delays. IEEE Trans. Autom. Control 2011, 56, 935–941.
34. Chen, F.; Feng, G.; Liu, L.; Ren, W. Distributed Average Tracking of Networked Euler-Lagrange Systems. IEEE Trans. Autom. Control 2015, 60, 547–552.
35. Klotz, J.R.; Kan, Z.; Shea, J.M.; Pasiliao, E.L.; Dixon, W.E. Asymptotic Synchronization of a Leader–Follower Network of Uncertain Euler-Lagrange Systems. IEEE Trans. Control Netw. Syst. 2015, 2, 174–182.
36. Roy, S.; Roy, S.B.; Kar, I.N. Adaptive–Robust Control of Euler–Lagrange Systems With Linearly Parametrizable Uncertainty Bound. IEEE Trans. Control Syst. Technol. 2018, 26, 1842–1850.
37. Roy, S.; Kar, I.N.; Lee, J.; Jin, M. Adaptive-Robust Time-Delay Control for a Class of Uncertain Euler–Lagrange Systems. IEEE Trans. Ind. Electron. 2017, 64, 7109–7119.
38. Roy, S.; Kar, I.N.; Lee, J.; Tsagarakis, N.G.; Caldwell, D.G. Adaptive-Robust Control of a Class of EL Systems With Parametric Variations Using Artificially Delayed Input and Position Feedback. IEEE Trans. Control Syst. Technol. 2019, 27, 603–615.
39. Roy, S.; Kar, I.N.; Lee, J. Toward Position-Only Time-Delayed Control for Uncertain Euler–Lagrange Systems: Experiments on Wheeled Mobile Robots. IEEE Robot. Autom. Lett. 2017, 2, 1925–1932.
40. Johansen, T.A.; Fossen, T.I. Control allocation—A survey. Automatica 2013, 49, 1087–1103.
41. Ioannou, P.; Fidan, B. Adaptive Control Tutorial (Advances in Design and Control); Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2006.
42. Tao, G. Adaptive Control Design and Analysis (Adaptive and Learning Systems for Signal Processing, Communications and Control Series); John Wiley & Sons, Inc.: New York, NY, USA, 2003.
43. Leonessa, A.; Haddad, W.M.; Hayakawa, T.; Morel, Y. Adaptive control for nonlinear uncertain systems with actuator amplitude and rate saturation constraints. Int. J. Adapt. Control Signal Process. 2009, 23, 73–96.
44. Romdlony, M.Z.; Jayawardhana, B. Stabilization with guaranteed safety using Control Lyapunov–Barrier Function. Automatica 2016, 66, 39–47.
45. Yucelen, T.; Haddad, W.M. Low-Frequency Learning and Fast Adaptation in Model Reference Adaptive Control. IEEE Trans. Autom. Control 2013, 58, 1080–1085.
Figure 1. Communication graph of multi-agent system.
Figure 2. The body frame of spacecraft.
Figure 3. Communication graph of multi-agent system.
Figure 4. Adaptive spacecraft state synchronization for states (ϕ, θ, ψ, ω_x, ω_y, ω_z) (μ = 1).
Figure 5. Control input of the leaders (blue is the actual control input and black is the commanded control input) (μ = 1).
Figure 6. Control input of the followers (blue is the actual control input and black is the commanded control input) (μ = 1).
Figure 7. Adaptive spacecraft state synchronization for states (ϕ, θ, ψ, ω_x, ω_y, ω_z) (μ = 100).
Figure 8. Control input of the leaders (blue is the actual control input) (μ = 100).
Figure 9. Control input of the followers (blue is the actual control input) (μ = 100).
Table 1. Satellite parameters and initial conditions.

Agent | Initial Cond. [ϕ, θ, ψ]′(0) | Initial Cond. [ω_x, ω_y, ω_z]′(0) | Moment of Inertia (kg m²)
Agent 0 (Trajectory Generator) | [0, 0, 0]′ | [0, 0, 0]′ | diag(0.01, 0.02, 0.01)
Agent 1 (Leader 1) | [0.1, 0.1, 0.1]′ | [0.1, 0.1, 0.1]′ | diag(0.02, 0.04, 0.04)
Agent 2 (Leader 2) | [0.3, 0.3, 0.3]′ | [−0.2, −0.2, −0.2]′ | diag(0.05, 0.1, 0.05)
Agent 3 (Leader 3) | [−0.3, −0.3, −0.3]′ | [0.2, 0.2, 0.2]′ | diag(0.001, 0.002, 0.001)
Agent 4 (Follower 1) | [0.2, 0.2, 0.2]′ | [−0.1, −0.1, −0.1]′ | diag(0.03, 0.06, 0.03)
Agent 5 (Follower 2) | [−0.2, −0.2, −0.2]′ | [0.2, 0.2, 0.2]′ | diag(0.4, 0.08, 0.04)
Agent 6 (Follower 3) | [0.4, 0.4, 0.4]′ | [0.1, 0.1, 0.1]′ | diag(0.001, 0.002, 0.001)
