Article

Anti-Disturbance Fault-Tolerant Constrained Consensus for Time-Delay Faulty Multi-Agent Systems with Semi-Markov Switching Topology

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4581; https://doi.org/10.3390/math10234581
Submission received: 30 September 2022 / Revised: 8 November 2022 / Accepted: 29 November 2022 / Published: 2 December 2022

Abstract

In this article, an approach to anti-disturbance fault-tolerant constrained consensus is proposed for time-delay faulty multi-agent systems under semi-Markov switching topology. Firstly, an observer based on the coupled disturbance and fault information is designed to estimate the disturbance and the failure simultaneously. Next, because of the conservatism of the traditional H∞ control method, a new performance index is constructed that replaces the zero initial condition by making use of the actual initial conditions. Then, the time-varying transition rate is expressed as a convex combination by exploiting its boundedness, so that the numerical problem caused by the time-varying transition rate can be solved. On this basis, an anti-disturbance fault-tolerant constrained consensus strategy is proposed according to the performance requirements. Finally, simulation results are given to verify the feasibility of the approach.

1. Introduction

In recent years, the consensus problem has become one of the most active topics in the field of multi-agent systems. It has attracted extensive research in building automation [1], smart grids [2], intelligent transportation [3], underwater exploration [4], cooperative search [5], etc. However, considering security and the particularity of tasks, the agent state is often constrained in practical applications. For example, due to terrain constraints, an agent may only be able to move within a restricted area. Therefore, constrained consensus has attracted the attention of more and more researchers; see the related articles [6,7,8,9,10,11]. A new distributed primal–dual augmented (sub)gradient algorithm is studied in reference [10], and distributed constrained optimization and consensus in uncertain networks via proximal minimization are discussed in [11].
Furthermore, in many practical multi-agent systems, the communication topology between agents may change randomly because of obstacles and a limited communication range. To better match the actual situation, it is a good choice to model the random change in communication topology as a semi-Markov process. The synchronization problem of complex networks with semi-Markov switching topology has attracted increasing attention; see [12,13]. The change in system topology affects the achievement of constrained consensus, which is one of the research purposes of this article.
On the other hand, in actual multi-agent systems, faults, external disturbances and communication delays are also inevitable and degrade system performance. In order to ensure the safety and reliability of the closed-loop system, fault-tolerant control is a feasible choice; see [14,15,16,17,18,19,20]. By using the adjacency matrix information, a robust adaptive fault-tolerant protocol is proposed in [21] to compensate for actuator bias faults and the partial loss of actuator effectiveness. In [22], the consensus problem of nonlinear multi-agent systems with multiple actuator failures and uncertainty is analyzed. Ref. [23] proposed a strategy for the bipartite consensus of high-order multi-agent systems with unknown time-varying disturbances, in which the disturbances are estimated by designing an adaptive law for the unknown parameters, and the proposed adaptive control method achieves consensus. In [24], a nonlinear disturbance observer is proposed to estimate the disturbances and thus better achieve consensus of linear multi-agent systems. In [25], a disturbance observer with adaptive parameters is designed for nonlinear multi-agent systems to suppress the total disturbance, including unknown external disturbances and deviation faults. Ref. [26] proposed an accelerated algorithm for the linear quadratic optimal consensus problem of multi-agent systems, and Ref. [27] studied the group consensus problem of a class of heterogeneous multi-agent systems, but both of them ignore the impact of external disturbances and actuator failures on constrained consensus. Most of the above studies focus on consensus, while there are few studies on anti-disturbance constrained consensus, especially when disturbance and fault are coupled in a time-delay system. This is the second motivation of this article.
This article studies the fault-tolerance constrained consensus for time-delay multi-agent systems with external disturbances and faults based on semi-Markov switching topology. The main contributions of this article are as follows:
(a) Since the external disturbances and actuator failures are coupled in time-delay multi-agent systems, inspired by [28,29,30,31], a new disturbance observer is designed to estimate the disturbance and the fault concurrently.
(b) In order to weaken the influence of the semi-Markov switching topology, taking [32] as the starting point, the time-varying transition rate is expressed as a convex combination by using its boundedness, so that the numerical problem caused by the time-varying transition rate can be solved.
(c) For actuator failures and external disturbances, a novel anti-disturbance fault-tolerant control algorithm is provided to ensure the stability of the multi-agent system and to achieve anti-disturbance fault-tolerant constrained consensus.
Notation: $\mathbb{R}^m$ denotes the $m$-dimensional Euclidean space and $\mathbb{R}^{m\times m}$ the set of $m\times m$ real matrices; for a symmetric matrix, '$*$' denotes the entries below the main diagonal, which are determined by symmetry. $\mathrm{diag}\{A_1,\dots,A_n\}$ is the block-diagonal matrix with $A_1,\dots,A_n$ on its principal diagonal; for a given matrix $A$, $A^T$ denotes its transpose; $\otimes$ denotes the Kronecker product; $I_n$ stands for the $n\times n$ identity matrix; $\mathbf{1}_n\in\mathbb{R}^n$ is the column vector with all entries equal to 1; and $\|\cdot\|$ denotes the Euclidean norm of a vector or the induced norm of a matrix.

2. Preliminaries and Problem Formulation

2.1. Graph Theory

A graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A})$ consists of a vertex set $\mathcal{V}=\{\nu_1,\dots,\nu_n\}$, an edge set $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ and an adjacency matrix $\mathcal{A}=[a_{pq}]\in\mathbb{R}^{n\times n}$. $\mathrm{Pro}_\Omega(x)$ stands for the projection of the vector $x$ onto the closed convex set $\Omega$. An edge $(\nu_p,\nu_q)$ indicates that node $\nu_p$ is a neighbor of $\nu_q$, or node $\nu_q$ is a neighbor of $\nu_p$. When the graph $\mathcal{G}$ is undirected, $a_{pq}=a_{qp}$ for all $p,q$. The adjacency matrix $\mathcal{A}=[a_{pq}]_{n\times n}$ satisfies $a_{pp}=0$ and $a_{pq}>0$ if $(\nu_p,\nu_q)\in\mathcal{E}$, and the Laplacian matrix $L=[l_{pq}]_{n\times n}$ is given by $l_{pp}=\sum_{q=1,q\neq p}^{n}a_{pq}$ and $l_{pq}=-a_{pq}$ for $p\neq q$. Furthermore, the interaction topology under semi-Markov switching is denoted $\bar{\mathcal{G}}(r(t))=(\mathcal{V},\mathcal{E}(r(t)),\mathcal{A}(r(t)))$, $\bar{\mathcal{G}}(r(t))\in\{\bar{\mathcal{G}}(1),\bar{\mathcal{G}}(2),\dots,\bar{\mathcal{G}}(s)\}$, where $r(t):\mathbb{R}^+\to\mathcal{Z}=\{1,2,\dots,s\}$ is a switching signal governed by a semi-Markov process. The evolution of the semi-Markov process is determined by the following transition probability:
$$\Pr\{r(t+h)=\alpha\mid r(t)=\beta\}=\begin{cases}\pi_{\beta\alpha}(h)h+o(h), & \beta\neq\alpha,\\ 1+\pi_{\beta\beta}(h)h+o(h), & \beta=\alpha.\end{cases}$$
In the equation above, the transition rate $\pi_{\beta\alpha}(h)$ is time-varying and depends on the sojourn time $h$, which represents the elapsed time between two consecutive jumps.
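As a concrete illustration of these definitions, the short Python sketch below builds the Laplacian of an undirected graph from its adjacency matrix; the 4-node ring used as input is chosen so that the result coincides with the matrix $L_1$ of the numerical example in Section 5.

```python
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Graph Laplacian: l_pp = sum_{q != p} a_pq and l_pq = -a_pq for p != q."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

# Undirected 4-node ring with unit weights (mode 1 of the numerical example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = laplacian(A)
print(L)                      # equals L_1 in Section 5; every row sums to zero
print(np.allclose(L, L.T))    # symmetric, since the graph is undirected
```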

2.2. Problem Formulation

Given the multi-agent systems with the following dynamics:
$$\dot{x}_p(t)=u_p^f(t)+\omega_p(t),\qquad p=1,2,\dots,n$$
where $u_p^f(t)=[u_{p1}^{f\,T}(t),u_{p2}^{f\,T}(t),\dots,u_{pm}^{f\,T}(t)]^T\in\mathbb{R}^m$ is the control input after actuator faults, $n$ is the number of agents, $x_p\in\mathbb{R}^m$ is the state of agent $p$, and $\omega_p(t)=[\omega_{p1}^T(t),\omega_{p2}^T(t),\dots,\omega_{pm}^T(t)]^T$ is the external disturbance acting on the system. Considering the partial failure of the actuator, the fault model can be described as
$$u_{pd}^f(t)=\theta_{pd}\,u_{pd}(t),\qquad 0<\underline{\theta}_{pd}\le\theta_{pd}\le\bar{\theta}_{pd}\le 1,\qquad d=1,2,\dots,m$$
where $u_{pd}^f(t)$ denotes the output of the $d$th actuator channel of the $p$th agent, and $d$ indexes the actuator channels. $\bar{\theta}_{pd}$ and $\underline{\theta}_{pd}$ stand for the upper and lower bounds of the efficiency factor $\theta_{pd}$, respectively. When $\bar{\theta}_{pd}=\underline{\theta}_{pd}=1$, the $d$th actuator is fault-free; if $0<\theta_{pd}<1$, the $d$th actuator is partially faulty.
For each agent described by (1), the disturbance $\omega_p(t)\in\mathbb{R}^m$ of the $p$th agent to be rejected is generated by the following exosystem:
$$\dot{\omega}_p(t)=S_p\,\omega_p(t)$$
where $S_p\in\mathbb{R}^{m\times m}$ is a known constant matrix, which may differ from agent to agent.
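To make the model concrete, the following sketch simulates a single two-dimensional agent of the form (1) under a partial actuator fault (2) and a harmonic disturbance generated by an exosystem of the form (3). The numerical values, the fault level and the simple proportional input used here are illustrative placeholders rather than parameters taken from the paper.

```python
import numpy as np

dt, T = 0.01, 10.0

theta = np.diag([0.6, 0.8])               # assumed partial-failure factors, 0 < theta_pd <= 1
S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # assumed exosystem matrix: a harmonic disturbance
x = np.array([1.0, -0.5])                 # agent state x_p
w = np.array([0.2, 0.0])                  # disturbance omega_p generated by the exosystem

for _ in range(int(T / dt)):
    u = -2.0 * x                          # placeholder input; the paper uses the protocol (4)
    u_f = theta @ u                       # actuator output after partial failure, Eq. (2)
    x = x + dt * (u_f + w)                # agent dynamics (1)
    w = w + dt * (S @ w)                  # exosystem (3)

print(x)   # the state stays bounded but keeps oscillating: the disturbance is not rejected
```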
Our goal is to achieve anti-disturbance fault-tolerant constrained consensus of the multi-agent system (1) by designing an appropriate controller $u_p(t)=[u_{p1}(t),u_{p2}(t),\dots,u_{pm}(t)]^T$ under semi-Markov topology switching. The following definition and lemmas are recalled:
Definition 1 
([33]). When the state $x_p(t)$ of agent $p$ and the state $x_q(t)$ of any other agent $q$ satisfy the following conditions, the multi-agent system is said to achieve constrained consensus:
$$\lim_{t\to\infty}\big\|x_p(t)-x_q(t)\big\|=0,\quad \forall p,q\in\mathcal{V},\qquad \lim_{t\to\infty}x_p(t)\in\Omega$$
Note that $\mathrm{Pro}_\Omega(x)$ stands for the projection of the vector $x$ onto the closed convex set $\Omega$, and $\mathcal{V}$ is the vertex set defined in the graph theory above.
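Definition 1 relies on the projection operator $\mathrm{Pro}_\Omega(\cdot)$. As a hedged illustration, the sketch below implements projection onto two common closed convex sets, a box and a Euclidean ball; the specific set $\Omega$ used in the paper's simulation is not spelled out in the text, so these are only examples.

```python
import numpy as np

def project_box(x: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """Projection onto the box {z : lower <= z <= upper} (componentwise clipping)."""
    return np.clip(x, lower, upper)

def project_ball(x: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Projection onto the Euclidean ball {z : ||z - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

x = np.array([2.0, -3.0])
print(project_box(x, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))   # [ 1. -1.]
print(project_ball(x, np.zeros(2), 1.0))                              # unit-norm point toward x
```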
Lemma 1 
(Schur complement [34]). Given a symmetric matrix
$$S=\begin{bmatrix}S_{11}&S_{12}\\ S_{12}^T&S_{22}\end{bmatrix},$$
the following statements are equivalent:
$$(1)\ S<0;\qquad (2)\ S_{11}<0,\ S_{22}-S_{12}^TS_{11}^{-1}S_{12}<0;\qquad (3)\ S_{22}<0,\ S_{11}-S_{12}S_{22}^{-1}S_{12}^T<0.$$
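The equivalence in Lemma 1 is easy to sanity-check numerically. The sketch below builds an arbitrary negative definite block matrix and verifies that both Schur complements are negative definite as well; the matrices are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
S = -(B @ B.T) - 0.1 * np.eye(4)        # a random symmetric negative definite matrix
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]

def is_neg_def(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

print(is_neg_def(S))                                                         # (1) S < 0
print(is_neg_def(S11), is_neg_def(S22 - S12.T @ np.linalg.inv(S11) @ S12))   # (2)
print(is_neg_def(S22), is_neg_def(S11 - S12 @ np.linalg.inv(S22) @ S12.T))   # (3)
```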
Lemma 2 
([35]). Let $\wp=\wp^T$, and let $M$ and $N$ be real matrices of appropriate dimensions with $F(t)$ satisfying $F^T(t)F(t)\le I$. Then $\wp+MF(t)N+N^TF^T(t)M^T<0$ if and only if there exists some $\sigma>0$ such that $\wp+\sigma MM^T+\sigma^{-1}N^TN<0$.
Lemma 3 
([36]). The transition rate $\pi_{\beta\alpha}(h)$ of the semi-Markov process is time-varying. If it is bounded, i.e., $\pi_{\beta\alpha}(h)\in[\pi_{\beta\alpha}^-,\pi_{\beta\alpha}^+]$, the following representation holds:
$$\pi_{\beta\alpha}(h)=\sum_{\delta=1}^{\lambda}\Lambda_\delta\,\pi_{\beta\alpha,\delta},\qquad \sum_{\delta=1}^{\lambda}\Lambda_\delta=1,\quad \Lambda_\delta\ge 0,\quad \lambda\ge 2,$$
$$\pi_{\beta\alpha,\delta}=\begin{cases}\pi_{\beta\alpha}^-+\dfrac{(\delta-1)\big(\pi_{\beta\alpha}^+-\pi_{\beta\alpha}^-\big)}{\lambda-1}, & \beta\neq\alpha,\ \alpha\in\mathcal{Z},\\[8pt] \pi_{\beta\alpha}^+-\dfrac{(\delta-1)\big(\pi_{\beta\alpha}^+-\pi_{\beta\alpha}^-\big)}{\lambda-1}, & \beta=\alpha,\ \alpha\in\mathcal{Z}.\end{cases}$$
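For instance, with the coarsest partition $\lambda=2$, the vertices of this decomposition are simply the two bounds themselves (taken in the opposite order for the diagonal entries), so every admissible off-diagonal rate is a convex combination of its lower and upper bound; this is only a routine consequence of Lemma 3, stated here for illustration:
$$\pi_{\beta\alpha,1}=\pi_{\beta\alpha}^-,\qquad \pi_{\beta\alpha,2}=\pi_{\beta\alpha}^+,\qquad \pi_{\beta\alpha}(h)=\Lambda_1\pi_{\beta\alpha}^-+\Lambda_2\pi_{\beta\alpha}^+,\quad \Lambda_1+\Lambda_2=1,\ \Lambda_1,\Lambda_2\ge0\quad(\beta\neq\alpha).$$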

3. Distributed Fault-Tolerant Protocol Design

In this section, the following control law is designed for systems with constant communication delay τ :
$$u_p(t)=K(r(t))\sum_{q=1,\,q\neq p}^{n}a_{pq}(r(t))\big[x_q(t-\tau)-x_p(t-\tau)\big]+\mathrm{Pro}_\Omega(x_p(t))-x_p(t)-\hat{\omega}_p(t)$$
where $K(r(t))$ represents the controller gain to be designed and $\hat{\omega}_p(t)$ stands for the estimate of the disturbance $\omega_p(t)$.
In addition, in a practical system, the disturbance and the fault are often coupled, so the disturbance observer and the fault estimator cannot be designed separately. Therefore, we design the following interconnected disturbance observer and fault adaptive law:
$$\begin{cases}\hat{\omega}_p(t)=O_p(t)+M_p x_p(t),\\[2pt] \dot{O}_p(t)=S_p\big(O_p(t)+M_p x_p(t)\big)-M_p\big[\hat{\theta}_p(t)\mu_p(t)+\big(I-\hat{\theta}_p(t)\big)\hat{\omega}_p(t)\big],\\[2pt] \mu_p(t)=K(r(t))\sum_{q=1,\,q\neq p}^{n}a_{pq}(r(t))\big[x_q(t-\tau)-x_p(t-\tau)\big]+\mathrm{Pro}_\Omega(x_p(t))-x_p(t),\end{cases}$$
$$\dot{\tilde{\theta}}_{pd}(t)=-\vartheta_d\,\tilde{\omega}_p^T(t)M_p\big[\hat{\omega}_p(t)-\mu_p(t)\big]$$
where $O_p(t)$ is an auxiliary variable, and $M_p$ is the observer gain to be computed. $\hat{\theta}_p(t)=\mathrm{diag}\{\hat{\theta}_{p1}(t),\hat{\theta}_{p2}(t),\dots,\hat{\theta}_{pm}(t)\}$ is the estimate of the failure matrix $\theta_p$, $\tilde{\theta}_p(t)=\hat{\theta}_p(t)-\theta_p$, $\tilde{\omega}_p(t)=\hat{\omega}_p(t)-\omega_p(t)$, and $\vartheta_d$ is a positive constant. $\mathrm{Pro}_\Omega(x)$ stands for the projection of the vector $x$ onto the closed convex set $\Omega$.
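As a rough illustration of how the protocol (4) might be evaluated at a single time step, the sketch below computes $u_p(t)$ for one agent given delayed neighbor states and a disturbance estimate supplied by the observer (5); the projection set, gain and states are placeholders, and the observer and adaptive-law updates themselves are omitted.

```python
import numpy as np

def control_input(K, a_p, x_delayed, p, proj, x_p_now, w_hat_p):
    """Evaluate the protocol (4) for agent p.

    K         : controller gain matrix for the current topology mode
    a_p       : adjacency row a_{pq}(r(t)) of agent p
    x_delayed : list of all agent states at time t - tau
    proj      : projection operator Pro_Omega(.)
    """
    consensus = sum(a_p[q] * (x_delayed[q] - x_delayed[p])
                    for q in range(len(x_delayed)) if q != p)
    return K @ consensus + proj(x_p_now) - x_p_now - w_hat_p

# Illustrative data (not from the paper): 3 agents in the plane, box constraint Omega = [-1,1]^2.
proj = lambda v: np.clip(v, -1.0, 1.0)
K = 0.15 * np.eye(2)
a_p = np.array([0.0, 1.0, 1.0])                       # agent 0's adjacency row
x_delayed = [np.array([2.0, 0.0]), np.array([0.5, 0.5]), np.array([-0.5, 1.0])]
u0 = control_input(K, a_p, x_delayed, 0, proj, np.array([2.0, 0.0]), np.zeros(2))
print(u0)
```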
Next, we solve the anti-disturbance fault-tolerant constrained consensus problem of system (1) under semi-Markov switching topology.

4. Main Result

Theorem 1. 
For the undirected and connected graph $\bar{\mathcal{G}}(r(t))$ and the multi-agent system (1), let the attenuation level $\gamma>0$ be given, along with the partial failure coefficient $\theta_0$ and the matrix $H$. The whole system achieves anti-disturbance fault-tolerant constrained consensus with an $H_\infty$ disturbance attenuation level $\gamma$ under the control law (4) and the disturbance observer (5), (6) if there exist matrices $K(r(t))=\bar{Y}(r(t))\bar{X}^{-1}$, $\hat{\Phi}$, $M_p$ ($p=1,2,\dots,n$) and $F_i$ ($i=1,2,\dots,6$) with suitable dimensions, a suitable parameter $b_3$, matrices $W>0$, $\hat{D}>0$, $\hat{Z}>0$ and a positive number $\hat{\sigma}$ such that the following matrix inequalities hold:
$$\Xi=\begin{bmatrix}\Xi_1&\hat{\Lambda}^T&\sigma_2\hat{\Gamma}\\ *&-\sigma_1I&0\\ *&*&-\sigma_2I\end{bmatrix}<0,$$
$$\gamma^2\Phi F_1-\tfrac{1}{2}\Phi>0,\quad \gamma^2F_2-I>0,\quad \gamma^2F_3-\tfrac{1}{2}I>0,\quad \gamma^2F_4-I>0,\quad \gamma^2F_5-I>0,\quad \gamma^2F_6-I>0.$$
where
$$\Xi_1=\begin{bmatrix}\Xi_{11}&\Xi_{12}\\ *&-X^{-1}\end{bmatrix},\qquad \Xi_{11}=\begin{bmatrix}\tilde{\psi}_{11}&\hat{Z}-\frac{1}{2}\theta_0Y(r(t))&\frac{1}{2}X&0&\frac{1}{2}I\\ *&-\hat{D}-\hat{Z}&0&0&0\\ *&*&\hat{b}_2&0&0\\ *&*&*&\tilde{\psi}_{44}&0\\ *&*&*&*&-\gamma^2I\end{bmatrix},$$
$$\tilde{\psi}_{11}=\frac{1}{2}\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)W,\qquad \tilde{\psi}_{44}=b_3^2I+2(S-M),\qquad \Xi_{12}=\big[\,0,\ \tau\theta_0Y(r(t)),\ \tau\theta_0X,\ 0,\ \tau I\,\big]^T,$$
$$X=I_n\otimes\bar{X},\qquad Y(r(t))=L(r(t))\otimes\bar{Y}(r(t))=(L(r(t))\otimes K(r(t)))X,\qquad S=\mathrm{diag}\{S_1,S_2,\dots,S_n\},\qquad M=\mathrm{diag}\{M_1,M_2,\dots,M_n\},$$
$$\hat{\Lambda}=\begin{bmatrix}0&Y(r(t))&0&0&0&0\\ 0&0&0&0&0&\tau Y(r(t))\\ 0&0&0&0&0&\tau X\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix},\qquad \hat{\Gamma}=\mathrm{diag}\Big\{\tfrac{1}{2}\hat{\sigma}\theta_0H,\ \hat{\sigma}\theta_0H,\ \hat{\sigma}\theta_0H,\ 0,\ 0,\ 0\Big\},$$
$$\sigma_1=\mathrm{diag}\{\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}_1I\},\qquad \sigma_2=\mathrm{diag}\{\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I,\hat{\sigma}I\},\qquad \Phi=I_n-\tfrac{1}{n}\mathbf{1}_n\mathbf{1}_n^T,\qquad \hat{Z}=XZX,\qquad ZX^{-1}=I.$$
Proof. 
Choose a Lyapunov function, shown as follows:
$$V(t)=V_1(t)+V_2(t)+V_3(t)$$
where
$$V_1(t)=\sum_{p=1}^{n}\sum_{d=1}^{m}\frac{\tilde{\theta}_{pd}^2(t)}{\vartheta_d}+\sum_{p=1}^{n}\tilde{\omega}_p^T(t)\tilde{\omega}_p(t)+\sum_{p=1}^{n}\frac{1}{2}\max_p\big\|x_p(t)-\mathrm{Pro}_\Omega(x_p(t))\big\|^2$$
$$V_2(t)=\frac{1}{2}\sum_{p=1}^{n}\Big[x_p(t)-\frac{1}{n}\sum_{q=1}^{n}x_q(t)\Big]^TP(r(t))\Big[x_p(t)-\frac{1}{n}\sum_{q=1}^{n}x_q(t)\Big]$$
$$V_3(t)=\sum_{p=1}^{n}\tau\int_{-\tau}^{0}\!\int_{t+\theta}^{t}\dot{x}_p^T(s)Z_p\dot{x}_p(s)\,ds\,d\theta+\sum_{p=1}^{n}\int_{t-\tau}^{t}x_p^T(s)D_px_p(s)\,ds$$
The weak infinitesimal operator L of V ( t ) can be calculated as follows:
$$\mathbb{E}\{\mathcal{L}V_1(t)\}\le 2\sum_{p=1}^{n}\sum_{d=1}^{m}\frac{\tilde{\theta}_{pd}(t)\dot{\tilde{\theta}}_{pd}(t)}{\vartheta_d}-\sum_{p=1}^{n}\max_p\big\|x_p(t)-\mathrm{Pro}_\Omega(x_p(t))\big\|^2+2\sum_{p=1}^{n}\tilde{\omega}_p^T(t)\dot{\tilde{\omega}}_p(t)$$
$$\mathbb{E}\{\mathcal{L}V_3(t)\}=x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)$$
Define $B_p=x_p(t)-\frac{1}{n}\sum_{q=1}^{n}x_q(t)$; then the following results can be obtained directly:
$$\begin{aligned}\mathbb{E}\{\mathcal{L}V_2(t)\}&=\sum_{p=1}^{n}B_p^TP_\beta\dot{B}_p+\frac{1}{2}\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)\Big(\sum_{p=1}^{n}B_p^TP_\alpha B_p\Big)\\&=\sum_{p=1}^{n}B_p^TP_\beta\big[\omega_p(t)-\theta_p\hat{\omega}_p(t)\big]+\sum_{p=1}^{n}B_p^TP_\beta\theta_p\big[\mathrm{Pro}_\Omega(x_p(t))-x_p(t)\big]\\&\quad+\sum_{p=1}^{n}B_p^TP_\beta\theta_pK(r(t))\sum_{q\in N_p(t)}a_{pq}(r(t))\big[x_q(t-\tau)-x_p(t-\tau)\big]+\frac{1}{2}\sum_{p=1}^{n}B_p^T\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha B_p\end{aligned}$$
Therefore, considering the above analysis, using augmented vector and adaptive law (6), we can obtain the following results:
$$\begin{aligned}\mathbb{E}\{\mathcal{L}V(t)\}\le{}&-\|\Delta(t)\|^2-x^T(t)P_\beta\theta\big(L(r(t))\otimes K(r(t))\big)x(t-\tau)+x^T(t)P_\beta\Delta(t)+x^T(t)P_\beta z(t)\\&+2\tilde{\omega}^T(t)(S-M)\tilde{\omega}(t)+\frac{1}{2}x^T(t)\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha x(t)+x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)\\&+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)\end{aligned}$$
where $z(t)=\omega(t)-\theta\hat{\omega}(t)$. See Appendix A for the detailed derivation of Equation (9). Using the traditional $H_\infty$ control theory requires the zero initial condition to be satisfied, which brings some conservatism, so the following performance requirement is built based on the actual initial conditions:
$$\begin{aligned}J=\int_0^{\infty}\Big[&y^T(t)y(t)-\gamma^2\Big(z^T(t)z(t)+x^T(0)\Phi P_\beta F_1x(0)+\tilde{\omega}^T(0)F_2\tilde{\omega}(0)+\hat{\Delta}^T(0)F_3\hat{\Delta}(0)\\&+\frac{\tilde{\theta}^T(0)F_4\tilde{\theta}(0)}{\vartheta_d}+\int_{-\tau}^{0}x^T(s)F_5Dx(s)\,ds+\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)F_6Z\dot{x}(s)\,ds\,d\theta\Big)\Big]dt<0\end{aligned}$$
where
$$y(t)=\big[y_1(t),\,y_2(t),\,y_3(t)\big]^T,\qquad y_1(t)=b_1\Big(I_n-\frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T\Big)x(t),\qquad y_2(t)=b_2\Delta(t),\qquad y_3(t)=b_3\tilde{\omega}(t),$$
$$\hat{\Delta}(0)=\max_p\big\|x_p(0)-\mathrm{Pro}_\Omega(x_p(0))\big\|,\qquad p=1,2,\dots,n$$
Therefore, define $\tilde{J}=y^T(t)y(t)-\gamma^2z^T(t)z(t)+\dot{V}(t)$; then we have
$$\begin{aligned}\tilde{J}\le{}&\Delta^T(t)\big(b_2^2-1\big)\Delta(t)+x^T(t)b_1^2\Phi x(t)+x^T(t)\Delta(t)+x^T(t)P_\beta z(t)+b_3^2\tilde{\omega}^T(t)\tilde{\omega}(t)+2\tilde{\omega}^T(t)(S-M)\tilde{\omega}(t)\\&-\gamma^2z^T(t)z(t)+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)P_\beta\theta\big(L(r(t))\otimes K(r(t))\big)x(t-\tau)+\frac{1}{2}x^T(t)\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha x(t)\\&+x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)\\={}&\xi^T(t)\Xi_{11}\xi(t)+\tau^2\dot{x}^T(t)ZZ^{-1}Z\dot{x}(t)=\xi^T(t)\Xi_{11}\xi(t)+\xi^T(t)\Xi_{12}Z^{-1}\Xi_{12}^T\xi(t)=\xi^T(t)\Xi\xi(t)\end{aligned}$$
where $\xi(t)=\big[x^T(t),\,x^T(t-\tau),\,\Delta^T(t),\,\tilde{\omega}^T(t),\,z^T(t)\big]^T$ and
$$\Xi=\begin{bmatrix}\Xi_{11}&\Xi_{12}\\ *&\Xi_{22}\end{bmatrix}$$
where
$$\Xi_{11}=\begin{bmatrix}\psi_{11}&\psi_{12}&\frac{1}{2}P_\beta&0&\frac{1}{2}P_\beta\\ *&-D-Z&0&0&0\\ *&*&-I&0&0\\ *&*&*&\psi_{44}&0\\ *&*&*&*&-\gamma^2I\end{bmatrix}$$
$$\psi_{11}=\frac{1}{2}\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha+D-Z,\qquad \psi_{12}=Z-\frac{1}{2}P_\beta\theta\big(L(r(t))\otimes K(r(t))\big),\qquad \psi_{44}=2(S-M),$$
$$\Xi_{12}=\big[\,0,\ \tau\theta\big(L(r(t))\otimes K(r(t))\big)Z,\ \tau\theta Z,\ 0,\ \tau Z\,\big]^T,\qquad \Xi_{22}=-Z,$$
$$S=\mathrm{diag}\{S_1,S_2,\dots,S_n\},\qquad M=\mathrm{diag}\{M_1,M_2,\dots,M_n\}.$$
However, $\theta$ in $\Xi$ is unknown. To deal with it, we utilize the decomposition $\theta_p=\theta_{0p}(I+g_p)$ to reduce conservatism, where
$$\theta_{0p}=\frac{\underline{\theta}_p+\bar{\theta}_p}{2},\qquad g_p=\frac{\theta_p-\theta_{0p}}{\theta_{0p}},\qquad h_p=\frac{\bar{\theta}_p-\underline{\theta}_p}{\underline{\theta}_p+\bar{\theta}_p},\qquad |g_p|\le h_p\le I$$
Therefore, the following formula can be obtained from (11):
$$\theta=\theta_0(I+G)$$
where
$$|G|\le H\le I,\qquad G=\mathrm{diag}\{g_1,g_2,\dots,g_n\},\qquad H=\mathrm{diag}\{h_1,h_2,\dots,h_n\}$$
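As a quick numerical illustration (using, for instance, the fault range $0.5\le\theta\le 1$ that appears later in Section 5), the decomposition gives
$$\theta_0=\frac{0.5+1}{2}=0.75,\qquad h=\frac{1-0.5}{0.5+1}=\frac{1}{3},$$
so the unknown efficiency factor is written as $\theta=0.75(1+g)$ with $|g|\le 1/3$.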
By utilizing (10), (12) and Lemma 2, the following result can be obtained:
$$\Xi\le\tilde{\Xi}+\tilde{\Gamma}F\tilde{\Lambda}+\tilde{\Lambda}^TF^T\tilde{\Gamma}^T$$
Based on Lemma 2, (13) can be converted to (14)
$$\begin{bmatrix}\tilde{\Xi}&\tilde{\Lambda}^T&\sigma\tilde{\Gamma}\\ *&-\sigma I&0\\ *&*&-\sigma I\end{bmatrix}<0$$
where
$$\tilde{\Xi}=\begin{bmatrix}\tilde{\Xi}_{11}&\tilde{\Xi}_{12}\\ *&\tilde{\Xi}_{22}\end{bmatrix}$$
$$\tilde{\Xi}_{11}=\begin{bmatrix}\xi_1&\xi_2&\frac{1}{2}P_\beta&0&\frac{1}{2}P_\beta\\ *&-D-Z&0&0&0\\ *&*&(b_2^2-1)I&0&0\\ *&*&*&\xi_3&0\\ *&*&*&*&-\gamma^2I\end{bmatrix},\qquad \tilde{\Lambda}=\begin{bmatrix}0&P_\beta K(r(t))&0&0&0&0\\ 0&0&0&0&0&\tau K(r(t))Z\\ 0&0&0&0&0&\tau Z\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix},$$
$$\tilde{\Xi}_{12}=\big[\,0,\ \tau\theta_0\big(L(r(t))\otimes K(r(t))\big)Z,\ \tau\theta_0Z,\ 0,\ \tau Z\,\big]^T,\qquad \tilde{\Xi}_{22}=-Z,$$
$$\xi_1=\frac{1}{2}\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha+D-Z+b_1^2\Phi,\qquad \xi_2=Z-\frac{1}{2}P_\beta\theta_0\big(L(r(t))\otimes K(r(t))\big),\qquad \xi_3=b_3^2I+2(S-M),$$
$$\tilde{\Gamma}=\mathrm{diag}\Big\{\tfrac{1}{2}\theta_0H,\ \theta_0H,\ \theta_0H,\ 0,\ 0,\ 0\Big\}.$$
According to the above, if inequality (14) holds, it is clear that $\tilde{J}<0$. Next, we can obtain
$$\begin{aligned}\int_0^{\infty}\tilde{J}\,dt={}&J+V(t)+\Big[\gamma^2\int_0^{t}x^T(0)\Phi P_\beta F_1x(0)\,dt-\frac{1}{2}\gamma^2x^T(0)\Phi P_\beta x(0)\Big]+\Big[\gamma^2\int_0^{t}\tilde{\omega}^T(0)F_2\tilde{\omega}(0)\,dt-\tilde{\omega}^T(0)\tilde{\omega}(0)\Big]\\&+\Big[\gamma^2\int_0^{t}\Delta^T(0)F_3\Delta(0)\,dt-\Delta^T(0)\Delta(0)\Big]+\Big[\gamma^2\int_0^{t}\frac{\tilde{\theta}^T(0)F_4\tilde{\theta}(0)}{\vartheta_d}\,dt-\frac{\tilde{\theta}^T(0)\tilde{\theta}(0)}{\vartheta_d}\Big]\\&+\Big\{\gamma^2\int_0^{t}\Big[\int_{-\tau}^{0}x^T(s)F_5Dx(s)\,ds\Big]dt-\gamma^2\int_{-\tau}^{0}x^T(s)Dx(s)\,ds\Big\}\\&+\Big\{\gamma^2\int_0^{t}\Big[\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)F_6Z\dot{x}(s)\,ds\,d\theta\Big]dt-\tau\gamma^2\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)Z\dot{x}(s)\,ds\,d\theta\Big\}\end{aligned}$$
See Appendix B for the detailed derivation process.
Combining $\tilde{J}<0$ and $V(t)>0$, when the linear matrix inequalities (14) and (8) are satisfied, we have $J<0$, $\lim_{t\to\infty}\|x_p(t)-\mathrm{Pro}_\Omega(x_p)\|=0$ and $\lim_{t\to\infty}\|x_p(t)-\frac{1}{n}\sum_{q=1}^{n}x_q(t)\|=0$. Therefore, the whole system achieves anti-disturbance fault-tolerant constrained consensus with the required performance indicators. However, the above matrix inequality contains coupling terms; define:
$$D=\mathrm{diag}\{X,X,X,I,I,X^{-1},X,X,X,X,X,X^{-1},X,X,X,X,X,X\}$$
Multiplying $\tilde{\Xi}$ (i.e., inequality (14)) by $D$ on the left and on the right, and introducing the following transformations to better illustrate the theorem:
$$W=XP_\alpha X,\quad \hat{D}=XDX,\quad \hat{Z}=XZX,\quad \hat{\Phi}=Xb_1^2\Phi X,\quad Y(r(t))=(L(r(t))\otimes K(r(t)))X,$$
$$\hat{b}_2=X(b_2^2-1)X,\quad \hat{\sigma}=X\sigma X,\quad \hat{\sigma}_1=X\sigma^{-1}X,\quad P_\beta X=I,\quad ZX^{-1}=I,$$
we obtain the linear matrix inequalities (7) and (8), and the controller gains are given by $K(r(t))=\bar{Y}(r(t))\bar{X}^{-1}$. The proof of Theorem 1 is completed. □
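The conditions (7) and (8) are linear matrix inequalities once the time-varying transition rate is handled (see Theorem 2 below), so they can be checked with a standard SDP solver. The sketch below is only a minimal illustration of that workflow with cvxpy on a toy Lyapunov-type LMI; it does not encode the full matrices $\Xi$, $\hat{\Lambda}$, $\hat{\Gamma}$ of Theorem 1.

```python
import cvxpy as cp
import numpy as np

# Toy data: a stable matrix standing in for the closed-loop dynamics (illustrative only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
eps = 1e-6

P = cp.Variable((2, 2), symmetric=True)                 # Lyapunov matrix to be found
M = A.T @ P + P @ A
constraints = [P >> eps * np.eye(2),                    # P > 0
               0.5 * (M + M.T) << -eps * np.eye(2)]     # A^T P + P A < 0 (symmetrized form)

prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)                                      # 'optimal' means the LMI is feasible
print(P.value)
```

In the same spirit, the blocks of (7)-(8) (and of (15) in Theorem 2) could be assembled with cp.bmat and solved for the decision variables such as $\bar{X}$, $\bar{Y}(r(t))$, $\hat{D}$, $\hat{Z}$ and $\hat{\sigma}$, from which the gains $K(r(t))=\bar{Y}(r(t))\bar{X}^{-1}$ follow.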
Since the transition rate of the semi-Markov process in Theorem 1 is time-varying and difficult to handle numerically, we address this problem with the following theorem.
Theorem 2. 
For the undirected and connected graph $\bar{\mathcal{G}}(r(t))$ and the multi-agent system (1), let the attenuation level $\gamma>0$ be given, assume the transition rate has upper and lower bounds, and let the partial failure coefficient $\theta_0$ and the matrix $H$ be given. The whole system achieves anti-disturbance fault-tolerant constrained consensus in the presence of time delay and semi-Markov switching topology with an $H_\infty$ disturbance attenuation level $\gamma$ under the control law (4) and the disturbance observer (5), (6) if there exist matrices $K(r(t))=\bar{Y}(r(t))\bar{X}^{-1}$, $\hat{\Phi}$, $M_p$ ($p=1,2,\dots,n$) and $F_i$ ($i=1,2,\dots,6$) with suitable dimensions, a suitable parameter $b_3$, matrices $W>0$, $\hat{W}>0$, $\hat{D}>0$, $\hat{Z}>0$ and a positive number $\hat{\sigma}$ such that the following matrix inequalities hold:
$$\Omega_1=\begin{bmatrix}\bar{\Omega}_1&\hat{\Lambda}^T&\sigma_2\hat{\Gamma}\\ *&-\sigma_1I&0\\ *&*&-\sigma_2I\end{bmatrix}<0,\qquad \Omega_2=\begin{bmatrix}\bar{\Omega}_2&\hat{\Lambda}^T&\sigma_2\hat{\Gamma}\\ *&-\sigma_1I&0\\ *&*&-\sigma_2I\end{bmatrix}<0,$$
$$\gamma^2\Phi F_1-\tfrac{1}{2}\Phi>0,\quad \gamma^2F_2-I>0,\quad \gamma^2F_3-\tfrac{1}{2}I>0,\quad \gamma^2F_4-I>0,\quad \gamma^2F_5-I>0,\quad \gamma^2F_6-I>0$$
where:
$$\bar{\Omega}_i=\begin{bmatrix}\bar{\Omega}_{i11}&\bar{\Omega}_{12}\\ *&-X^{-1}\end{bmatrix},\ i=1,2,\qquad \bar{\Omega}_{i11}=\begin{bmatrix}\psi_{i11}&\psi_{12}&\frac{1}{2}X&0&\frac{1}{2}I\\ *&-\hat{D}-\hat{Z}&0&0&0\\ *&*&\hat{b}_2&0&0\\ *&*&*&\psi_{44}&0\\ *&*&*&*&-\gamma^2I\end{bmatrix},$$
$$\hat{\Lambda}=\begin{bmatrix}0&Y(r(t))&0&0&0&0\\ 0&0&0&0&0&\tau Y(r(t))\\ 0&0&0&0&0&\tau X\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix},$$
$$\psi_{i11}=\frac{1}{2}W_i+\hat{D}-\hat{Z}+\hat{\Phi},\qquad \psi_{12}=\hat{Z}-\frac{1}{2}\theta_0Y(r(t)),\qquad \psi_{44}=b_3^2I+2(S-M),$$
$$\bar{\Omega}_{12}=\big[\,0,\ \tau\theta_0Y(r(t)),\ \tau\theta_0X,\ 0,\ \tau I\,\big]^T,\qquad \Phi=I_n-\tfrac{1}{n}\mathbf{1}_n\mathbf{1}_n^T,\qquad X=I_n\otimes\bar{X},$$
$$Y(r(t))=L(r(t))\otimes\bar{Y}(r(t))=(L(r(t))\otimes K(r(t)))X,$$
$$W_1=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^+W+\pi_{\beta\beta}^-\hat{W},\qquad W_2=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^-W+\pi_{\beta\beta}^+\hat{W}$$
Proof. 
Based on Lemma 3, the term $\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha$ can be split into two portions: one, denoted $\Delta_\delta$, containing the element $\delta$, and the other, $Q$, without the element $\delta$. Note that $\sum_{\delta=1}^{\lambda}\Lambda_\delta=1$ and $\Lambda_\delta\ge0$; then one has
$$\sum_{\delta=1}^{\lambda}\Lambda_\delta\Delta_\delta+Q=\sum_{\delta=1}^{\lambda}\Lambda_\delta\big(\Delta_\delta+Q\big)$$
That is to say
$$\sum_{\delta=1}^{\lambda}\Lambda_\delta\Delta_\delta+Q<0$$
can be ensured by
$$\Delta_\delta+Q<0$$
Then
$$\begin{aligned}\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha&=\sum_{\delta=1}^{\lambda}\sum_{\alpha\in\mathcal{Z}}\Lambda_\delta\pi_{\beta\alpha,\delta}P_\alpha=\sum_{\delta=1}^{\lambda}\Lambda_\delta\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha,\delta}P_\alpha=\sum_{\delta=1}^{\lambda}\Lambda_\delta\big[\pi_{\beta1,\delta}P_1+\pi_{\beta2,\delta}P_2+\dots+\pi_{\beta\beta,\delta}P_\beta\big]\\&=\sum_{\delta=1}^{\lambda}\Lambda_\delta\Big[\pi_{\beta1}^-P_1+\frac{(\delta-1)\big(\pi_{\beta1}^+-\pi_{\beta1}^-\big)}{\lambda-1}P_1+\pi_{\beta2}^-P_2+\frac{(\delta-1)\big(\pi_{\beta2}^+-\pi_{\beta2}^-\big)}{\lambda-1}P_2+\dots+\pi_{\beta\beta}^+P_\beta-\frac{(\delta-1)\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)}{\lambda-1}P_\beta\Big]\\&=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^-P_\alpha+\pi_{\beta\beta}^+P_\beta+\sum_{\delta=1}^{\lambda}\Lambda_\delta\Big[\frac{\big(\pi_{\beta1}^+-\pi_{\beta1}^-\big)(\delta-1)}{\lambda-1}P_1+\frac{\big(\pi_{\beta2}^+-\pi_{\beta2}^-\big)(\delta-1)}{\lambda-1}P_2+\dots-\frac{\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)(\delta-1)}{\lambda-1}P_\beta\Big]\end{aligned}$$
Note that
$$\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha=\Theta_1+\Theta_2$$
where
$$\Theta_1=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^-P_\alpha+\pi_{\beta\beta}^+P_\beta,\qquad \Theta_2=\sum_{\delta=1}^{\lambda}\Lambda_\delta\Big[\frac{\big(\pi_{\beta1}^+-\pi_{\beta1}^-\big)(\delta-1)}{\lambda-1}P_1+\frac{\big(\pi_{\beta2}^+-\pi_{\beta2}^-\big)(\delta-1)}{\lambda-1}P_2+\dots-\frac{\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)(\delta-1)}{\lambda-1}P_\beta\Big].$$
We have:
$$\Theta_4=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\big(\pi_{\beta\alpha}^+-\pi_{\beta\alpha}^-\big)P_\alpha-\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)P_\beta,\qquad \Theta_3=\frac{\delta-1}{\lambda-1}\,\Theta_4,$$
$$\Theta_2=\sum_{\delta=1}^{\lambda}\Lambda_\delta\frac{\delta-1}{\lambda-1}\Big[\big(\pi_{\beta1}^+-\pi_{\beta1}^-\big)P_1+\big(\pi_{\beta2}^+-\pi_{\beta2}^-\big)P_2+\dots-\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)P_\beta\Big]=\sum_{\delta=1}^{\lambda}\Lambda_\delta\frac{\delta-1}{\lambda-1}\Big[\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\big(\pi_{\beta\alpha}^+-\pi_{\beta\alpha}^-\big)P_\alpha-\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)P_\beta\Big]=\sum_{\delta=1}^{\lambda}\Lambda_\delta\,\Theta_3.$$
The following can be obtained from (16): $\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha<0$ is equivalent to $\Theta_1+\Theta_3<0$, where $\Theta_1$ is a determined constant matrix.
  • If $\Theta_4\ge0$, then $\max(\Theta_1+\Theta_3)=\Theta_1+\Theta_4$.
  • If $\Theta_4<0$, then $\max(\Theta_1+\Theta_3)=\Theta_1$.
where
$$\Theta_1+\Theta_4=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^-P_\alpha+\pi_{\beta\beta}^+P_\beta+\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\big(\pi_{\beta\alpha}^+-\pi_{\beta\alpha}^-\big)P_\alpha-\big(\pi_{\beta\beta}^+-\pi_{\beta\beta}^-\big)P_\beta=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^+P_\alpha+\pi_{\beta\beta}^-P_\beta,$$
$$\Theta_1=\sum_{\alpha\in\mathcal{Z},\alpha\neq\beta}\pi_{\beta\alpha}^-P_\alpha+\pi_{\beta\beta}^+P_\beta.$$
So, $\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha<0$ is equivalent to $\Theta_1<0$ and $\Theta_1+\Theta_4<0$. Then, we define $W=XP_\alpha X$ and $\hat{W}=XP_\beta X$. Linear matrix inequality (7) can thus be transformed into the linear matrix inequalities (15), and Theorem 2 is proved. □
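In practice, Theorem 2 reduces the time-varying-rate condition to checking the matrix inequality at two "vertex" rate configurations: lower bounds off the diagonal with the upper bound on the diagonal, and vice versa. The toy sketch below illustrates this vertex check for the term $\sum_{\alpha}\pi_{\beta\alpha}(h)P_\alpha$ alone, reusing the first row of the transition-rate bounds from Section 5; the matrices $P_\alpha$ are placeholders (with $P_\beta$ deliberately larger so that both vertices pass), and the sketch does not reproduce the full LMIs (15).

```python
import numpy as np

n_modes, dim, beta = 3, 2, 0                       # current mode beta = 1st mode
# Placeholder matrices standing in for P_alpha; P_beta is made larger on purpose.
P = [5.0 * np.eye(dim), 1.0 * np.eye(dim), 1.0 * np.eye(dim)]

# Transition-rate bounds for the row of mode 1 (from Section 5; diagonal entry is negative).
pi_lo = [-7.8, 0.5, 0.8]
pi_hi = [-1.3, 3.0, 4.5]

def rate_term(diag_hi: bool) -> np.ndarray:
    """Sum_alpha pi_{beta,alpha} P_alpha evaluated at a vertex of the rate box."""
    total = np.zeros((dim, dim))
    for a in range(n_modes):
        if a == beta:
            rate = pi_hi[a] if diag_hi else pi_lo[a]    # diagonal bound
        else:
            rate = pi_lo[a] if diag_hi else pi_hi[a]    # off-diagonal bound
        total += rate * P[a]
    return total

for diag_hi, name in ((True, "Theta_1"), (False, "Theta_1 + Theta_4")):
    eigs = np.linalg.eigvalsh(rate_term(diag_hi))
    print(name, eigs, bool(np.all(eigs < 0)))           # both negative definite here
```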

5. Numerical Example

In this section, it is assumed that the dynamic interaction topology switches according to a semi-Markov process with three modes, and four agents are taken as an example. The edge weights are assumed to be 1 in the first mode and 10 in the other two modes. The corresponding topologies are shown in Figure 1.
The corresponding Laplacian matrices of communication graph are described as follows:
$$L_1=\begin{bmatrix}2&-1&0&-1\\-1&2&-1&0\\0&-1&2&-1\\-1&0&-1&2\end{bmatrix},\qquad L_2=\begin{bmatrix}10&-10&0&0\\-10&20&-10&0\\0&-10&20&-10\\0&0&-10&10\end{bmatrix},\qquad L_3=\begin{bmatrix}20&-10&0&-10\\-10&30&-10&-10\\0&-10&10&0\\-10&-10&0&20\end{bmatrix}.$$
It is assumed that the transition probability is bounded and satisfies:
$$\pi_{11}(h)\in(-7.8,-1.3),\quad \pi_{12}(h)\in(0.5,3),\quad \pi_{13}(h)\in(0.8,4.5),\quad \pi_{21}(h)\in(0.6,3.8),\quad \pi_{22}(h)\in(-9,-1.6),$$
$$\pi_{23}(h)\in(1,5),\quad \pi_{31}(h)\in(1.6,5.6),\quad \pi_{32}(h)\in(0.7,6),\quad \pi_{33}(h)\in(-12,-2).$$
In the simulation, the fault ranges are set to $0.5\le\theta_{1m}\le1$, $0.7\le\theta_{2m}\le1$, $0.5\le\theta_{3m}\le1$ and $0.65\le\theta_{4m}\le1$, $m=1,2$. The communication delay is set to $\tau=0.13$. Let
$$S_1=\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\begin{bmatrix}\sin(t\pi/40-\pi)&0\\0&1\end{bmatrix},\qquad S_3=\begin{bmatrix}1&0\\0&0\end{bmatrix},\qquad S_4=\begin{bmatrix}\cos(t\pi/50-\pi)&0\\0&0\end{bmatrix}.$$
$$b_1=0.13,\qquad b_2=0.99,\qquad b_3=0.06.$$
Based on Theorem 2, the calculated gain is as follows:
$$M_1=\begin{bmatrix}10&0\\0&10\end{bmatrix},\qquad M_2=\begin{bmatrix}9.53&0\\0&9.53\end{bmatrix},\qquad M_3=\begin{bmatrix}9.91&0\\0&9.91\end{bmatrix},\qquad M_4=\begin{bmatrix}10&0\\0&10\end{bmatrix}.$$
$$K(1)=\begin{bmatrix}0.1515&0\\0&0.1515\end{bmatrix},\qquad K(2)=\begin{bmatrix}0.0155&0\\0&0.0155\end{bmatrix},\qquad K(3)=\begin{bmatrix}0.0134&0\\0&0.0134\end{bmatrix}.$$
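For readers who want to reproduce a rough version of this setup, the sketch below simulates the four agents under the protocol (4) with the gain $K(1)$ and the ring topology of mode 1, a constant delay $\tau=0.13$, partial actuator faults and constant disturbances. It is deliberately simplified: the topology is frozen in mode 1 rather than switching, the disturbance estimate is taken as exact instead of running the observer (5)-(6), and the constraint set $\Omega$ (a box here) is an assumption, since the paper does not spell it out.

```python
import numpy as np

dt, T, tau = 0.001, 10.0, 0.13
steps, delay = round(T / dt), round(tau / dt)
n, m = 4, 2

A1 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)          # adjacency of mode 1 (ring, weight 1)
K = 0.1515 * np.eye(m)                              # gain K(1)
theta = [np.diag([0.6, 0.6]), np.diag([0.8, 0.8]),
         np.diag([0.6, 0.6]), np.diag([0.7, 0.7])]  # assumed partial-failure levels
w = [np.array([0.2, -0.1]), np.array([-0.1, 0.1]),
     np.array([0.1, 0.2]), np.array([-0.2, -0.1])]  # assumed constant disturbances

proj = lambda v: np.clip(v, -1.0, 1.0)              # assumed constraint set: the box [-1, 1]^2

x = np.array([[2.0, -1.5], [-2.0, 1.0], [1.5, 2.0], [-1.0, -2.0]])
hist = [x.copy()] * (delay + 1)                     # state history for the delayed term

for _ in range(steps):
    x_del = hist[0]
    x_new = x.copy()
    for p in range(n):
        consensus = sum(A1[p, q] * (x_del[q] - x_del[p]) for q in range(n))
        u = K @ consensus + proj(x[p]) - x[p] - w[p]     # protocol (4) with exact w_hat
        x_new[p] = x[p] + dt * (theta[p] @ u + w[p])     # dynamics (1) with fault (2)
    x = x_new
    hist.append(x.copy())
    hist.pop(0)

print(x)                     # agents cluster near a common point inside the assumed box
print(np.ptp(x, axis=0))     # spread across agents is small compared with the initial spread
```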
The failure and disturbance estimation results for each agent are shown in Figure 2, Figure 3, Figure 4 and Figure 5. When disturbances and actuator failures occur in the system, the observer quickly and accurately estimates the disturbances, detects the occurrence of the failure and, finally, estimates the failure rate of the actuator, which improves the control performance.
Considering the influence of disturbances, actuator failures and topology changes on the system, the trajectory of each agent is shown in Figure 6. Agents 1 and 3 are set to fail at 6 s, and Agents 2 and 4 at 7 s. The topology changes according to the semi-Markov process. For the method of reference [37], under the same conditions with disturbances and partial actuator failures, the trajectory of each agent is shown in Figure 7. Obviously, without the anti-disturbance fault-tolerant algorithm the system lacks robustness: once partial actuator failures and disturbances occur during operation, the control performance is greatly degraded. Figure 8 shows the semi-Markov switching signal with three modes. We can clearly see in Figure 6 that, under the designed fault-tolerant controller, after some transient fluctuations all the agents finally reach the consensus point in Ω.

6. Conclusions

This article studies the anti-disturbance fault-tolerant constrained consensus problem of time-delay multi-agent systems with semi-Markov topology switching. Firstly, a disturbance observer is designed to observe the external disturbance, and an adaptive law is designed to estimate the fault information by combining the known information of the system. Then, in order to bypass the zero initial condition of the traditional H∞ control method, a new performance index is designed by making use of the actual initial conditions, and the time-varying transition rate of the semi-Markov topology switching is handled by using its upper and lower bounds, so that the gains of the controller and disturbance observer can be calculated. After this, the disturbance attenuation level γ of the closed-loop system is given. Finally, the feasibility of our approach is verified by numerical simulations. Compared with reference [37], it is obvious that, after the use of the anti-disturbance fault-tolerant control algorithm, the multi-agent system has better disturbance rejection and fault tolerance in case of failure. This article mainly discusses actuator failures of a constant time-delay multi-agent system with a fixed number of agents. In future work, we will consider more complex fault models with time-varying delay and an uncertain number of agents in the system.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C. and F.Z.; validation, Y.C., F.Z. and J.L.; formal analysis, F.Z. and J.L.; investigation, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Zhejiang Provincial Natural Science Foundation of China (No. LZ22F030008), the National Natural Science Foundation of China (No. 61733009), the Fundamental Research Funds for the Provincial Universities of Zhejiang (No. GK229909299001-012).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The detailed process of obtaining E { L V ( t ) } in Theorem 1 is as follows. First, according to the derivation of E { L V 1 ( t ) } , E { L V 2 ( t ) } , E { L V 3 ( t ) } in Theorem 1, we have
$$\begin{aligned}\mathbb{E}\{\mathcal{L}V(t)\}\le{}&2\sum_{p=1}^{n}\sum_{d=1}^{m}\frac{\tilde{\theta}_{pd}(t)\dot{\tilde{\theta}}_{pd}(t)}{\vartheta_d}+2\sum_{p=1}^{n}\tilde{\omega}_p^T(t)\dot{\tilde{\omega}}_p(t)-\sum_{p=1}^{n}\max_p\big\|x_p(t)-\mathrm{Pro}_\Omega(x_p(t))\big\|^2\\&+x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)\\&+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)+\sum_{p=1}^{n}B_p^TP_\beta\big[\omega_p(t)-\theta_p\hat{\omega}_p(t)\big]+\sum_{p=1}^{n}B_p^TP_\beta\theta_p\big[\mathrm{Pro}_\Omega(x_p(t))-x_p(t)\big]\\&+\sum_{p=1}^{n}B_p^TP_\beta\theta_pK(r(t))\sum_{q\in N_p(t)}a_{pq}(r(t))\big[x_q(t-\tau)-x_p(t-\tau)\big]+\frac{1}{2}\sum_{p=1}^{n}B_p^T\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha B_p\end{aligned}$$
Next, for ease of description, define the following augmented vectors:
$$x(t)=[x_1^T(t),\dots,x_n^T(t)]^T,\qquad \omega(t)=[\omega_1^T(t),\dots,\omega_n^T(t)]^T,\qquad \hat{\omega}(t)=[\hat{\omega}_1^T(t),\dots,\hat{\omega}_n^T(t)]^T,$$
$$\theta=\mathrm{diag}\{\theta_1,\theta_2,\dots,\theta_n\},\qquad \Delta(t)=\big[\mathrm{Pro}_\Omega(x_1(t))-x_1(t),\dots,\mathrm{Pro}_\Omega(x_n(t))-x_n(t)\big]^T.$$
Therefore, we have:
$$\begin{aligned}\mathbb{E}\{\mathcal{L}V(t)\}\le{}&2\sum_{p=1}^{n}\sum_{d=1}^{m}\frac{\tilde{\theta}_{pd}(t)\dot{\tilde{\theta}}_{pd}(t)}{\vartheta_d}-\|\Delta(t)\|^2+2\sum_{p=1}^{n}\tilde{\omega}_p^T(t)M_p\sum_{d=1}^{m}\tilde{\theta}_{pd}(t)\big(\hat{\omega}_p(t)-\mu_p(t)\big)\\&+2\sum_{p=1}^{n}\tilde{\omega}_p^T(t)(S_p-M_p)\tilde{\omega}_p(t)+x^T(t)\Phi P_\beta\big[\omega(t)-\theta\hat{\omega}(t)\big]+x^T(t)\Phi P_\beta\theta\Delta(t)\\&-x^T(t)P_\beta\theta\big(L(r(t))\otimes K(r(t))\big)x(t-\tau)+\frac{1}{2}x^T(t)\Phi\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha\Phi^Tx(t)\\&+x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)\\&+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)\end{aligned}$$
Then, combine (9) with adaptive law (6), and we can finally obtain the following results
$$\begin{aligned}\mathbb{E}\{\mathcal{L}V(t)\}\le{}&-\|\Delta(t)\|^2-x^T(t)P_\beta\theta\big(L(r(t))\otimes K(r(t))\big)x(t-\tau)+x^T(t)P_\beta\Delta(t)+x^T(t)P_\beta z(t)\\&+2\tilde{\omega}^T(t)(S-M)\tilde{\omega}(t)+\frac{1}{2}x^T(t)\sum_{\alpha\in\mathcal{Z}}\pi_{\beta\alpha}(h)P_\alpha x(t)+x^T(t)Dx(t)-x^T(t-\tau)Dx(t-\tau)\\&+\tau^2\dot{x}^T(t)Z\dot{x}(t)-x^T(t)Zx(t)+x^T(t)Zx(t-\tau)+x^T(t-\tau)Zx(t)-x^T(t-\tau)Zx(t-\tau)\end{aligned}$$
where $z(t)=\omega(t)-\theta\hat{\omega}(t)$.

Appendix B

The detailed derivation in the proof of Theorem 1 is as follows:
$$\begin{aligned}\int_0^{\infty}\tilde{J}\,dt={}&\int_0^{\infty}\big[y^T(t)y(t)-\gamma^2z^T(t)z(t)\big]dt+V(t)-V(0)\\
={}&\int_0^{\infty}\big[y^T(t)y(t)-\gamma^2z^T(t)z(t)\big]dt+V(t)-\int_{-\tau}^{0}x^T(s)Dx(s)\,ds-\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)Z\dot{x}(s)\,ds\,d\theta\\&-\frac{1}{2}x^T(0)\Phi P_\beta x(0)-\tilde{\omega}^T(0)\tilde{\omega}(0)-\Delta^T(0)\Delta(0)-\frac{\tilde{\theta}^T(0)\tilde{\theta}(0)}{\vartheta_d}\\&+\gamma^2\int_0^{t}\!\int_{-\tau}^{0}x^T(s)F_5Dx(s)\,ds\,dt-\gamma^2\int_0^{\infty}\!\int_{-\tau}^{0}x^T(s)F_5Dx(s)\,ds\,dt\\&+\gamma^2\int_0^{t}\!\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)F_6Z\dot{x}(s)\,ds\,d\theta\,dt-\gamma^2\int_0^{\infty}\!\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)F_6Z\dot{x}(s)\,ds\,d\theta\,dt\\&+\gamma^2\int_0^{t}x^T(0)\Phi P_\beta F_1x(0)\,dt-\gamma^2\int_0^{\infty}x^T(0)\Phi P_\beta F_1x(0)\,dt+\gamma^2\int_0^{t}\tilde{\omega}^T(0)F_2\tilde{\omega}(0)\,dt-\gamma^2\int_0^{\infty}\tilde{\omega}^T(0)F_2\tilde{\omega}(0)\,dt\\&+\gamma^2\int_0^{t}\Delta^T(0)F_3\Delta(0)\,dt-\gamma^2\int_0^{\infty}\Delta^T(0)F_3\Delta(0)\,dt+\gamma^2\int_0^{t}\frac{\tilde{\theta}^T(0)F_4\tilde{\theta}(0)}{\vartheta_d}\,dt-\gamma^2\int_0^{\infty}\frac{\tilde{\theta}^T(0)F_4\tilde{\theta}(0)}{\vartheta_d}\,dt\\
={}&J+V(t)+\Big[\gamma^2\int_0^{t}x^T(0)\Phi P_\beta F_1x(0)\,dt-\frac{1}{2}\gamma^2x^T(0)\Phi P_\beta x(0)\Big]+\Big[\gamma^2\int_0^{t}\tilde{\omega}^T(0)F_2\tilde{\omega}(0)\,dt-\tilde{\omega}^T(0)\tilde{\omega}(0)\Big]\\&+\Big[\gamma^2\int_0^{t}\Delta^T(0)F_3\Delta(0)\,dt-\Delta^T(0)\Delta(0)\Big]+\Big[\gamma^2\int_0^{t}\frac{\tilde{\theta}^T(0)F_4\tilde{\theta}(0)}{\vartheta_d}\,dt-\frac{\tilde{\theta}^T(0)\tilde{\theta}(0)}{\vartheta_d}\Big]\\&+\Big\{\gamma^2\int_0^{t}\Big[\int_{-\tau}^{0}x^T(s)F_5Dx(s)\,ds\Big]dt-\gamma^2\int_{-\tau}^{0}x^T(s)Dx(s)\,ds\Big\}\\&+\Big\{\gamma^2\int_0^{t}\Big[\tau\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)F_6Z\dot{x}(s)\,ds\,d\theta\Big]dt-\tau\gamma^2\int_{-\tau}^{0}\!\int_{\theta}^{0}\dot{x}^T(s)Z\dot{x}(s)\,ds\,d\theta\Big\}\end{aligned}$$

References

  1. Yang, R.; Wang, L. Development of multi-agent system for building energy and comfort management based on occupant behaviors. Energy Build. 2013, 56, 1–7. [Google Scholar] [CrossRef]
  2. Pipattanasomporn, M.; Feroze, H.; Rahman, S. Multi-agent systems in a distributed smart grid: Design and implementation. In Proceedings of the IEEE/PES Power Systems Conference and Exposition, Seattle, WA, USA, 15–18 March 2009; pp. 1–8. [Google Scholar]
  3. Adler, J.L.; Satapathy, G.; Manikonda, V.; Bowles, B.; Blue, V.J. A multi-agent approach to cooperative traffic management and route guidance. Transp. Res. Part Methodol. 2005, 39, 297–318. [Google Scholar] [CrossRef]
  4. Yan, Z.; Yang, Z.; Yue, L.; Wang, L.; Jia, H.; Zhou, J. Discrete-time coordinated control of leader-following multiple AUVs under switching topologies and communication delays. Ocean. Eng. 2019, 172, 361–372. [Google Scholar] [CrossRef]
  5. Jin, X.; Wang, S.; Qin, J.; Zheng, W.X.; Kang, Y. Adaptive fault-tolerant consensus for a class of uncertain nonlinear second-order multi-agent systems with circuit implementation. IEEE Trans. Circuits Syst. I Regul. Pap. 2017, 65, 2243–2255. [Google Scholar] [CrossRef]
  6. Nedic, A.; Ozdaglar, A.; Parrilo, P.A. Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control. 2010, 55, 922–938. [Google Scholar] [CrossRef]
  7. Ren, W.; Beard, R.W. Consensus algorithms for double-integrator dynamics. In Distributed Consensus in Multi-Vehicle Cooperative Control: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 77–104. [Google Scholar]
  8. Lin, P.; Ren, W.; Yang, C.; Gui, W. Distributed consensus of second-order multiagent systems with nonconvex velocity and control input constraints. IEEE Trans. Autom. Control. 2017, 63, 1171–1176. [Google Scholar] [CrossRef]
  9. Lin, P.; Ren, W. Distributed H constrained consensus problem. Syst. Control. Lett. 2017, 104, 45–48. [Google Scholar] [CrossRef]
  10. Li, H.; Lü, Q.; Huang, T. Convergence analysis of a distributed optimization algorithm with a general unbalanced directed communication network. IEEE Trans. Netw. Sci. Eng. 2018, 6, 237–248. [Google Scholar] [CrossRef]
  11. Margellos, K.; Falsone, A.; Garatti, S.; Prandini, M. Distributed constrained optimization and consensus in uncertain networks via proximal minimization. IEEE Trans. Autom. Control. 2017, 63, 1372–1387. [Google Scholar] [CrossRef] [Green Version]
  12. Shen, H.; Park, J.H.; Wu, Z.G.; Zhang, Z. Finite-time H synchronization for complex networks with semi-Markov jump topology. Commun. Nonlinear Sci. Numer. Simul. 2015, 24, 40–51. [Google Scholar] [CrossRef]
  13. Liang, K.; Dai, M.; Shen, H.; Wang, J.; Wang, Z.; Chen, B. L2-L synchronization for singularly perturbed complex networks with semi-Markov jump topology. Appl. Math. Comput. 2018, 321, 450–462. [Google Scholar] [CrossRef]
  14. Li, J.N.; Pan, Y.J.; Su, H.Y.; Wen, C.L. Stochastic reliable control of a class of networked control systems with actuator faults and input saturation. Int. J. Control. Autom. Syst. 2014, 12, 564–571. [Google Scholar] [CrossRef]
  15. Li, J.N.; Bao, W.D.; Li, S.B.; Wen, C.L.; Li, L.S. Exponential synchronization of discrete-time mixed delay neural networks with actuator constraints and stochastic missing data. Neurocomputing 2016, 207, 700–707. [Google Scholar] [CrossRef]
  16. Li, J.N.; Li, L.S. Reliable control for bilateral teleoperation systems with actuator faults using fuzzy disturbance observer. IET Control. Theory Appl. 2017, 11, 446–455. [Google Scholar] [CrossRef]
  17. Li, J.N.; Xu, Y.F.; Gu, K.Y.; Li, L.S.; Xu, X.B. Mixed passive/H hybrid control for delayed Markovian jump system with actuator constraints and fault alarm. Int. J. Robust Nonlinear Control. 2018, 28, 6016–6037. [Google Scholar] [CrossRef]
  18. Fan, Q.Y.; Yang, G.H. Event-based fuzzy adaptive fault-tolerant control for a class of nonlinear systems. IEEE Trans. Fuzzy Syst. 2018, 26, 2686–2698. [Google Scholar] [CrossRef]
  19. Wu, C.; Liu, J.; Xiong, Y.; Wu, L. Observer-based adaptive fault-tolerant tracking control of nonlinear nonstrict-feedback systems. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3022–3033. [Google Scholar] [CrossRef]
  20. Liu, Y.; Ma, H. Adaptive fuzzy fault-tolerant control for uncertain nonlinear switched stochastic systems with time-varying output constraints. IEEE Trans. Fuzzy Syst. 2018, 26, 2487–2498. [Google Scholar] [CrossRef]
  21. Yazdani, S.; Haeri, M. Robust adaptive fault-tolerant control for leader-follower flocking of uncertain multi-agent systems with actuator failure. ISA Trans. 2017, 71, 227–234. [Google Scholar] [CrossRef] [PubMed]
  22. Yao, D.J.; Dou, C.X.; Yue, D.; Zhao, N.; Zhang, T.J. Adaptive neural network consensus tracking control for uncertain multi-agent systems with predefined accuracy. Nonlinear Dyn. 2020, 101, 243–255. [Google Scholar] [CrossRef]
  23. Wu, Y.; Zhao, Y.; Hu, J. Bipartite consensus control of high-order multiagent systems with unknown disturbances. IEEE Trans. Syst. Man, Cybern. Syst. 2017, 49, 2189–2199. [Google Scholar] [CrossRef]
  24. Wu, Y.; Hu, J.; Zhang, Y.; Zeng, Y. Interventional consensus for high-order multi-agent systems with unknown disturbances on coopetition networks. Neurocomputing 2016, 194, 126–134. [Google Scholar] [CrossRef]
  25. Ren, C.E.; Fu, Q.; Zhang, J.; Zhao, J. Adaptive event-triggered control for nonlinear multi-agent systems with unknown control directions and actuator failures. Nonlinear Dyn. 2021, 105, 1657–1672. [Google Scholar] [CrossRef]
  26. Wang, Q.S.; Duan, Z.S.; Wang, J.Y.; Wang, Q.; Chen, G. An accelerated algorithm for linear quadratic optimal consensus of heterogeneous multiagent systems. IEEE Trans. Autom. Control. 2022, 67, 421–428. [Google Scholar] [CrossRef]
  27. Li, X.B.; Yu, Z.H.; Li, Z.W.; Wu, N. Group consensus via pinning control for a class of heterogeneous multi-agent systems with input constraints. Inf. Sci. 2021, 542, 247–262. [Google Scholar] [CrossRef]
  28. Sun, J.; Geng, Z.; Lv, Y.; Li, Z.; Ding, Z. Distributed adaptive consensus disturbance rejection for multi-agent systems on directed graphs. IEEE Trans. Control. Netw. Syst. 2016, 5, 629–639. [Google Scholar] [CrossRef]
  29. Li, J.N.; Xu, Y.F.; Bao, W.D.; Li, Z.J.; Li, L.S. Finite-time non-fragile state estimation for discrete neural networks with sensor failures, time-varying delays and randomly occurring sensor nonlinearity. J. Frankl. Inst. 2019, 356, 1566–1589. [Google Scholar] [CrossRef]
  30. Li, J.; Li, Z.; Xu, Y.; Gu, K.; Bao, W.; Xu, X. Event-triggered non-fragile state estimation for discrete nonlinear Markov jump neural networks with sensor failures. Int. J. Control. Autom. Syst. 2019, 17, 1131–1140. [Google Scholar] [CrossRef]
  31. Li, J.N.; Liu, X.; Ru, X.F.; Xu, X. Disturbance rejection adaptive fault-tolerant constrained consensus for multi-agent systems with failures. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 3302–3306. [Google Scholar] [CrossRef]
  32. Shen, H.; Wu, Z.G.; Park, J.H. Reliable mixed passive and filtering for semi-Markov jump systems with randomly occurring uncertainties and sensor failures. Int. J. Robust Nonlinear Control. 2015, 25, 3231–3251. [Google Scholar] [CrossRef]
  33. Zhou, Z.; Wang, X. Constrained consensus in continuous-time multiagent systems under weighted graph. IEEE Trans. Autom. Control. 2017, 63, 1776–1783. [Google Scholar] [CrossRef]
  34. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  35. Xie, L. Output feedback H control of systems with parameter uncertainty. Int. J. Control. 1996, 63, 741–750. [Google Scholar] [CrossRef]
  36. Liang, K.; He, W.; Xu, J.; Qian, F. Impulsive effects on synchronization of singularly perturbed complex networks with semi-Markov jump topologies. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 52, 3163–3173. [Google Scholar] [CrossRef]
  37. Wen, G.H.; Duan, Z.S.; Yu, W.W.; Chen, G. Consensus in multi-agent systems with communication constraints. Int. J. Robust Nonlinear Control. 2012, 22, 170–182. [Google Scholar] [CrossRef]
Figure 1. Topological structure of three modes.
Figure 2. Estimations of partial failure of actuators and disturbances (Agent 1).
Figure 3. Estimations of partial failure of actuators and disturbances (Agent 2).
Figure 4. Estimations of partial failure of actuators and disturbances (Agent 3).
Figure 5. Estimations of partial failure of actuators and disturbances (Agent 4).
Figure 6. All state trajectories of agents in our method.
Figure 7. All state trajectories of agents in the other method.
Figure 8. Semi-Markov switching signal with three modes.

