Article

On Leader-Following Consensus in Multi-Agent Systems with Discrete Updates at Random Times

1 Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal
2 Faculty of Computer Science, Bialystok University of Technology, 15-351 Białystok, Poland
3 Faculty of Mathematics and Computer Science, University of Plovdiv Paisii Hilendarski, 4027 Plovdiv, Bulgaria
* Author to whom correspondence should be addressed.
Entropy 2020, 22(6), 650; https://doi.org/10.3390/e22060650
Submission received: 27 March 2020 / Revised: 4 June 2020 / Accepted: 9 June 2020 / Published: 12 June 2020
(This article belongs to the Special Issue Dynamical Systems, Differential Equations and Applications)

Abstract: This paper studies the leader-following consensus problem in continuous-time multi-agent networks with communications/updates occurring only at random times. The time between two consecutive controller updates is exponentially distributed. Some sufficient conditions are derived to design the control law that ensures the leader-following consensus is asymptotically reached (in the sense of the expected value of a stochastic process). Numerical examples are worked out to demonstrate the effectiveness of our theoretical results.

1. Introduction

In recent years, we have witnessed increasing attention to the distributed cooperative control of dynamic multi-agent systems due to its vast applications in various fields. In many situations, groups of dynamic agents need to interact with each other, and their goal is to reach an agreement (consensus) on a certain task. For example, it can be the flocking of birds during migration [1,2] to eventually reach their destinations, or robot teams synchronizing in order to accomplish their collective tasks [3,4]. The main challenge for distributed cooperative control of multi-agent systems is that interaction between agents is based only on local information. There already exists a vast literature concerning first-order [3,5], second-order [6,7], and fractional-order [8,9,10] networks. For a survey of the recent results, we refer the reader to [11]. Within different approaches to the consensus problem in multi-agent networks, one can find continuous-time evolution of the agents' states (the state trajectory is a continuous curve) [3,5,12,13], discrete-time evolution (the state trajectory is a sequence of values) [14,15,16,17,18,19], and both continuous- and discrete-time evolution (the domain of the state trajectory is any time scale) [20,21,22,23]. An important question connected with the consensus problem is whether the communication topology is fixed over time or time-varying, that is, whether communication channels are allowed to change over time [24]. The latter case seems to be more realistic; therefore, scientists mostly focus on it. Going further, in real-world situations it may happen that agents' states are continuous but an exchange of information between agents occurs only at discrete time instants (update times). This issue was already addressed in the literature [25]. In this paper we also investigate such a situation.
However, our approach is new and more challenging: we consider the case when agents exchange information with each other at random instants of time. Another question to be answered is whether the consensus problem is considered with or without a leader. Based on the existence of a leader, there are two kinds of consensus problems: leaderless consensus and leader-following consensus. The latter problem relies on establishing conditions under which, through local interactions, all the agents reach an agreement upon a common state (consensus), which is defined by the dynamic leader. A great number of works have already been devoted to the consensus problem with a leader (see, e.g., [24,26,27,28] and the references given there).
In the present paper, we investigate leader-following consensus for multi-agent systems. It is assumed that the agents' state variables are continuous, but the exchange of information between them occurs only at discrete time instants (update times) appearing randomly. In other words, the consensus control law is applied at those update times. We analyze the case when the sequence of update times is a sequence of random variables and the waiting time between two consecutive updates is exponentially distributed. To avoid unnecessary complexity, we assume that the update times are the same for all agents. Combining continuity of the state variables with discrete random communication times requires the introduction of an artificial state variable for each agent that evolves in continuous time and is allowed to have discontinuities; the primary state variable is continuous for all time. Between update times, both the original state and the artificial variable evolve continuously according to some specified dynamics. At randomly occurring update times, the state variable keeps its current value, while the artificial variable is updated by the state values received from other agents, including the leader. It is worth noting that, in the case of deterministic update times known initially, the idea of artificial state variables is applied in [15,16,25]. The presence of randomly occurring update times in the model leads to a total change of the behavior of the solutions: they change from deterministic real-valued functions to stochastic processes. This requires combining results from probability theory with those from the theory of ordinary differential equations with impulses. In order to analyze the leader-following consensus problem, we define the state error system and a sample path solution of this system.
Since solutions to the studied model of a multi-agent system with discrete-time updates at random times are stochastic processes, asymptotically reaching leader-following consensus is understood in the sense of the expected value.
The paper is organized in the following manner. In Section 2, we describe the multi-agent system of our interest in detail. Some necessary definitions, lemmas, and propositions from probability theory are given in Section 3. Section 4 contains our main results. First, we describe a stochastic process that is a solution to the continuous-time system with communications at random times. Next, sufficient conditions for the global asymptotic leader-following consensus in a continuous-time multi-agent system with discrete-time updates occurring at random times are proven. In Section 5, illustrative examples with numerical simulations are presented to verify the theoretical discussion. Some concluding remarks are drawn in Section 6.
Notation: For a given vector $x \in \mathbb{R}^n$, $\|x\|$ stands for its Euclidean norm $\|x\| = \sqrt{x^T x}$. For a given square $n \times n$ matrix $A = [a_{ij}]$, $\|A\|$ stands for its spectral norm $\|A\| = \max_i \{\sqrt{\lambda_i}\}$, where $\lambda_i$ are the eigenvalues of $A A^T$. We have $\|A\| \le n \max_{i,j} |a_{ij}|$ and $\|e^A\| \le e^{\|A\|}$.

2. Statement of the Model

We consider a multi-agent system consisting of $N$ agents and one leader. The state of agent $i$ is denoted by $y_i : [t_0, \infty) \to \mathbb{R}$, $i = 1, \ldots, N$, and the state of the leader by $y_r : [t_0, \infty) \to \mathbb{R}$, where $t_0 \ge 0$ is a given initial time. Without information exchange between agents, the leader has no influence on the other agents (see Example 1, Case 1.1, and Example 2, Case 2.1). In order to analyze the influence of the leader on the behavior of the other agents, we assume that there is information exchange between agents but it occurs only at random update times. In other words, the model is set up as a continuous-time multi-agent system with discrete-time communications/updates occurring only at random times.
Let us denote by $(\Omega, \mathcal{F}, P)$ a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$, and $P$ is the probability on $\mathcal{F}$. Consider a sequence of independent, exponentially distributed random variables $\{\tau_k\}_{k=1}^{\infty}$ with parameter $\lambda > 0$ and such that $\sum_{k=1}^{\infty} \tau_k = \infty$ with probability 1. Define the sequence of random variables $\{\xi_k\}_{k=0}^{\infty}$ by
$$\xi_0 = t_0, \qquad \xi_k = t_0 + \sum_{i=1}^{k} \tau_i, \quad k = 1, 2, \ldots,$$
where $t_0$ is a given initial time. The random variable $\tau_k$ measures the waiting time of the $k$-th update after the $(k-1)$-st controller update occurs, and the random variable $\xi_k$ is connected with the random event time: it denotes the length of time until $k$ controller updates occur for $t \ge t_0$. At each time $\xi_k$, agent $i$ updates its state variable according to the following equation:
$$\Delta y_i(\xi_k) = u_i(\xi_k), \quad i = 1, \ldots, N, \ k = 1, 2, \ldots,$$
where $u_i : \mathbb{R} \to \mathbb{R}$ is the control input function for the $i$-th agent. Here, $\Delta y_i(\xi_k)$ is the difference between the value of the state variable of the $i$-th agent after the update, $y_i(\xi_k + 0)$, and before it, $y_i(\xi_k)$; i.e., $\Delta y_i(\xi_k) = y_i(\xi_k + 0) - y_i(\xi_k)$. The state of the leader remains unchanged; that is,
$$\Delta y_r(\xi_k) = 0.$$
For each agent $i$ we consider the control law, at the random times $\xi_k$, $k = 1, 2, \ldots$, based on the information it receives from its neighboring agents and the leader:
$$u_i(\xi_k) = -\sum_{j=1}^{N} a_{ij}(\tau_k) \left( y_i(\xi_k) - y_j(\xi_k) \right) - \omega_i(\tau_k) \left( y_i(\xi_k) - y_r(\xi_k) \right), \quad k = 1, 2, \ldots,$$
where the weights $a_{ii}(t) \equiv 0$, $i = 1, 2, \ldots, N$, and $a_{ij}(t) \ge 0$, $t \ge t_0$, $i, j = 1, 2, \ldots, N$, are entries of the weighted connectivity matrix $A(t)$ at time $t$:
$$A(t) = \begin{pmatrix} 0 & a_{12}(t) & a_{13}(t) & \cdots & a_{1N}(t) \\ a_{21}(t) & 0 & a_{23}(t) & \cdots & a_{2N}(t) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{N1}(t) & a_{N2}(t) & a_{N3}(t) & \cdots & 0 \end{pmatrix},$$
and $\omega_i(t) > 0$ if the virtual leader is available to agent $i$ at time $t$, while $\omega_i(t) = 0$ otherwise. Between two update times $\xi_{k-1}$ and $\xi_k$, any agent $i$ has information only about its own state. More precisely, the dynamics of agent $i$ are described by
$$y_i'(t) = \left( -b_i(\tau_k) + c_i(\tau_k) \right) y_i(t), \quad \text{for } t \in (\xi_{k-1}, \xi_k], \ i = 1, 2, \ldots, N, \ k = 1, 2, \ldots,$$
where $b_i \in C([t_0, \infty), (0, \infty))$, $c_i \in C([t_0, \infty), [0, \infty))$, $i = 1, \ldots, N$.
The leader for the multi-agent system is an isolated agent with constant reference state
$$y_r'(t) = 0.$$
Observe that the model described above can be written as a system of differential equations with impulses at random times $\xi_k$, $k = 1, 2, \ldots$, and waiting time $\tau_k$ between two consecutive updates, as follows:
$$\begin{aligned} y_r'(t) &= 0 \quad \text{for } t \in (\xi_{k-1}, \xi_k], \\ y_i'(t) &= \left( -b_i(\tau_k) + c_i(\tau_k) \right) y_i(t) \quad \text{for } t \in (\xi_{k-1}, \xi_k], \\ \Delta y_r(\xi_k) &= 0, \\ \Delta y_i(\xi_k) &= -\sum_{j=1}^{N} a_{ij}(\tau_k) \left( y_i(\xi_k) - y_j(\xi_k) \right) - \omega_i(\tau_k) \left( y_i(\xi_k) - y_r(\xi_k) \right), \end{aligned} \qquad k = 1, 2, \ldots, \ i = 1, \ldots, N,$$
with initial conditions
$$y_r(t_0) = y_r^0, \qquad y_i(t_0) = y_i^0, \quad i = 1, 2, \ldots, N.$$
We introduce an additional (artificial) variable $Y_i$ for each state $y_i$, $i = 1, 2, \ldots, N$, such that it has discontinuities at the random times $\xi_k$, and $Y_r = y_r$. These variables allow us to keep the state of each agent $y_i$, $i = 1, 2, \ldots, N$, as a continuous function of time. Between two update times $\xi_{k-1}$ and $\xi_k$, the evolution of $Y_i$ and $Y_r$ is given by
$$Y_i'(t) = \left( b_i(\tau_k) - c_i(\tau_k) \right) Y_i(t), \quad i = 1, 2, \ldots, N, \qquad Y_r'(t) = 0.$$
Then, by Equations (2) and (4), we obtain
$$y_i'(t) + Y_i'(t) = \left( -b_i(\tau_k) + c_i(\tau_k) \right) y_i(t) + \left( b_i(\tau_k) - c_i(\tau_k) \right) Y_i(t) = -b_i(\tau_k) \left( y_i(t) - Y_i(t) \right) + c_i(\tau_k) \left( y_i(t) - Y_i(t) \right), \quad i = 1, \ldots, N.$$
Consequently, we get the following system:
$$y_i'(t) = -b_i(\tau_k) \left( y_i(t) - Y_i(t) \right), \qquad Y_i'(t) = c_i(\tau_k) \left( y_i(t) - Y_i(t) \right), \quad i = 1, \ldots, N, \qquad y_r'(t) = 0, \qquad Y_r'(t) = 0, \quad \text{for } t \in (\xi_{k-1}, \xi_k], \ k = 1, 2, \ldots.$$
At each update time we set:
$$\begin{aligned} y_i(\xi_k + 0) &= y_i(\xi_k), \quad i = 1, \ldots, N, \ k = 1, 2, \ldots, \\ Y_i(\xi_k + 0) &= -\sum_{j=1}^{N} a_{ij}(\tau_k) \left( Y_i(\xi_k) - y_j(\xi_k) \right) - \omega_i(\tau_k) \left( Y_i(\xi_k) - y_r(\xi_k) \right) + Y_i(\xi_k), \quad i = 1, \ldots, N, \ k = 1, 2, \ldots, \\ y_r(\xi_k + 0) &= y_r(\xi_k), \qquad Y_r(\xi_k + 0) = Y_r(\xi_k), \quad k = 1, 2, \ldots. \end{aligned}$$
The initial conditions for (5) and (6) are:
$$y_r(t_0) = y_r^0, \qquad y_i(t_0) = y_i^0, \quad Y_i(t_0) = y_r^0, \quad i = 1, 2, \ldots, N, \qquad Y_r(t_0) = y_r^0.$$
Observe that the dynamics described by (5) lead to a decrease of the absolute difference between a state variable $y_i$ and the corresponding artificial variable $Y_i$, $i = 1, 2, \ldots, N$, whereas by (6), at each update time the value of $Y_i$ is updated using the information received, while $y_i$ remains unchanged. Therefore, Equations (5) and (6) provide a formal description of the multi-agent system with continuous-time states of agents and information exchange between agents occurring at discrete time instants.
Let $x_i(t) := y_i(t) - y_r(t)$ and $X_i(t) := Y_i(t) - y_r(t)$, $i = 1, 2, \ldots, N$, be the errors between the states $y_i$ or $Y_i$ and the leader state $y_r$ at time $t$. Then, by (5)–(7), one gets the following error system:
$$\begin{aligned} x_i'(t) &= -b_i(\tau_k) \left( x_i(t) - X_i(t) \right), \qquad X_i'(t) = c_i(\tau_k) \left( x_i(t) - X_i(t) \right), \quad i = 1, \ldots, N, \quad \text{for } t \in (\xi_{k-1}, \xi_k], \\ x_i(\xi_k + 0) &= x_i(\xi_k), \quad i = 1, 2, \ldots, N, \\ X_i(\xi_k + 0) &= d_{ii}(\tau_k) X_i(\xi_k) + \sum_{j=1, j \ne i}^{N} d_{ij}(\tau_k) x_j(\xi_k) + X_i(\xi_k), \quad i = 1, 2, \ldots, N, \ k = 1, 2, \ldots, \\ x_i(t_0) &= y_i^0 - y_r^0, \quad X_i(t_0) = 0, \quad i = 1, 2, \ldots, N, \end{aligned}$$
where the coefficients $d_{ij}$ are the entries of the matrix
$$\tilde{D}(t) = \begin{pmatrix} -\sum_{j=1}^{N} a_{1j}(t) - \omega_1(t) & a_{12}(t) & \cdots & a_{1N}(t) \\ a_{21}(t) & -\sum_{j=1}^{N} a_{2j}(t) - \omega_2(t) & \cdots & a_{2N}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1}(t) & a_{N2}(t) & \cdots & -\sum_{j=1}^{N} a_{Nj}(t) - \omega_N(t) \end{pmatrix},$$
i.e.,
$$d_{ij}(t) = a_{ij}(t), \quad i \ne j, \ i, j = 1, 2, \ldots, N, \qquad d_{ii}(t) = -\sum_{j=1}^{N} a_{ij}(t) - \omega_i(t), \quad i = 1, 2, \ldots, N.$$
Now let us introduce the $2N \times 2N$-dimensional matrices
$$C(t) = \begin{pmatrix} -b_1(t) & b_1(t) & 0 & 0 & \cdots & 0 & 0 \\ c_1(t) & -c_1(t) & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & -b_2(t) & b_2(t) & \cdots & 0 & 0 \\ 0 & 0 & c_2(t) & -c_2(t) & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -b_N(t) & b_N(t) \\ 0 & 0 & 0 & 0 & \cdots & c_N(t) & -c_N(t) \end{pmatrix}$$
and
$$D(t) = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & d_{11}(t) + 1 & d_{12}(t) & 0 & \cdots & d_{1N}(t) & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ d_{21}(t) & 0 & 0 & d_{22}(t) + 1 & \cdots & d_{2N}(t) & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ d_{N1}(t) & 0 & d_{N2}(t) & 0 & \cdots & 0 & d_{NN}(t) + 1 \end{pmatrix}.$$
Then, denoting $Z = (x_1, X_1, x_2, X_2, \ldots, x_N, X_N)^T$, we can write error system (8) in the following matrix form:
$$Z'(t) = C(\tau_k) Z(t), \quad \text{for } t \in (\xi_{k-1}, \xi_k], \ k = 1, 2, \ldots, \qquad Z(\xi_k + 0) = D(\tau_k) Z(\xi_k), \quad k = 1, 2, \ldots, \qquad Z(t_0) = Z_0,$$
where $Z_0 = (x_1^0, 0, x_2^0, 0, \ldots, x_N^0, 0)^T \in \mathbb{R}^{2N}$, $x_i^0 = y_i^0 - y_r^0$, $i = 1, 2, \ldots, N$.
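The matrices $C(t)$ and $D(t)$ above are straightforward to assemble numerically. The following sketch (Python with NumPy) builds them from user-supplied coefficient functions; the function names and the concrete values used for testing are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Sketch: assemble the 2N x 2N matrices C(t) and D(t) of the error system
#   Z'(t) = C(tau_k) Z(t),  Z(xi_k + 0) = D(tau_k) Z(xi_k).
# b, c are lists of N scalar functions; a is an N x N array of functions
# with a[i][i] identically zero; w is the list of leader weights omega_i.

def build_C(b, c, t, N):
    """C(t): block-diagonal with 2x2 blocks [[-b_i, b_i], [c_i, -c_i]]."""
    C = np.zeros((2 * N, 2 * N))
    for i in range(N):
        bi, ci = b[i](t), c[i](t)
        C[2 * i, 2 * i], C[2 * i, 2 * i + 1] = -bi, bi
        C[2 * i + 1, 2 * i], C[2 * i + 1, 2 * i + 1] = ci, -ci
    return C

def build_D(a, w, t, N):
    """D(t): as displayed above, the rows acting on x_i are zero, while the
    row acting on X_i carries 1 + d_ii on the diagonal and d_ij = a_ij at
    the columns of the x_j components."""
    D = np.zeros((2 * N, 2 * N))
    for i in range(N):
        row_sum = sum(a[i][j](t) for j in range(N) if j != i)
        D[2 * i + 1, 2 * i + 1] = 1.0 - row_sum - w[i](t)   # 1 + d_ii
        for j in range(N):
            if j != i:
                D[2 * i + 1, 2 * j] = a[i][j](t)            # d_ij at column x_j
    return D
```

The two builders mirror the interleaved ordering $Z = (x_1, X_1, \ldots, x_N, X_N)^T$ used throughout the paper.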

3. Some Preliminary Results from Probability Theory

In this section, having in mind the definitions of the random variables $\{\tau_k\}_{k=1}^{\infty}$ and $\{\xi_k\}_{k=0}^{\infty}$ given in Section 2, we list some facts from probability theory that will be used in the proofs of our main results.
Proposition 1
([29]). The random variable $\Xi = \sum_{i=1}^{k} \tau_i$ is Erlang distributed with probability density function $f_{\Xi}(t) = \lambda e^{-\lambda t} \frac{(\lambda t)^{k-1}}{(k-1)!}$ and cumulative distribution function $F(t) = P(\Xi < t) = 1 - e^{-\lambda t} \sum_{j=0}^{k-1} \frac{(\lambda t)^j}{j!}$.
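Proposition 1 can be checked numerically by Monte Carlo simulation. The sketch below (plain Python; the parameter values $\lambda = 2$, $k = 3$, $t = 1.5$ and the sample size are arbitrary illustrative choices) compares the empirical frequency of the event $\{\Xi < t\}$ with the Erlang cumulative distribution function.

```python
import random
import math

# Monte Carlo sanity check of Proposition 1: the sum of k independent
# Exp(lambda) waiting times is Erlang(k, lambda), with
#   P(Xi < t) = 1 - exp(-lambda t) * sum_{j=0}^{k-1} (lambda t)^j / j!.

lam, k, t, M = 2.0, 3, 1.5, 200_000
random.seed(0)

hits = sum(
    sum(random.expovariate(lam) for _ in range(k)) < t
    for _ in range(M)
)
empirical = hits / M
erlang_cdf = 1.0 - math.exp(-lam * t) * sum(
    (lam * t) ** j / math.factorial(j) for j in range(k)
)
print(f"empirical {empirical:.4f} vs Erlang CDF {erlang_cdf:.4f}")
```

With these parameters the two values agree to within the Monte Carlo error.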
Let $t \ge t_0$ be a fixed point. Consider the events
$$S_k(t) = \{ \omega \in \Omega : \xi_k(\omega) < t < \xi_{k+1}(\omega) \}, \quad k = 0, 1, 2, \ldots,$$
and define the stochastic processes $\Delta_k(t)$, $k = 0, 1, 2, \ldots$, by
$$\Delta_k(t) = \begin{cases} 1 & \text{for } \omega \in S_k(t), \\ 0 & \text{for } \omega \notin S_k(t). \end{cases}$$
Note that, for any fixed point $t$ and any element $\omega \in \Omega$, there exists a natural number $k$ such that $\omega \in S_k(t)$ and $\omega \notin S_j(t)$ for $j \ne k$; equivalently, for any fixed point $t$ there exists a natural number $k$ such that $\Delta_k(t) = 1$ and $\Delta_j(t) = 0$ for $j \ne k$.
Lemma 1
([30] Lemma 2.1). Let $\{\tau_k\}_{k=1}^{\infty}$ be independent, exponentially distributed random variables with parameter $\lambda$, and $\xi_k = t_0 + \sum_{i=1}^{k} \tau_i$. Then,
$$E\left( \Delta_k(t) \right) = \frac{\lambda^k (t - t_0)^k}{k!} e^{-\lambda (t - t_0)}, \quad \text{for } t \ge t_0 \text{ and } k = 0, 1, 2, \ldots,$$
where $E\{\cdot\}$ denotes the mathematical expectation.
Corollary 1
([30]). The probability that exactly $k$ controller updates of each agent occur up to the time $t$, $t \ge t_0$, is given by the equality
$$P(S_k(t)) = \frac{\lambda^k (t - t_0)^k}{k!} e^{-\lambda (t - t_0)}.$$
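Corollary 1 states that the number of updates by time $t$ is Poisson distributed with mean $\lambda(t - t_0)$. This is easy to verify empirically; in the sketch below, the parameters $\lambda = 1.5$, $t = 2$ and the sample size are illustrative choices.

```python
import random
import math

# Numerical check of Corollary 1: with Exp(lambda) waiting times, the number
# of controller updates that have occurred by time t follows a Poisson law,
#   P(exactly k updates) = (lambda (t - t0))^k / k! * exp(-lambda (t - t0)).

lam, t0, t, M = 1.5, 0.0, 2.0, 100_000
random.seed(1)

def count_updates():
    """Draw waiting times until the running update time xi_k exceeds t."""
    s, n = t0, 0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return n
        n += 1

counts = [count_updates() for _ in range(M)]
for k in range(4):
    emp = counts.count(k) / M
    theo = (lam * (t - t0)) ** k / math.factorial(k) * math.exp(-lam * (t - t0))
    print(f"k={k}: empirical {emp:.4f}, Poisson {theo:.4f}")
```

Each empirical frequency should match the Poisson probability up to sampling noise.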
Definition 1
([29]). We say that the stochastic processes $m$ and $n$ satisfy the inequality $m(t) \le n(t)$, for $t \in J \subset \mathbb{R}$, if the state space of the stochastic process $v(t) = m(t) - n(t)$ is $(-\infty, 0]$.
Proposition 2
([29]). If the stochastic processes $m$ and $n$ satisfy the inequality $m(t) \le n(t)$ for $t \in J \subset \mathbb{R}$, then $E(m(t)) \le E(n(t))$ for $t \in J$.
Proposition 3
([29]). Let $a > 0$ be a real constant and $\tau$ be an exponentially distributed random variable with parameter $\lambda > a$. Then, $E(e^{a \tau}) = \frac{\lambda}{\lambda - a}$.

4. Leader-Following Consensus

Consider the sequence of points $\{t_k\}_{k=1}^{\infty}$, where the point $t_k$ is an arbitrary value of the corresponding random variable $\tau_k$, $k = 1, 2, \ldots$. Define the increasing sequence of points $\{T_k\}_{k=0}^{\infty}$ by $T_0 = t_0$ and $T_k = T_0 + \sum_{j=1}^{k} t_j$ for $k = 1, 2, \ldots$.
Remark 1.
Note that if $t_k$ is a value of the random variable $\tau_k$, $k = 1, 2, \ldots$, then $T_k$ is a value of the random variable $\xi_k$, $k = 1, 2, \ldots$, correspondingly.
Since the multi-agent system with the leader described by system (2)–(3) is equivalent to system (9), we focus on initial value problem (9).
Let us consider the following system of impulsive differential equations with fixed points of impulses and fixed length of action of the impulses:
$$\begin{aligned} x_i'(t) &= -b_i(t_k) \left( x_i(t) - X_i(t) \right), \quad \text{for } t \in (T_{k-1}, T_k], \\ X_i'(t) &= c_i(t_k) \left( x_i(t) - X_i(t) \right), \quad \text{for } t \in (T_{k-1}, T_k], \\ x_i(T_k + 0) &= x_i(T_k), \\ X_i(T_k + 0) &= \left( 1 + d_{ii}(t_k) \right) X_i(T_k) + \sum_{j=1, j \ne i}^{N} d_{ij}(t_k) x_j(T_k), \\ x_i(t_0) &= x_i^0, \quad X_i(t_0) = 0, \qquad k = 1, 2, \ldots, \ i = 1, 2, \ldots, N, \end{aligned}$$
or its equivalent matrix form
$$Z'(t) = C(t_k) Z(t), \quad \text{for } t \in (T_{k-1}, T_k], \qquad Z(T_k + 0) = D(t_k) Z(T_k), \quad k = 1, 2, \ldots, \qquad Z(t_0) = Z_0.$$
Note that system (11) is a system of impulsive differential equations with impulses at the deterministic time moments $\{T_k\}_{k=0}^{\infty}$. For a deeper discussion of impulsive differential equations we refer the reader to [31] and the references given there. The solution to (11) depends not only on the initial condition $(t_0, Z_0)$ but also on the moments of impulses $T_k$, $k = 1, 2, \ldots$, i.e., on the arbitrarily chosen values $t_k$ of the random variables $\tau_k$, $k = 1, 2, \ldots$, and is given by
$$Z(t; t_0, Z_0, \{T_k\}) = e^{C(t_k)(t - T_{k-1})} \prod_{i=1}^{k-1} \left( D(t_{k-i}) e^{C(t_{k-i}) t_{k-i}} \right) Z_0, \quad t \in (T_{k-1}, T_k], \ k = 1, 2, \ldots.$$
The set of all solutions $Z(t; t_0, Z_0, \{T_k\})$ of initial value problems of type (11), for any values $t_k$ of the random variables $\tau_k$, $k = 1, 2, \ldots$, generates a stochastic process with state space $\mathbb{R}^{2N}$. We denote it by $Z(t; t_0, Z_0, \{\tau_k\})$ and call it a solution to initial value problem (9). Following the idea of a sample path of a stochastic process [29,32], we define a sample path solution of the studied system (9).
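For given values $t_k$, a sample path solution can be computed directly by alternating the continuous flow $e^{C(t_k)\,\Delta t}$ with the jumps $Z \mapsto D(t_k) Z$, exactly as in the product formula above. A minimal Python sketch follows; the one-agent data at the bottom (coefficients $b = 0.5$, $c = 0.2$, $\omega = 0.9$) are illustrative, not taken from the paper, and `expm_taylor` is a simple truncated-series matrix exponential adequate for these small matrices.

```python
import numpy as np

def expm_taylor(A, terms=40):
    """Truncated Taylor series for the matrix exponential exp(A)."""
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    return E

def sample_path(Z0, C_of, D_of, t_vals, t_end):
    """Propagate Z through the impulse times T_k = t0 + t_1 + ... + t_k:
    flow with exp(C(t_k) * dt) between impulses, jump with D(t_k) at T_k."""
    Z, T = np.array(Z0, dtype=float), 0.0
    for tk in t_vals:
        if T + tk > t_end:
            return expm_taylor(C_of(tk) * (t_end - T)) @ Z
        Z = D_of(tk) @ (expm_taylor(C_of(tk) * tk) @ Z)   # flow, then jump
        T += tk
    return Z

# illustrative one-agent data (N = 1): b = 0.5, c = 0.2, omega = 0.9
C_of = lambda tk: np.array([[-0.5, 0.5], [0.2, -0.2]])
D_of = lambda tk: np.array([[0.0, 0.0], [0.0, 1.0 - 0.9]])  # rows as in D(t)
Z = sample_path([1.0, 0.0], C_of, D_of, [1.0, 1.0, 1.0], 2.5)
print(np.linalg.norm(Z))  # the error norm has contracted along the path
```

Each impulse strongly contracts the error, so the norm of the final state is far below its initial value 1.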
Definition 2.
For any given values $t_k$ of the random variables $\tau_k$, $k = 1, 2, 3, \ldots$, respectively, the solution $Z(t; t_0, Z_0, \{T_k\})$ of the corresponding initial value problem (10) is called a sample path solution of initial value problem (9).
Definition 3.
A stochastic process $Z(t; t_0, Z_0, \{\tau_k\})$ with an uncountable state space $\mathbb{R}^{2N}$ is said to be a solution of initial value problem (9) if, for any values $t_k$ of the random variables $\tau_k$, $k = 1, 2, \ldots$, the corresponding function $Z(t; t_0, Z_0, \{T_k\})$ is a sample path solution of initial value problem (9).
Let the stochastic process $Z(t; t_0, Z_0, \{\tau_k\})$, $Z = (x_1, X_1, x_2, X_2, \ldots, x_N, X_N)^T$, with an uncountable state space $\mathbb{R}^{2N}$, be a solution of the initial value problem with random impulses (9).
Definition 4.
We say that the leader-following consensus is reached asymptotically in multi-agent system (2) if, for any $t_0 \ge 0$ and any $y^0 \in \mathbb{R}^{N+1}$,
$$\lim_{t \to \infty} E \left| y_i(t; t_0, y^0, \{\tau_k\}) - y_r(t; t_0, y^0, \{\tau_k\}) \right| = 0, \quad \text{for } i = 1, 2, \ldots, N,$$
where $y^0 = (y_1^0, y_2^0, \ldots, y_N^0, y_r^0)^T$.
Remark 2.
Observe that since $x_i(t) = y_i(t) - y_r(t)$, $i = 1, \ldots, N$, and initial value problem (2)–(3) is equivalent to initial value problem (9), equality (12) means that
$$\lim_{t \to \infty} E \left\| x(t; t_0, x^0, \{\tau_k\}) \right\| = 0, \quad \text{where } x^0 = (x_1^0, \ldots, x_N^0)^T.$$
Now we prove the main results of the paper, which are sufficient conditions for the leader-following consensus in a continuous-time multi-agent system with discrete-time updates occurring at random times.
Theorem 1.
Assume that:
(A1) 
The inequalities
$$0 < b_i(t) < 1, \qquad 0 \le c_i(t) \le 1, \quad \text{for } t \ge t_0, \ i = 1, 2, \ldots, N,$$
hold, and there exists a real $\alpha \in (0, 1)$ such that
$$\left| 1 - \sum_{j=1}^{N} a_{ij}(t) - \omega_i(t) \right| < \frac{\alpha}{2N} \quad \text{for } t \ge t_0, \ i = 1, 2, \ldots, N,$$
and
$$0 \le a_{ij}(t) < \frac{\alpha}{2N} \quad \text{for } t \ge t_0, \ i, j = 1, 2, \ldots, N, \ i \ne j.$$
(A2) 
The random variables $\tau_k$, $k = 1, 2, \ldots$, are independent, exponentially distributed with parameter $\lambda$ such that $\lambda > \frac{2N}{1 - \alpha}$.
Then, for any initial point $t_0 \ge 0$, the solution $Z(t; t_0, Z_0, \{\tau_k\})$ of the initial value problem with random moments of impulses (9) is given by the formula
$$Z(t; t_0, Z_0, \{\tau_k\}) = e^{C(\tau_k)(t - \xi_{k-1})} \prod_{i=1}^{k-1} \left( D(\tau_{k-i}) e^{C(\tau_{k-i}) \tau_{k-i}} \right) Z_0 \quad \text{for } t \in (\xi_{k-1}, \xi_k], \ k = 1, 2, \ldots,$$
and the expected value of the solution satisfies the inequality
$$E \left( \left\| Z(t; t_0, Z_0, \{\tau_k\}) \right\| \right) \le \left\| Z_0 \right\| e^{(2N + \alpha \lambda - \lambda)(t - t_0)}.$$
Proof. 
Let $t_0 \ge 0$ be an arbitrary given initial time. According to (A1), we have
$$\| C(t) \| \le 2N \max_{i = 1, 2, \ldots, N} \{ b_i(t), c_i(t) \} \le 2N,$$
$$\| D(t) \| \le 2N \max \left\{ \max_{i} |1 + d_{ii}(t)|, \ \max_{i, j = 1, 2, \ldots, N, \ i \ne j} |d_{ij}(t)| \right\} < \alpha,$$
and $\| e^{C(t)} \| \le e^{2N}$ for $t \ge t_0$. For any $k = 1, 2, \ldots$, we choose an arbitrary value $t_k$ of the random variable $\tau_k$ and define the increasing sequence of points $T_0 = t_0$, $T_k = t_0 + \sum_{j=1}^{k} t_j$, $k = 1, 2, 3, \ldots$. By Remark 1, for any natural $k$, $T_k$ is a value of the random variable $\xi_k$. Consider the initial value problem of impulsive differential equations with fixed points of impulses (11). The solution of initial value problem (11) is given by the formula
$$Z(t; t_0, Z_0, \{T_k\}) = e^{C(t_k)(t - T_{k-1})} \prod_{i=1}^{k-1} \left( D(t_{k-i}) e^{C(t_{k-i}) t_{k-i}} \right) Z_0, \quad t \in (T_{k-1}, T_k], \ k = 1, 2, \ldots.$$
Then, for $t \in (T_{k-1}, T_k]$, we get the following estimate:
$$\begin{aligned} \| Z(t; t_0, Z_0, \{T_k\}) \| &\le \| Z_0 \| \prod_{i=1}^{k-1} \left( \| D(t_{k-i}) \| \, \| e^{C(t_{k-i}) t_{k-i}} \| \right) \| e^{C(t_k)(t - T_{k-1})} \| \le \| Z_0 \| \prod_{i=1}^{k-1} \left( \alpha \, e^{\| C(t_{k-i}) \| t_{k-i}} \right) e^{\| C(t_k) \| (t - T_{k-1})} \\ &\le \| Z_0 \| \prod_{i=1}^{k-1} \left( \alpha \, e^{2N t_{k-i}} \right) e^{2N (t - T_{k-1})} \le \| Z_0 \| \, \alpha^{k-1} e^{2N \left( \sum_{i=1}^{k-1} t_{k-i} + (t - T_{k-1}) \right)} = \| Z_0 \| \, \alpha^{k-1} e^{2N (t - t_0)}. \end{aligned}$$
The sample path solutions $Z(t; t_0, Z_0, \{T_k\})$ generate the stochastic process $Z(t; t_0, Z_0, \{\tau_k\})$ defined by (16), which is a solution to the initial value problem of impulsive differential equations with random moments of impulses (9). According to Proposition 2, Proposition 3, and the estimate above, on the event $S_k(t)$ (exactly $k$ updates have occurred before $t$) we get
$$E \left( \| Z(t; t_0, Z_0, \{\tau_k\}) \| \;\middle|\; S_k(t) \right) \le \| Z_0 \| \, \alpha^k e^{2N (t - t_0)}.$$
Therefore, applying Corollary 1, we obtain
$$\begin{aligned} E \left( \| Z(t; t_0, Z_0, \{\tau_k\}) \| \right) &= \sum_{k=0}^{\infty} E \left( \| Z(t; t_0, Z_0, \{\tau_k\}) \| \;\middle|\; S_k(t) \right) P(S_k(t)) \le \sum_{k=0}^{\infty} \| Z_0 \| \, \alpha^k e^{2N (t - t_0)} e^{-\lambda (t - t_0)} \frac{\lambda^k (t - t_0)^k}{k!} \\ &= \| Z_0 \| \, e^{(2N - \lambda)(t - t_0)} \sum_{k=0}^{\infty} \frac{\left( \alpha \lambda (t - t_0) \right)^k}{k!} = \| Z_0 \| \, e^{(2N + \alpha \lambda - \lambda)(t - t_0)}. \end{aligned}$$
 □
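The expected-value bound of Theorem 1 can be illustrated by Monte Carlo simulation. The sketch below treats a single agent ($N = 1$) with illustrative constants chosen so that (A1) and (A2) hold ($|1 - \omega| = 0.1 < \alpha/2N$ and $\lambda > 2N/(1 - \alpha)$), samples exponentially distributed waiting times, and compares the empirical mean of $\|Z(t)\|$ with the bound $\|Z_0\| e^{(2N + \alpha\lambda - \lambda)(t - t_0)}$.

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Truncated Taylor series for exp(A); fine for the small matrices here."""
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    return E

rng = np.random.default_rng(0)
N, b, c, omega = 1, 0.5, 0.2, 0.9            # |1 - omega| = 0.1 < alpha/(2N)
alpha, lam, t_fin, M = 0.3, 10.0, 1.0, 2000  # lam = 10 > 2N/(1-alpha) = 2/0.7
C = np.array([[-b, b], [c, -c]])
D = np.array([[0.0, 0.0], [0.0, 1.0 - omega]])   # rows as in the matrix D(t)

norms = []
for _ in range(M):
    Z, T = np.array([1.0, 0.0]), 0.0
    while True:
        tau = rng.exponential(1.0 / lam)          # Exp(lam) waiting time
        if T + tau > t_fin:
            Z = expm_taylor(C * (t_fin - T)) @ Z  # final flow, no more jumps
            break
        Z = D @ (expm_taylor(C * tau) @ Z)        # flow, then random jump
        T += tau
    norms.append(np.linalg.norm(Z))

bound = 1.0 * np.exp((2 * N + alpha * lam - lam) * t_fin)
print(np.mean(norms), "<=", bound)
```

Since $2N + \alpha\lambda - \lambda = -5 < 0$ here, the bound decays exponentially, and the empirical mean sits well below it.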
Remark 3.
Inequalities (14) and (15) are satisfied only for $\omega_i(t)$, $i = 1, 2, \ldots, N$, such that $\omega_i(t) \ne 0$ for all $i = 1, 2, \ldots, N$ and $t \ge t_0$. Indeed, assume that there exist $i \in \{1, 2, \ldots, N\}$ and $t^* \ge t_0$ such that $\omega_i(t^*) = 0$. Then inequality (14) reduces to $\left| 1 - \sum_{j=1}^{N} a_{ij}(t^*) \right| < \frac{\alpha}{2N}$. If $1 < \sum_{j=1}^{N} a_{ij}(t^*)$, then from (15) it follows that $1 < \frac{\alpha (N-1)}{2N}$, i.e., $\frac{2N}{N-1} < \alpha$, which is not possible since $\alpha \in (0, 1)$. Therefore, assume that $1 \ge \sum_{j=1}^{N} a_{ij}(t^*)$ and $1 - \sum_{j=1}^{N} a_{ij}(t^*) < \frac{\alpha}{2N}$. Hence $1 < \frac{\alpha}{2N} + \sum_{j=1}^{N} a_{ij}(t^*) < \frac{\alpha}{2N} + \frac{\alpha (N-1)}{2N} = \frac{\alpha}{2}$, which is again a contradiction with the assumption that $\alpha \in (0, 1)$.
Theorem 2.
If the assumptions of Theorem 1 are satisfied, then the leader-following consensus for multi-agent system (2) is reached asymptotically.
Proof. 
The claim follows from Theorem 1, Remark 1, the equality $\| Z_0 \| = \| x^0 \|$, and the inequalities
$$E \left| x_i(t; t_0, Z_0, \{\tau_k\}) \right| \le E \left( \left\| Z(t; t_0, Z_0, \{\tau_k\}) \right\| \right) \le \| Z_0 \| \, e^{(2N + \alpha \lambda - \lambda)(t - t_0)},$$
for $i = 1, 2, \ldots, N$. □
According to Remark 3, condition (A1) is satisfied only in the case when the leader is available to each agent at any random update time. An interpretation of this situation can be the following. The leader can be viewed as the root node of the communication network; if there exists a directed path from the root to each agent (device), then all the agents can track the objective successfully. Since the leader can perceive more information in order to guide the whole group to complete the task (consensus), it seems reasonable to demand that it be available to each follower at any random update time.

5. Illustrative Examples

In this section, numerical examples are given to verify the effectiveness of the proposed sufficient conditions for a multi-agent system to asymptotically achieve leader-following consensus. In all examples, we set $t_0 = 0$ and consider a sequence of independent exponentially distributed random variables $\{\tau_k\}_{k=1}^{\infty}$ with parameter $\lambda > 0$ (to be specified in each example) and the sequence of random variables $\{\xi_k\}_{k=0}^{\infty}$ defined by (1).
Example 1.
Let us consider a system of three agents and the leader. In order to illustrate the meaningfulness of the studied model and the obtained results, we consider three cases.
Case 1.1. There is no information exchange between agents and the leader is not available.
The dynamics of the agents are given by
$$y_r'(t) = 0, \qquad y_1'(t) = -0.1 \left( 1 + |\sin(t)| \right) y_1(t), \qquad y_2'(t) = -\left( \frac{0.9}{t+1} - 0.8 \cos^2(t) \right) y_2(t), \qquad y_3'(t) = -\left( 0.4 |\cos(t)| - 0.1 \right) y_3(t), \quad t \ge 0.$$
Figure 1 shows the solution to system (18) with the initial values $y_1^0 = 1$, $y_2^0 = 2$, $y_3^0 = 3$, $y_r^0 = \frac{3}{2}$. From the graphs in Figure 1 it can be seen that the leader-following consensus is not reached.
Case 1.2. There is information exchange between agents (including the leader) occurring at random update times.
The dynamics of each agent and of the leader between two update times are given by (compare with (18)):
$$y_r'(t) = 0, \qquad y_1'(t) = -0.1 \left( 1 + |\sin(\tau_k)| \right) y_1(t), \qquad y_2'(t) = -\left( \frac{0.9}{\tau_k + 1} - 0.8 \cos^2(\tau_k) \right) y_2(t), \qquad y_3'(t) = -\left( 0.4 |\cos(\tau_k)| - 0.1 \right) y_3(t), \quad t \in (\xi_k, \xi_{k+1}], \ k = 0, 1, 2, \ldots.$$
The consensus control law at any update time $\xi_k$, $k = 1, 2, \ldots$, is given by
$$\begin{aligned} u_1(\xi_k) &= -\frac{0.1 \tau_k}{\tau_k + 1} \left( y_1(\xi_k) - y_2(\xi_k) \right) - \left( 1 - 0.05 \cos^2 \tau_k \right) \left( y_1(\xi_k) - y_r(\xi_k) \right), \\ u_2(\xi_k) &= -\left( 1 - 0.09 \cos^2 \tau_k \right) \left( y_2(\xi_k) - y_r(\xi_k) \right), \\ u_3(\xi_k) &= -0.1 \sin^2(\tau_k) \left( y_3(\xi_k) - y_2(\xi_k) \right) - \left( 1 - 0.01 \cos(\tau_k) \right) \left( y_3(\xi_k) - y_r(\xi_k) \right). \end{aligned}$$
Hence,
$$C(t) = \begin{pmatrix} -0.1(1 + |\sin(t)|) & 0.1(1 + |\sin(t)|) & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\frac{0.9}{t+1} & \frac{0.9}{t+1} & 0 & 0 \\ 0 & 0 & 0.8 \cos^2(t) & -0.8 \cos^2(t) & 0 & 0 \\ 0 & 0 & 0 & 0 & -0.4(1 + |\cos(t)|) & 0.4(1 + |\cos(t)|) \\ 0 & 0 & 0 & 0 & 0.5 & -0.5 \end{pmatrix}$$
and
$$D(t) = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{0.1 t}{t+1} + 0.05 \cos^2 t & \frac{0.1 t}{t+1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.09 \cos^2 t & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.1 \sin^2(t) & 0 & 0 & -0.1 \sin^2(t) + 0.01 \cos t \end{pmatrix}.$$
Observe that, for $\alpha \in (0.6, 1)$, Assumption (A1) of Theorem 1 is fulfilled. Let $\lambda = 45$. Then, for $\alpha = 0.7$, Assumption (A2) of Theorem 1 holds. Therefore, by Theorem 2, the leader-following consensus for multi-agent system (19) with the consensus control law (20) at any update time is reached asymptotically.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider several sample path solutions. For $t_0 = 0$ we fix the initial values $y_1^0 = 1$, $y_2^0 = 2$, $y_3^0 = 3$, $y_r^0 = \frac{3}{2}$, and choose different values of each random variable $\tau_k$, $k = 1, 2, \ldots, 12$, in the following way:
(i) 
$t_1 = 10$, $t_2 = 2$, $t_3 = 8$, $t_4 = 10$, $t_5 = 15$, $t_6 = 2$, $t_7 = 8$, $t_8 = 7$, $t_9 = 6$, $t_{10} = 12$, $t_{11} = 2$, $t_{12} = 8$;
(ii) 
$t_1 = 2$, $t_2 = 12$, $t_3 = 10$, $t_4 = 6$, $t_5 = 5$, $t_6 = 2$, $t_7 = 7$, $t_8 = 6$, $t_9 = 5$, $t_{10} = 10$, $t_{11} = 7$, $t_{12} = 18$;
(iii) 
$t_1 = 3$, $t_2 = 9$, $t_3 = 11$, $t_4 = 7$, $t_5 = 5$, $t_6 = 9$, $t_7 = 6$, $t_8 = 7$, $t_9 = 2$, $t_{10} = 6$, $t_{11} = 15$, $t_{12} = 10$;
(iv) 
$t_1 = 7$, $t_2 = 5$, $t_3 = 8$, $t_4 = 10$, $t_5 = 5$, $t_6 = 7$, $t_7 = 4$, $t_8 = 11$, $t_9 = 8$, $t_{10} = 6$, $t_{11} = 9$, $t_{12} = 10$.
Clearly, the leader state is $y_r(t) \equiv 1.5$. For each set of values (i)–(iv) of the random variables, we get a system of impulsive differential equations with fixed points of impulses of type (11), with $N = 3$ and the matrices $C(t)$, $D(t)$ given above. Figure 2, Figure 3 and Figure 4 present the state trajectories of the leader $y_r(t)$ and of the agents $y_1(t)$, $y_2(t)$ and $y_3(t)$, respectively. It is visible that the leader-following consensus is reached for all considered sample path solutions.
Case 1.3. At any random update time, only the leader is available to each agent and there is no information exchange between agents.
The dynamics of each agent between two update times are given by (19), and at update time $\xi_k$, $k = 1, 2, \ldots$, the following control law is applied:
$$u_1(\xi_k) = -\left( 1 - 0.09 \cos^2(\tau_k) \right) \left( y_1(\xi_k) - y_r(\xi_k) \right), \qquad u_2(\xi_k) = -\left( 1 + 0.09 \sin(\tau_k) \right) \left( y_2(\xi_k) - y_r(\xi_k) \right), \qquad u_3(\xi_k) = -\left( 1 - \frac{0.1 \tau_k}{\tau_k + 1} \right) \left( y_3(\xi_k) - y_r(\xi_k) \right).$$
Therefore,
$$D(t) = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.09 \cos^2 t & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -0.09 \sin t & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{0.1 t}{t+1} \end{pmatrix}$$
and $C(t)$ is the same as in Case 1.2. It is easy to check that, for $\alpha = 0.7$ and $\lambda = 45$, Assumptions (A1) and (A2) are fulfilled. According to Theorem 2, the leader-following consensus for multi-agent system (19) with the consensus control law (21) at any update time is reached asymptotically.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider sample path solutions with the same data as in Case 1.2. Figure 5, Figure 6 and Figure 7 present the state trajectories of the leader $y_r(t)$ and of the agents $y_1(t)$, $y_2(t)$ and $y_3(t)$, respectively. It is visible that the leader-following consensus is reached in all considered sample path solutions.
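The sample paths of Case 1.3 can be reproduced with a few lines of code. The following Python sketch propagates the error state $Z = (x_1, X_1, x_2, X_2, x_3, X_3)$ through the impulse times of data (i), using the matrices $C(t)$ and $D(t)$ of this case; `expm_taylor` is a truncated-series matrix exponential, a simplification adequate for these small matrices.

```python
import numpy as np

def expm_taylor(A, terms=80):
    """Truncated Taylor series for the matrix exponential exp(A)."""
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    return E

def C_mat(t):
    """C(t) of Example 1 (Cases 1.2 and 1.3), in the interleaved ordering."""
    b = [0.1 * (1 + abs(np.sin(t))), 0.9 / (t + 1), 0.4 * (1 + abs(np.cos(t)))]
    c = [0.0, 0.8 * np.cos(t) ** 2, 0.5]
    C = np.zeros((6, 6))
    for i in range(3):
        C[2 * i, 2 * i:2 * i + 2] = [-b[i], b[i]]
        C[2 * i + 1, 2 * i:2 * i + 2] = [c[i], -c[i]]
    return C

def D_mat(t):
    """D(t) of Case 1.3: only the leader weights act at update times."""
    D = np.zeros((6, 6))
    D[1, 1] = 0.09 * np.cos(t) ** 2       # 1 + d_11
    D[3, 3] = -0.09 * np.sin(t)           # 1 + d_22
    D[5, 5] = 0.1 * t / (t + 1)           # 1 + d_33
    return D

t_vals = [10, 2, 8, 10, 15, 2, 8, 7, 6, 12, 2, 8]       # data (i)
Z = np.array([1 - 1.5, 0.0, 2 - 1.5, 0.0, 3 - 1.5, 0.0])  # initial errors
for tk in t_vals:
    Z = D_mat(tk) @ (expm_taylor(C_mat(tk) * tk) @ Z)   # flow, then jump
print(np.linalg.norm(Z))  # tiny: the sample path has reached consensus
```

After the twelve impulses, the error norm is negligibly small, in agreement with Figures 5–7.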
Example 2.
Let the system consist of four agents and the leader. In order to illustrate the meaningfulness of the studied model and the obtained results, we consider four cases.
Case 2.1. There is no information exchange between agents and the leader is not available.
The dynamics of the agents are given by
$$y_r'(t) = 0, \qquad y_1'(t) = -\left( 0.4 - 0.1 \sin^2(t) \right) y_1(t), \qquad y_2'(t) = -\left( 0.3 (1.01)^{-t} - 0.1 \cos^2(t) \right) y_2(t), \qquad y_3'(t) = -\frac{0.5}{t+1} y_3(t), \qquad y_4'(t) = -\left( \frac{0.3}{t+1} - 0.1 \cos^2(t) \right) y_4(t), \quad t \ge 0.$$
Figure 8 shows the solution to system (22) with the initial values $y_1^0 = 1$, $y_2^0 = 2$, $y_3^0 = 3$, $y_4^0 = 4$, $y_r^0 = \frac{3}{2}$. It is visible that the leader-following consensus is not reached.
Case 2.2.There is information exchange between agents occurring at random update times and the leader is available for agents.
The dynamics between two update times of each agent and of the leader are given by
y r ( t ) = 0 , y 1 ( t ) = ( 0.4 0.1 sin 2 ( τ k ) ) y 1 ( t ) , y 2 ( t ) = ( 0.3 ( 1.01 ) τ k 0.9 cos 2 ( τ k ) ) y 2 ( t ) , y 3 ( t ) = 0.5 τ k + 1 y 3 ( t ) , y 4 ( t ) = 0.3 τ k + 1 0.1 cos 2 ( τ k ) y 4 ( t ) , t ( ξ k , ξ k + 1 ] , k = 0 , 1 , 2 , .
At update time ξ k , k = 1 , 2 , , the following control law is applied:
u 1 ( ξ k ) = 0.1 sin 2 ( τ k ) ( y 1 ( ξ k ) y 2 ( ξ k ) ) 0.9 y 1 ( ξ k ) y r ( ξ k ) , u 2 ( ξ k ) = 0.1 e τ k 2 ( y 2 ( ξ k ) y 1 ( ξ k ) ) ( y 2 ( ξ k ) y r ( ξ k ) ) , u 3 ( ξ k ) = ( y 3 ( ξ k ) y r ( ξ k ) ) , u 4 ( ξ k ) = 0.06 | sin ( τ k ) | ( y 4 ( ξ k ) y 1 ( ξ k ) ) 0.06 | cos ( τ k ) | ( y 4 ( ξ k ) y 2 ( ξ k ) ) y 4 ( ξ k ) y r ( ξ k ) , k = 1 , 2 , .
In this case, we have
$$
D(t) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.1\sin^2(t) + 0.1 & 0.1\sin^2(t) & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.1\,e^{-t/2} & 0 & 0 & 0.1\,e^{-t/2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.06\,|\sin(t)| & 0 & 0.06\,|\cos(t)| & 0 & 0 & 0 & 0 & 0.06\,(|\sin(t)| + |\cos(t)|)
\end{pmatrix}
$$
and
$$
C(t) = \begin{pmatrix}
0.4 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.1\sin^2(t) & 0.1\sin^2(t) & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.3\,(1.01)^{-t} & 0.3\,(1.01)^{-t} & 0 & 0 & 0 & 0 \\
0 & 0 & 0.1\cos^2(t) & 0.1\cos^2(t) & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{0.5}{t+1} & \frac{0.5}{t+1} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{0.3}{t+1} & \frac{0.3}{t+1} \\
0 & 0 & 0 & 0 & 0 & 0 & 0.1\cos^2(t) & 0.1\cos^2(t)
\end{pmatrix}.
$$
Hence, for α ∈ (0.8, 1), Assumption (A1) of Theorem 1 is fulfilled. Let λ = 55. Then, for α = 0.85, Assumption (A2) of Theorem 1 holds. Therefore, by Theorem 2, the leader-following consensus for the multi-agent system (23) with the control law (24) applied at the update times is reached asymptotically.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider several sample path solutions. For t_0 = 0, we fix the initial values y_1^0 = 1, y_2^0 = 2, y_3^0 = 3, y_4^0 = 4, y_r^0 = 1.5, and choose the values of the random variables τ_k, k = 1, 2, …, 12, in the following ways:
(i) 
t 1 = 10 , t 2 = 2 , t 3 = 8 , t 4 = 10 , t 5 = 15 , t 6 = 2 , t 7 = 8 , t 8 = 7 , t 9 = 6 , t 10 = 12 , t 11 = 2 , t 12 = 8 ;
(ii) 
t 1 = 2 , t 2 = 12 , t 3 = 10 , t 4 = 6 , t 5 = 5 , t 6 = 2 , t 7 = 7 , t 8 = 6 , t 9 = 5 , t 10 = 10 , t 11 = 7 , t 12 = 18 ;
(iii) 
t 1 = 3 , t 2 = 9 , t 3 = 11 , t 4 = 7 , t 5 = 5 , t 6 = 9 , t 7 = 6 , t 8 = 7 , t 9 = 2 , t 10 = 6 , t 11 = 15 , t 12 = 10 ;
(iv) 
t 1 = 7 , t 2 = π , t 3 = 8 , t 4 = π / 2 , t 5 = 5 , t 6 = 2 π , t 7 = 4 , t 8 = 11 , t 9 = 3 π / 2 , t 10 = 4 π , t 11 = 17 , t 12 = 10 .
Clearly, the leader state is y r ( t ) 1.5 .
Figure 9, Figure 10, Figure 11 and Figure 12 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), y_3(t), and y_4(t), respectively. It is visible that the leader-following consensus is reached for all considered sample path solutions.
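Beyond the fixed choices of τ_k listed above, a sample path of (23)–(24) can also be simulated with randomly drawn waiting times, since between updates each agent follows a scalar linear ODE with a frozen coefficient and can therefore be propagated exactly. The sketch below is illustrative only: the coefficient signs, the additive-jump interpretation of the control, y_i(ξ_k⁺) = y_i(ξ_k) + u_i(ξ_k), and the rate λ = 55 of the exponential waiting times are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y_r = 1.5  # constant leader state

def flow_rates(tau):
    """Coefficients of (23), frozen at the waiting time tau (signs assumed)."""
    return np.array([
        0.4 - 0.1 * np.sin(tau) ** 2,
        0.3 * 1.01 ** (-tau) - 0.9 * np.cos(tau) ** 2,
        0.5 / (tau + 1),
        0.3 / (tau + 1) - 0.1 * np.cos(tau) ** 2,
    ])

def control(y, tau):
    """Control law (24), evaluated at an update time."""
    return np.array([
        -0.1 * np.sin(tau) ** 2 * (y[0] - y[1]) - 0.9 * (y[0] - y_r),
        -0.1 * np.exp(-tau / 2) * (y[1] - y[0]) - (y[1] - y_r),
        -(y[2] - y_r),
        -0.06 * abs(np.sin(tau)) * (y[3] - y[0])
        - 0.06 * abs(np.cos(tau)) * (y[3] - y[1]) - (y[3] - y_r),
    ])

def sample_path(y0, lam=55.0, n_updates=200):
    """One sample path: exact linear flow between exponentially distributed
    update times, additive control jump y(xi_k+) = y(xi_k) + u(xi_k)."""
    y = np.array(y0, dtype=float)
    for _ in range(n_updates):
        tau = rng.exponential(1.0 / lam)       # waiting time tau_k ~ Exp(lam)
        y = y * np.exp(flow_rates(tau) * tau)  # flow on (xi_k, xi_{k+1}]
        y = y + control(y, tau)                # update at xi_{k+1}
    return y

print(np.abs(sample_path([1.0, 2.0, 3.0, 4.0]) - y_r))  # all gaps become small
```

Under these assumptions, the gaps |y_i(t) − y_r(t)| contract at every update, in line with the conclusion of Theorem 2.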
Case 2.3. There is information exchange between the agents at random update times, but the leader is not available to the agents.
The dynamics between two update times of each agent are given by (23), and at update time ξ k , k = 1 , 2 , , the following control law is applied:
$$
\begin{aligned}
u_1(\xi_k) &= -0.1\sin^2(\tau_k)\bigl(y_1(\xi_k) - y_2(\xi_k)\bigr), \\
u_2(\xi_k) &= -0.1\,e^{-\tau_k/2}\bigl(y_2(\xi_k) - y_1(\xi_k)\bigr), \\
u_3(\xi_k) &= 0, \\
u_4(\xi_k) &= -0.06\,|\sin(\tau_k)|\bigl(y_4(\xi_k) - y_1(\xi_k)\bigr) - 0.06\,|\cos(\tau_k)|\bigl(y_4(\xi_k) - y_2(\xi_k)\bigr).
\end{aligned}
$$
In this case, ω_i(t) ≡ 0 for t ≥ 0, i = 1, 2, 3, 4, and inequalities (13) and (15) are satisfied. According to the observation in Remark 3, Assumption (A1) is not fulfilled.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we fix λ = 55 and consider sample path solutions with the same data as in Case 2.2. Figure 13, Figure 14, Figure 15 and Figure 16 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), y_3(t), and y_4(t), respectively. Observe that the leader-following consensus is not reached in any of the considered sample path solutions.
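The role of the leader terms can also be seen numerically: rerunning the same kind of sample-path simulation with the leader information removed from the jumps, as in this case, leaves the unstable flows essentially uncorrected. As before, the coefficient signs and the additive-jump interpretation of the control are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y_r = 1.5  # constant leader state

def flow_rates(tau):
    """Coefficients of (23), frozen at the waiting time tau (signs assumed)."""
    return np.array([
        0.4 - 0.1 * np.sin(tau) ** 2,
        0.3 * 1.01 ** (-tau) - 0.9 * np.cos(tau) ** 2,
        0.5 / (tau + 1),
        0.3 / (tau + 1) - 0.1 * np.cos(tau) ** 2,
    ])

def control_no_leader(y, tau):
    """Case 2.3 control: only inter-agent terms, no leader terms, u_3 = 0."""
    return np.array([
        -0.1 * np.sin(tau) ** 2 * (y[0] - y[1]),
        -0.1 * np.exp(-tau / 2) * (y[1] - y[0]),
        0.0,
        -0.06 * abs(np.sin(tau)) * (y[3] - y[0])
        - 0.06 * abs(np.cos(tau)) * (y[3] - y[1]),
    ])

def sample_path(y0, lam=55.0, n_updates=300):
    y = np.array(y0, dtype=float)
    for _ in range(n_updates):
        tau = rng.exponential(1.0 / lam)
        y = y * np.exp(flow_rates(tau) * tau)  # flow between updates
        y = y + control_no_leader(y, tau)      # jump without leader information
    return y

print(np.abs(sample_path([1.0, 2.0, 3.0, 4.0]) - y_r))  # gaps do not vanish
```

Since the inter-agent gains are tiny for small waiting times, the jumps are nearly the identity map, and the gaps to the leader keep growing instead of contracting.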
Case 2.4. The leader is not available to one of the agents at any update time.
The dynamics between two update times of each agent are given by (23), and at update time ξ k , k = 1 , 2 , , the control law is applied:
$$
\begin{aligned}
u_1(\xi_k) &= -0.1\sin^2(\tau_k)\bigl(y_1(\xi_k) - y_2(\xi_k)\bigr) - 0.9\bigl(y_1(\xi_k) - y_r(\xi_k)\bigr), \\
u_2(\xi_k) &= -0.1\,e^{-\tau_k/2}\bigl(y_2(\xi_k) - y_1(\xi_k)\bigr), \\
u_3(\xi_k) &= -\bigl(y_3(\xi_k) - y_r(\xi_k)\bigr), \\
u_4(\xi_k) &= -0.06\,|\sin(\tau_k)|\bigl(y_4(\xi_k) - y_1(\xi_k)\bigr) - 0.06\,|\cos(\tau_k)|\bigl(y_4(\xi_k) - y_2(\xi_k)\bigr) - \bigl(y_4(\xi_k) - y_r(\xi_k)\bigr),
\qquad k = 1, 2, \dots
\end{aligned}
$$
Since ω 2 ( t ) 0 , by Remark 3, Assumption (A1) is not fulfilled.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider sample path solutions with the same data as in Case 2.2.
Figure 17, Figure 18, Figure 19 and Figure 20 present the state trajectories of the four agents, respectively. Observe that the leader-following consensus is not reached in any of the considered sample path solutions. This is particularly visible in Figure 18, where the graphs of the state trajectory of the second agent are presented for various values of the random variables. This shows the importance of Assumption (A1). We emphasize that, in the model considered in this paper, the information exchange between agents is possible only at discrete random update times, and the waiting time between two consecutive updates is exponentially distributed (as in queuing theory). Of course, if the leader were continuously available to the agents, then the leader-following consensus would be reached. In this paper, however, we consider the situation when the leader is available only from time to time, at random times. We deliver conditions under which, in spite of the lack of a continuous information flow from the leader to the agents, the leader-following consensus is still reached.
Both examples illustrate that interaction between the leader and the other agents only at random update times changes the behavior of the agents significantly. If Assumptions (A1) and (A2) are satisfied, then the leader-following consensus is reached in the multi-agent system (2).
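The random update mechanism used throughout both examples is easy to generate and to sanity-check: with independent waiting times τ_k distributed exponentially with rate λ, the update times ξ_k = τ_1 + ⋯ + τ_k form a Poisson process, so ξ_k has an Erlang(k, λ) distribution with mean k/λ and variance k/λ². A quick Monte Carlo check (with λ = 55, as in Example 2):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 55.0        # rate of the exponential waiting times, as in Example 2
k = 10            # index of the update time to inspect
n_paths = 100_000

tau = rng.exponential(1.0 / lam, size=(n_paths, k))  # waiting times tau_1..tau_k
xi_k = tau.sum(axis=1)                               # k-th update time per path

# xi_k ~ Erlang(k, lam): mean k/lam, variance k/lam^2
print(xi_k.mean())  # close to 10/55 ≈ 0.1818
print(xi_k.var())   # close to 10/55**2 ≈ 0.0033
```

With λ = 55, the updates are very frequent (mean waiting time 1/55), which is what allows the impulsive control to dominate the unstable flows between updates.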

6. Conclusions

The leader-following consensus problem is a key point in the analysis of dynamic multi-agent networks. In this paper, we considered the situation when agents exchange information only at discrete time instants that occur randomly. The proposed control law is distributed, in the sense that only information from neighboring agents is used, and it is applied only at the randomly occurring update times. In the case where the random update times are equal to initially given deterministic times, our model reduces to the continuous-time multi-agent system with discrete-time communications studied in [25]. The main difference between our model and previous approaches is that we consider the sequence of update times as a sequence of random variables. Moreover, unlike in other investigations, the waiting time between two consecutive updates is exponentially distributed; this choice was motivated by the central role of the exponential distribution in queuing theory. The presence of randomly occurring update times required results from probability theory and from the theory of differential equations with impulses in order to describe the solutions of the considered multi-agent system. We provided conditions on the control law that ensure asymptotic leader-following consensus in the sense of the expected value of a stochastic process. This work may be treated as a first step towards the analysis of consensus problems for multi-agent systems with discrete updates at random times. For example, one of the possible problems to be investigated in the future is to deliver conditions under which the consensus is achieved in spite of denial-of-service attacks, or for systems with double-integrator dynamics. Another important and interesting issue is to work out a model of a real-world system of agents and to apply our theoretical results.
For this purpose, we have to develop or adapt existing numerical procedures for simulating the evolution of a system with a larger number of agents. This problem is currently under investigation.
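As a starting point for such experiments, the sample-path scheme used in the examples vectorizes naturally over the agents. The sketch below is purely illustrative (the growth rates, coupling gains, leader gain, and additive-jump update rule are all assumptions, not taken from the examples above): it simulates N agents with unstable scalar flows tracking a constant leader.

```python
import numpy as np

def simulate(n_agents=100, lam=55.0, n_updates=300, seed=2):
    """Vectorized sample path for n_agents agents tracking a constant leader:
    exact linear flow between Exp(lam) waiting times, then a Laplacian-type
    jump toward neighbors and the leader (all gains are illustrative)."""
    rng = np.random.default_rng(seed)
    y_r = 1.5                                        # constant leader state
    y = rng.uniform(-5.0, 5.0, n_agents)             # initial agent states
    growth = rng.uniform(0.1, 0.5, n_agents)         # unstable flow rates
    W = rng.uniform(0.0, 0.2 / n_agents, (n_agents, n_agents))  # coupling gains
    np.fill_diagonal(W, 0.0)
    w_leader = 0.9                                   # gain toward the leader
    for _ in range(n_updates):
        tau = rng.exponential(1.0 / lam)             # waiting time ~ Exp(lam)
        y = y * np.exp(growth * tau)                 # flow on (xi_k, xi_{k+1}]
        # jump: sum_j W_ij (y_i - y_j) plus the leader correction
        y = y - (W.sum(axis=1) * y - W @ y) - w_leader * (y - y_r)
    return np.abs(y - y_r).max()

print(simulate())  # maximal distance to the leader after 300 updates (small)
```

The per-update cost is dominated by the matrix-vector product, so the scheme scales to thousands of agents; sparse coupling matrices would reduce it further.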

Author Contributions

Conceptualization, R.A., E.G., S.H. and A.M.; methodology, R.A., E.G., S.H. and A.M.; software, R.A.; validation, R.A., E.G., S.H. and A.M.; formal analysis, R.A., E.G., S.H. and A.M.; investigation, R.A., E.G., S.H. and A.M.; writing—original draft preparation, R.A., E.G., S.H. and A.M; writing—review and editing, R.A., E.G., S.H. and A.M.; visualization, R.A.; supervision, R.A., E.G., S.H. and A.M.; funding acquisition, E.G. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Bialystok University of Technology Grant W/WI-IIT/1/2020, financed from a subsidy provided by the Minister of Science and Higher Education.

Acknowledgments

R. Almeida was supported by Portuguese funds through the CIDMA—Center for Research and Development in Mathematics and Applications, and the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), within project UIDB/04106/2020. E. Girejko and A. B. Malinowska were supported by the Bialystok University of Technology Grant W/WI-IIT/1/2020, financed from a subsidy provided by the Minister of Science and Higher Education. S. Hristova was supported by the Bulgarian National Science Fund under project KP-06-N32/7.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cucker, F.; Smale, S. On the mathematics of emergence. Jpn. J. Math. 2007, 2, 197–227. [Google Scholar] [CrossRef]
  2. Cucker, F.; Smale, S. Emergent Behavior in Flocks. IEEE Trans. Autom. Control 2007, 52, 852–862. [Google Scholar] [CrossRef] [Green Version]
  3. Jadbabaie, A.; Lin, J.; Morse, A.S. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 2003, 48, 988–1001. [Google Scholar] [CrossRef] [Green Version]
  4. Peng, Z.; Wen, G.; Yang, S.; Rahmani, A. Distributed consensus-based formation control for nonholonomic wheeled mobile robots using adaptive neural network. Nonlinear Dyn. 2016, 86, 605–622. [Google Scholar] [CrossRef]
  5. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef] [Green Version]
  6. Mei, J.; Ren, W.; Chen, J. Distributed consensus of second-order multi-agent systems with heterogeneous unknown inertias and control gains under a directed graph. IEEE Trans. Autom. Control 2016, 61, 2019–2034. [Google Scholar] [CrossRef]
  7. Mo, L.; Pan, T.; Guo, S.; Niu, Y. Distributed Coordination Control of First- and Second-Order Multiagent Systems with External Disturbances. Math. Probl. Eng. 2015, 2015, 913689. [Google Scholar] [CrossRef] [Green Version]
  8. Hu, H.-P.; Wang, J.-K.; Xie, F.-L. Dynamics analysis of a new fractional-order Hopfield neural network with delay and its generalized projective synchronization. Entropy 2019, 21, 1. [Google Scholar] [CrossRef] [Green Version]
  9. Li, L.; Wang, Z.; Lu, J.; Li, Y. Adaptive synchronization of fractional-order complex-valued neural networks with discrete and distributed delays. Entropy 2018, 20, 124. [Google Scholar] [CrossRef] [Green Version]
  10. Stamov, G.; Stamova, I.; Martynyuk, A.; Stamov, T. Design and practical stability of a new class of impulsive fractional-like neural networks. Entropy 2020, 22, 337. [Google Scholar] [CrossRef] [Green Version]
  11. Li, Y.; Tan, C. A survey of the consensus for multi-agent systems. Syst. Sci. Control. Eng. 2019, 7, 468–482. [Google Scholar] [CrossRef] [Green Version]
  12. Ghabcheloo, R.; Aguiar, A.P.; Pascoal, A.; Silvestre, C. Synchronization in multi-agent systems with switching topologies and non-homogeneous communication delays. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 2327–2332. [Google Scholar]
  13. Moreau, L. Stability of continuous-time distributed consensus algorithms. In Proceedings of the 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No. 04CH37601), Nassau, Bahamas, 14–17 December 2004; pp. 3998–4003. [Google Scholar]
  14. Bliman, P.A.; Nedić, A.; Ozdaglar, A. Rate of convergence for consensus with delays. In Proceedings of the 2008 47th IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 4849–4854. [Google Scholar]
  15. Cao, M.; Morse, A.S.; Anderson, B.D.O. Reaching a consensus in a dynamically changing environment: Convergence rates, measurement delays, and asynchronous events. SIAM J. Control Optim. 2008, 47, 601–623. [Google Scholar] [CrossRef] [Green Version]
  16. Cao, M.; Morse, A.S.; Anderson, B.D.O. Agreeing asynchronously. IEEE Trans. Automat. Control 2008, 53, 1826–1838. [Google Scholar]
  17. Xiao, F.; Wang, L. Consensus problems in discrete-time multiagent systems with fixed topology. J. Math. Anal. Appl. 2006, 322, 587–598. [Google Scholar] [CrossRef] [Green Version]
  18. Xiao, F.; Wang, L. Consensus protocols for discrete-time multi-agent systems with time-varying delays. Automatica 2008, 44, 2577–2582. [Google Scholar] [CrossRef]
  19. Zhao, H.; Ren, W.; Yuan, D.; Chen, J. Distributed discrete-time coordinated tracking with Markovian switching topologies. Syst. Control Lett. 2012, 61, 766–772. [Google Scholar] [CrossRef]
  20. Almeida, R.; Girejko, E.; Machado, L.; Malinowska, A.B.; Martins, N. Application of predictive control to the Hegselmann-Krause model. Math. Method. Appl. Sci. 2018, 41, 9191–9202. [Google Scholar] [CrossRef]
  21. Girejko, E.; Malinowska, A.B. Non-invasive control of the Hegselmann–Krause type model. In Proceedings of the 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 28–31 August 2017. [Google Scholar]
  22. Girejko, E.; Machado, L.; Malinowska, A.B.; Martins, N. On consensus in the Cucker–Smale type model on isolated times scales. Discrete Contin. Dyn. Syst. Ser. S 2018, 11, 77–89. [Google Scholar] [CrossRef]
  23. Xiao, Q.; Huang, Z. Consensus of multi-agent systems with distributed control on time scales. Appl. Math. Comput. 2016, 277, 54–71. [Google Scholar] [CrossRef]
  24. Ni, W.; Cheng, D. Leader-following consensus of multi-agent systems under fixed and switching topologies. Syst. Control Lett. 2010, 59, 209–217. [Google Scholar] [CrossRef]
  25. Almeida, J.; Silvestre, C.; Pascoal, A.M. Continuous-time consensus with discrete-time communications. Syst. Control. Lett. 2012, 61, 788–796. [Google Scholar] [CrossRef]
  26. Girejko, E.; Malinowska, A.B. Leader-following consensus for networks with single- and double-integrator dynamics. Nonlin. Anal. Hybrid Syst. 2019, 31, 302–316. [Google Scholar] [CrossRef]
  27. Malinowska, A.B.; Schmeidel, E.; Zdanowicz, M. Discrete leader-following consensus. Math. Methods Appl. Sci. 2017, 40, 7307–7315. [Google Scholar] [CrossRef]
  28. Song, Q.; Cao, J.; Yu, W. Second-order leader-following consensus of nonlinear multi-agent systems via pinning control. Syst. Control Lett. 2010, 59, 553–562. [Google Scholar] [CrossRef]
  29. Knill, O. Probability Theory and Stochastic Processes with Applications; Overseas Press: Delhi, India, 2009. [Google Scholar]
  30. Agarwal, R.; Hristova, S.; O’Regan, D. Exponential stability for differential equations with random impulses at random times. Adv. Differ. Equ. 2013, 2013, 372. [Google Scholar] [CrossRef] [Green Version]
  31. Lakshmikantham, V.; Bainov, D.D.; Simeonov, P.S. Theory of Impulsive Differential Equations; Series in Modern Applied Mathematics; World Scientific Publishing Co.: Teaneck, NJ, USA, 1989; p. 273. [Google Scholar]
  32. Evans, L.C. An Introduction to Stochastic Differential Equations; American Mathematical Society: Providence, RI, USA, 2014. [Google Scholar]
Figure 1. Example 1. Case 1.1. Graphs of the state trajectories y i ( t ) , i = 1 , 2 , 3 , of the agents and the leader y r .
Figure 2. Example 1. Case 1.2. Graphs of the state trajectory y 1 ( t ) of the first agent for various values of random variables τ k .
Figure 3. Example 1. Case 1.2. Graphs of the state trajectory y 2 ( t ) of the second agent for various values of random variables τ k .
Figure 4. Example 1. Case 1.2. Graphs of the state trajectory y 3 ( t ) of the third agent for various values of random variables τ k .
Figure 5. Example 1. Case 1.3. Graphs of the state trajectory y 1 ( t ) of the first agent for various values of random variables τ k .
Figure 6. Example 1. Case 1.3. Graphs of the state trajectory y 2 ( t ) of the second agent for various values of random variables τ k .
Figure 7. Example 1. Case 1.3. Graphs of the state trajectory y 3 ( t ) of the third agent for various values of random variables τ k .
Figure 8. Example 2. Case 2.1. Graphs of the state trajectories y_i(t), i = 1, 2, 3, 4, of the agents and the leader y_r.
Figure 9. Example 2. Case 2.2. Graphs of the state trajectory y 1 ( t ) of the first agent for various values of random variables τ k .
Figure 10. Example 2. Case 2.2. Graphs of the state trajectory y 2 ( t ) of the second agent for various values of random variables τ k .
Figure 11. Example 2. Case 2.2. Graphs of the state trajectory y 3 ( t ) of the third agent for various values of random variables τ k .
Figure 12. Example 2. Case 2.2. Graphs of the state trajectory y 4 ( t ) of the fourth agent for various values of random variables τ k .
Figure 13. Example 2. Case 2.3. Graphs of the state trajectory y 1 ( t ) of the first agent for various values of random variables τ k .
Figure 14. Example 2. Case 2.3. Graphs of the state trajectory y 2 ( t ) of the second agent for various values of random variables τ k .
Figure 15. Example 2. Case 2.3. Graphs of the state trajectory y 3 ( t ) of the third agent for various values of random variables τ k .
Figure 16. Example 2. Case 2.3. Graphs of the state trajectory y 4 ( t ) of the fourth agent for various values of random variables τ k .
Figure 17. Example 2. Case 2.4. Graphs of the state trajectory y 1 ( t ) of the first agent for various values of random variables τ k .
Figure 18. Example 2. Case 2.4. Graphs of the state trajectory y 2 ( t ) of the second agent for various values of random variables τ k .
Figure 19. Example 2. Case 2.4. Graphs of the state trajectory y 3 ( t ) of the third agent for various values of random variables τ k .
Figure 20. Example 2. Case 2.4. Graphs of the state trajectory y 4 ( t ) of the fourth agent for various values of random variables τ k .

Share and Cite

MDPI and ACS Style

Almeida, R.; Girejko, E.; Hristova, S.; Malinowska, A. On Leader-Following Consensus in Multi-Agent Systems with Discrete Updates at Random Times. Entropy 2020, 22, 650. https://doi.org/10.3390/e22060650
