Article

Centralized and Decentralized Event-Triggered Nash Equilibrium-Seeking Strategies for Heterogeneous Multi-Agent Systems

1 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
2 School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 419; https://doi.org/10.3390/math13030419
Submission received: 29 November 2024 / Revised: 20 January 2025 / Accepted: 25 January 2025 / Published: 27 January 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract:
This paper addresses the event-triggered Nash equilibrium-seeking problem for non-cooperative games played by heterogeneous multi-agent systems. Unlike homogeneous multi-agent systems, heterogeneous multi-agent systems consist of agents with different dynamic structures, making it difficult to design control schemes and construct event-triggering conditions for such systems. In this paper, a novel centralized event-triggered Nash equilibrium-seeking strategy and a novel decentralized event-triggered Nash equilibrium-seeking strategy are proposed. The corresponding centralized and decentralized event-triggering conditions are derived. The convergence properties of the proposed centralized and decentralized strategies are proved. Further theoretical analyses illustrate that Zeno behavior does not exist under the proposed strategies. Finally, the effectiveness and efficiency of both centralized and decentralized strategies are presented through numerical experiments. The experimental results illustrate that under both strategies, heterogeneous multi-agent systems achieve the Nash equilibrium successfully, and the communication consumption among agents is significantly reduced.

1. Introduction

In recent years, game theory has been adopted in various application fields, such as control of unmanned systems [1], multi-agent exploration policies [2], collective decision making [3], communication networks [4], and cooperative advertising strategy [5]. As an important part of game theory, the Nash equilibrium (NE)-seeking problem within multi-agent systems (MASs) has attracted increasing interest among scholars in multi-agent communities [6,7,8,9,10,11,12]. For example, in [6], Ye et al. tackled the distributed NE-seeking problem with constraints on control inputs by introducing a series of consensus-based strategies. In [10], Hua et al. focused on the generalized NE-seeking problem, which concerns the action constraints of the agents. An NE-seeking problem within MASs with strongly connected switching networks was considered by He et al. in [11]. Additionally, Tan et al. addressed an NE-seeking problem with the payoff functions and actions of agents unknown to each other in [12].
In the studies mentioned above, the NE is achieved through continuous-time communication. However, in practice, due to physical limitations, it is often necessary for mobile robots and other autonomous agents to implement controllers that do not depend on continuous-time communications [13,14,15]. It is a widely accepted fact that by introducing an event-triggered mechanism or a periodic sampling scheme, the communication consumption is effectively reduced. Thus, event-triggered mechanisms have attracted tremendous attention [16,17,18]. Many scholars have made efforts to replace continuous communication by introducing event-triggered mechanisms into NE-seeking strategies in recent years [19,20,21,22]. For example, Shi et al. developed an event-triggered strategy to compute the NE in aggregative games in [20]. Another event-triggered strategy was proposed by Shi et al. to solve the generalized NE-seeking problem of networked non-cooperative games in [21]. Those strategies are semi-decentralized. They require agents to have access to certain global information, such as the aggregate value of aggregative games or complete decision information. In [22], for non-cooperative games, Wang et al. proposed a decentralized event-triggered NE-seeking strategy that relies on partial decision information of each agent, such as local action and the estimation of neighbors of an agent.
In most of the above-mentioned studies, the authors studied homogeneous MASs, in which the dynamics of the agents are identical. Specifically, those strategies are designed exclusively for homogeneous MASs that contain only single-integrator or only double-integrator agents. However, this situation is a special case. Heterogeneous MASs, whose agents have different dynamic structures or identical dynamic structures with different parameters, are encountered in various applications [23,24,25,26,27,28]. According to [26], there are typically two types of heterogeneous MASs. The first type consists of agents with identical system structures but different parameters. For example, in [27], Li et al. developed an adaptive fuzzy containment control method for nonlinear MASs with time-delayed input, where the followers have identical system structures but different parameters. The second type consists of agents with non-identical system structures. For example, in [28], He and Huang considered the NE-seeking problem for high-order-integrator agents. If a multi-agent system contains single- and double-integrator agents simultaneously, it is deemed a second-type heterogeneous multi-agent system. Although relatively unexplored in game theory, event-triggered mechanisms for heterogeneous MASs have long been studied in other fields, such as group consensus [29,30,31,32,33]. For example, in [32], combinational measurements were introduced to handle the pinning exponential synchronization of complex networks. In [33], Li et al. designed a fully distributed event-triggered pinning control scheme to address the group consensus problem for heterogeneous MASs with cooperative–competitive interaction.
Driven by the above discussion, this paper aims to design event-triggered NE-seeking strategies for non-cooperative games played by heterogeneous MASs with both single- and double-integrator agents. In comparison to existing works, the main contributions of this paper are presented as follows.
  • Event-triggered NE seeking for non-cooperative games played by heterogeneous MASs comprising both single- and double-integrator agents is studied in this paper. Compared with conventional NE seeking for homogeneous MASs, the NE-seeking problem addressed in this paper is conducted by heterogeneous MASs, which consist of agents with varying dynamic structures, and it introduces an event-triggered mechanism to reduce communication consumption.
  • A novel centralized event-triggered NE-seeking (CETNES) strategy and a novel decentralized event-triggered NE-seeking (DETNES) strategy are proposed to address the event-triggered NE-seeking problem for heterogeneous MASs. The corresponding centralized and decentralized event-triggering conditions are derived. The proposed CETNES and DETNES strategies successfully solve the NE-seeking problem for heterogeneous MASs and significantly reduce communication consumption.
  • The convergence properties of both the CETNES and DETNES strategies are proved through Lyapunov stability theory. The nonexistence of Zeno behavior for both the CETNES and DETNES strategies is also proved.
The remaining part of this paper is organized as follows. In Section 2, some preliminaries concerning graph theory and mathematical notations are provided. Additionally, the formulation for the NE-seeking problem for non-cooperative games played by heterogeneous MASs is also presented in Section 2. In Section 3, the CETNES and DETNES strategies are proposed. Specifically speaking, in Section 3.1, a novel CETNES strategy is proposed, and detailed theoretical analyses of the CETNES strategy are presented. In Section 3.2, a novel DETNES strategy is designed to address the NE-seeking problem in a fully distributed manner, and detailed theoretical analyses of the DETNES strategy are presented. Moreover, in Section 4, the efficiency and efficacy of the proposed strategies are illustrated through numerical experiments. Finally, the conclusion is provided in Section 5.

2. Preliminaries and Problem Formulation

In this section, to lay the foundation for further investigation, some preliminaries concerning mathematical notation and graph theory are provided. Additionally, the formulation of the NE-seeking problem in non-cooperative games played by heterogeneous MASs is presented.

2.1. Mathematical Notation

In this paper, the notation $M = \operatorname{diag}\{l_{ij}\} \in \mathbb{R}^{n^2 \times n^2}$ for $i, j \in \{1, 2, \ldots, n\}$ denotes a diagonal matrix with diagonal elements $l_{11}, l_{12}, \ldots, l_{1n}, l_{21}, l_{22}, \ldots, l_{2n}, \ldots, l_{n1}, l_{n2}, \ldots, l_{nn}$, respectively. Furthermore, $\lambda_{\min}(O)$ denotes the minimum eigenvalue of a symmetric real matrix $O \in \mathbb{R}^{n \times n}$, and $[h_i]_{\mathrm{vec}} = [h_1, h_2, \ldots, h_n]^T$. Moreover, $\|\cdot\|$ denotes the 2-norm of a matrix or vector, and the Kronecker product is denoted by $\otimes$ [34].

2.2. Graph Theory

Consider $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$ as the communication topology of non-cooperative games that contain n agents. The set of nodes is denoted by $\mathcal{V} = \{\mathcal{V}_1, \mathcal{V}_2, \ldots, \mathcal{V}_n\}$, the set of unordered pairs is defined as $\mathcal{E} = \{\epsilon_{ij} = (i, j) \in \mathcal{V} \times \mathcal{V}\}$, and the adjacency matrix is $\mathcal{A} = (a_{ij})_{n \times n}$, where $a_{ij} \geq 0$. The in-degree of the ith node is denoted by $d_i = \sum_{j=1}^{n} a_{ij}$, and the in-degree matrix is $D = \operatorname{diag}\{d_1, \ldots, d_n\} \in \mathbb{R}^{n \times n}$. Moreover, the Laplacian matrix is $L = D - \mathcal{A} \in \mathbb{R}^{n \times n}$ [35].
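The in-degree matrix and the Laplacian can be assembled directly from the adjacency matrix. A minimal sketch (using, purely for illustration, the six-agent topology that appears later in Section 4):

```python
import numpy as np

# Undirected communication topology: a_ij = a_ji >= 0.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 4), (4, 5)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))   # in-degree matrix, d_i = sum_j a_ij
L = D - A                    # Laplacian matrix L = D - A

# For an undirected graph, L is symmetric with zero row sums; the graph is
# connected iff the second-smallest eigenvalue of L is positive.
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0.0)
assert np.linalg.eigvalsh(L)[1] > 1e-9
```

The second-smallest eigenvalue (algebraic connectivity) is exactly what Assumption 1 implicitly requires to be positive.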

2.3. Problem Formulation

Consider non-cooperative games played by heterogeneous MASs with m single-integrator agents and  n m  double-integrator agents. The dynamics of the heterogeneous agents are denoted as
$$\dot{x}_i(t) = u_i(t), \quad i \in S_m,$$
$$\dot{x}_i(t) = v_i(t), \quad \dot{v}_i(t) = u_i(t), \quad i \in S_{n-m},$$
where $S_m = \{1, 2, \ldots, m\}$, $S_{n-m} = \{m+1, m+2, \ldots, n\}$, $S_n = S_m \cup S_{n-m}$, and $S_m \cap S_{n-m} = \varnothing$. In addition, $x_i(t) \in \mathbb{R}$, $v_i(t) \in \mathbb{R}$, and $u_i(t) \in \mathbb{R}$ denote the position, velocity, and control input of agent i, respectively.
Remark 1. 
Before proceeding, in order to simplify the computations, it should be pointed out that the position, velocity, and control input considered in this paper are scalars; that is, $x_i(t)$, $v_i(t)$, and $u_i(t)$ belong to 1-dimensional space $\mathbb{R}$. Nevertheless, the main results of this paper extend readily to the n-dimensional case by adopting the Kronecker product.
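The two agent classes above can be integrated numerically with a forward-Euler step; a minimal sketch (step sizes and the constant test input are illustrative choices, not values from the paper), checked against the closed-form response:

```python
import numpy as np

def step_single(x, u, dt):
    # Single-integrator agent: x_dot = u
    return x + dt * u

def step_double(x, v, u, dt):
    # Double-integrator agent: x_dot = v, v_dot = u (explicit Euler)
    return x + dt * v, v + dt * u

dt, n_steps, u = 1e-4, 10_000, 0.5
T = n_steps * dt
x_s = 0.0
x_d, v_d = 0.0, 0.2
for _ in range(n_steps):
    x_s = step_single(x_s, u, dt)
    x_d, v_d = step_double(x_d, v_d, u, dt)

assert abs(x_s - u * T) < 1e-6                        # x(T) = x0 + u T
assert abs(x_d - (0.2 * T + 0.5 * u * T**2)) < 1e-3   # x(T) = x0 + v0 T + u T^2 / 2
```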
The cost function of agent i is $f_i(x(t))$ for both single-integrator and double-integrator agents, where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ denotes the position vector of all n agents. The NE $x^* = (x_i^*, x_{-i}^*)$ is given by
$$f_i(x_i^*, x_{-i}^*) \leq f_i(x_i(t), x_{-i}^*)$$
for $x_i(t) \in \mathbb{R}$, $i \in \{1, 2, \ldots, n\}$, and $x_{-i}(t) = [x_1(t), x_2(t), \ldots, x_{i-1}(t), x_{i+1}(t), \ldots, x_n(t)]^T \in \mathbb{R}^{n-1}$.
In order to investigate the convergence properties, the following assumptions are introduced [6].
Assumption 1. 
There exists a graph  G  that denotes the communication topology of the n agents, and  G  is undirected and connected.
Assumption 2. 
For all $x(t), z(t) \in \mathbb{R}^n$, there is a positive constant m such that
$$\big(x(t) - z(t)\big)^T \big(\varphi(x(t)) - \varphi(z(t))\big) \geq m \|x(t) - z(t)\|^2,$$
where the vector $\varphi(x(t)) = [\varphi_i(x(t))]_{\mathrm{vec}} \in \mathbb{R}^n$, with $\varphi_i(x(t)) = \partial f_i(x(t)) / \partial x_i(t)$.
Assumption 3. 
The partial derivative of the cost function $f_i(x(t))$ of each agent is globally Lipschitz, which means that there exists a positive constant $\bar{l}$ such that
$$\big\|\varphi_i(x(t)) - \varphi_i(z(t))\big\| \leq \bar{l}\, \|x(t) - z(t)\|.$$
Remark 2. 
It is obtained from Assumption 2 that for each fixed $x_{-i}$, $f_i(x_i(t), x_{-i})$ is strongly convex [6]. In addition, it is derived from Assumptions 2 and 3 that the NE $x^*$ of the non-cooperative game is unique, and the positions of the agents are at the NE if and only if $\varphi(x^*) = 0_n$ [36].
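For a quadratic game the pseudo-gradient is affine, $\varphi(x) = Hx + c$, so Assumption 2 holds with $m = \lambda_{\min}((H + H^T)/2)$ and Assumption 3 with $\bar{l} = \|H\|$. A minimal numerical check on a hypothetical quadratic game (the matrix below is an illustrative example, not the paper's):

```python
import numpy as np

n, p0 = 4, 0.2
rng = np.random.default_rng(0)
Adj = (rng.random((n, n)) < 0.5).astype(float)
Adj = np.triu(Adj, 1); Adj = Adj + Adj.T      # symmetric 0/1 adjacency
H = 2.0 * np.eye(n) - p0 * Adj                # Jacobian of the pseudo-gradient

m_const = np.linalg.eigvalsh((H + H.T) / 2).min()  # strong-monotonicity constant
l_bar = np.linalg.norm(H, 2)                       # global Lipschitz constant

phi = lambda w: H @ w
x, z = rng.standard_normal(n), rng.standard_normal(n)
assert (x - z) @ (phi(x) - phi(z)) >= m_const * np.linalg.norm(x - z) ** 2 - 1e-9
assert np.linalg.norm(phi(x) - phi(z)) <= l_bar * np.linalg.norm(x - z) + 1e-9
```

Since the diagonal dominance $2 > p_0 \lambda_{\max}(\mathrm{Adj})$ holds here, `m_const` is positive and the equilibrium is unique.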

3. Main Results

In this section, the CETNES and DETNES strategies for non-cooperative games played by heterogeneous MASs are proposed for the first time. The convergence properties of the proposed strategies are obtained by utilizing Lyapunov stability theory. Furthermore, for both the CETNES and DETNES strategies, the nonexistence of Zeno behavior is proved.

3.1. CETNES Strategy

Suppose that, when adopting the CETNES strategy, all the agents update control input  u i ( t )  and exchange information with their neighbors at the same centralized event-triggering time instant. Consider  t k  as the kth centralized event-triggering time instant for all agents. For heterogeneous MASs, when  t [ t k , t k + 1 ) , the CETNES strategy is proposed as
$$u_i(t) = -\varphi_i(y_i(t)), \quad i \in S_m,$$
$$u_i(t) = -r\big(v_i(t) + \varphi_i(y_i(t))\big), \quad i \in S_{n-m},$$
$$\dot{y}_{ij}(t) = -\theta \Big[ \sum_{q=1}^{n} a_{iq} \big(y_{ij}(t_k) - y_{qj}(t_k)\big) + a_{ij} \big(y_{ij}(t_k) - x_j(t_k)\big) \Big], \quad i \in S_n, \tag{4}$$
where $y_i(t) = [y_{i1}(t), y_{i2}(t), \ldots, y_{in}(t)]^T$ denotes agent i's local estimation of $x(t)$. Moreover, for the double-integrator agents, r is a positive parameter to be determined. For all agents, $\theta$ is a positive parameter to be determined. In addition, $a_{ij}$ is the element of the adjacency matrix $\mathcal{A}$ on the ith row and the jth column.
In this subsection, for agent i, to measure the difference between its own estimation and the actual value of the agents' position vector $x(t)$ at time t, a vector-valued function $q_i(t) \in \mathbb{R}^n$ is introduced as
$$q_i(t) = y_i(t) - x(t).$$
The event-triggered mechanism of the CETNES strategy is determined by a vector-valued function $g(t) \in \mathbb{R}^{n^2}$, defined as
$$g(t) = q(t_k) - q(t), \quad t \in [t_k, t_{k+1}),$$
where $q(t) = y(t) - 1_n \otimes x(t) \in \mathbb{R}^{n^2}$, with $1_n$ denoting an n-dimensional column vector whose elements are 1, and $y(t) = [y_1^T(t), y_2^T(t), \ldots, y_n^T(t)]^T \in \mathbb{R}^{n^2}$. In addition, $g(t) = [g_1^T(t), g_2^T(t), \ldots, g_n^T(t)]^T \in \mathbb{R}^{n^2}$, with $g_i(t) = q_i(t_k) - q_i(t) \in \mathbb{R}^n$.
The event-triggering condition is based on the value relation between $\|q(t)\|$ and $\|g(t)\|$. When the event-triggering condition is satisfied, the event is triggered and the next event-triggering time instant $t_{k+1}$ is reached. The centralized event-triggering condition is given as
$$t_{k+1} = \inf\{t : t > t_k, \|g(t)\| > \rho \|q(t)\|\}, \tag{5}$$
where $\rho$ is a positive parameter to be determined.
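Condition (5) can be read as: broadcast only when the deviation $g(t) = q(t_k) - q(t)$ grows beyond $\rho \|q(t)\|$. A discrete-time sketch (the decaying trajectory for $q(t)$ is a stand-in for the closed-loop contraction proved in Theorem 1; all constants are illustrative):

```python
import numpy as np

rho, dt = 0.3, 1e-3
q = np.array([1.0, -0.8, 0.5])
q_held = q.copy()            # q(t_k): value at the latest event instant
events = 0
for step in range(5000):
    q = q * (1.0 - 0.9 * dt)                  # q(t) contracts toward zero
    g = q_held - q                            # deviation since the last event
    if np.linalg.norm(g) > rho * np.linalg.norm(q):
        q_held = q.copy()                     # event: re-sample and broadcast
        events += 1

assert 0 < events < 5000   # triggers occur, but far fewer than time steps
```

The count of events stays far below the number of integration steps, which is exactly the communication saving the event-triggered mechanism is designed to deliver.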
Remark 3. 
Compared with the NE-seeking strategy in [6], the proposed CETNES strategy does not require continuous-time information exchange among neighboring players. Therefore, the proposed CETNES strategy reduces the communication consumption.
The convergence properties of the proposed CETNES strategy are given by the following theorem.
Theorem 1. 
Consider non-cooperative games played by heterogeneous MASs following the CETNES strategy (4) and the centralized event-triggering condition (5). Suppose that Assumptions 1–3 are satisfied. Then, there exists a positive constant $\rho^*$ such that for each $\rho \in (0, \rho^*)$, the NE is asymptotically achieved.
Proof of Theorem 1. 
Firstly, based on the CETNES strategy (4), when  t [ t k , t k + 1 ) , the dynamics of the heterogeneous MASs are formulated as
$$\dot{x}_i(t) = -\varphi_i(y_i(t)), \quad i \in S_m,$$
$$\dot{x}_i(t) = v_i(t), \quad \dot{v}_i(t) = -r\big(v_i(t) + \varphi_i(y_i(t))\big), \quad i \in S_{n-m},$$
$$\dot{y}_{ij}(t) = -\theta \Big[ \sum_{q=1}^{n} a_{iq} \big(y_{ij}(t_k) - y_{qj}(t_k)\big) + a_{ij} \big(y_{ij}(t_k) - x_j(t_k)\big) \Big], \quad i \in S_n. \tag{6}$$
Before proceeding, some notation is introduced for convenience. Let $y_s(t) = [y_1^T(t), y_2^T(t), \ldots, y_m^T(t)]^T \in \mathbb{R}^{nm}$ and $y_d(t) = [y_{m+1}^T(t), y_{m+2}^T(t), \ldots, y_n^T(t)]^T \in \mathbb{R}^{n^2 - nm}$ denote the estimations of the single-integrator and double-integrator agents, respectively. Let $x_s(t) = [x_1(t), x_2(t), \ldots, x_m(t)]^T \in \mathbb{R}^m$ and $x_d(t) = [x_{m+1}(t), x_{m+2}(t), \ldots, x_n(t)]^T \in \mathbb{R}^{n-m}$ denote the position vectors of the single-integrator and double-integrator agents, respectively. Moreover, let $y(t) = [y_s^T(t), y_d^T(t)]^T \in \mathbb{R}^{n^2}$ and $x(t) = [x_s^T(t), x_d^T(t)]^T \in \mathbb{R}^n$. Therefore, the corresponding vector form of (6) is given as
$$\dot{x}(t) = \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix}, \quad \dot{v}_d(t) = -r\big(v_d(t) + \varphi_d(y_d(t))\big), \quad \dot{y}(t) = -\theta \big(L \otimes I_{n \times n} + A\big)\big(g(t) + q(t)\big), \tag{7}$$
where $v_d(t) = [v_{m+1}(t), v_{m+2}(t), \ldots, v_n(t)]^T$, and the matrix A is defined as $A = \operatorname{diag}\{a_{ij}\} \in \mathbb{R}^{n^2 \times n^2}$ for $i, j \in S_n$. In addition, $\varphi_s(y_s(t))$ and $\varphi_d(y_d(t))$ are defined as
$$\varphi_s(y_s(t)) = \big[\varphi_1(y_1(t)), \varphi_2(y_2(t)), \ldots, \varphi_m(y_m(t))\big]^T \in \mathbb{R}^m, \quad \varphi_d(y_d(t)) = \big[\varphi_{m+1}(y_{m+1}(t)), \varphi_{m+2}(y_{m+2}(t)), \ldots, \varphi_n(y_n(t))\big]^T \in \mathbb{R}^{n-m}.$$
To prove Theorem 1, the Lyapunov candidate function is given as
V ( t ) = V 1 ( t ) + V 2 ( t ) ,
where
$$V_1(t) = \frac{1}{2}\big(x_s(t) - x_s^*\big)^T \big(x_s(t) - x_s^*\big) + \frac{1}{2}\Big(x_d(t) - x_d^* + \frac{1}{r}v_d(t)\Big)^T \Big(x_d(t) - x_d^* + \frac{1}{r}v_d(t)\Big) + \frac{1}{2r^2} v_d^T(t) v_d(t), \tag{9}$$
$$V_2(t) = q^T(t) P q(t), \tag{10}$$
with the matrix $P \in \mathbb{R}^{n^2 \times n^2}$ defined as $P = (1/2)(L \otimes I_{n \times n} + A)$. From Assumption 1, one knows that $\mathcal{G}$ is connected and undirected. Hence, L is symmetric and positive semi-definite. In addition, A is a positive semi-definite diagonal matrix. Hence, $L \otimes I_{n \times n} + A$ is symmetric and positive semi-definite. Additionally, in [37], it is proved that $L \otimes I_{n \times n} + A$ is non-singular for a connected and undirected graph. Therefore, the matrix $L \otimes I_{n \times n} + A$ is symmetric and positive definite. Define the matrix Q as $Q = P(L \otimes I_{n \times n} + A) + (L \otimes I_{n \times n} + A)P$. By Assumption 1, Q is also a symmetric positive definite matrix [6]. It is evident that $V_1(t) \geq 0$ and $V_2(t) \geq 0$. Hence, $V(t) = V_1(t) + V_2(t) \geq 0$. Therefore, if $\dot{V}_1(t) + \dot{V}_2(t) \leq 0$, then Theorem 1 is proved.
The time derivative of  V 1 ( t )  is given as
$$\begin{aligned} \dot{V}_1(t) &= \big(x_s(t) - x_s^*\big)^T \dot{x}_s(t) + \Big(x_d(t) - x_d^* + \frac{1}{r}v_d(t)\Big)^T \Big(\dot{x}_d(t) + \frac{1}{r}\dot{v}_d(t)\Big) + \frac{1}{r^2} v_d^T(t) \dot{v}_d(t) \\ &= -\big(x_s(t) - x_s^*\big)^T \varphi_s(y_s(t)) - \big(x_d(t) - x_d^*\big)^T \varphi_d(y_d(t)) - \frac{1}{r} v_d^T(t) v_d(t) - \frac{2}{r} v_d^T(t) \varphi_d(y_d(t)) \\ &= -\big(x(t) - x^*\big)^T \varphi_y(y(t)) - \frac{1}{r}\big(v_d(t) + \varphi_d(y_d(t))\big)^T \big(v_d(t) + \varphi_d(y_d(t))\big) + \frac{1}{r} \varphi_d^T(y_d(t)) \varphi_d(y_d(t)), \end{aligned}$$
where $\varphi_y(y(t)) = [\varphi_s^T(y_s(t)), \varphi_d^T(y_d(t))]^T \in \mathbb{R}^n$. Since $\varphi(x^*) = 0_n$, one further has
$$\begin{aligned} \dot{V}_1(t) = {} & -\big(x(t) - x^*\big)^T \big(\varphi_y(y(t)) - \varphi(x(t))\big) - \big(x(t) - x^*\big)^T \big(\varphi(x(t)) - \varphi(x^*)\big) \\ & - \frac{1}{r}\big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 + \frac{1}{r}\big\|\varphi_d(y_d(t))\big\|^2. \end{aligned}$$
Under Assumption 2, one has $(x(t) - x^*)^T(\varphi(x(t)) - \varphi(x^*)) \geq m\|x(t) - x^*\|^2$. Moreover, under Assumption 3, one obtains $|\varphi_i(y_i(t)) - \varphi_i(x(t))| \leq \bar{l}_i \|y_i(t) - x(t)\|$, where $\bar{l}_i$ is the Lipschitz constant of $\varphi_i(x(t))$. Therefore, $\|\varphi_y(y(t)) - \varphi(x(t))\| \leq l \|y(t) - 1_n \otimes x(t)\| = l \|q(t)\|$, where $l = \max\{\bar{l}_i\}$ for $i \in S_n$. Hence,
$$\dot{V}_1(t) \leq l \|q(t)\| \|x(t) - x^*\| - m \|x(t) - x^*\|^2 - \frac{1}{r}\big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 + \frac{1}{r}\big\|\varphi_d(y_d(t))\big\|^2. \tag{11}$$
As for  V 2 ( t ) , the time derivative  V ˙ 2 ( t )  is given as
$$\dot{V}_2(t) = \big(\dot{y}(t) - 1_n \otimes \dot{x}(t)\big)^T P q(t) + q^T(t) P \big(\dot{y}(t) - 1_n \otimes \dot{x}(t)\big) = 2 q^T(t) P \dot{y}(t) - 2 q^T(t) P \big(1_n \otimes \dot{x}(t)\big).$$
From Equation (7), one obtains
$$\dot{x}(t) = \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix} = -\varphi_y(y(t)) + \begin{bmatrix} 0_m \\ v_d(t) + \varphi_d(y_d(t)) \end{bmatrix}, \tag{12}$$
where $0_m = [0, 0, \ldots, 0]^T \in \mathbb{R}^m$. Therefore,
$$\begin{aligned} \dot{V}_2(t) = {} & 2 q^T(t) P \big(1_n \otimes \varphi_y(y(t))\big) - 2 q^T(t) P \Big(1_n \otimes \begin{bmatrix} 0_m \\ v_d(t) + \varphi_d(y_d(t)) \end{bmatrix}\Big) - \theta q^T(t) Q \big(q(t) + g(t)\big) \\ = {} & 2 q^T(t) P \big(1_n \otimes (\varphi_y(y(t)) - \varphi(x(t)))\big) + 2 q^T(t) P \big(1_n \otimes \varphi(x(t))\big) - 2 q^T(t) P \Big(1_n \otimes \begin{bmatrix} 0_m \\ v_d(t) + \varphi_d(y_d(t)) \end{bmatrix}\Big) - \theta q^T(t) Q \big(q(t) + g(t)\big) \\ \leq {} & 2 \|q(t)\| \|P\| \big\|1_n \otimes (\varphi_y(y(t)) - \varphi(x(t)))\big\| + 2 \|q(t)\| \|P\| \big\|1_n \otimes \varphi(x(t))\big\| + 2 \|q(t)\| \|P\| \big\|1_n \otimes (v_d(t) + \varphi_d(y_d(t)))\big\| \\ & - \theta \lambda_{\min}(Q) \|q(t)\|^2 + \theta \|Q\| \|q(t)\| \|g(t)\|. \end{aligned}$$
By Assumption 3, one has $\|\varphi_y(y(t)) - \varphi(x(t))\| \leq l \|q(t)\|$. Therefore, $\|1_n \otimes (\varphi_y(y(t)) - \varphi(x(t)))\| \leq \sqrt{n}\, \|\varphi_y(y(t)) - \varphi(x(t))\| \leq l \sqrt{n}\, \|q(t)\|$. Moreover, since $\varphi(x^*) = 0_n$, one has $\|1_n \otimes \varphi(x(t))\| \leq \sqrt{n}\, \|\varphi(x(t)) - \varphi(x^*)\| \leq l \sqrt{n}\, \|x(t) - x^*\|$. In addition, $\|1_n \otimes (v_d(t) + \varphi_d(y_d(t)))\| \leq \sqrt{n}\, \|v_d(t) + \varphi_d(y_d(t))\|$. Hence, one obtains
$$\begin{aligned} \dot{V}_2(t) \leq {} & 2 l \sqrt{n}\, \|P\| \|q(t)\|^2 + 2 l \sqrt{n}\, \|P\| \|q(t)\| \|x(t) - x^*\| - \theta \lambda_{\min}(Q) \|q(t)\|^2 \\ & + 2 \sqrt{n}\, \|P\| \|q(t)\| \big\|v_d(t) + \varphi_d(y_d(t))\big\| + \theta \|Q\| \|q(t)\| \|g(t)\|. \end{aligned} \tag{13}$$
Adding inequalities (11) and (13), one has
$$\begin{aligned} \dot{V}(t) = \dot{V}_1(t) + \dot{V}_2(t) \leq {} & -m \|x(t) - x^*\|^2 - \frac{1}{r}\big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 - \theta \lambda_{\min}(Q) \|q(t)\|^2 + l \|q(t)\| \|x(t) - x^*\| \\ & + 2 l \sqrt{n}\, \|P\| \|q(t)\| \|x(t) - x^*\| + 2 l \sqrt{n}\, \|P\| \|q(t)\|^2 + 2 \sqrt{n}\, \|P\| \|q(t)\| \big\|v_d(t) + \varphi_d(y_d(t))\big\| \\ & + \theta \|Q\| \|q(t)\| \|g(t)\| + \frac{1}{r}\big\|\varphi_d(y_d(t))\big\|^2. \end{aligned} \tag{14}$$
Based on Lyapunov stability theory, if the inequality $\dot{V}(t) \leq 0$ is proved, then Theorem 1 is proved. By observing inequality (14), one sees that it contains both negative and positive terms and is too complicated to analyze directly. Therefore, it needs to be simplified. For further investigation, Young's inequality [38] is introduced as
$$|x y| \leq \frac{\gamma}{2} x^2 + \frac{1}{2\gamma} y^2, \quad \forall x, y \in \mathbb{R}, \; \gamma \in \mathbb{R}^+.$$
By reformulating the positive terms $l \|q(t)\| \|x(t) - x^*\|$, $2 l \sqrt{n}\, \|P\| \|q(t)\| \|x(t) - x^*\|$, and $2 \sqrt{n}\, \|P\| \|q(t)\| \|v_d(t) + \varphi_d(y_d(t))\|$ via Young's inequality, inequality (14) is reformulated as
$$\begin{aligned} \dot{V}(t) \leq {} & -\Big(m - \frac{l}{2\gamma_1} - \frac{l h}{2\gamma_2}\Big) \|x(t) - x^*\|^2 - \Big(\theta \lambda_{\min}(Q) - l h - \frac{l \gamma_1}{2} - \frac{l h \gamma_2}{2} - \frac{h \gamma_3}{2}\Big) \|q(t)\|^2 \\ & - \Big(\frac{1}{r} - \frac{h}{2\gamma_3}\Big) \big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 + \theta \|Q\| \|q(t)\| \|g(t)\| + \frac{1}{r}\big\|\varphi_d(y_d(t))\big\|^2, \end{aligned} \tag{15}$$
where $\gamma_1$, $\gamma_2$, and $\gamma_3$ are positive constants to be determined, and the constant h is defined as $h = 2\sqrt{n}\, \|P\|$ for convenience in algebraic computation. In addition,
$$\big\|\varphi_d(y_d(t))\big\|^2 = \big\|\varphi_d(y_d(t)) - \varphi_d(1_{n-m} \otimes x(t)) + \varphi_d(1_{n-m} \otimes x(t)) - \varphi_d(1_{n-m} \otimes x^*)\big\|^2 \leq 2 \big\|\varphi_d(y_d(t)) - \varphi_d(1_{n-m} \otimes x(t))\big\|^2 + 2 \big\|\varphi_d(1_{n-m} \otimes x(t)) - \varphi_d(1_{n-m} \otimes x^*)\big\|^2.$$
By Assumption 3, it is derived that
$$\big\|\varphi_d(y_d(t))\big\|^2 \leq 2 l_d \big\|y_d(t) - 1_{n-m} \otimes x(t)\big\|^2 + 2 l_d (n-m) \|x(t) - x^*\|^2 \leq 2 l_d \|q(t)\|^2 + 2 l_d (n-m) \|x(t) - x^*\|^2,$$
where $l_d = \max\{\bar{l}_i^2\}$ for $i \in S_{n-m}$. Moreover, under the event-triggering condition (5), one has $\|g(t)\| \leq \rho \|q(t)\|$. Therefore, inequality (15) is reformulated as
$$\begin{aligned} \dot{V}(t) \leq {} & -\Big(m - \frac{l}{2\gamma_1} - \frac{l h}{2\gamma_2} - \frac{2 l_d (n-m)}{r}\Big) \|x(t) - x^*\|^2 - \Big(\theta \lambda_{\min}(Q) - l h - \frac{l \gamma_1}{2} - \frac{l h \gamma_2}{2} - \frac{h \gamma_3}{2} - \frac{2 l_d}{r}\Big) \|q(t)\|^2 \\ & - \Big(\frac{1}{r} - \frac{h}{2\gamma_3}\Big) \big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 + \theta \|Q\| \|q(t)\| \|g(t)\|. \end{aligned}$$
For convenience in algebraic computation, define the parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ as
$$\alpha_1 = m - \frac{l}{2\gamma_1} - \frac{l h}{2\gamma_2} - \frac{2 l_d (n-m)}{r}, \quad \alpha_2 = \theta \lambda_{\min}(Q) - \frac{2 l_d}{r} - l h - \frac{l \gamma_1}{2} - \frac{l h \gamma_2}{2} - \frac{h \gamma_3}{2}, \quad \alpha_3 = \frac{1}{r} - \frac{h}{2\gamma_3}.$$
Then, one obtains
$$\dot{V}(t) \leq -\alpha_1 \|x(t) - x^*\|^2 - \alpha_2 \|q(t)\|^2 - \alpha_3 \big\|v_d(t) + \varphi_d(y_d(t))\big\|^2 + \rho \theta \|Q\| \|q(t)\|^2. \tag{16}$$
From inequality (16), one sees that if there exist control parameters $\theta$, r, and $\rho$ that guarantee $\alpha_1, \alpha_2, \alpha_3 > 0$ and $\alpha_2 - \rho \theta \|Q\| > 0$, then one obtains $\dot{V}(t) < 0$ and Theorem 1 is proved.
First, we prove that there exists a positive control parameter r that guarantees $\alpha_1 > 0$. Since m, l, and h are positive constants, there exist positive $\gamma_1$ and $\gamma_2$ that guarantee $m - l/(2\gamma_1) - l h/(2\gamma_2) > 0$. For fixed $\gamma_1$ and $\gamma_2$, define a positive constant $r^*$ that makes $\alpha_1 = 0$. It is derived that
$$r^* = \frac{4 l_d \gamma_1 \gamma_2 (n-m)}{2 m \gamma_1 \gamma_2 - l \gamma_2 - l h \gamma_1}.$$
Hence, for any $r > r^*$, the inequality $\alpha_1 > 0$ holds.
Then, for a fixed r, since h is a positive constant and $\alpha_3 = 1/r - h/(2\gamma_3)$, there exists a positive $\gamma_3$ that guarantees $\alpha_3 > 0$; in particular, any $\gamma_3 > r h$ suffices.
Moreover, we prove that for fixed $\gamma_1$, $\gamma_2$, $\gamma_3$, and r, there exists a positive control parameter $\theta$ that guarantees $\alpha_2 > 0$. For fixed $\gamma_1$, $\gamma_2$, $\gamma_3$, and r, define a positive constant $\theta^*$ that makes $\alpha_2 = 0$. It is derived that
$$\theta^* = \frac{2 r l h + 4 l_d + r l \gamma_1 + r l h \gamma_2 + r h \gamma_3}{2 r \lambda_{\min}(Q)}.$$
Hence, for any $\theta > \theta^*$, the inequality $\alpha_2 > 0$ holds.
Finally, we prove that for fixed $\gamma_1$, $\gamma_2$, $\gamma_3$, r, and $\theta$, there exists a positive control parameter $\rho$ that guarantees $\alpha_2 - \rho \theta \|Q\| > 0$. For fixed $\gamma_1$, $\gamma_2$, $\gamma_3$, r, and $\theta$, define a positive constant $\rho^*$ that makes $\alpha_2 - \rho^* \theta \|Q\| = 0$. It is derived that
$$\rho^* = \frac{\alpha_2}{\theta \|Q\|} = \frac{2 r \theta \lambda_{\min}(Q) - 2 r l h - 4 l_d - r l \gamma_1 - r l h \gamma_2 - r h \gamma_3}{2 r \theta \|Q\|} > 0.$$
Thus, for any $\rho \in (0, \rho^*)$, the inequality $\alpha_2 - \rho \theta \|Q\| > 0$ holds. Hence, with $\alpha_1, \alpha_2, \alpha_3 > 0$, it is derived that $\dot{V}(t) < 0$.
Therefore, for properly chosen $\gamma_1$, $\gamma_2$, $\gamma_3$, r, and $\theta$, under the event-triggering condition (5), if $\rho \in (0, \rho^*)$, then $\dot{V}(t) < 0$. In addition, following the proof above, it is evident that $V(t)$ is positive definite. Through Lyapunov stability theory [39], one obtains that the NE is achieved asymptotically, with $\|x(t) - x^*\| \to 0$, $\|v_d(t)\| \to 0$, and $\|q(t)\| \to 0$ as $t \to \infty$. Thus, the proof is complete. □
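The parameter-selection order in the proof ($\gamma_1, \gamma_2 \to r^*$, then $\gamma_3$, then $\theta^*$, then $\rho^*$) can be evaluated numerically. A sketch under assumed game constants (the values of m, l, and $l_d$ below, and the small path-graph topology, are placeholders for illustration, not quantities from the paper):

```python
import numpy as np

n, n_dbl = 4, 2                    # n agents, of which (n - m) = n_dbl are double integrators
m_c, l, l_d = 1.0, 2.0, 4.0        # monotonicity and Lipschitz constants (assumed)

# Small undirected path graph standing in for the communication topology.
Lap = np.array([[1., -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]])
Adj = np.diag(np.diag(Lap)) - Lap                   # adjacency: A = D - L
M = np.kron(Lap, np.eye(n)) + np.diag(Adj.flatten())  # L (x) I + A
P = 0.5 * M
Q = P @ M + M @ P
h = 2 * np.sqrt(n) * np.linalg.norm(P, 2)

gamma1, gamma2 = 10 * l / m_c, 10 * l * h / m_c     # keep m - l/(2g1) - l*h/(2g2) > 0
r_star = 4 * l_d * gamma1 * gamma2 * n_dbl / (2 * m_c * gamma1 * gamma2 - l * gamma2 - l * h * gamma1)
r = 2 * r_star
gamma3 = 2 * r * h                                  # gamma3 > r*h  =>  alpha3 > 0
lam_min_Q = np.linalg.eigvalsh(Q).min()
theta_star = (2 * r * l * h + 4 * l_d + r * l * gamma1 + r * l * h * gamma2 + r * h * gamma3) / (2 * r * lam_min_Q)
theta = 2 * theta_star
alpha2 = theta * lam_min_Q - 2 * l_d / r - l * h - l * gamma1 / 2 - l * h * gamma2 / 2 - h * gamma3 / 2
rho_star = alpha2 / (theta * np.linalg.norm(Q, 2))
assert r_star > 0 and theta_star > 0 and rho_star > 0
```

Any $\rho \in (0, \rho^*)$ computed this way keeps all four coefficients in (16) strictly negative-definite.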
Theorem 2. 
Consider non-cooperative games played by heterogeneous MASs following the CETNES strategy (4) and the centralized event-triggering condition (5). Suppose that Assumptions 1–3 are satisfied. Then, there exists no Zeno behavior.
Proof of Theorem 2. 
If the lower bound of the inter-event time interval $t_{k+1} - t_k$ can be determined and the inequality $t_{k+1} - t_k > 0$ is always satisfied, then Theorem 2 is proved. Since the centralized event-triggering condition is based on the value relation between $\|g(t)\|$ and $\|q(t)\|$, we investigate the time derivative of $\|g(t)\|$. For $t \in [t_k, t_{k+1})$, one has $\dot{g}(t) = -\dot{q}(t)$. Hence,
$$\frac{d \|g(t)\|}{d t} \leq \|\dot{g}(t)\| = \big\|\dot{y}(t) - 1_n \otimes \dot{x}(t)\big\| \leq \|\dot{y}(t)\| + \sqrt{n}\, \|\dot{x}(t)\| \leq c \|q(t_k)\| + \sqrt{n}\, \left\| \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix} \right\|,$$
where c is a positive constant defined as $c = \theta \|L \otimes I_{n \times n} + A\|$. Moreover, by Assumptions 2 and 3, it is derived that
$$\left\| \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix} \right\|^2 = \big\|\varphi_s(y_s(t))\big\|^2 + \|v_d(t)\|^2 \leq 2 l_s \big\|y_s(t) - 1_m \otimes x(t)\big\|^2 + 2 l_s m \|x(t) - x^*\|^2 + \|v_d(t)\|^2 \leq 2 l_s \|q(t)\|^2 + 2 l_s m \|x(t) - x^*\|^2 + \|v_d(t)\|^2, \tag{17}$$
where $l_s = \max\{\bar{l}_i^2\}$ for $i \in S_m$. From Equation (9), it is derived that
$$V_1(t) = \frac{1}{2}\|x_s(t) - x_s^*\|^2 + \frac{1}{2}\Big\|x_d(t) - x_d^* + \frac{1}{r}v_d(t)\Big\|^2 + \frac{1}{2r^2}\|v_d(t)\|^2.$$
Since $\|x_d(t) - x_d^* + v_d(t)/r\|^2 \geq \frac{1}{2}\big(\|x_d(t) - x_d^*\|^2 + \|v_d(t)/r\|^2\big)$, one further has
$$V_1(t) \geq \frac{1}{2}\|x_s(t) - x_s^*\|^2 + \frac{1}{4}\|x_d(t) - x_d^*\|^2 + \frac{1}{4r^2}\|v_d(t)\|^2 + \frac{1}{2r^2}\|v_d(t)\|^2 \geq \frac{1}{4}\|x_s(t) - x_s^*\|^2 + \frac{1}{4}\|x(t) - x^*\|^2 + \frac{3}{4r^2}\|v_d(t)\|^2.$$
From Equation (10), it is derived that $V_2(t) \geq \lambda_{\min}(P) \|q(t)\|^2$. Hence, one obtains
$$V(t) = V_1(t) + V_2(t) \geq \frac{1}{4}\|x_s(t) - x_s^*\|^2 + \frac{1}{4}\|x(t) - x^*\|^2 + \frac{3}{4r^2}\|v_d(t)\|^2 + \lambda_{\min}(P) \|q(t)\|^2. \tag{18}$$
By comparing inequalities (17) and (18), one sees that they consist of similar terms $\|q(t)\|^2$, $\|x(t) - x^*\|^2$, and $\|v_d(t)\|^2$ with different coefficients. Therefore, a value relation between $V(t)$ and $\|\varphi_s(y_s(t))\|^2 + \|v_d(t)\|^2$ is established. Define $\eta_1 = \max\{2 l_s, 2 l_s m, 1\}$ and $\eta_2 = \min\{\lambda_{\min}(P), 3/(4r^2), 1/4\}$. Thus, one obtains
$$\left\| \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix} \right\|^2 \leq \eta_1 \big(\|q(t)\|^2 + \|x(t) - x^*\|^2 + \|v_d(t)\|^2\big) \leq \eta_1 \frac{V(t)}{\eta_2}.$$
Hence,
$$\left\| \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix} \right\| \leq \sqrt{\eta_1 \frac{V(t)}{\eta_2}} = \sqrt{\eta V(t)},$$
where $\eta = \eta_1 / \eta_2$. Moreover, as proved in Theorem 1, $\dot{V}(t) < 0$. Hence, for $t \in [t_k, t_{k+1})$, $V(t) \leq V(t_k)$. Thus, one obtains
$$\|\dot{g}(t)\| \leq \sqrt{n \eta V(t)} + c \|q(t_k)\| \leq \sqrt{n \eta V(t_k)} + c \|q(t_k)\|,$$
where $V(t_k)$ is a positive constant for $t \in [t_k, t_{k+1})$. Hence,
$$\|g(t_{k+1})\| \leq \int_{t_k}^{t_{k+1}} \|\dot{g}(s)\|\, d s \leq \Big(\sqrt{n \eta V(t_k)} + c \|q(t_k)\|\Big)(t_{k+1} - t_k).$$
Furthermore, under the event-triggering condition (5), for $t \in [t_k, t_{k+1})$, $\|q(t_k)\| - \|q(t)\| \leq \|g(t)\| \leq \rho \|q(t)\|$. Therefore, $\|q(t)\| \geq \|q(t_k)\| / (1 + \rho)$. Considering the next event-triggering time instant $t_{k+1}$, one has $\|g(t_{k+1})\| > \rho \|q(t_{k+1})\| \geq \rho \|q(t_k)\| / (1 + \rho)$. Then, it is derived that
$$\Big(\sqrt{n \eta V(t_k)} + c \|q(t_k)\|\Big)(t_{k+1} - t_k) > \frac{\rho \|q(t_k)\|}{1 + \rho}.$$
Thus, the lower bound of the inter-event time interval is summarized as
$$t_{k+1} - t_k > \frac{\rho \|q(t_k)\|}{(1 + \rho)\big(\sqrt{n \eta V(t_k)} + c \|q(t_k)\|\big)} > 0,$$
which proves the nonexistence of Zeno behavior. □
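The lower bound on the inter-event interval can be evaluated numerically for concrete constants. A sketch with illustrative values (the constants $\eta$, c, $V(t_k)$, and $\|q(t_k)\|$ below are assumptions, not quantities from the paper):

```python
import numpy as np

rho, eta, n = 0.3, 4.0, 6
c, V_k, q_k = 12.0, 2.5, 0.8   # c = theta * ||L (x) I + A||; V(t_k); ||q(t_k)||

# Lower bound on t_{k+1} - t_k from the proof of Theorem 2.
tau = rho * q_k / ((1 + rho) * (np.sqrt(n * eta * V_k) + c * q_k))
assert tau > 0.0   # strictly positive inter-event time: no Zeno behavior
```

As $\|q(t_k)\| \to 0$ the bound degrades, but it stays strictly positive for every event index k, which is exactly what rules out Zeno behavior.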

3.2. DETNES Strategy

As presented above, the CETNES strategy (4) demands global information to determine whether condition (5) is satisfied. However, in physical implementations, agents in MASs may not have access to global information. In this subsection, a DETNES strategy is proposed to solve the NE-seeking problem in a fully distributed manner. By adopting the DETNES strategy, each agent determines its own event-triggering time instants based only on local information and information from its neighbors. Let $t_k^i$ denote the kth event-triggering time instant of agent i. For agent i, when $t \in [t_k^i, t_{k+1}^i)$, the DETNES strategy is proposed as
$$u_i(t) = -\varphi_i(y_i(t)), \quad i \in S_m,$$
$$u_i(t) = -r\big(v_i(t) + \varphi_i(y_i(t))\big), \quad i \in S_{n-m},$$
$$\dot{y}_{ij}(t) = -\theta \Big[ \sum_{q=1}^{n} a_{iq} \big(y_{ij}(t_k^i) - y_{qj}(t_{k'}^q)\big) + a_{ij} \big(y_{ij}(t_k^i) - x_j(t_{k'}^j)\big) \Big], \quad i \in S_n, \tag{19}$$
where $t_{k'}^j \in [t_k^i, t_{k+1}^i)$ denotes the latest event-triggering time instant of agent j when $t \in [t_k^i, t_{k+1}^i)$.
Before proceeding, some distributed measurements are introduced to construct the event-triggering condition for the DETNES strategy (19). The definitions of several decentralized measurements are given as
$$e_{ij}^y(t) = y_{ij}(t_k^i) - y_{ij}(t), \quad i, j \in S_n, \; t \in [t_k^i, t_{k+1}^i),$$
$$e_i^x(t) = x_i(t_k^i) - x_i(t), \quad i \in S_n, \; t \in [t_k^i, t_{k+1}^i),$$
$$e_{ij}^x(t) = x_j(t_{k'}^j) - x_j(t), \quad i, j \in S_n, \; t \in [t_k^i, t_{k+1}^i),$$
and
$$e_{ij}^{xy}(t) = y_{ij}(t_k^i) - y_{ij}(t) - x_j(t_{k'}^j) + x_j(t) = e_{ij}^y(t) - e_{ij}^x(t), \quad i \in S_n, \; j \in N_i, \; t \in [t_k^i, t_{k+1}^i),$$
where  N i  denotes the set of neighbors for agent i.
Define $z(t) = (L \otimes I_{n \times n} + A) q(t) \in \mathbb{R}^{n^2}$ and $z_i(t_k^i) = [z_{i1}(t_k^i), z_{i2}(t_k^i), \ldots, z_{in}(t_k^i)]^T \in \mathbb{R}^n$. For agent i, the elements of the measurements $z_i(t)$ and $z_i(t_k^i)$ are given as
$$z_{ij}(t) = \sum_{q=1}^{n} a_{iq} \big(y_{ij}(t) - y_{qj}(t)\big) + a_{ij} \big(y_{ij}(t) - x_j(t)\big),$$
and
$$z_{ij}(t_k^i) = \sum_{q=1}^{n} a_{iq} \big(y_{ij}(t_k^i) - y_{qj}(t_{k'}^q)\big) + a_{ij} \big(y_{ij}(t_k^i) - x_j(t_{k'}^j)\big).$$
For agent i, define a decentralized measurement $\varepsilon_i(t) \in \mathbb{R}^n$ as
$$\varepsilon_i(t) = z_i(t_k^i) - z_i(t),$$
with the corresponding elements being
$$\varepsilon_{ij}(t) = \sum_{q=1}^{n} a_{iq} \big(e_{ij}^y(t) - e_{qj}^y(t)\big) + a_{ij} \big(e_{ij}^y(t) - e_{ij}^x(t)\big).$$
Thus, the decentralized event-triggering condition for agent i is given as
$$t_{k+1}^i = \inf\{t : t > t_k^i, \|\varepsilon_i(t)\| > \rho \|z_i(t)\|\}. \tag{20}$$
The transmission of information and the updating of the controller for agent i, along with the decentralized measurements  z i ( t )  and  ε i ( t )  under the event-triggered mechanism, are presented in Figure 1.
Remark 4. 
Unlike the measurements $g(t)$ and $q(t)$, the measurements $\varepsilon_i(t)$ and $z_i(t)$ are computable for agent i without utilizing global information. Therefore, the DETNES strategy (19) solves the NE-seeking problem for non-cooperative games played by heterogeneous MASs in a fully distributed manner.
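The decentralized measurements behind condition (20) only combine agent i's own samples with its neighbors' latest broadcasts. A minimal sketch (all numerical values are illustrative, not from the experiments):

```python
import numpy as np

n = 3
Adj = np.array([[0., 1, 1], [1, 0, 1], [1, 1, 0]])
rng = np.random.default_rng(1)
y = rng.standard_normal((n, n))                    # y[i, j]: agent i's estimate of x_j
x = rng.standard_normal(n)
y_held = y + 0.05 * rng.standard_normal((n, n))    # y sampled at the latest events
x_held = x + 0.05 * rng.standard_normal(n)

def z_row(i, y_mat, x_vec):
    # z_ij = sum_q a_iq (y_ij - y_qj) + a_ij (y_ij - x_j)
    return np.array([
        sum(Adj[i, q] * (y_mat[i, j] - y_mat[q, j]) for q in range(n))
        + Adj[i, j] * (y_mat[i, j] - x_vec[j])
        for j in range(n)])

rho = 0.3
flags = []
for i in range(n):
    z_now = z_row(i, y, x)
    eps_i = z_row(i, y_held, x_held) - z_now       # eps_i = z_i(held) - z_i(t)
    flags.append(np.linalg.norm(eps_i) > rho * np.linalg.norm(z_now))
# flags[i] tells agent i whether to trigger an event at this instant
```

Each agent evaluates its own flag independently, so no global quantity such as $\|q(t)\|$ is ever needed.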
The convergence property of the DETNES strategy (19) is given by the following theorem.
Theorem 3. 
Consider non-cooperative games played by heterogeneous MASs following the DETNES strategy (19) and the decentralized event-triggering condition (20). Suppose that Assumptions 1–3 are satisfied. Then, there exists a positive constant $\rho^*$ such that, for each $\rho \in (0, \rho^*)$, the NE is asymptotically achieved.
Proof. 
The detailed proof is presented in Appendix A. □
Theorem 4. 
Consider non-cooperative games played by heterogeneous MASs following the DETNES strategy (19) and the decentralized event-triggering condition (20). Suppose that Assumptions 1–3 are satisfied. Then, there exists no Zeno behavior.
Proof. 
The detailed proof is presented in Appendix B. □

4. Numerical Experiments and Results

In this section, numerical experiments are conducted to illustrate the effectiveness and efficiency of the proposed CETNES strategy (4) and DETNES strategy (19).
Consider a heterogeneous multi-agent system consisting of six agents, in which agents  1 , 3 , and 5 are single-integrator agents and agents  2 , 4 , and 6 are double-integrator agents. As shown in Figure 2, the corresponding Laplacian matrix  L  of  G  is given as
$$L = \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & -1 & -1 \\ 0 & -1 & 3 & -1 & -1 & 0 \\ 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & -1 & -1 & -1 & 4 & -1 \\ 0 & -1 & 0 & 0 & -1 & 2 \end{bmatrix} \in \mathbb{R}^{6 \times 6}.$$
The cost functions of the agents are defined as
$f_i(x(t)) = \left(x_i(t) - b_i\right)^2 + p_0 x_i(t) \sum_{j \in N_i} x_j(t) + c_0,\ i \in S_n,$
where $b_i$, $p_0$, and $c_0$ are constants. The constants are set as $p_0 = 0.2$, $c_0 = 3$, and $[b_1, b_2, \ldots, b_6] = [5, 5, 7, 7, 9, 9]$. Setting $\varphi(x^*) = 0_n$, the NE is obtained as $x^* = [4.6072, 3.9280, 6.8422, 5.5279, 7.8781, 8.6043]^T$. Moreover, for both the CETNES and DETNES strategies, the initial states of the agents are set as $x(0) = [10, 8, 6, 4, 2, 0]^T$ and $y_i(0) = [0, 0, 0, 0, 0, 0]^T$. In addition, the control parameters $\theta$ and r are set as $\theta = 25$ and $r = 2$, respectively. One can verify that the cost functions (21) satisfy Assumptions 2 and 3, and that the parameters $\theta$ and r satisfy $\theta > \theta^*$ and $r > r^*$.
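Since the cost functions (21) are quadratic, the condition $\varphi(x^*) = 0_n$ is a linear system in $x^*$, so the NE can be computed directly. The sketch below is a minimal illustration under an assumed quadratic cost form and an assumed edge list (both hypothetical here, not taken verbatim from the paper); it checks internal consistency by verifying that the pseudo-gradient vanishes at the computed point, rather than reproducing the exact values reported above.

```python
import numpy as np

# Assumed quadratic game: f_i(x) = (x_i - b_i)^2 + p0 * x_i * sum_{j in N_i} x_j + c0.
# Its pseudo-gradient phi_i(x) = 2(x_i - b_i) + p0 * sum_{j in N_i} x_j vanishes
# at the NE, which gives the linear system (2I + p0 * Adj) x = 2b.
p0 = 0.2
b = np.array([5.0, 5.0, 7.0, 7.0, 9.0, 9.0])
edges = [(0, 1), (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 4), (4, 5)]  # assumed topology

adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

x_star = np.linalg.solve(2.0 * np.eye(6) + p0 * adj, 2.0 * b)
phi = 2.0 * (x_star - b) + p0 * adj @ x_star  # pseudo-gradient at the computed point
assert np.allclose(phi, 0.0)                  # phi(x*) = 0_n at the NE
```

Because the system matrix is strictly diagonally dominant, the NE exists and is unique for this cost family, consistent with Assumptions 2 and 3.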

4.1. Experiment Results of CETNES Strategy

In this subsection, numerical experiments on the NE-seeking problem of non-cooperative games (21) with the CETNES strategy (4) are conducted. The corresponding experimental results are presented in Figure 3, Figure 4, and Table 1.
The position trajectories of all agents when using the CETNES strategy (4) with  ρ = 0.6  are illustrated in Figure 3. The corresponding centralized triggering time sequences of all agents are illustrated in Figure 4.
From Figure 3, one sees that, driven by the CETNES strategy (4), the heterogeneous multi-agent system successfully converges to the theoretical NE  x * , as denoted by the dashed lines. Moreover, as illustrated in Figure 4, Zeno behavior does not occur when using the CETNES strategy (4).
Additionally, several numerical experiments of the CETNES strategy (4) with different values of $\rho$ are conducted, and the corresponding results are presented in Table 1. From Table 1, one sees that a larger value of $\rho$ results in fewer trigger events. In addition, it is evident that, as $\rho$ increases, the minimum and maximum inter-event time intervals also increase.

4.2. Experiment Results of DETNES Strategy

In this subsection, numerical experiments on a non-cooperative game (21) with the DETNES strategy (19) are conducted. The corresponding experimental results are presented in Figure 5 and Figure 6. The initial states and corresponding parameters are set to be the same as those in Section 4.1.
The position trajectories of all agents when using the DETNES strategy (19) with  ρ = 0.6  are illustrated in Figure 5. The corresponding decentralized triggering time sequences of all agents are illustrated in Figure 6.

4.3. Comparative Experiments

In this subsection, comparative experiments are conducted between the CETNES strategy (4), the DETNES strategy (19), and the periodic sampling control (PSC) strategy. The PSC strategy is based on the method presented in [28]. In [28], He and Huang introduced an NE-seeking strategy for homogeneous MASs consisting of agents with high-order integrator dynamics. Suppose that, under the PSC strategy, all agents periodically transmit their information to their neighbors at  t p i = p τ , where  τ > 0  is a constant. Then, for agent i, when  t [ t p i , t p + 1 i ) , a PSC strategy is formulated as follows:
$u_i(t) = -\varphi_i(y_i(t)),\ i \in S_m,$ $u_i(t) = -r\left(v_i(t) + \varphi_i(y_i(t))\right),\ i \in S_{n-m},$ $\dot{y}_{ij}(t) = -\theta\left[\sum_{q=1}^{n} a_{iq}\left(y_{ij}(t_p^i) - y_{qj}(t_p^i)\right) + a_{ij}\left(y_{ij}(t_p^i) - \eta_j(t_p^i)\right)\right],\ i \in S_n,$
where  η i ( t )  is a fictitious output, which is defined as
$\eta_i(t) = x_i(t),\ i \in S_m,\quad \eta_i(t) = x_i(t) + v_i(t),\ i \in S_{n-m}.$
By introducing the fictitious output $\eta_i(t)$, the PSC strategy (22) ensures that double-integrator agents achieve consensus not only at the position level but also at the velocity level.
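The hold-and-integrate pattern of a periodic sampling scheme can be sketched on a toy problem. This is not the PSC strategy (22) itself: `simulate_psc` is a hypothetical helper that runs a sampled-data gradient flow with a zero-order hold of period $\tau$, which captures why too large a sampling gap destroys convergence.

```python
import numpy as np

def simulate_psc(f_grad, x0, tau=0.01, dt=0.001, T=1.0, theta=25.0):
    """Sampled-data flow x' = -theta * f_grad(x_sampled) (illustrative sketch).

    Neighbor/global information is refreshed only every tau seconds and held
    (zero-order hold) in between, while the state integrates with step dt.
    """
    x = x0.copy()
    x_sampled = x0.copy()
    steps_per_sample = max(1, int(round(tau / dt)))
    for step in range(int(round(T / dt))):
        if step % steps_per_sample == 0:
            x_sampled = x.copy()  # periodic information update at t = p * tau
        x = x + dt * (-theta * f_grad(x_sampled))
    return x

# Usage on a toy quadratic: the gradient of sum_i (x_i - b_i)^2 is 2(x - b),
# so for a sufficiently small tau the sampled flow settles near b.
b = np.array([1.0, -2.0, 3.0])
x_final = simulate_psc(lambda x: 2.0 * (x - b), np.zeros(3))
assert np.allclose(x_final, b, atol=1e-3)
```

With $\tau = 0.01$ s the held error contracts every sampling period; enlarging $\tau$ increases the growth of the hold error within each period until the loop diverges, which mirrors the sensitivity to $\tau$ reported in Table 2.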
In the comparative experiments, the CETNES, DETNES, and PSC strategies are applied to solve the same NE-seeking problem as in Section 4.1 and Section 4.2. For the three strategies, the parameters r and $\theta$, as well as the initial states, are set identical to those in Section 4.1 and Section 4.2. Moreover, the NE is achieved when $\varphi(x(t)) = 0_n$. Hence, $\|\varphi(x(t))\|$ is chosen as the error that measures the convergence performance of the NE-seeking strategies.
Firstly, several experiments using the PSC strategy (22) with different values of  τ  are conducted to find the feasible sampling gap. The corresponding results are presented in Figure 7 and Table 2.
From Figure 7a,b, it is evident that, while the CETNES and DETNES strategies achieve the NE successfully, the PSC strategy (22) fails to converge to the NE when $\tau = 0.016$ s or $\tau = 0.02$ s. Moreover, from Table 2, it can be seen that for the PSC strategy (22) to work, the sampling gap $\tau$ needs to be sufficiently small. Therefore, the sampling gap $\tau$ is set to 0.01 s.
Additionally, comparative experiments between the CETNES, DETNES, and PSC strategies are conducted with  ρ = 0.1  and  τ = 0.01  s. The corresponding results are presented in Figure 8 and Table 3.
In addition, to investigate the influence of the parameter $\rho$, another comparative experiment is conducted with $\rho = 0.6$, and the corresponding results are presented in Table 4. From Table 4, the differences in communication costs for the three strategies are apparent. For the CETNES strategy, the number of trigger events amounts to 144. In contrast, for the DETNES strategy, the number of trigger events varies from 178 to 550 across agents due to its decentralized nature. As for the PSC strategy, the number of periodic samples remains 2000. Compared with the experimental results for $\rho = 0.1$, it is evident that the number of trigger events increases as $\rho$ decreases. Moreover, due to the centralized event-triggered mechanism, for the same $\rho$, the communication consumption of the CETNES strategy is lower than that of the DETNES strategy. However, the DETNES strategy has broader application scenarios, as it does not rely on centralized communication or control.

5. Conclusions

In this paper, the event-triggered NE-seeking problem for non-cooperative games in heterogeneous MASs has been tackled. To solve the NE-seeking problem, novel CETNES and DETNES strategies have been proposed. Theoretical analyses have illustrated the convergence properties of both the CETNES and DETNES strategies, ensuring that both strategies lead to the desired NE without exhibiting Zeno behavior. Through numerical experiments, the effectiveness and efficiency of the proposed strategies have been validated. The experimental results illustrate that both the CETNES and DETNES strategies not only successfully achieve the Nash equilibrium but also significantly reduce the communication consumption among agents. In this paper, the constraints on the agents’ actions are not considered. In practical engineering, heterogeneous MASs may encounter a variety of limitations, and due to their heterogeneous nature, these constraints are more difficult to handle. Generalized (i.e., constrained) Nash equilibrium seeking for heterogeneous MASs may be investigated in our future work.

Author Contributions

Conceptualization, L.H.; methodology, L.H. and H.C.; software, L.H.; validation, H.C. and Y.Z.; formal analysis, H.C. and L.H.; investigation, H.C. and L.H.; resources, L.H. and Y.Z.; data curation, L.H.; writing—original draft preparation, L.H.; writing—review and editing, L.H. and Y.Z.; visualization, L.H. and Y.Z.; supervision, H.C. and Y.Z.; project administration, Y.Z.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the China National Key R&D Program (with number 2022ZD0119602), the National Natural Science Foundation of China (with number 62376290), and Natural Science Foundation of Guangdong Province (with number 2024A1515011016).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors wish to express their sincere thanks to Sun Yat-sen University for its support and assistance in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this paper:
NENash equilibrium
MASsMulti-agent systems
CETNESCentralized Event-Triggered Nash Equilibrium Seeking
DETNESDecentralized Event-Triggered Nash Equilibrium Seeking
PSCPeriodic sampling control

Appendix A

Proof of Theorem 3. 
Based on the DETNES strategy (19), the vector form of the dynamics of heterogeneous agents are formulated as
$\dot{x}(t) = \begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix},\quad \dot{v}_d(t) = -r\left(v_d(t) + \varphi_d(y_d(t))\right),\quad \dot{y}(t) = -\theta\left(z(t) + \varepsilon(t)\right),$
where  ε ( t ) = [ ε 1 T ( t ) , ε 2 T ( t ) , , ε n T ( t ) ] T . Choose the Lyapunov candidate function  V ( t ) = V 1 ( t ) + V 2 ( t )  to be the same as (8) to analyze the convergence properties of the DETNES strategy (19). Following the Proof of Theorem 1, one has
$\dot{V}_2(t) = 2q^T(t)P\left(1_n \otimes \left(\varphi_y(y(t)) - \begin{bmatrix} 0_m \\ v_d(t) + \varphi_d(y_d(t)) \end{bmatrix}\right)\right) + \dot{y}^T(t)Pq(t) + q^T(t)P\dot{y}(t).$
Since, in Equation (A1), $\dot{y}(t) = -\theta(z(t) + \varepsilon(t))$, the main difference between Equations (12) and (A2) lies in the last two terms $\dot{y}^T(t)Pq(t) + q^T(t)P\dot{y}(t)$. Since $z(t) = (L \otimes I_{n \times n} + A)q(t)$, the last two terms of Equation (A2) are formulated as
$\dot{y}^T(t)Pq(t) + q^T(t)P\dot{y}(t) = -\theta q^T(t)\left(L \otimes I_{n \times n} + A\right)\left(z(t) + \varepsilon(t)\right) = -\theta z^T(t)\left(z(t) + \varepsilon(t)\right) = -\theta\sum_{i=1}^{n} z_i^T(t)z_i(t) - \theta\sum_{i=1}^{n} z_i^T(t)\varepsilon_i(t) \le -\theta\sum_{i=1}^{n}\|z_i(t)\|^2 + \theta\sum_{i=1}^{n}\|z_i(t)\|\|\varepsilon_i(t)\|.$
Through Young’s inequality [38], it is derived that
$\dot{y}^T(t)Pq(t) + q^T(t)P\dot{y}(t) \le -\theta\left(1 - \frac{\gamma_4}{2}\right)\sum_{i=1}^{n}\|z_i(t)\|^2 + \frac{\theta}{2\gamma_4}\sum_{i=1}^{n}\|\varepsilon_i(t)\|^2.$
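The step above is an instance of Young's inequality $ab \le \frac{\gamma}{2}a^2 + \frac{1}{2\gamma}b^2$, valid for any $\gamma > 0$. A sketch of its application to the cross term, with $a = \|z_i(t)\|$, $b = \|\varepsilon_i(t)\|$, and $\gamma = \gamma_4$:

```latex
\theta\sum_{i=1}^{n}\|z_i(t)\|\,\|\varepsilon_i(t)\|
  \le \theta\sum_{i=1}^{n}\left(\frac{\gamma_4}{2}\|z_i(t)\|^{2}
      + \frac{1}{2\gamma_4}\|\varepsilon_i(t)\|^{2}\right)
  = \frac{\theta\gamma_4}{2}\sum_{i=1}^{n}\|z_i(t)\|^{2}
      + \frac{\theta}{2\gamma_4}\sum_{i=1}^{n}\|\varepsilon_i(t)\|^{2}.
```

Combining this bound with the $-\theta\sum_{i=1}^{n}\|z_i(t)\|^2$ term yields the coefficient $-\theta(1 - \gamma_4/2)$ in the displayed inequality.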
Under the event-triggering condition (20), the following inequality is obtained:
$\dot{y}^T(t)Pq(t) + q^T(t)P\dot{y}(t) \le -\theta\left(1 - \frac{\gamma_4}{2}\right)\sum_{i=1}^{n}\|z_i(t)\|^2 + \frac{\theta\rho^2}{2\gamma_4}\sum_{i=1}^{n}\|z_i(t)\|^2.$
Following the Proof of Theorem 1, similar to inequality (15), it is derived that
$\dot{V}(t) \le -\left(m - \frac{l}{2\gamma_1} - \frac{lh}{2\gamma_2} - \frac{2l_d}{r}\sqrt{n-m}\right)\|x(t) - x^*\|^2 - \left(\theta\lambda_{\min}(Q) - lh - \frac{l\gamma_1}{2} - \frac{lh\gamma_2}{2} - \frac{h\gamma_3}{2} - \frac{2l_d}{r}\right)\|q(t)\|^2 - \left(\frac{1}{r} - \frac{h}{\gamma_3}\right)\|v_d(t) + \varphi_d(y_d(t))\|^2 - \theta\left(1 - \frac{\gamma_4}{2}\right)\sum_{i=1}^{n}\|z_i(t)\|^2 + \frac{\theta\rho^2}{2\gamma_4}\sum_{i=1}^{n}\|z_i(t)\|^2 \le -\beta_1\|x(t) - x^*\|^2 - \beta_2\|q(t)\|^2 - \beta_3\|v_d(t) + \varphi_d(y_d(t))\|^2 - \beta_4\sum_{i=1}^{n}\|z_i(t)\|^2 + \beta_5\sum_{i=1}^{n}\|z_i(t)\|^2,$
where, for convenience in algebraic computation, parameters  β 1 , β 2 , β 3 β 4 , and  β 5  are defined as
$\beta_1 = m - \frac{l}{2\gamma_1} - \frac{lh}{2\gamma_2} - \frac{2l_d}{r}\sqrt{n-m},\quad \beta_2 = -\frac{2l_d}{r} - lh - \frac{l\gamma_1}{2} - \frac{lh\gamma_2}{2} - \frac{h\gamma_3}{2},\quad \beta_3 = \frac{1}{r} - \frac{h}{\gamma_3},\quad \beta_4 = \theta\left(1 - \frac{\gamma_4}{2}\right),\quad \beta_5 = \frac{\theta\rho^2}{2\gamma_4},$
with m, l, h, and $l_d$ defined the same as in the Proof of Theorem 1.
Choose $\gamma_1$ and $\gamma_2$ such that $m - l/(2\gamma_1) - lh/(2\gamma_2) > 0$. Define a positive constant $r^*$ that makes $\beta_1 = 0$. It is derived that
$r^* = \frac{4l_d\gamma_1\gamma_2\sqrt{n-m}}{2m\gamma_1\gamma_2 - l\gamma_2 - lh\gamma_1}.$
Thus, for $r > r^*$, $\beta_1 > 0$. Choose $\gamma_3$ and $\gamma_4$ such that $1/r - h/\gamma_3 > 0$ and $1 - \gamma_4/2 > 0$; then, the inequalities $\beta_3 > 0$ and $\beta_4 > 0$ hold. Since $\gamma_1, \gamma_2, \gamma_3, r > 0$, it is evident that $\beta_2 < 0$. Therefore, if $-\beta_4\sum_{i=1}^{n}\|z_i(t)\|^2 - \beta_2\|q(t)\|^2 + \beta_5\sum_{i=1}^{n}\|z_i(t)\|^2 < 0$, then $\dot{V}(t) < 0$.
Since $z(t) = (L \otimes I_{n \times n} + A)q(t)$ and the matrix $L \otimes I_{n \times n} + A$ is symmetric and positive definite, one has $q(t) = (L \otimes I_{n \times n} + A)^{-1}z(t)$ and $\|q(t)\|^2 \le \lambda_{\max}^2\sum_{i=1}^{n}\|z_i(t)\|^2$, where $\lambda_{\max}$ denotes the maximum eigenvalue of the matrix $(L \otimes I_{n \times n} + A)^{-1}$. Define a positive constant $\theta^*$ that makes $\lambda_{\max}^2\beta_2 + \beta_4 = 0$; it is derived that
$\theta^* = \frac{\lambda_{\max}^2\left(2rlh + 4l_d + rl\gamma_1 + rlh\gamma_2 + rh\gamma_3\right)}{2r - r\gamma_4}.$
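As a sanity check on the expression for $\theta^*$, substituting $\beta_2$ and $\beta_4$ into $\lambda_{\max}^2\beta_2 + \beta_4 = 0$ and clearing denominators recovers it; a sketch using the definitions above:

```latex
\theta^{*}\left(1-\frac{\gamma_4}{2}\right)
  = \lambda_{\max}^{2}\left(\frac{2l_d}{r} + lh + \frac{l\gamma_1}{2}
      + \frac{lh\gamma_2}{2} + \frac{h\gamma_3}{2}\right)
\;\Longrightarrow\;
\theta^{*}
  = \frac{\lambda_{\max}^{2}\left(2rlh + 4l_d + rl\gamma_1
      + rlh\gamma_2 + rh\gamma_3\right)}{2r - r\gamma_4},
```

where the second expression follows after multiplying the numerator and denominator by $2r$.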
Hence, for $\theta > \theta^*$, $-\beta_4\sum_{i=1}^{n}\|z_i(t)\|^2 - \beta_2\|q(t)\|^2 < 0$. Finally, define a positive constant $\rho^*$ that makes $\lambda_{\max}^2\beta_2 + \beta_4 - \beta_5 = 0$; it is derived that
$\rho^* = \sqrt{\frac{\left(2\beta_2\lambda_{\max}^2 + 2\beta_4\right)\gamma_4}{\theta}} > 0.$
Then, for $\rho \in (0, \rho^*)$, one has $-\beta_4\sum_{i=1}^{n}\|z_i(t)\|^2 - \beta_2\|q(t)\|^2 + \beta_5\sum_{i=1}^{n}\|z_i(t)\|^2 < 0$. With $\beta_1, \beta_3 > 0$, it is derived that $\dot{V}(t) < 0$.
Therefore, for properly chosen $\gamma_1, \gamma_2, \gamma_3, \gamma_4, r$, and $\theta$, under the event-triggering condition (20), if $\rho \in (0, \rho^*)$, then $\dot{V}(t) < 0$. In addition, following the Proof of Theorem 1, it is evident that $V(t)$ is positive definite. Through Lyapunov stability theory [39], one obtains that the NE is achieved asymptotically, with $x(t) - x^* \to 0$, $v_d(t) \to 0$, and $q(t) \to 0$ as $t \to \infty$. Thus, the proof is complete. □

Appendix B

Proof of Theorem 4. 
In what follows, the nonexistence of Zeno behavior under the DETNES strategy (19) is proved, which means that $t_{k+1}^i - t_k^i > 0$ holds for all $i \in S_n$. For agent i, when $t \in [t_k^i, t_{k+1}^i)$, one has
$\frac{d\|\varepsilon_i(t)\|}{dt} \le \|\dot{\varepsilon}_i(t)\| = \|\dot{z}_i(t)\|,$
with
$\dot{z}_i(t) = (a_i \otimes I_{n \times n})\dot{y}(t) + A_i\left(\dot{y}_i(t) - \dot{x}(t)\right),$
where $a_i = [a_{i1}, a_{i2}, \ldots, a_{in}]$ is an n-dimensional row vector, and the matrix $A_i$ is defined as $A_i = \mathrm{diag}\{a_{ij}\} \in \mathbb{R}^{n \times n}$ for $j \in \{1, 2, \ldots, n\}$. Hence,
$\|\dot{\varepsilon}_i(t)\| \le c_i\sqrt{\sum_{i=1}^{n}\|z_i(t_k^i)\|^2} + \|A_i\|\|z_i(t_k^i)\| + \|A_i\|\|\dot{x}(t)\|,$
where $c_i = \|a_i \otimes I_{n \times n}\|$. Following the proof process of Theorem 2, one has
$\|\dot{x}(t)\| = \left\|\begin{bmatrix} -\varphi_s(y_s(t)) \\ v_d(t) \end{bmatrix}\right\| \le \eta\sqrt{V(t)} \le \eta\sqrt{V(t_k^i)},$
where $\eta$ is defined the same as in the proof process of Theorem 2. On the basis of the event-triggering condition (20), it is derived that $\|\varepsilon_i(t_{k+1}^i)\| > \rho\|z_i(t_k^i)\|/(1 + \rho)$. Thus, for $t \in [t_k^i, t_{k+1}^i)$, it is derived that
$\|\varepsilon_i(t_{k+1}^i)\| \le \int_{t_k^i}^{t_{k+1}^i}\|\dot{\varepsilon}_i(s)\|ds < \left(c_i\sqrt{\sum_{i=1}^{n}\|z_i(t_k^i)\|^2} + \|A_i\|\|z_i(t_k^i)\| + \|A_i\|\eta\sqrt{V(t_k^i)}\right)\left(t_{k+1}^i - t_k^i\right).$
Then, one obtains the following inequality:
$t_{k+1}^i - t_k^i > \frac{\rho\|z_i(t_k^i)\|}{(1+\rho)\left(c_i\sqrt{\sum_{i=1}^{n}\|z_i(t_k^i)\|^2} + \|A_i\|\|z_i(t_k^i)\| + \|A_i\|\eta\sqrt{V(t_k^i)}\right)} > 0.$
The proof is thus completed. □

References

  1. Yang, Y.; He, L.; Fan, Z.; Cheng, H. Distributed group cooperation with multi-mechanism fusion in an adversarial environment. Knowl. Based Syst. 2022, 258, 109953. [Google Scholar] [CrossRef]
  2. Jiang, F.; Cheng, H. Multi-agent bandit with agent-dependent expected rewards. Swarm Intell. 2023, 17, 219–251. [Google Scholar] [CrossRef]
  3. Jiang, F.; Cheng, H.; Chen, G. Collective decision-making for dynamic environments with visual occlusions. Swarm Intell. 2021, 16, 7–27. [Google Scholar] [CrossRef]
  4. Shokri, M.; Kebriaei, H. Leader-follower network aggregative game with stochastic agents’ communication and activeness. IEEE Trans. Autom. Control 2020, 65, 5496–5502. [Google Scholar] [CrossRef]
  5. Liu, J.; Li, C. Dynamic game analysis on cooperative advertising strategy in a manufacturer-led supply chain with risk aversion. Mathematics 2023, 11, 512. [Google Scholar] [CrossRef]
  6. Ye, M. Distributed Nash equilibrium seeking for games in systems with bounded control inputs. IEEE Trans. Autom. Control 2021, 66, 3833–3839. [Google Scholar] [CrossRef]
  7. Frihauf, P.; Krstic, M.; Basar, T. Nash equilibrium seeking in noncooperative games. IEEE Trans. Autom. Control 2012, 57, 1192–1207. [Google Scholar]
  8. Jing, C.; Wang, C.; Song, H.; Shi, Y.; Hao, L. Optimal asymptotic tracking control for nonzero-sum differential game systems with unknown drift dynamics via integral reinforcement learning. Mathematics 2024, 12, 2555. [Google Scholar] [CrossRef]
  9. Li, C.; Lin, W.; Chen, G.; Huang, T. Distributed generalized Nash equilibrium seeking: A singular perturbation-based approach. Neurocomputing 2022, 482, 278–286. [Google Scholar]
  10. Hua, Y.; Dong, X.; Li, Q.; Liu, F.; Yu, J.; Ren, Z. Dynamic generalized Nash equilibrium seeking for n-coalition noncooperative games. Automatica 2023, 147, 431–440. [Google Scholar]
  11. He, X.; Huang, J. Distributed Nash equilibrium seeking over strongly connected switching networks. Neurocomputing 2022, 533, 206–213. [Google Scholar] [CrossRef]
  12. Tan, S.; Wang, Y. A payoff-based learning approach for Nash equilibrium seeking in continuous potential games. Neurocomputing 2022, 468, 431–440. [Google Scholar] [CrossRef]
  13. Chen, J.; Pan, Y.; Zhang, Y. ZNN continuous model and discrete algorithm for temporally variant optimization with nonlinear equation constraints via novel TD formula. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 3994–4004. [Google Scholar] [CrossRef]
  14. Wen, G.; Chen, M.Z.Q.; Yu, X. Event-triggered master-slave synchronization with sampled-data communication. IEEE Trans. Circuits Syst. II Exp. Briefs 2016, 66, 304–308. [Google Scholar]
  15. Zhang, Y.; He, L.; Hu, C.; Guo, J.; Li, J.; Shi, Y. General four-step discrete-time zeroing and derivative dynamics applied to time-varying nonlinear optimization. J. Comput. Appl. Math. 2019, 347, 314–329. [Google Scholar] [CrossRef]
  16. Xu, L.-X.; Zhao, L.-N.; Ma, H.-J.; Wang, X. Observer-based adaptive sampled-data event-triggered distributed control for multi-agent systems. IEEE Trans. Circuits Syst. II Exp. Briefs 2020, 67, 97–101. [Google Scholar]
  17. Wang, D.; Lv, Y.; Zhang, K.; Fang, X.; Yu, X. Distributed Nash equilibrium seeking under event-triggered mechanism. IEEE Trans. Circuits Syst. 2021, 68, 3441–3445. [Google Scholar]
  18. Feng, G.; Fan, Y.; Wang, Y. Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica 2013, 49, 671–675. [Google Scholar]
  19. Li, H.; Hao, Y.; Zhang, X. Offensive/defensive game target damage assessment mathematical calculation method between the projectile and target. Mathematics 2022, 10, 4291. [Google Scholar] [CrossRef]
  20. Shi, C.; Yang, G. Distributed Nash equilibrium computation in aggregative games: An event-triggered algorithm. Inf. Sci. 2019, 489, 289–302. [Google Scholar] [CrossRef]
  21. Shi, L.; He, W. Generalized Nash equilibrium seeking for networked noncooperative games with a dynamic event-triggered mechanism. Appl. Math. Model. 2023, 118, 39–52. [Google Scholar] [CrossRef]
  22. Wang, D.; Gao, Z.; Sheng, L. Distributed finite-time and fixed-time Nash equilibrium seeking for non-cooperative game with input saturation. Mathematics 2023, 11, 2295. [Google Scholar] [CrossRef]
  23. Feng, Y.; Zheng, W. Distributed group consensus of discrete-time heterogeneous multi-agent systems with directed communication topology. Appl. Math. Model. 2023, 118, 39–52. [Google Scholar]
  24. Feng, Y.; Zheng, W. Group consensus control for discrete-time heterogeneous first- and second-order multi-agent systems. IET Contr. Theory Appl. 2018, 12, 753–760. [Google Scholar] [CrossRef]
  25. Cao, M.; Shi, L.; Shao, J.; Xia, H. Asynchronous group consensus for discrete-time heterogeneous multi-agent systems under dynamically changing interaction topologies. Inf. Sci. 2018, 463, 282–293. [Google Scholar]
  26. Bao, G.; Ma, L.; Yi, X. Recent advances on cooperative control of heterogeneous multi-agent systems subject to constraints: A survey. Syst. Sci. Control Eng. 2022, 10, 539–551. [Google Scholar] [CrossRef]
  27. Li, Y.; Qu, F.; Tong, S. Observer-based fuzzy adaptive finite-time containment control of nonlinear multiagent systems with input delay. IEEE Trans. Cybern. 2021, 51, 126–137. [Google Scholar] [CrossRef]
  28. He, X.; Huang, J. Distributed Nash equilibrium seeking and disturbance rejection for high-order integrators over jointly strongly connected switching networks. IEEE Trans. Cybern. 2024, 54, 2396–2407. [Google Scholar] [CrossRef] [PubMed]
  29. Gao, H.; Shi, Y.; Qin, J.; Ma, Q.; Yu, K. On group synchronization for interacting clusters of heterogeneous systems. IEEE Trans. Cybern. 2017, 47, 4122–4133. [Google Scholar]
  30. Bassett, D.S.; Menara, T.; Baggio, G.; Pasqualetti, F. Stability conditions for cluster synchronization in networks of heterogeneous Kuramoto oscillators. IEEE Trans. Control Netw. Syst. 2020, 7, 302–314. [Google Scholar]
  31. Wang, D.; Ma, H.; Liu, D.; Li, C. Centralized and decentralized event-triggered control for group consensus with fixed topology in continuous time. Neurocomputing 2015, 161, 267–276. [Google Scholar]
  32. Huang, T.; Zhou, B.; Liao, X.; Chen, G. Pinning exponential synchronization of complex networks via event-triggered communication with combinational measurements. Neurocomputing 2015, 157, 199–207. [Google Scholar]
  33. Zhang, C.; Li, K.; Ji, L.; Li, H. Fully distributed event-triggered pinning group consensus control for heterogeneous multi-agent systems with cooperative-competitive interaction strength. Neurocomputing 2021, 464, 273–281. [Google Scholar]
  34. Golub, G.; Van Loan, C.F. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013; pp. 22–33. [Google Scholar]
  35. Diestel, R. Graph Theory, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 1–33. [Google Scholar]
  36. Rosen, J.B. Existence and uniqueness of equilibrium points for concave n-person games. Econometrica 1965, 33, 520–534. [Google Scholar] [CrossRef]
  37. Ye, M.; Hu, G. Distributed Nash equilibrium seeking in multi-agent games under switching communication topologies. IEEE Trans. Cybern. 2018, 48, 3208–3217. [Google Scholar] [CrossRef] [PubMed]
  38. Bui, H. Weighted Young’s inequality and convolution theorems on weighted Besov spaces. Math. Nachr. 1994, 170, 25–37. [Google Scholar] [CrossRef]
  39. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002; pp. 111–181. [Google Scholar]
Figure 1. Event-triggered mechanism for agent i under DETNES strategy.
Figure 2. Communication topology  G  of heterogeneous multi-agent system consisting of 6 agents, in which agents  1 , 3 , and 5 are single-integrator agents and agents  2 , 4 , and 6 are double-integrator agents.
Figure 3. Position trajectories of all agents when using the CETNES strategy (4) with  ρ = 0.6 , where  x i *  denotes the NE for agent i.
Figure 4. Triggering time sequences of all agents when using the CETNES strategy (4) with  ρ = 0.6 .
Figure 5. Position trajectories of all agents when using the DETNES strategy (19) with  ρ = 0.6 , where  x i *  denotes the NE for agent i.
Figure 6. Triggering time sequences of all agents when using the DETNES strategy (19) with  ρ = 0.6 .
Figure 7. Trajectories of  φ ( x ( t ) )  of PSC, CETNES, and DETNES strategies with  ρ = 0.1  and different  τ .
Figure 8. Trajectories of  φ ( x ( t ) )  synthesized by the PSC, CETNES, and DETNES strategies with  ρ = 0.1  and  τ = 0.01  s.
Table 1. Comparison of CETNES strategy (4) with different values of  ρ .
| ρ | Number of Trigger Events | Minimum Inter-Event Time Interval | Maximum Inter-Event Time Interval |
|---|---|---|---|
| 0.1 | 621 | 0.02 s | 0.11 s |
| 0.2 | 257 | 0.05 s | 0.24 s |
| 0.6 | 144 | 0.07 s | 0.41 s |
Table 2. Convergence performance of the PSC strategy with different  τ .
| τ | Periodic Sampling Times | Convergence |
|---|---|---|
| 0.01 s | 2000 | Converges |
| 0.016 s | 1250 | Does not converge |
| 0.02 s | 1000 | Does not converge |
Table 3. Comparison between the PSC, CETNES, and DETNES strategies with  ρ = 0.1  and  τ = 0.01  s.
| Agent | Number of Trigger Events (CETNES) | Number of Trigger Events (DETNES) | Periodic Sampling Times (PSC) |
|---|---|---|---|
| Agent 1 | 621 | 1167 | 2000 |
| Agent 2 | 621 | 1483 | 2000 |
| Agent 3 | 621 | 752 | 2000 |
| Agent 4 | 621 | 917 | 2000 |
| Agent 5 | 621 | 1300 | 2000 |
| Agent 6 | 621 | 706 | 2000 |
Table 4. Comparison between the PSC, CETNES, and DETNES strategies with  ρ = 0.6  and  τ = 0.01  s.
| Agent | Number of Trigger Events (CETNES) | Number of Trigger Events (DETNES) | Periodic Sampling Times (PSC) |
|---|---|---|---|
| Agent 1 | 144 | 220 | 2000 |
| Agent 2 | 144 | 550 | 2000 |
| Agent 3 | 144 | 323 | 2000 |
| Agent 4 | 144 | 204 | 2000 |
| Agent 5 | 144 | 464 | 2000 |
| Agent 6 | 144 | 178 | 2000 |
