Article

Distributed Adaptive Optimization Algorithm for Fractional High-Order Multiagent Systems Based on Event-Triggered Strategy and Input Quantization

School of Air Transportation, Shanghai University of Engineering Science, Shanghai 201620, China
*
Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(10), 749; https://doi.org/10.3390/fractalfract7100749
Submission received: 13 August 2023 / Revised: 25 September 2023 / Accepted: 6 October 2023 / Published: 11 October 2023

Abstract

This paper investigates the distributed optimization problem (DOP) for fractional high-order nonstrict-feedback multiagent systems (MASs) in which each agent has multiple-input–multiple-output (MIMO) dynamics and contains uncertain dynamics. Based on the penalty-function method, the consensus constraint is eliminated and the global objective function is reconstructed. Unlike the existing literature, where DOPs are addressed for linear MASs, this paper deals with the DOP for high-order MASs by using radial basis function neural networks (RBFNNs) to approximate the unknown nonlinear functions. To reduce transmission and computational costs, an event-triggered scheme and quantized control technology are combined to propose an adaptive backstepping neural network (NN) control protocol. By applying Lyapunov stability theory, the optimal consensus error is proved to be bounded and all signals remain semi-globally uniformly ultimately bounded. Simulations show that all agents reach consensus and that the errors between the agents' outputs and the optimal solution are close to zero with low computational costs.

1. Introduction

Recently, the distributed optimization problem (DOP) of MASs has attracted considerable interest owing to its wide range of applications, including robotic systems [1,2], sensor networks [3], marine surface vehicles [4], smart grids [5,6], fractional-order MASs [7] and multiple one-link manipulator systems [8]. In a DOP, each agent holds a local objective function, and the MAS has a global objective function obtained by summing the local objective functions. By minimizing the global objective function, the agents in the MAS follow an optimal trajectory while reaching consensus.
A key objective for the DOP is to provide appropriate distributed control algorithms which ensure that all agents in the MAS collaborate in seeking the optimal solution of the global objective function. Ref. [9] designs a discrete-time algorithm for a multi-robot system to deal with cooperative transportation by minimizing the total energy consumption. In [10], a class of online DOPs with coupled inequality constraints is investigated and an online primal-dual algorithm is developed. Note that the above papers focus on discrete-time algorithms for the DOP, which means these algorithms are not suitable for continuous-time dynamics. Recently, a growing number of researchers have been developing continuous-time algorithms for the DOP due to their potential applications in MASs [11,12,13,14,15]. In [16], a distributed algorithm is developed to deal with the resource allocation problem by designing a dynamic event-triggered mechanism. On the basis of the proportional-integral technique, Ref. [17] proposes an adaptive neurodynamic algorithm to address the DOP; the proposed algorithm ensures that agents in the MAS first achieve consensus in a finite time and then converge to the optimal trajectory in a fixed time. To avoid solving high-dimensional subproblems, Ref. [18] proposes a novel projection-free dynamics for solving the constrained DOP using the Frank–Wolfe method. An adaptive fault-tolerant controller is designed in [19] to deal with the DOP for nonlinear MASs by building exosystem state observers. In these works, algorithms are developed for DOPs in first-order MASs, which means that these methods hardly work well for second-order MASs. With this in mind, many methods have been constructed for second-order MASs due to their wide use in practice [20,21,22,23]. In [24], a decentralized optimization control protocol with fixed-time flocking is developed for second-order MASs with time-varying networks. Ref. [25] develops a dynamic event-trigger-based control protocol for the optimal consensus problem in second-order MASs where communication edges are simultaneously affected by both cyber attacks and disturbances. Generic optimal formation control problems with various formation constraints are investigated in [26] for second-order MASs. In many engineering practices, such as generators, robots, and satellites, the dynamics of physical systems are depicted by high-order systems. Thus, investigating the DOP in high-order MASs is meaningful and important. However, due to modeling inaccuracies, many MASs inevitably contain nonlinear uncertainties, and the aforementioned algorithms may be ineffective.
To address this issue, many adaptive methods, such as RBFNNs and fuzzy logic systems (FLSs), are adopted to compensate for the unknown nonlinear functions and to design adaptive control protocols that achieve the control goal. By utilizing RBFNNs or FLSs, a wide range of adaptive control protocols have been developed [27,28,29,30]. In [31], an adaptive control protocol is proposed based on the FLS technique to handle the switched nonlinear functions in a MIMO system, and the unknown control gain direction is handled by the Nussbaum gain function. To reduce computation, an RBFNN-based prescribed-time controller is developed in [32] via an event-triggered mechanism for robotic manipulators with nonlinear uncertainties and state constraints. In [33], based on a projection-operator-based compensation mechanism, an FLS-based adaptive controller is proposed to deal with the consensus problem in nonlinear MASs under deception attacks. Reviewing the above literature, it should be noted that there is currently no study on developing an adaptive intelligent control protocol for high-order uncertain nonstrict-feedback MASs with MIMO agents. Besides, the DOP requires all agents to achieve both consensus and the optimal solution, which means that the aforementioned algorithms may not realize this control objective.
Furthermore, the above studies are restricted to integer-order MASs. In reality, fractional-order MASs (FOMASs) have many potential applications due to their ability to model systems accurately [34,35]. Recently, the consensus problem for FOMASs has attracted considerable attention and has emerged as another research priority [36,37,38,39]. Ref. [40] introduces an adaptive algorithm using an event-triggered strategy for FOMASs with partial state constraints and input saturation. Ref. [41] develops a novel distributed algorithm with fixed time delay to address the containment control problem for nonlinear FOMASs. However, to the best of our knowledge, the DOP for uncertain nonlinear FOMASs in which each agent is described by MIMO dynamics has not been studied in the existing works. This motivates the research in this paper.
Motivated by the above discussion, this paper proposes an event-triggered adaptive backstepping algorithm with input quantization to address the DOP for nonstrict-feedback MIMO MASs. The main contributions of this paper are as follows.
(1)
Unlike [27,33], where algorithms are developed for the consensus problem in MASs, this paper introduces an adaptive control protocol for the DOP. The agents in the MAS not only reach consensus but also achieve the optimal solution of the global objective function. Besides, each agent in the FOMASs is described by nonstrict-feedback MIMO dynamics, which makes the design of the control protocol more general and more challenging.
(2)
Different from [16,17,18,19,20,21,22,23,24,25,26], where DOPs are investigated for first-order or second-order MASs, this paper is dedicated to solving the fractional high-order DOP, which means that the MASs and the DOP considered here are closer to practical engineering systems. Besides, the MASs in this paper include nonlinear uncertain terms in each order; thus, the RBFNN technique is adopted to approximate and compensate for the unknown dynamics. In addition, to reduce the transmission and computational costs, this paper combines the event-triggered mechanism and the input quantization technique to deal with the high-order DOP for the first time.
(3)
In contrast to the algorithms in the aforementioned works, which are only effective for integer-order MASs, this paper investigates the high-order DOP in uncertain nonlinear FOMASs with MIMO agents and develops an adaptive NN-based algorithm. To avoid the "explosion of complexity", this paper utilizes the fractional-order DSC (FODSC) method, by which the fractional derivatives of the virtual controllers are obtained.

2. Preliminaries

Define the Caputo fractional derivative [42] as
$${}_{0}^{C}D_{t}^{\omega}f(t)=\frac{1}{\Gamma(\delta-\omega)}\int_{0}^{t}\frac{f^{(\delta)}(\tau)}{(t-\tau)^{1+\omega-\delta}}\,d\tau \tag{1}$$
where $\delta$ is a positive integer with $\delta-1<\omega\leq\delta$, and $\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}\,dt$ is the Gamma function. In this paper, we set ${}_{0}^{C}D_{t}^{\omega}f(t)=D^{\omega}f(t)$ to simplify the notation. For the two-parameter Mittag-Leffler function
$$E_{\omega,\gamma}(\varsigma)=\sum_{n=0}^{\infty}\frac{\varsigma^{n}}{\Gamma(\omega n+\gamma)},\quad \omega>0,\ \gamma>0 \tag{2}$$
we have the following lemmas.
Lemma 1
([43]). For real numbers γ, ω and ϕ satisfying $\omega\in(0,1)$, $\tau>0$ and
$$\frac{\tau\omega}{2}<\phi<\tau\omega$$
and for integers $n\geq 1$, one obtains
$$E_{\omega,\gamma}(\varsigma)=-\sum_{j=1}^{n}\frac{\varsigma^{-j}}{\Gamma(\gamma-\omega j)}+o\!\left(\frac{1}{|\varsigma|^{n+1}}\right)$$
when $|\varsigma|\to\infty$ and $\phi\leq|\arg(\varsigma)|\leq\tau$.
Lemma 2
([43]). If ϕ satisfies the condition of Lemma 1, then
$$\left|E_{\omega,\gamma}(\varsigma)\right|\leq\frac{\mu}{1+|\varsigma|}$$
where $\omega\in(0,2)$, γ is an arbitrary real number, $\mu>0$ and $\phi\leq|\arg(\varsigma)|\leq\tau$.
Lemma 3
([44]). For $\kappa\in\mathbb{R}$ and $\zeta>0$, the following inequality holds
$$0\leq|\kappa|-\frac{\kappa^{2}}{\sqrt{\kappa^{2}+\zeta^{2}}}\leq\zeta$$
Lemma 4
([42]). Suppose that the Lyapunov function $V(t,x)$ satisfies $D^{\omega}V(t,x)\leq-\Theta V(t,x)+\Lambda$, where $0<\omega<1$, $\Theta>0$ and $\Lambda\geq 0$. Then the following inequality holds
$$V(t,x)\leq V(0)E_{\omega}\!\left(-\Theta t^{\omega}\right)+\frac{\Lambda\mu}{\Theta},\quad \forall t\geq 0$$
Hence, $V(t,x)$ is bounded on $[0,t]$ and the fractional-order system is stable, where μ is defined in Lemma 2.
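To make the preliminaries concrete, the following minimal Python sketch (added for illustration; it is not part of the original paper, and the constants ω, Θ, Λ, μ, V(0) are arbitrary assumptions) evaluates the two-parameter Mittag-Leffler function by truncating its series and tabulates the right-hand side of the bound in Lemma 4.

```python
# Illustrative sketch, not from the paper: truncated-series evaluation of the
# two-parameter Mittag-Leffler function and the Lyapunov bound of Lemma 4,
# V(t) <= V(0) E_omega(-Theta t^omega) + Lambda*mu/Theta.
import math

def mittag_leffler(s: float, omega: float, gamma: float = 1.0, n_terms: int = 80) -> float:
    """Truncated series E_{omega,gamma}(s) = sum_{n>=0} s^n / Gamma(omega*n + gamma)."""
    return sum(s ** n / math.gamma(omega * n + gamma) for n in range(n_terms))

if __name__ == "__main__":
    omega, Theta, Lam, mu, V0 = 0.98, 2.0, 0.1, 1.0, 1.5   # assumed constants
    for t in (0.0, 0.5, 1.0, 2.0, 5.0):
        bound = V0 * mittag_leffler(-Theta * t ** omega, omega) + Lam * mu / Theta
        print(f"t = {t:4.1f}  Lyapunov bound = {bound:.4f}")
```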
Notations: In this paper, $0_{m}=[0,\ldots,0]^{T}\in\mathbb{R}^{m}$ and $1_{m}=[1,\ldots,1]^{T}\in\mathbb{R}^{m}$. Denote $\nabla f(\cdot)$ as the gradient of the function $f(\cdot)$ and ⊗ as the Kronecker product.
Remark 1.
In this paper, the fractional order is considered within the interval $(0,1)$.

3. Problem Formulation

3.1. Hysteresis Quantizer

This paper uses the hysteresis quantizer to reduce chattering. According to [45], the quantizer $\upsilon_{i}(\varpi_{i}(t))$ is
$$\upsilon_{i}(\varpi_{i}(t))=\begin{cases}\varpi_{i,k}\,\mathrm{sign}(\varpi_{i}), & \dfrac{\varpi_{i,k}}{1+d}<|\varpi_{i}|\leq\dfrac{\varpi_{i,k}}{1-d}\\[4pt] \varpi_{i,k}(1+d)\,\mathrm{sign}(\varpi_{i}), & \varpi_{i,k}<|\varpi_{i}|\leq\dfrac{\varpi_{i,k}(1+d)}{1-d}\\[4pt] 0, & 0\leq|\varpi_{i}|<\varpi_{\min}\end{cases} \tag{3}$$
where $\varpi_{i,k}=\varrho^{1-k}\varpi_{\min}$, $k=1,2,\ldots$, with parameters $\varpi_{\min}>0$ and $0<\varrho<1$, and $d=\frac{1-\varrho}{1+\varrho}$. Meanwhile, $\upsilon_{i}(\varpi_{i}(t))$ takes values in the set $U=\{0,\pm\varpi_{i,k},\pm\varpi_{i,k}(1+d),\ k=1,2,\ldots\}$, and $\varpi_{\min}$ determines the size of the dead zone of $\upsilon_{i}(\varpi_{i}(t))$.
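The following Python sketch (added for illustration; it is a memoryless simplification of the quantizer that drops the direction-dependent hysteresis branches, with assumed values of $\varpi_{\min}$ and ϱ) reproduces the dead zone and the sector property of Lemma 5, $(1-d)|\varpi|\leq|\upsilon(\varpi)|\leq(1+d)|\varpi|$ outside the dead zone.

```python
# Minimal sketch (not the paper's code): a memoryless simplification of the
# hysteresis quantizer, reproducing its dead zone and the sector bound of Lemma 5.
import math

def hysteresis_quantizer_static(w: float, w_min: float = 0.2, rho: float = 0.5) -> float:
    d = (1.0 - rho) / (1.0 + rho)          # quantization density parameter
    if abs(w) < w_min / (1.0 + d):         # dead zone around the origin
        return 0.0
    k, level = 1, w_min                    # levels w_k = rho**(1-k) * w_min, k = 1, 2, ...
    while abs(w) >= level / (1.0 - d):     # pick the level whose sector contains |w|
        k += 1
        level = rho ** (1 - k) * w_min
    return math.copysign(level, w)

if __name__ == "__main__":
    for w in (0.05, 0.3, 0.7, 1.5, -2.4):
        print(f"w = {w:5.2f} -> q(w) = {hysteresis_quantizer_static(w):6.3f}")
```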
Lemma 5
([46]). The quantized system input $\upsilon_{i}(\varpi_{i}(t))$ can be decomposed as
$$\upsilon_{i}(\varpi_{i}(t))=\Xi(\varpi_{i})\varpi_{i}(t)+\Psi_{i}(t) \tag{4}$$
where $1-d\leq\Xi(\varpi_{i})\leq 1+d$ and $|\Psi_{i}(t)|\leq\varpi_{\min}$.

3.2. Graph Theory

Consider an undirected graph $Q=(U,J,\bar{A})$, where $U=\{1,\ldots,N\}$ is the node set, $J\subseteq U\times U$ is the edge set with no self-loops, and $\bar{A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ is the adjacency matrix with $a_{ij}=1$ if and only if $(i,j)\in J$ and $a_{ij}=0$ otherwise. Denote $N_{i}=\{j\,|\,(i,j)\in J\}$ as the neighbor set of node i and $D=\mathrm{diag}\{\sum_{j=1}^{N}a_{1j},\ldots,\sum_{j=1}^{N}a_{Nj}\}$ as the degree matrix. The Laplacian matrix is defined as $L=D-\bar{A}$. If there is an undirected path between every pair of nodes, Q is a connected graph.
Lemma 6
([47]). For a connected undirected graph Q, the Laplacian matrix L is positive semi-definite and has a simple eigenvalue 0 with associated eigenvector $1_{N}$.
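A short numerical sketch of Lemma 6 (the topology below is an assumed 5-agent ring for illustration, not necessarily the one in Figure 1): build $\bar{A}$, D and $L=D-\bar{A}$ and check that L is positive semi-definite with eigenvalue 0 and eigenvector $1_{N}$.

```python
# Illustrative check of Lemma 6 on an assumed 5-agent ring graph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # hypothetical communication topology
N = 5
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0                        # a_ij = 1 iff (i, j) is an edge
D = np.diag(A.sum(axis=1))                         # degree matrix
L = D - A                                          # graph Laplacian

print("Laplacian eigenvalues:", np.round(np.linalg.eigvalsh(L), 4))  # smallest is 0
print("L @ ones =", np.round(L @ np.ones(N), 10))                    # 1_N spans the null space
```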

3.3. Multi-Agent Systems

Consider the FOMASs with N agents, where each agent is described by a MIMO system with m subsystems. The dynamics of agent i are:
$$\begin{cases} D^{\alpha}x_{i,l}(t)=x_{i,l+1}+h_{i,l}(X_{i,n})\\ D^{\alpha}x_{i,n}(t)=u_{i}(t)+h_{i,n}(X_{i,n})\\ y_{i}=x_{i,1}\end{cases} \tag{5}$$
where $i=1,\ldots,N$, $l=1,\ldots,n-1$, $\alpha\in(0,1)$, $x_{i,1},\ldots,x_{i,n}\in\mathbb{R}^{m}$ are the system states, $u_{i}(t)\in\mathbb{R}^{m}$ is the control input, $y_{i}=[y_{i,1},\ldots,y_{i,m}]^{T}\in\mathbb{R}^{m}$ is the system output, $X_{i,n}=[x_{i,1}^{T},\ldots,x_{i,n}^{T}]^{T}\in\mathbb{R}^{mn}$ is the state vector, and $h_{i,l}(X_{i,n})=[h_{i,l,1}(X_{i,n}),\ldots,h_{i,l,m}(X_{i,n})]^{T}\in\mathbb{R}^{m}$ is a vector of unknown nonlinear functions. Specifically, the kth subsystem is described as
$$\begin{cases} D^{\alpha}x_{i,l,k}(t)=x_{i,l+1,k}+h_{i,l,k}(X_{i,n})\\ D^{\alpha}x_{i,n,k}(t)=u_{i,k}(t)+h_{i,n,k}(X_{i,n})\\ y_{i,k}=x_{i,1,k}\end{cases} \tag{6}$$
where $x_{i,l,k}$ is the system state, $y_{i,k}$ is the system output, $u_{i,k}(t)$ is the control input, and $h_{i,l,k}(X_{i,n})$ is an unknown nonlinear function.
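As a side note on reproducing such fractional dynamics numerically, the sketch below uses one standard scheme, an explicit Grünwald–Letnikov step on a uniform grid; under a zero initial condition this approximates the Caputo operator in (1). The functions f and u and the parameters α and h are illustrative assumptions, not taken from the paper.

```python
# Illustrative Grunwald-Letnikov simulation of a scalar fractional subsystem
# D^alpha x = f(x) + u, with zero initial condition; not part of the paper.
import numpy as np

def simulate_gl(f, u, x0: float = 0.0, alpha: float = 0.98, h: float = 1e-3, steps: int = 2000):
    c = np.ones(steps + 1)                     # GL binomial weights c_j = c_{j-1}*(1-(alpha+1)/j)
    for j in range(1, steps + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(1, steps + 1):
        rhs = f(x[k - 1]) + u((k - 1) * h)     # explicit evaluation of D^alpha x
        x[k] = h ** alpha * rhs - np.dot(c[1:k + 1], x[k - 1::-1])
    return x

x = simulate_gl(f=lambda x: -0.05 * x, u=lambda t: np.sin(t))
print(x[:3], x[-1])
```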

3.4. Distributed Optimization Problem

This paper investigates the quadratic DOP for the FOMASs. For agent i, define the local objective function $f_{i}(\cdot):\mathbb{R}^{m}\to\mathbb{R}$ as
$$f_{i}(y_{i})=y_{i}^{T}a_{i}(t)y_{i}+b_{i}^{T}(t)y_{i}+c_{i}(t) \tag{7}$$
where $a_{i}(t)\in\mathbb{R}^{m\times m}$, $b_{i}(t)\in\mathbb{R}^{m}$, $c_{i}(t)\in\mathbb{R}$, $\|a_{i}(t)\|\leq a_{0}$, $\|b_{i}(t)\|\leq b_{0}$, $|c_{i}(t)|\leq c_{0}$, and $a_{0}$, $b_{0}$, $c_{0}$ are known constants. The global objective function $f(\cdot):\mathbb{R}^{mN}\to\mathbb{R}$ is defined as
$$f(y)=\sum_{i=1}^{N}f_{i}(y_{i})\quad \mathrm{s.t.}\ (L\otimes I_{m})y=0_{mN} \tag{8}$$
where $y=[y_{1}^{T},\ldots,y_{N}^{T}]^{T}$. According to Lemma 6, given a bounded continuous function $\alpha(t)$, if $y=\alpha(t)\cdot 1_{mN}$, one has $(L\otimes I_{m})y=0_{mN}$. Thus, based on the penalty-function method, design the penalty term $\frac{1}{2}\mu y^{T}(L\otimes I_{m})y$, where μ is a positive design parameter. The global objective function can be rewritten as
$$F(y)=\sum_{i=1}^{N}f_{i}(y_{i})+\frac{1}{2}\mu y^{T}(L\otimes I_{m})y \tag{9}$$
Define the optimal solution of $F(y)$ as $y^{*}=[y_{1}^{*T},\ldots,y_{N}^{*T}]^{T}\in\mathbb{R}^{mN}$ with $y^{*}=\arg\min_{(y_{1},\ldots,y_{N})}F(y)$, where $y_{i}^{*}=[y_{i,1}^{*},\ldots,y_{i,m}^{*}]^{T}$. From (9), when the FOMASs achieve the optimal solution $y^{*}$, all agents reach consensus and converge to the optimal trajectory simultaneously.
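The following sketch (illustrative only; the quadratic costs, the path-graph Laplacian and the value of μ are assumptions) evaluates the gradient of the penalised objective F(y) in (9) for quadratic local costs and solves the first-order optimality condition directly; for large μ the resulting minimiser is nearly consensual, which is the rationale behind the penalty reformulation.

```python
# Illustrative sketch of the penalised objective F(y) and its gradient; data are random.
import numpy as np

N, m, mu = 4, 2, 50.0
rng = np.random.default_rng(0)
a = [np.eye(m) * rng.uniform(0.5, 2.0) for _ in range(N)]     # symmetric positive a_i(t)
b = [rng.standard_normal(m) for _ in range(N)]

A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # assumed path graph
L = np.diag(A.sum(axis=1)) - A
H_pen = mu * np.kron(L, np.eye(m))                            # mu * (L (x) I_m)

def grad_F(y):
    """Block i of the gradient: 2 a_i y_i + b_i + mu * sum_j a_ij (y_i - y_j)."""
    g_local = np.concatenate([2 * a[i] @ y[i*m:(i+1)*m] + b[i] for i in range(N)])
    return g_local + H_pen @ y

A_blk = np.zeros((N * m, N * m))
for i in range(N):
    A_blk[i*m:(i+1)*m, i*m:(i+1)*m] = 2 * a[i]
y_star = np.linalg.solve(A_blk + H_pen, -np.concatenate(b))   # grad_F(y*) = 0 for quadratic costs
print("gradient norm at y*:", np.linalg.norm(grad_F(y_star)))
print("spread of the agents' minimisers:", np.round(y_star.reshape(N, m).std(axis=0), 4))
```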
Control objectives: This paper aims at developing an adaptive NN-based control protocol using the FODSC technology, an event-triggered mechanism and input quantization, so that all agents' signals remain bounded and the agents converge to the optimal trajectory while keeping consensus with sufficiently small errors.

4. Main Results

4.1. Neural Networks Approximation

As an effective tool for approximating continuous functions, RBFNNs are utilized in this paper to compensate for the unknown nonlinear functions $h(X_{i,n}):\mathbb{R}^{mn}\to\mathbb{R}$. The RBFNNs are described as follows
$$h(X_{i,n})=\vartheta^{T}\varphi(X_{i,n}) \tag{10}$$
where $X_{i,n}\in\mathbb{R}^{mn}$ is the input vector, $\vartheta\in\mathbb{R}^{p}$ is the weight vector, $\varphi(X_{i,n})=[\varphi_{1}(X_{i,n}),\ldots,\varphi_{p}(X_{i,n})]^{T}\in\mathbb{R}^{p}$ is the radial basis function vector, and p is the number of NN nodes. Each $\varphi_{q}(X_{i,n})$ is a typical Gaussian basis function
$$\varphi_{q}(X_{i,n})=\exp\left(-\frac{(X_{i,n}-c_{q})^{T}(X_{i,n}-c_{q})}{b_{q}^{2}}\right),\quad q=1,\ldots,p \tag{11}$$
with $c_{q}\in\mathbb{R}^{mn}$ the center and $b_{q}\in\mathbb{R}$ the width of the Gaussian function.
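A minimal sketch of the RBFNN evaluation in (10)–(11) follows (the centres, widths and weights below are arbitrary assumptions; in the paper the weights are adapted online by the laws designed in Section 4.2).

```python
# Illustrative evaluation of the Gaussian basis vector phi(X) and the output theta^T phi(X).
import numpy as np

def rbf_vector(X, centers, widths):
    """phi_q(X) = exp(-(X - c_q)^T (X - c_q) / b_q^2), q = 1, ..., p."""
    diffs = X[None, :] - centers                    # shape (p, dim)
    return np.exp(-np.sum(diffs ** 2, axis=1) / widths ** 2)

def rbfnn(X, theta, centers, widths):
    """Network output h_hat(X) = theta^T phi(X)."""
    return theta @ rbf_vector(X, centers, widths)

# assumed setup: p = 7 nodes for a 2-dimensional input
centers = np.array([[x1, x2] for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 1.0)] + [[0.0, 0.0]])
widths = np.full(len(centers), 1.5)
theta = np.linspace(0.1, 0.7, len(centers))         # stand-in weights (adapted online in the paper)
print(rbfnn(np.array([0.3, -0.2]), theta, centers, widths))
```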
Lemma 7
([36]). Given a continuous unknown function $h(x)$ defined on a compact set $\Omega_{x}$, there exist an NN $\vartheta^{*T}\varphi(x)$ and an arbitrarily small accuracy $\epsilon(x)$ such that
$$h(x)=\vartheta^{*T}\varphi(x)+\epsilon(x) \tag{12}$$
where $\vartheta^{*}=\arg\min_{\vartheta\in\Omega_{\vartheta}}\left[\sup_{x\in\Omega_{x}}\left|h(x)-\vartheta^{T}\varphi(x)\right|\right]$ is the ideal weight vector and $\epsilon(x)$ denotes the minimum approximation error.
Define the parameter estimation error $\tilde{\vartheta}$ and the optimal approximation error ϵ as
$$\tilde{\vartheta}=\vartheta^{*}-\vartheta,\quad l=1,2,\ldots,n \tag{13}$$
$$\epsilon=h(X_{i,n})-h(X_{i,n}\,|\,\vartheta^{*}) \tag{14}$$
where $h(X_{i,n}\,|\,\vartheta^{*})$ is the approximation of the RBFNN with the optimal parameter.
Assumption 1.
The optimal approximation errors remain bounded and satisfy $|\epsilon(x)|\leq\epsilon_{0}$ with $\epsilon_{0}>0$.

4.2. Controller Design

Theorem 1.
For the nonlinear FOMASs under Assumption 1, construct the event-triggered adaptive NN-based dynamic surface quantized controller (70), the virtual controllers (29), (41) and (55), and the adaptive laws (30), (42), (56) and (67). Then all signals in the closed-loop system remain semi-globally uniformly ultimately bounded and the errors between the outputs and the optimal trajectory are sufficiently small.
Proof. 
Define the errors for subsystem k of agent i as follows
$$\begin{cases} z_{i,1,k}=x_{i,1,k}-y_{i,k}^{*}\\ z_{i,l,k}=x_{i,l,k}-r_{i,l,k}\\ w_{i,l,k}=r_{i,l,k}-x_{i,l,k}^{*}\end{cases}\quad l=2,\ldots,n,\ k=1,\ldots,m \tag{15}$$
where $z_{i,1,k}$ is the tracking error, $r_{i,l,k}$ is the output of the FODSC filter, and $w_{i,l,k}$ is the FODSC error between $r_{i,l,k}$ and $x_{i,l,k}^{*}$.
Step 1. According to (7) and (9), the gradient of the newly constructed global objective function F(y) is
$$\nabla F(y)=\frac{\partial\sum_{i=1}^{N}f_{i}(y_{i}(t))}{\partial y}+\mu(L\otimes I_{m})y \tag{16}$$
Since the global objective function F(y) is convex, the optimal solution $y^{*}$ satisfies $\nabla F(y^{*})=0$. Thus, for agent i, we obtain
$$2a_{i}(t)y_{i}^{*}+b_{i}(t)+\mu\sum_{j\in N_{i}}a_{ij}(y_{i}^{*}-y_{j}^{*})=0_{m} \tag{17}$$
Define the vector $z_{i,1}=[z_{i,1,1},\ldots,z_{i,1,m}]^{T}$. From (15) and (17), one has
$$\begin{aligned}\frac{\partial F(y)}{\partial y_{i}}&=\frac{\partial f_{i}(y_{i}(t))}{\partial y_{i}}+\mu\sum_{j\in N_{i}}a_{ij}(y_{i}-y_{j})=2a_{i}(t)y_{i}+b_{i}(t)+\mu\sum_{j\in N_{i}}a_{ij}(y_{i}-y_{j})\\&=2a_{i}(t)y_{i}+b_{i}(t)+\mu\sum_{j\in N_{i}}a_{ij}(y_{i}-y_{j})-2a_{i}(t)y_{i}^{*}-b_{i}(t)-\mu\sum_{j\in N_{i}}a_{ij}(y_{i}^{*}-y_{j}^{*})\\&=2a_{i}(t)z_{i,1}+\mu\sum_{j\in N_{i}}a_{ij}(z_{i,1}-z_{j,1})\end{aligned} \tag{18}$$
Let $z_{1}=[z_{1,1}^{T},\ldots,z_{N,1}^{T}]^{T}$. Through (16) and (18), one obtains
$$\frac{\partial F(y)}{\partial y}=Hz_{1} \tag{19}$$
where $H=A+\mu(L\otimes I_{m})$ and $A=2\,\mathrm{diag}\{a_{i}(t)\}$. Construct the Lyapunov function for the FOMASs as:
$$V_{1}=\frac{1}{2}\left(\frac{\partial F(y)}{\partial y}\right)^{T}H^{-1}\frac{\partial F(y)}{\partial y}+\sum_{i=1}^{N}\sum_{k=1}^{m}\frac{1}{2\gamma_{i,1,k}}\tilde{\vartheta}_{i,1,k}^{T}\tilde{\vartheta}_{i,1,k}=\frac{1}{2}z_{1}^{T}Hz_{1}+\sum_{i=1}^{N}\sum_{k=1}^{m}\frac{1}{2\gamma_{i,1,k}}\tilde{\vartheta}_{i,1,k}^{T}\tilde{\vartheta}_{i,1,k} \tag{20}$$
where $\gamma_{i,1,k}$ is a positive design parameter. From the definitions of $z_{1}$ and H, we have
D α V 1 = z 1 T H ( D α y 1 D α y * ) + i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ ˜ i , 1 , k = i = 1 N μ j N i a i j ( z i , 1 z j , 1 ) + 2 a i ( t ) z i , 1 T ( D α x i , 1 D α y i * ) i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k = i = 1 N μ j N i a i j ( y i y j ) + 2 a i ( t ) y i + b i ( t ) T ( D α x i , 1 D α y i * ) i = 1 N μ j N i a i j ( y i * y j * ) + 2 a i ( t ) y i * + b i ( t ) T ( D α x i , 1 D α y i * ) i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k = i = 1 N μ j N i a i j ( y i y j ) + 2 a i ( t ) y i + b i ( t ) T ( D α x i , 1 D α y i * ) i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k = i = 1 N k = 1 m μ j N i a i j ( x i , 1 , k x j , 1 , k ) + 2 [ a i ( t ) y i + b i ( t ) ] k ( D α x i , 1 , k D α y i , k * ) i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k
where $[a_{i}(t)y_{i}+b_{i}(t)]_{k}$ is the kth element of the vector $a_{i}(t)y_{i}+b_{i}(t)$. According to (6), we have
$$D^{\alpha}x_{i,1,k}=x_{i,2,k}^{*}+w_{i,2,k}+z_{i,2,k}+h_{i,2,k}(X_{i,n}) \tag{22}$$
Substituting D α x i , 1 , k into (21), one has
D α V 1 = i = 1 N k = 1 m { μ j N i a i j ( x i , 1 , k x j , 1 , k ) + 2 [ a i ( t ) y i + b i ( t ) ] k ( x i , 2 , k * + w i , 2 , k + z i , 2 , k + h i , 2 , k ( X i , n ) D α y i , k * ) } i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k
Using the RBFNNs to approximate the unknown nonlinear function $h_{i,2,k}(X_{i,n})-D^{\alpha}y_{i,k}^{*}$ as in (12), it results in
D α V 1 = i = 1 N k = 1 m { μ j N i a i j ( x i , 1 , k x j , 1 , k ) + 2 [ a i ( t ) y i + b i ( t ) ] k ( x i , 2 , k * + w i , 2 , k + z i , 2 , k + ϑ i , 1 , k T φ i , 1 , k ( X i , n ) + ϑ ˜ i , 1 , k T φ i , 1 , k ( X i , n ) + ϵ i , 1 , k ) } i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k
According to Young’s inequality, one obtains
$$\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]w_{i,2,k}\leq\frac{1}{2}\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]^{2}+\frac{1}{2}w_{i,2,k}^{2} \tag{25}$$
$$\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]z_{i,2,k}\leq\frac{1}{2}\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]^{2}+\frac{1}{2}z_{i,2,k}^{2} \tag{26}$$
$$\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]\epsilon_{i,1,k}\leq\frac{1}{2}\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]^{2}+\frac{1}{2}\epsilon_{i,1,k}^{2} \tag{27}$$
Combining (23) with (25)–(27), one obtains
D α V 1 i = 1 N k = 1 m { μ j N i a i j ( x i , 1 , k x j , 1 , k ) + 2 [ a i ( t ) y i + b i ( t ) ] k x i , 2 , k * + ϑ i , 1 , k T φ i , 1 , k ( X i , n ) + ϑ ˜ i , 1 , k T φ i , 1 , k ( X i , n ) + 3 2 μ j N i a i j ( x i , 1 , k x j , 1 , k ) + 2 [ a i ( t ) y i + b i ( t ) ] k 2 + 1 2 w i , 2 , k 2 + 1 2 z i , 2 , k 2 + 1 2 ϵ i , 1 , k 2 } i = 1 N k = 1 m 1 γ i , 1 , k ϑ ˜ i , 1 , k T D α ϑ i , 1 , k
Design the virtual controller $x_{i,2,k}^{*}$ and the adaptive law for $\vartheta_{i,1,k}$ as
$$x_{i,2,k}^{*}=-c_{1}\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]-\vartheta_{i,1,k}^{T}\varphi_{i,1,k}(X_{i,n}) \tag{29}$$
$$D^{\alpha}\vartheta_{i,1,k}=\gamma_{i,1,k}\varphi_{i,1,k}(X_{i,n})\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]-\rho_{i,1,k}\vartheta_{i,1,k} \tag{30}$$
where $c_{1}$ and $\rho_{i,1,k}$ are positive design parameters with $c_{1}>\frac{3}{2}$. According to (20), $z_{1}^{T}HHz_{1}=\left(\frac{\partial F(y)}{\partial y}\right)^{T}\frac{\partial F(y)}{\partial y}$ can be obtained. Thus, substituting $x_{i,2,k}^{*}$ and $D^{\alpha}\vartheta_{i,1,k}$ into (28), it results in
$$\begin{aligned}D^{\alpha}V_{1}\leq&\sum_{i=1}^{N}\sum_{k=1}^{m}\Big\{-\Big(c_{1}-\frac{3}{2}\Big)\Big[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\Big]^{2}+\frac{1}{2}w_{i,2,k}^{2}+\frac{1}{2}z_{i,2,k}^{2}+\frac{1}{2}\epsilon_{i,1,k}^{2}\Big\}+\sum_{i=1}^{N}\sum_{k=1}^{m}\frac{\rho_{i,1,k}}{\gamma_{i,1,k}}\tilde{\vartheta}_{i,1,k}^{T}\vartheta_{i,1,k}\\ \leq&-\Big(c_{1}-\frac{3}{2}\Big)z_{1}^{T}HHz_{1}+\sum_{i=1}^{N}\sum_{k=1}^{m}\frac{\rho_{i,1,k}}{\gamma_{i,1,k}}\tilde{\vartheta}_{i,1,k}^{T}\vartheta_{i,1,k}+\frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{m}\big(w_{i,2,k}^{2}+z_{i,2,k}^{2}+\epsilon_{i,1,k}^{2}\big)\\ \leq&-\frac{2c_{1}-3}{2\lambda_{max}(H^{-1})}\Big(\frac{\partial F(y)}{\partial y}\Big)^{T}H^{-1}\frac{\partial F(y)}{\partial y}+\sum_{i=1}^{N}\sum_{k=1}^{m}\frac{\rho_{i,1,k}}{\gamma_{i,1,k}}\tilde{\vartheta}_{i,1,k}^{T}\vartheta_{i,1,k}+\frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{m}\big(w_{i,2,k}^{2}+z_{i,2,k}^{2}+\epsilon_{i,1,k}^{2}\big)\end{aligned} \tag{31}$$
where $\lambda_{max}(H^{-1})$ is the maximum eigenvalue of the matrix $H^{-1}$.
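For illustration, the step-1 quantities can be computed channel-wise as in the sketch below (my own minimal implementation, not the authors' code; it assumes $a_{ij}=1$ for neighbours and illustrative values of $c_1$, $\gamma_{i,1,k}$, $\rho_{i,1,k}$ and μ): the gradient-type signal appearing in (16)–(18), the virtual controller (29) and the right-hand side of the fractional adaptive law (30).

```python
# Illustrative step-1 computations for one agent i and channel k; not the authors' code.
import numpy as np

def step1_channel(xi1k, neighbor_xj1k, ai_yi_plus_bi_k, theta, phi,
                  mu=1.0, c1=1.6, gamma=1.5, rho=5.0):
    """Return the virtual control x*_{i,2,k} of (29) and D^alpha theta_{i,1,k} of (30)."""
    e_k = mu * sum(xi1k - xj for xj in neighbor_xj1k) + 2.0 * ai_yi_plus_bi_k
    x2_star = -c1 * e_k - theta @ phi            # virtual controller (29)
    dtheta = gamma * phi * e_k - rho * theta     # right-hand side of the adaptive law (30)
    return x2_star, dtheta

theta = np.zeros(5)                              # assumed NN weights and basis values
phi = np.exp(-np.linspace(-1.0, 1.0, 5) ** 2)
print(step1_channel(0.8, [0.7, 1.0], ai_yi_plus_bi_k=0.4, theta=theta, phi=phi))
```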
Based on the FODSC technique, the state variable $r_{i,2,k}$ is generated by the fractional-order filter
$$\eta_{i,2,k}D^{\alpha}r_{i,2,k}+r_{i,2,k}=x_{i,2,k}^{*},\quad r_{i,2,k}(0)=x_{i,2,k}^{*}(0) \tag{32}$$
From (15) and (32), one has
$$D^{\alpha}w_{i,2,k}=-\frac{w_{i,2,k}}{\eta_{i,2,k}}+M_{i,2,k} \tag{33}$$
where $\eta_{i,2,k}$ is a positive design parameter and $M_{i,2,k}$ is a continuous function depending on the variables $x_{i,1,k}$, $x_{j,1,k}$, $z_{i,2,k}$, $z_{j,2,k}$, $w_{i,2,k}$, $w_{j,2,k}$, $\vartheta_{i,1,k}$, $\vartheta_{j,1,k}$, $b_{i}(t)$ and $D^{\alpha}b_{i}(t)$. According to [48,49], there exist constants $\Gamma_{i,2,k}>0$, $i=1,\ldots,N$, such that $|M_{i,2,k}|\leq\Gamma_{i,2,k}$ holds.
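Numerically, the FODSC filter (32) can be realised, for example, with a Grünwald–Letnikov discretisation of $D^{\alpha}$, as in the sketch below (one possible implementation with assumed step size h and filter constant η; the paper does not prescribe a discretisation).

```python
# Illustrative discretisation of the FODSC filter eta * D^alpha r + r = x*, r(0) = x*(0).
import numpy as np

def fodsc_filter(x_star: np.ndarray, alpha: float = 0.98, eta: float = 0.05,
                 h: float = 1e-3) -> np.ndarray:
    """Filter the sampled virtual control x_star and return r."""
    n = len(x_star)
    w = np.ones(n)                                  # GL weights w_j = w_{j-1}*(1-(alpha+1)/j)
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    r = np.empty(n)
    r[0] = x_star[0]                                # initial condition of (32)
    c = eta * h ** (-alpha)
    for k in range(1, n):
        history = np.dot(w[1:k + 1], r[k - 1::-1])  # sum_{j=1}^{k} w_j * r_{k-j}
        r[k] = (x_star[k] - c * history) / (1.0 + c)
    return r

t = np.arange(0.0, 1.0, 1e-3)
r = fodsc_filter(np.sin(2 * np.pi * t))             # filtered surrogate of x*(t)
print(r[:5], r[-1])
```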
Step 2. Recall the error variable $z_{i,2,k}=x_{i,2,k}-r_{i,2,k}$. Taking the fractional derivative of $z_{i,2,k}$, one has
$$D^{\alpha}z_{i,2,k}=D^{\alpha}x_{i,2,k}-D^{\alpha}r_{i,2,k}=x_{i,3,k}+\vartheta_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,n})+\tilde{\vartheta}_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,n})+\epsilon_{i,2,k}-D^{\alpha}r_{i,2,k} \tag{34}$$
From (15), one obtains
$$D^{\alpha}z_{i,2,k}=z_{i,3,k}+x_{i,3,k}^{*}+w_{i,3,k}+\vartheta_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,n})+\tilde{\vartheta}_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,n})+\epsilon_{i,2,k}-D^{\alpha}r_{i,2,k} \tag{35}$$
Let $V_{2}=V_{1}+\frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,2,k}^{2}+\frac{1}{\gamma_{i,2,k}}\tilde{\vartheta}_{i,2,k}^{T}\tilde{\vartheta}_{i,2,k}+w_{i,2,k}^{2}\right)$, where $\gamma_{i,2,k}$ is a positive design parameter. Then we have
$$D^{\alpha}V_{2}=D^{\alpha}V_{1}+\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,2,k}D^{\alpha}z_{i,2,k}+\frac{1}{\gamma_{i,2,k}}\tilde{\vartheta}_{i,2,k}^{T}D^{\alpha}\tilde{\vartheta}_{i,2,k}+w_{i,2,k}D^{\alpha}w_{i,2,k}\right) \tag{36}$$
Substituting (35) into (36), we have
D α V 2 = D α V 1 + i = 1 N k = 1 m [ z i , 2 , k ( z i , 3 , k + x i , 3 , k * + w i , 3 , k + ϑ i , 2 , k T φ i , 2 , k ( X i , n ) + ϑ ˜ i , 2 , k T φ i , 2 , k ( X i , n ) + ϵ i , 2 , k D α r i , 2 , k ) + 1 γ i , 2 , k ϑ ˜ i , 2 , k T D α ϑ ˜ i , 2 , k + w i , 2 , k D α w i , 2 , k ] .
According to Young’s inequality, one has
$$z_{i,2,k}\left(z_{i,3,k}+w_{i,3,k}\right)\leq z_{i,2,k}^{2}+\frac{1}{2}\left(z_{i,3,k}^{2}+w_{i,3,k}^{2}\right) \tag{38}$$
$$z_{i,2,k}\epsilon_{i,2,k}\leq\frac{1}{2}z_{i,2,k}^{2}+\frac{1}{2}\epsilon_{i,2,k}^{2} \tag{39}$$
Substituting (38) and (39) into (37), it results in
D α V 2 D α V 1 + i = 1 N k = 1 m [ z i , 2 , k x i , 3 , k * + ϑ i , 2 , k T φ i , 2 , k ( X i , n ) + ϑ ˜ i , 2 , k T φ i , 2 , k ( X i , n ) D α r i , 2 , k + 3 2 z i , 2 , k 2 + 1 2 w i , 3 , k 2 + 1 2 z i , 3 , k 2 + 1 2 ϵ i , 2 , k 2 1 γ i , 2 , k ϑ ˜ i , 2 , k T D α ϑ i , 2 , k + w i , 2 , k D α w i , 2 , k ] .
Design the virtual controller $x_{i,3,k}^{*}$ and the update law for $\vartheta_{i,2,k}$ as follows
$$x_{i,3,k}^{*}=-c_{i,2,k}z_{i,2,k}-2z_{i,2,k}-\vartheta_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,n})+\frac{x_{i,2,k}^{*}-r_{i,2,k}}{\eta_{i,2,k}} \tag{41}$$
$$D^{\alpha}\vartheta_{i,2,k}=\gamma_{i,2,k}\varphi_{i,2,k}(X_{i,n})z_{i,2,k}-\rho_{i,2,k}\vartheta_{i,2,k} \tag{42}$$
where $c_{i,2,k}$ and $\rho_{i,2,k}$ are positive design parameters.
Substituting Equations (31), (33), (41) and (42) into (40), it results in
D α V 2 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y + i = 1 N k = 1 m ρ i , 1 , k γ i , 1 , k ϑ ˜ i , 1 , k T ϑ i , 1 , k + 1 2 i = 1 N k = 1 m ( w i , 2 , k 2 + z i , 2 , k 2 + ϵ i , 1 , k 2 ) + i = 1 N k = 1 m [ z i , 2 , k ( c i , 2 , k z i , 2 , k 2 z i , 2 , k ϑ i , 2 , k T φ i , 2 , k ( X i , n ) + x i , 2 , k * r i , 2 , k η i , 2 , k + ϑ i , 2 , k T φ i , 2 , k ( X i , n ) + ϑ ˜ i , 2 , k T φ i , 2 , k ( X i , n ) D α r i , 2 , k ) + 3 2 z i , 2 , k 2 + 1 2 w i , 3 , k 2 + 1 2 z i , 3 , k 2 + 1 2 ϵ i , 2 , k 2 1 γ i , 2 , k ϑ ˜ i , 2 , k T ( γ i , 2 , k φ i , 2 , k ( X i , n ) z i , 2 , k ρ i , 2 , k ϑ i , 2 , k ) + w i , 2 , k w i , 2 , k η i , 2 , k + M i , 2 , k ] .
Using Young's inequality, we have $w_{i,2,k}M_{i,2,k}\leq\frac{1}{2}w_{i,2,k}^{2}+\frac{1}{2}\Gamma_{i,2,k}^{2}$. Combining this inequality with (43), one obtains
D α V 2 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y i = 1 N k = 1 m c i , 2 , k z i , 2 , k 2 + i = 1 N k = 1 m ρ i , 1 , k γ i , 1 , k ϑ ˜ i , 1 , k T ϑ i , 1 , k + i = 1 N k = 1 m ρ i , 2 , k γ i , 2 , k ϑ ˜ i , 2 , k T ϑ i , 2 , k i = 1 N k = 1 m 1 η i , 2 , k 1 w i , 2 , k 2 + 1 2 i = 1 N k = 1 m ϵ i , 1 , k 2 + ϵ i , 2 , k 2 + 1 2 i = 1 N k = 1 m Γ i , 2 2 + 1 2 i = 1 N k = 1 m z i , 3 , k 2 + w i , 3 , k 2
By using the FODSC technique again, one has
$$\eta_{i,3,k}D^{\alpha}r_{i,3,k}+r_{i,3,k}=x_{i,3,k}^{*},\quad r_{i,3,k}(0)=x_{i,3,k}^{*}(0) \tag{45}$$
From (15) and (45), we obtain
$$D^{\alpha}w_{i,3,k}=-\frac{w_{i,3,k}}{\eta_{i,3,k}}+M_{i,3,k} \tag{46}$$
where $\eta_{i,3,k}$ is a positive design parameter, $M_{i,3,k}=-D^{\alpha}x_{i,3,k}^{*}$, and there exists a positive constant $\Gamma_{i,3,k}$ such that $|M_{i,3,k}|\leq\Gamma_{i,3,k}$.
Step p. The p-th error variable is defined as $z_{i,p,k}=x_{i,p,k}-r_{i,p,k}$, and combining it with (15), we have
$$D^{\alpha}z_{i,p,k}=D^{\alpha}x_{i,p,k}-D^{\alpha}r_{i,p,k}=z_{i,p+1,k}+x_{i,p+1,k}^{*}+w_{i,p+1,k}+\vartheta_{i,p,k}^{T}\varphi_{i,p,k}(X_{i,n})+\tilde{\vartheta}_{i,p,k}^{T}\varphi_{i,p,k}(X_{i,n})+\epsilon_{i,p,k}-D^{\alpha}r_{i,p,k} \tag{47}$$
Through the FODSC technique, the next fractional-order filter is obtained as
$$\eta_{i,p,k}D^{\alpha}r_{i,p,k}+r_{i,p,k}=x_{i,p,k}^{*},\quad r_{i,p,k}(0)=x_{i,p,k}^{*}(0) \tag{48}$$
According to Equations (15) and (48), we have
$$D^{\alpha}w_{i,p,k}=-\frac{w_{i,p,k}}{\eta_{i,p,k}}+M_{i,p,k} \tag{49}$$
where $\eta_{i,p,k}$ is a positive design parameter, $M_{i,p,k}=-D^{\alpha}x_{i,p,k}^{*}$, and there exists a positive constant $\Gamma_{i,p,k}$ such that $|M_{i,p,k}|\leq\Gamma_{i,p,k}$.
Let $V_{p}=V_{p-1}+\frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,p,k}^{2}+\frac{1}{\gamma_{i,p,k}}\tilde{\vartheta}_{i,p,k}^{T}\tilde{\vartheta}_{i,p,k}+w_{i,p,k}^{2}\right)$, where $\gamma_{i,p,k}$ is a positive design parameter. Then we have
$$D^{\alpha}V_{p}=D^{\alpha}V_{p-1}+\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,p,k}D^{\alpha}z_{i,p,k}+\frac{1}{\gamma_{i,p,k}}\tilde{\vartheta}_{i,p,k}^{T}D^{\alpha}\tilde{\vartheta}_{i,p,k}+w_{i,p,k}D^{\alpha}w_{i,p,k}\right) \tag{50}$$
Substituting (49) into (50), it results in
D α V p = D α V p 1 + i = 1 N k = 1 m [ z i , p , k ( z i , p + 1 , k + x i , p + 1 , k * + w i , p + 1 , k + ϑ i , p , k T φ i , p , k ( X i , n ) + ϑ ˜ i , p , k T φ i , p , k ( X i , n ) + ϵ i , p , k D α r i , p , k ) + 1 γ i , p , k ϑ ˜ i , p , k T D α ϑ ˜ i , p , k + w i , p , k D α w i , p , k ] .
According to Young’s inequality, one has
$$z_{i,p,k}\left(z_{i,p+1,k}+w_{i,p+1,k}\right)\leq z_{i,p,k}^{2}+\frac{1}{2}\left(z_{i,p+1,k}^{2}+w_{i,p+1,k}^{2}\right) \tag{52}$$
$$z_{i,p,k}\epsilon_{i,p,k}\leq\frac{1}{2}z_{i,p,k}^{2}+\frac{1}{2}\epsilon_{i,p,k}^{2} \tag{53}$$
Substituting (52) and (53) into (51), we obtain
D α V p = D α V p 1 + i = 1 N k = 1 m [ z i , p , k x i , p + 1 , k * + ϑ i , p , k T φ i , p , k ( X i , n ) + ϑ ˜ i , p , k T φ i , p , k ( X i , n ) D α r i , p , k + 3 2 z i , p , k 2 + 1 2 z i , p + 1 , k 2 + 1 2 ϵ i , p , k 2 + w i , p + 1 , k 2 + 1 γ i , p , k ϑ ˜ i , p , k T D α ϑ ˜ i , p , k + w i , p , k D α w i , p , k ] .
Design the virtual controller $x_{i,p+1,k}^{*}$ and the update law for $\vartheta_{i,p,k}$ as follows
$$x_{i,p+1,k}^{*}=-c_{i,p,k}z_{i,p,k}-2z_{i,p,k}-\vartheta_{i,p,k}^{T}\varphi_{i,p,k}(X_{i,n})+\frac{x_{i,p,k}^{*}-r_{i,p,k}}{\eta_{i,p,k}} \tag{55}$$
$$D^{\alpha}\vartheta_{i,p,k}=\gamma_{i,p,k}\varphi_{i,p,k}(X_{i,n})z_{i,p,k}-\rho_{i,p,k}\vartheta_{i,p,k} \tag{56}$$
where $c_{i,p,k}$ and $\rho_{i,p,k}$ are positive design parameters. Substituting Equations (49), (55) and (56) into (54), we then have
D α V p D α V p 1 + i = 1 N k = 1 m [ z i , p , k ( c i , p , k z i , p , k 2 z i , p , k ϑ i , p , k T φ i , p , k ( X i , n ) + x i , p , k * r i , p , k η i , p , k + ϑ i , p , k T φ i , p , k ( X i , n ) + ϑ ˜ i , p , k T φ i , p , k ( X i , n ) D α r i , p , k ) + 3 2 z i , p , k 2 + 1 2 ( z i , p , k + 1 2 + w i , p , k + 1 2 + ϵ i , p , k 2 ) 1 γ i , p , k ϑ ˜ i , p , k T γ i , p , k φ i , p , k ( X i , n ) z i , p , k ρ i , p , k ϑ i , p , k + w i , p , k w i , p , k η i , p , k + M i , p , k ] .
Through (31) and (44), one obtains
D α V p 1 i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 p 1 c i , l , k z i , l , k 2 + l = 1 p 1 ρ i , l , k γ i , l , k ϑ ˜ i , l , k T ϑ i , l , k + 1 2 l = 1 p 1 ϵ i , l , k 2 l = 2 p 1 1 η i , l , k 1 w i , l , k 2 + 1 2 l = 2 p 1 Γ i , l , k 2 + 1 2 z i , p , k 2 + w i , p , k 2 ] .
Using Young's inequality, one has $w_{i,p,k}M_{i,p,k}\leq\frac{1}{2}w_{i,p,k}^{2}+\frac{1}{2}\Gamma_{i,p,k}^{2}$. Combined with (57) and (58), we have
D α V p i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 p c i , l , k z i , l , k 2 + l = 1 p ρ i , l , k γ i , l , k ϑ ˜ i , l , k T ϑ i , l , k l = 2 p 1 η i , l , k 1 w i , l , k 2 + 1 2 l = 1 p ϵ i , l , k 2 + 1 2 l = 2 p Γ i , l , k 2 + 1 2 z i , p + 1 , k 2 + w i , p + 1 , k 2 ] .
Step n. Define the n-th error variable as $z_{i,n,k}=x_{i,n,k}-r_{i,n,k}$. Then, one has
$$D^{\alpha}z_{i,n,k}=D^{\alpha}x_{i,n,k}-D^{\alpha}r_{i,n,k}=u_{i,k}+\vartheta_{i,n,k}^{T}\varphi_{i,n,k}(X_{i,n})+\tilde{\vartheta}_{i,n,k}^{T}\varphi_{i,n,k}(X_{i,n})+\epsilon_{i,n,k}-D^{\alpha}r_{i,n,k} \tag{60}$$
Through the FODSC technique, the following fractional-order filter can be obtained
$$\eta_{i,n,k}D^{\alpha}r_{i,n,k}+r_{i,n,k}=x_{i,n,k}^{*},\quad r_{i,n,k}(0)=x_{i,n,k}^{*}(0) \tag{61}$$
By Equation (61), we have
$$D^{\alpha}w_{i,n,k}=-\frac{w_{i,n,k}}{\eta_{i,n,k}}+M_{i,n,k} \tag{62}$$
where $\eta_{i,n,k}$ is a positive design parameter, $M_{i,n,k}=-D^{\alpha}x_{i,n,k}^{*}$, and there exists a positive constant $\Gamma_{i,n,k}$ such that $|M_{i,n,k}|\leq\Gamma_{i,n,k}$. Let $V_{n}=V_{n-1}+\frac{1}{2}\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,n,k}^{2}+\frac{1}{\gamma_{i,n,k}}\tilde{\vartheta}_{i,n,k}^{T}\tilde{\vartheta}_{i,n,k}+w_{i,n,k}^{2}\right)$, where $\gamma_{i,n,k}$ is a positive design parameter. Then we obtain
$$D^{\alpha}V_{n}=D^{\alpha}V_{n-1}+\sum_{i=1}^{N}\sum_{k=1}^{m}\left(z_{i,n,k}D^{\alpha}z_{i,n,k}+\frac{1}{\gamma_{i,n,k}}\tilde{\vartheta}_{i,n,k}^{T}D^{\alpha}\tilde{\vartheta}_{i,n,k}+w_{i,n,k}D^{\alpha}w_{i,n,k}\right) \tag{63}$$
Substituting (62) into (63), it results in
D α V n = D α V n 1 + i = 1 N k = 1 m [ z i , n , k u i , k + ϑ i , n , k T φ i , n , k ( X i , n ) + ϑ ˜ i , n , k T φ i , n , k ( X i , n ) + ϵ i , n , k D α r i , n , k + 1 γ i , n , k ϑ ˜ i , n , k T D α ϑ ˜ i , n , k + w i , n , k D α w i , n , k ] .
By employing Young's inequality, one has $z_{i,n,k}\epsilon_{i,n,k}\leq\frac{1}{2}z_{i,n,k}^{2}+\frac{1}{2}\epsilon_{i,n,k}^{2}$. Combining this with (64), we obtain
D α V n = D α V n 1 + i = 1 N k = 1 m [ z i , n , k u i , k + ϑ i , n , k T φ i , n , k ( X i , n ) + ϑ ˜ i , n , k T φ i , n , k ( X i , n ) D α r i , n , k + 1 2 z i , n , k 2 + 1 2 ϵ i , n , k 2 + 1 γ i , n , k ϑ ˜ i , n , k T D α ϑ ˜ i , n , k + w i , n , k D α w i , n , k ] .
Design the auxiliary control law $\bar{x}_{i,n,k}^{*}$, the adaptive law for $\vartheta_{i,n,k}$ and the control signal $\varpi_{i,k}(t)$ to be quantized as
$$\bar{x}_{i,n,k}^{*}=-c_{i,n,k}z_{i,n,k}-\frac{3}{2}z_{i,n,k}-\vartheta_{i,n,k}^{T}\varphi_{i,n,k}(X_{i,n})+\frac{x_{i,n,k}^{*}-r_{i,n,k}}{\eta_{i,n,k}} \tag{66}$$
$$D^{\alpha}\vartheta_{i,n,k}=\gamma_{i,n,k}\varphi_{i,n,k}(X_{i,n})z_{i,n,k}-\rho_{i,n,k}\vartheta_{i,n,k} \tag{67}$$
$$\varpi_{i,k}(t)=\frac{1}{1-d}\left[\bar{x}_{i,n,k}^{*}-\frac{z_{i,n,k}\pi_{i,1,k}\bar{x}_{i,n,k}^{*2}}{\sqrt{z_{i,n,k}^{2}\pi_{i,1,k}^{2}\bar{x}_{i,n,k}^{*2}+\pi_{i,2,k}^{2}}}-\frac{z_{i,n,k}\Pi_{i,1,k}^{2}}{\sqrt{z_{i,n,k}^{2}\Pi_{i,1,k}^{2}+\pi_{i,2,k}^{2}}}\right] \tag{68}$$
where $c_{i,n,k}$, $\rho_{i,n,k}$, $\pi_{i,1,k}$, $\pi_{i,2,k}$ and $\Pi_{i,1,k}$ are positive design parameters. Based on the hysteresis quantizer and Lemma 5, one obtains
$$\Xi(\varpi_{i,k})\varpi_{i,k}(t)\leq\bar{x}_{i,n,k}^{*}-\frac{z_{i,n,k}\pi_{i,1,k}\bar{x}_{i,n,k}^{*2}}{\sqrt{z_{i,n,k}^{2}\pi_{i,1,k}^{2}\bar{x}_{i,n,k}^{*2}+\pi_{i,2,k}^{2}}}-\frac{z_{i,n,k}\Pi_{i,1,k}^{2}}{\sqrt{z_{i,n,k}^{2}\Pi_{i,1,k}^{2}+\pi_{i,2,k}^{2}}} \tag{69}$$
The event-triggered controller $u_{i,k}(t)$ is designed as
$$u_{i,k}(t)=\upsilon_{i,k}\big(\varpi_{i,k}(t_{\iota})\big),\quad t\in[t_{\iota},t_{\iota+1}) \tag{70}$$
and the triggering condition for the sampling instants is designed as
$$t_{\iota+1}=\inf\left\{t\in\mathbb{R}\ \big|\ |\Delta_{i,k}(t)|\geq\pi_{i,1,k}|u_{i,k}(t)|+\Upsilon_{i,1,k}\right\} \tag{71}$$
where $\Delta_{i,k}(t)=\upsilon_{i,k}(\varpi_{i,k}(t))-u_{i,k}(t)$ is the event sampling error, $0<\pi_{i,1,k}<1$ and $\Upsilon_{i,1,k}>0$ are design parameters, and $t_{\iota}$, $\iota\in\mathbb{Z}^{+}$, is the update instant of the controller.
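The triggering rule (70)–(71) amounts to holding the last transmitted quantized value until the deviation of the current quantized signal exceeds the mixed relative/absolute threshold. The sketch below illustrates this logic (my own simplified illustration: a uniform quantizer is used as a stand-in for the hysteresis quantizer of Section 3.1, and the test signal and thresholds are assumptions).

```python
# Illustrative event-triggered transmission logic following the rule (71); not the authors' code.
import math

class EventTriggeredInput:
    """Hold the last transmitted quantized input until the triggering rule fires."""
    def __init__(self, quantizer, pi1: float = 0.5, upsilon1: float = 2.0):
        self.q, self.pi1, self.upsilon1 = quantizer, pi1, upsilon1
        self.u_held = 0.0                     # u_{i,k}(t), constant on [t_iota, t_{iota+1})
        self.events = 0

    def update(self, w: float) -> float:
        qw = self.q(w)                        # current quantized value of the control signal
        if abs(qw - self.u_held) >= self.pi1 * abs(self.u_held) + self.upsilon1:
            self.u_held = qw                  # event: transmit and hold the new value
            self.events += 1
        return self.u_held

# stand-in uniform quantizer for the demo; the paper uses the hysteresis quantizer of Section 3.1
et = EventTriggeredInput(quantizer=lambda w: 0.25 * round(w / 0.25))
samples = [5.0 * math.sin(0.01 * k) for k in range(1000)]
applied = [et.update(w) for w in samples]
print("transmissions:", et.events, "out of", len(samples), "samples")
```

On this slowly varying test signal only a handful of transmissions occur, which is the communication saving the event-triggered mechanism is designed to deliver.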
According to (71), one has
$$\Delta_{i,k}(t)=\upsilon_{i,k}(\varpi_{i,k}(t))-u_{i,k}(t)=\tau_{i,1,k}(t)\pi_{i,1,k}u_{i,k}(t)+\tau_{i,2,k}(t)\Upsilon_{i,1,k} \tag{72}$$
where $\tau_{i,1,k}(t)$ and $\tau_{i,2,k}(t)$ are time-varying parameters satisfying $|\tau_{i,1,k}(t)|\leq 1$ and $|\tau_{i,2,k}(t)|\leq 1$. Thus, we have
$$u_{i,k}(t)=\frac{\upsilon_{i,k}(\varpi_{i,k}(t))-\tau_{i,2,k}(t)\Upsilon_{i,1,k}}{1+\tau_{i,1,k}(t)\pi_{i,1,k}} \tag{73}$$
Combining (73), (62) and (67) with (65), it results in
D α V n = D α V n 1 + i = 1 N k = 1 m [ z i , n , k ( υ i , k ( ϖ i , k ( t ) ) τ i , 2 , k ( t ) Υ i , 1 , k 1 + τ i , 1 , k ( t ) π i , 1 , k + x i , n , k * x i , n , k * + ϑ i , n , k T φ i , n , k ( X i , n ) + ϑ ˜ i , n , k T φ i , n , k ( X i , n ) D α r i , n , k ) + 1 2 z i , n , k 2 + 1 2 ϵ i , n , k 2 1 γ i , n , k ϑ ˜ i , n , k T ( γ i , n , k φ i , p , k ( X i , k ) z i , n , k ρ i , n , k ϑ i , n , k ) + w i , n , k w i , n , k η i , n , k + M i , n , k ] .
Through Young's inequality, we have $w_{i,n,k}M_{i,n,k}\leq\frac{1}{2}w_{i,n,k}^{2}+\frac{1}{2}\Gamma_{i,n,k}^{2}$. Then, from (66), one has
D α V n = D α V n 1 + i = 1 N k = 1 m [ z i , n , k υ i , k ( ϖ i , k ( t ) ) τ i , 2 , k ( t ) Υ i , 1 , k 1 + τ i , 1 , k ( t ) π i , 1 , k + x i , n , k * c i , n , k z i , n , k 2 3 2 z i , n , k 2 + 1 2 z i , n , k 2 + 1 2 ϵ i , n , k 2 + ρ i , n , k γ i , n , k ϑ ˜ i , n , k T ϑ i , n , k w i , n , k 2 η i , n , k + 1 2 w i , n , k 2 + 1 2 Γ i , n , k 2 ] .
According to Lemma 3, (68) and (69), one has
D α V n = D α V n 1 + i = 1 N k = 1 m [ c i , n , k z i , n , k 2 3 2 z i , n , k 2 + 1 2 z i , n , k 2 + 1 2 ϵ i , n , k 2 + ρ i , n , k γ i , n , k ϑ ˜ i , n , k T ϑ i , n , k w i , n , k 2 η i , n , k + 1 2 w i , n , k 2 + 1 2 Γ i , n , k 2 + 1 2 z i , n , k 2 + ϖ min 2 2 ( 1 π i , 1 , k ) 2 + 2 π i , 2 , k 2 1 π i , 1 , k ] .
From (44) and (59), one obtains
D α V n 1 i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 n 1 c i , l , k z i , l , k 2 + l = 1 n 1 ρ i , l , k γ i , l , k ϑ ˜ i , l , k T ϑ i , l , k + 1 2 l = 1 n 1 ϵ i , l , k 2 l = 2 n 1 1 η i , l , k 1 w i , l , k 2 + 1 2 l = 2 n 1 Γ i , l , k 2 + 1 2 z i , n , k 2 + w i , n , k 2 ] .
Thus, combining (76) and (77), it results in
D α V n i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 n c i , l , k z i , l , k 2 + l = 1 n ρ i , l , k γ i , l , k ϑ ˜ i , l , k T ϑ i , l , k + 1 2 l = 1 n ϵ i , l , k 2 l = 2 n 1 η i , l , k 1 w i , l , k 2 + 1 2 l = 2 n Γ i , l , k 2 + ϖ min 2 2 ( 1 π i , 1 , k ) 2 + 2 π i , 2 , k 2 1 π i , 1 , k ] .
From Young’s inequality, one has
$$\tilde{\vartheta}_{i,l,k}^{T}\vartheta_{i,l,k}\leq-\frac{1}{2}\tilde{\vartheta}_{i,l,k}^{T}\tilde{\vartheta}_{i,l,k}+\frac{1}{2}\vartheta_{i,l,k}^{*T}\vartheta_{i,l,k}^{*} \tag{79}$$
Therefore, rewrite (78) as
D α V n i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 n c i , l , k z i , l , k 2 l = 1 n ρ i , l , k 2 γ i , l , k ϑ ˜ i , l , k T ϑ ˜ i , l , k + 1 2 l = 1 n ϵ i , l , k 2 l = 2 n 1 η i , l , k 1 w i , l , k 2 + 1 2 l = 2 n Γ i , l , k 2 + ϖ min 2 2 ( 1 π i , 1 , k ) 2 + 2 π i , 2 , k 2 1 π i , 1 , k + l = 1 n ρ i , l , k 2 γ i , l , k ϑ i , l , k * T ϑ i , l , k * ] .
Denote
$$\varepsilon=\sum_{i=1}^{N}\sum_{k=1}^{m}\left[\frac{1}{2}\sum_{l=2}^{n}\Gamma_{i,l,k}^{2}+\frac{\varpi_{\min}^{2}}{2(1-\pi_{i,1,k})^{2}}+\frac{2\pi_{i,2,k}^{2}}{1-\pi_{i,1,k}}+\sum_{l=1}^{n}\frac{\rho_{i,l,k}}{2\gamma_{i,l,k}}\vartheta_{i,l,k}^{*T}\vartheta_{i,l,k}^{*}+\frac{1}{2}\sum_{l=1}^{n}\epsilon_{i,l,k}^{2}\right] \tag{81}$$
Accordingly, the Equation (80) can be rewritten as follows
D α V n i = 1 N k = 1 m [ 2 c 1 3 2 λ m a x H 1 F ( y ) y T H 1 F ( y ) y l = 2 n c i , l , k z i , l , k 2 l = 1 n ρ i , l , k 2 γ i , l , k ϑ ˜ i , l , k T ϑ ˜ i , l , k l = 2 n 1 η i , l , k 1 w i , l , k 2 ] + ε
where $\frac{2c_{1}-3}{2\lambda_{max}(H^{-1})}>0$, $c_{i,l,k}>0$, $\frac{\rho_{i,l,k}}{2\gamma_{i,l,k}}>0$ and $1-\frac{1}{\eta_{i,l,k}}>0$. Define $\Theta=\min\left\{\frac{2c_{1}-3}{\lambda_{max}(H^{-1})},\,2c_{i,l,k},\,\frac{\rho_{i,l,k}}{\gamma_{i,l,k}},\,2\left(1-\frac{1}{\eta_{i,l,k}}\right)\right\}$. Then, (82) becomes
$$D^{\alpha}V_{n}(t,x)\leq-\Theta V_{n}(t,x)+\varepsilon \tag{83}$$
From (83) and Lemma 4, it results in
$$\lim_{t\to\infty}V_{n}(t)\leq\frac{\varepsilon\mu}{\Theta} \tag{84}$$
According to the Lyapunov function $V_{1}$, we obtain that $\frac{1}{2}z_{1}^{T}Hz_{1}\leq\frac{\varepsilon\mu}{\Theta}$. Thus, one has $\|z_{1}\|\leq\sqrt{\frac{2\varepsilon\mu}{\Theta}}$. Since $z_{1}=y-y^{*}$, $\|y-y^{*}\|\leq\sqrt{\frac{2\varepsilon\mu}{\Theta}}$ holds. It can therefore be concluded that the error between the agents' outputs and the optimal trajectory is bounded. From the definition of Θ, one concludes that as the design parameters $c_{1}$ and $c_{i,l,k}$ increase, the value of $\sqrt{\frac{2\varepsilon\mu}{\Theta}}$ decreases, which means that sufficiently large parameters $c_{1}$ and $c_{i,l,k}$ make the error sufficiently small. This completes the proof of Theorem 1. □
Next, we prove that the Zeno phenomenon is avoided.
From $\Delta_{i,k}(t)=\upsilon_{i,k}(\varpi_{i,k}(t))-u_{i,k}(t)$, one has $D^{\alpha}|\Delta_{i,k}(t)|=\mathrm{sign}(\Delta_{i,k}(t))D^{\alpha}\Delta_{i,k}(t)\leq|D^{\alpha}\upsilon_{i,k}(\varpi_{i,k}(t))|\leq(1+d)|D^{\alpha}\varpi_{i,k}(t)|$. From the expression of the control signal $\varpi_{i,k}(t)$, it is known that $D^{\alpha}\varpi_{i,k}(t)$ is bounded on the closed interval $[0,t]$; thus, there exists a positive constant h such that $|D^{\alpha}\varpi_{i,k}(t)|<h$. Since $\Delta_{i,k}(t_{\iota})=0$ and $\lim_{t\to t_{\iota+1}}|\Delta_{i,k}(t)|\geq\Upsilon_{i,1,k}$, there exists $t^{*}$ such that $t^{*}\geq\Upsilon_{i,1,k}/h$. Therefore, there exists $t^{*}>0$ such that $t_{\iota+1}-t_{\iota}\geq t^{*}$ for all $\iota\in\mathbb{Z}^{+}$, and the Zeno phenomenon will not occur.
Remark 2.
In contrast to [7,8,50], where the high-order DOP is investigated for single-input–single-output agents, the agents in this paper are described by MIMO dynamics, which means that the developed control protocol suits many practical engineering applications, such as marine surface vehicles, unmanned aerial vehicles and wheeled mobile robots.

5. Simulation

In this section, a simulation example is presented to verify the theoretical results above. Construct a connected undirected graph with five agents (see Figure 1). The dynamics of each MIMO agent are described as
$$\begin{cases}D^{\alpha}x_{i,1,k}(t)=x_{i,2,k}+h_{i,1,k}(X_{i,2})\\ D^{\alpha}x_{i,2,k}(t)=u_{i,k}(t)+h_{i,2,k}(X_{i,2})\\ y_{i,k}=x_{i,1,k}\end{cases} \tag{85}$$
where i = 1 , , 5 , k = 1 , 2 , X i , 2 = [ x i , 1 T , x i , 2 T ] T , h 1 , 1 , 1 ( X 1 , 2 ) = 0.02 x 1 , 1 , 1 0.05 x 1 , 2 , 1 , h 1 , 2 , 1 ( X 1 , 2 ) = 0.02 x 1 , 1 , 1 + 0.01 x 1 , 2 , 1 0.04 x 1 , 2 , 2 , h 1 , 1 , 2 ( X 1 , 2 ) = 0.05 x 1 , 1 , 2 + 0.02 x 1 , 1 , 1 ,
h 1 , 2 , 2 ( X 1 , 2 ) = 0.02 x 1 , 1 , 2 0.04 x 1 , 2 , 2 + 0.01 x 1 , 1 , 1 , h 2 , 1 , 1 ( X 2 , 2 ) = 0.03 x 2 , 1 , 1 + 0.01 x 2 , 2 , 1 ,
h 2 , 2 , 1 ( X 2 , 2 ) = 0.01 x 2 , 1 , 1 + 0.03 x 2 , 2 , 1 , h 2 , 1 , 2 ( X 2 , 2 ) = 0.03 x 2 , 1 , 2 0.01 x 2 , 2 , 2 , h 2 , 2 , 2 ( X 2 , 2 ) = 0.01 x 2 , 1 , 2 0.03 x 2 , 2 , 2 , h 3 , 1 , 1 ( X 3 , 2 ) = 0.05 x 3 , 1 , 1 + 0.05 sin ( x 3 , 2 , 1 ) , h 3 , 2 , 1 ( X 3 , 2 ) = x 3 , 1 , 1 + 0.08 x 3 , 2 , 1 x 3 , 1 , 1 2 sin ( x 3 , 2 , 1 ) 0.05 x 3 , 1 , 2 , h 3 , 1 , 2 ( X 3 , 2 ) = 0.05 x 3 , 1 , 2 0.05 sin ( x 3 , 2 , 2 ) ,
h 3 , 2 , 2 ( X 3 , 2 ) = x 3 , 1 , 2 0.08 x 3 , 2 , 2 x 3 , 1 , 2 2 sin ( x 3 , 2 , 2 ) 0.05 x 3 , 1 , 1 , h 4 , 1 , 1 ( X 4 , 2 ) = 0.06 x 4 , 1 , 1 0.01 x 4 , 1 , 1 x 4 , 2 , 1 , h 4 , 2 , 1 ( X 4 , 2 ) = x 4 , 1 , 1 0.01 x 4 , 2 , 1 + 0.02 x 4 , 1 , 1 2 0.01 x 4 , 1 , 1 x 4 , 2 , 1 , h 4 , 1 , 2 ( X 4 , 2 ) = 0.02 x 4 , 1 , 2 + 0.01 x 4 , 1 , 2 x 4 , 2 , 2 , h 4 , 2 , 2 ( X 4 , 2 ) = x 4 , 1 , 1 + 0.01 x 4 , 2 , 2 0.02 x 4 , 1 , 2 2 + 0.01 x 4 , 1 , 2 x 4 , 2 , 2 ,
h 5 , 1 , 1 ( X 5 , 2 ) = 0.02 ( x 5 , 1 , 1 x 5 , 1 , 2 ) + 0.02 x 5 , 1 , 2 , h 5 , 2 , 1 ( X 5 , 2 ) = 0.1 x 5 , 1 , 1 + 0.02 x 5 , 2 , 1 2 + sin ( 0.01 x 5 , 1 , 1 2 + 0.04 x 5 , 2 , 1 2 ) + sin ( 0.01 x 5 , 1 , 2 2 0.04 x 5 , 2 , 2 2 ) , h 5 , 1 , 2 ( X 5 , 2 ) = 0.02 ( x 5 , 1 , 2 x 5 , 1 , 1 ) + 0.02 x 5 , 1 , 1 and h 5 , 2 , 2 ( X 5 , 2 ) = 0.1 x 5 , 1 , 2 0.02 x 5 , 2 , 2 2 + sin ( 0.01 x 5 , 1 , 1 2 + 0.04 x 5 , 2 , 1 2 ) + sin ( 0.01 x 5 , 1 , 2 2 0.04 x 5 , 2 , 2 2 ) .
Figure 1. Communication topology.
Construct the local objective function for each agent as
$$f_{i}(y_{i})=\left(y_{i,1}-(0.9+0.05i)\sin(t)\right)^{2}+\left(y_{i,2}+(0.9+0.05i)\cos(t)\right)^{2} \tag{86}$$
and the global objective function is
$$f(y)=\sum_{i=1}^{5}f_{i}(y_{i}) \tag{87}$$
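As a quick sanity check (not part of the paper): under exact consensus $y_{i}=y$ for all agents, the global cost (86)–(87) is minimised by $y_{1}=\bar{s}\sin(t)$ and $y_{2}=-\bar{s}\cos(t)$ with $\bar{s}=\frac{1}{5}\sum_{i=1}^{5}(0.9+0.05i)=1.05$, i.e., the agents should track a circle of radius 1.05. The sketch below verifies this numerically.

```python
# Illustrative check of the consensus-constrained minimiser of the simulation objective.
import numpy as np

s = 0.9 + 0.05 * np.arange(1, 6)                  # agent-specific amplitudes, i = 1..5
t = np.linspace(0.0, 2 * np.pi, 7)

def global_cost(y1, y2, t):
    return np.sum((y1 - s[:, None] * np.sin(t)) ** 2 + (y2 + s[:, None] * np.cos(t)) ** 2, axis=0)

y1_opt, y2_opt = s.mean() * np.sin(t), -s.mean() * np.cos(t)
print("radius of the optimal consensus trajectory:", s.mean())            # 1.05
print("cost at the optimum:", np.round(global_cost(y1_opt, y2_opt, t), 4))
print("cost at a perturbed point:", np.round(global_cost(y1_opt + 0.1, y2_opt, t), 4))
```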
According to Theorem 1, design the virtual controller, the controller and the adaptive laws as
$$x_{i,2,k}^{*}=-c_{1}\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]-\vartheta_{i,1,k}^{T}\varphi_{i,1,k}(X_{i,2}) \tag{88}$$
$$\bar{x}_{i,2,k}^{*}=-c_{i,2,k}z_{i,2,k}-\frac{3}{2}z_{i,2,k}-\vartheta_{i,2,k}^{T}\varphi_{i,2,k}(X_{i,2})+\frac{x_{i,2,k}^{*}-r_{i,2,k}}{\eta_{i,2,k}} \tag{89}$$
$$\varpi_{i,k}(t)=\frac{1}{1-d}\left[\bar{x}_{i,2,k}^{*}-\frac{z_{i,2,k}\pi_{i,1,k}\bar{x}_{i,2,k}^{*2}}{\sqrt{z_{i,2,k}^{2}\pi_{i,1,k}^{2}\bar{x}_{i,2,k}^{*2}+\pi_{i,2,k}^{2}}}-\frac{z_{i,2,k}\Pi_{i,1,k}^{2}}{\sqrt{z_{i,2,k}^{2}\Pi_{i,1,k}^{2}+\pi_{i,2,k}^{2}}}\right] \tag{90}$$
$$u_{i,k}(t)=\upsilon_{i,k}\big(\varpi_{i,k}(t_{\iota})\big),\quad t\in[t_{\iota},t_{\iota+1}) \tag{91}$$
$$D^{\alpha}\vartheta_{i,1,k}=\gamma_{i,1,k}\varphi_{i,1,k}(X_{i,2})\left[\mu\sum_{j\in N_{i}}a_{ij}(x_{i,1,k}-x_{j,1,k})+2[a_{i}(t)y_{i}+b_{i}(t)]_{k}\right]-\rho_{i,1,k}\vartheta_{i,1,k} \tag{92}$$
$$D^{\alpha}\vartheta_{i,2,k}=\gamma_{i,2,k}\varphi_{i,2,k}(X_{i,2})z_{i,2,k}-\rho_{i,2,k}\vartheta_{i,2,k} \tag{93}$$
where the design parameters are $a_{ij}=1$, $c_{1}=1.6$, $c_{i,2,k}=50$, $\eta_{i,2,k}=40$, $\gamma_{i,2,k}=1.5$, $\rho_{i,1,k}=5$, $\rho_{i,2,k}=0.3$, $\pi_{i,1,k}=0.5$, $\pi_{i,2,k}=2$, $\Pi_{i,1,k}=2$, $\Upsilon_{i,1,k}=2$, $\varpi_{\min}=1$ and $d=0.4$. Select the initial conditions of the FOMASs as $x_{1,1}=[1.1,0.9]^{T}$, $x_{2,1}=[1.05,0.95]^{T}$, $x_{3,1}=[1,1]^{T}$, $x_{4,1}=[0.95,1.05]^{T}$ and $x_{5,1}=[0.9,1.1]^{T}$.
Figures 2–16 show the simulation results. The trajectories of the FOMASs' outputs are shown in Figure 2 and Figure 3, which indicate that all signals in the FOMASs remain bounded and that all agents reach consensus and follow the optimal trajectory. Figures 4–13 display the trajectories of $u_{i,k}$, $\upsilon(\varpi_{i,k})$ and $(1-d)\varpi_{i,k}$ with $i=1,\ldots,5$, $k=1,2$, together with the triggering intervals, which illustrate the boundedness of $u_{i,k}$. Figure 14 shows the value of the global objective function $f(y)$, from which we can conclude that the proposed control protocol minimizes the global objective function $f(y)$ and solves the DOP for the FOMASs with small errors. Figure 15 shows the errors between the agents' outputs and the optimal solution of the global objective function, from which we can conclude that the optimal consensus errors are bounded and close to zero. Figure 16 shows the trajectories of the RBFNN output and $h_{1,1,1}(X_{1,2})-D^{\alpha}y_{1,1}^{*}$; it can be seen that the RBFNN tracks the unknown nonlinear function with small errors. From the simulation results, it can be concluded that the proposed algorithm ensures that all MIMO agents reach the optimal trajectory with lower computational cost in the uncertain FOMASs.

6. Conclusions

This paper deals with a class of nonlinear FOMASs where each agent is described by MIMO dynamics. To make all agents not only reach consensus but also achieve the optimal solution of the DOP, a penalty term is constructed by using the property of the connected undirected communication graph and the global objective function is reconstructed. The fractional derivatives of the virtual controllers are acquired by the FODSC technique while avoiding the "explosion of complexity". Compared with the existing literature, which only investigates DOPs for first-order or second-order linear MASs, the DOP for high-order uncertain nonlinear MIMO MASs is solved by constructing a novel event-triggered quantized adaptive backstepping control protocol using the RBFNN technique, which reduces the utilization of communication resources. Simulation results demonstrate that the developed algorithm makes all agents reach the optimal trajectory with bounded errors and a lower sampling frequency of the control input. It should be noted that the algorithm in this paper is developed based on Lyapunov stability theory and the state trajectories converge only asymptotically over a sufficiently long time. In future work, we plan to investigate the finite-time DOP for high-order nonlinear MASs and to apply this control scheme to real physical systems.

Author Contributions

Conceptualization, X.Y.; methodology, X.Y.; software, X.Y.; formal analysis, X.Y.; investigation, J.Y.; resources, H.Y.; data curation, X.Y.; writing—original draft preparation, X.Y.; writing—review and editing, J.Y.; supervision, T.C.; project administration, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant Number: 5217052158.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, Y.; Deng, Z.; Hong, Y. Optimal output consensus of high-order multiagent systems with embedded technique. IEEE Trans. Cybern. 2018, 49, 1768–1779. [Google Scholar] [CrossRef] [PubMed]
  2. Li, G.; Wang, X.; Li, S. Finite-time distributed approximate optimization algorithms of higher order multiagent systems via penalty-function-based method. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6174–6182. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Lou, Y.; Hong, Y.; Xie, L. Distributed projection-based algorithms for source localization in wireless sensor networks. IEEE Trans. Wirel. Commun. 2015, 14, 3131–3142. [Google Scholar] [CrossRef]
  4. Chen, Q.; Ge, M.F.; Liang, C.D.; Gu, Z.W.; Liu, J. Distributed optimization of networked marine surface vehicles: A fixed-time estimator-based approach. Ocean Eng. 2023, 284, 115275. [Google Scholar] [CrossRef]
  5. Chen, G.; Ren, J.; Feng, E.N. Distributed finite-time economic dispatch of a network of energy resources. IEEE Trans. Smart Grid 2016, 8, 822–832. [Google Scholar] [CrossRef]
  6. Huang, B.; Liu, L.; Zhang, H.; Li, Y.; Sun, Q. Distributed optimal economic dispatch for microgrids considering communication delays. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1634–1642. [Google Scholar] [CrossRef]
  7. Yang, X.; Zhao, W.; Yuan, J.; Chen, T.; Zhang, C.; Wang, L. Distributed Optimization for Fractional-Order Multi-Agent Systems Based on Adaptive Backstepping Dynamic Surface Control Technology. Fractal Fract. 2022, 6, 642. [Google Scholar] [CrossRef]
  8. Yang, X.; Yuan, J.; Chen, T.; Zhang, C.; Yang, H.; Hu, S. Distributed convex optimization of higher order nonlinear uncertain multi-agent systems with switched parameters and topologies. J. Vib. Control 2023, 10775463231179271. [Google Scholar] [CrossRef]
  9. Meng, X.; Sun, J.; Liu, Q.; Chi, G. A discrete-time distributed optimization algorithm for cooperative transportation of multi-robot system. Complex Intell. Syst. 2023, 1–13. [Google Scholar] [CrossRef]
  10. Lu, K.; Xu, H. Online distributed optimization with strongly pseudoconvex-sum cost functions and coupled inequality constraints. Automatica 2023, 156, 111203. [Google Scholar] [CrossRef]
  11. Yu, Z.; Sun, J.; Yu, S.; Jiang, H. Fixed-time distributed optimization for multi-agent systems with external disturbances over directed networks. Int. J. Robust Nonlinear Control 2023, 33, 953–972. [Google Scholar] [CrossRef]
  12. Meng, X.; Liu, Q. A consensus algorithm based on multi-agent system with state noise and gradient disturbance for distributed convex optimization. Neurocomputing 2023, 519, 148–157. [Google Scholar] [CrossRef]
  13. Liu, Y.; Xia, Z.; Gui, W. Multi-objective distributed optimization via a predefined-time multi-agent approach. IEEE Trans. Autom. Control 2023, 1–8. [Google Scholar] [CrossRef]
  14. He, X.; Wei, B.; Wang, H. A fixed-time gradient algorithm for distributed optimization with inequality constraints. Neurocomputing 2023, 532, 106–113. [Google Scholar] [CrossRef]
  15. Yu, Z.; Sun, J.; Yu, S.; Jiang, H. Fixed-time consensus for multi-agent systems with objective optimization on directed detail-balanced networks. Inf. Sci. 2022, 607, 1583–1599. [Google Scholar] [CrossRef]
  16. Guo, F.; Chen, X.; Yue, M.; Jiang, H.; Chen, S. Distributed Optimization for Resource Allocation Problem with Dynamic Event-Triggered Strategy. Entropy 2023, 25, 1019. [Google Scholar] [CrossRef] [PubMed]
  17. Li, Q.; Wang, M.; Sun, H.; Qin, S. An adaptive finite-time neurodynamic approach to distributed consensus-based optimization problem. Neural Comput. Appl. 2023, 35, 20841–20853. [Google Scholar] [CrossRef]
  18. Chen, G.; Yi, P.; Hong, Y.; Chen, J. Distributed Optimization With Projection-Free Dynamics: A Frank-Wolfe Perspective. IEEE Trans. Cybern. 2023, 1–12. [Google Scholar] [CrossRef]
  19. Kang, J.; Guo, G. Distributed Optimization of Disturbed Nonlinear Multi-agent Systems via Adaptive Fault-tolerant Output Regulation. IEEE Trans. Circuits Syst. II Express Briefs 2023, 1. [Google Scholar] [CrossRef]
  20. Hu, Z.; Yang, J. Distributed finite-time optimization for second order continuous-time multiple agents systems with time-varying cost function. Neurocomputing 2018, 287, 173–184. [Google Scholar] [CrossRef]
  21. Deng, Z. Distributed algorithm design for resource allocation problems of second-order multiagent systems over weight-balanced digraphs. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3512–3521. [Google Scholar] [CrossRef]
  22. Li, S.; Nian, X.; Deng, Z. Distributed optimization of second-order nonlinear multiagent systems with event-triggered communication. IEEE Trans. Control Netw. Syst. 2021, 8, 1954–1963. [Google Scholar] [CrossRef]
  23. Wang, X.; Wang, G.; Li, S. Distributed finite-time optimization for disturbed second-order multiagent systems. IEEE Trans. Cybern. 2020, 51, 4634–4647. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, J.; Yang, Y.; Qin, S. A Distributed Optimization Algorithm for Fixed-Time Flocking of Second-Order Multiagent Systems. IEEE Trans. Netw. Sci. Eng. 2023, 1–10. [Google Scholar] [CrossRef]
  25. Wang, D.; Zhou, J.; Wen, G.; Lü, J.; Chen, G. Event-Triggered Optimal Consensus of Second-Order MASs With Disturbances and Cyber Attacks on Communications Edges. IEEE Trans. Netw. Sci. Eng. 2023, 1–12. [Google Scholar] [CrossRef]
  26. Huang, F.; Duan, M.; Su, H.; Zhu, S. Distributed Optimal Formation Control of Second-Order Multiagent Systems with Obstacle Avoidance. IEEE Control Syst. Lett. 2023, 7, 2647–2652. [Google Scholar] [CrossRef]
  27. Yuan, J.; Zhang, C.; Chen, T. Command Filtered Adaptive Neural Network Synchronization Control of Nonlinear Stochastic Systems With Lévy Noise via Event-Triggered Mechanism. IEEE Access 2021, 9, 146195–146202. [Google Scholar] [CrossRef]
  28. Cao, Y.; Zhao, L.; Zhong, Q.; Wen, S.; Shi, K.; Xiao, J.; Huang, T. Adaptive fixed-time output synchronization for complex dynamical networks with multi-weights. Neural Netw. 2023, 163, 28–39. [Google Scholar] [CrossRef] [PubMed]
  29. Jiang, B.; Karimi, H.R.; Zhang, X.; Wu, Z. Adaptive neural-network-based sliding mode control of switching distributed delay systems with Markov jump parameters. Neural Netw. 2023, 165, 846–859. [Google Scholar] [CrossRef]
  30. Gao, T.; Li, T.; Liu, Y.J.; Tong, S.; Liu, L. Adaptive Event-Triggered Fuzzy Control of State-Constrained Stochastic Nonlinear Systems Using IBLFs. IEEE Trans. Fuzzy Syst. 2023, 1–13. [Google Scholar] [CrossRef]
  31. Hou, Y.; Liu, Y.J.; Tang, L.; Tong, S. Adaptive Fuzzy-based Event-Triggered Control for MIMO Switched Nonlinear System with Unknown Control Directions. IEEE Trans. Fuzzy Syst. 2023, 1–11. [Google Scholar] [CrossRef]
  32. Chen, Z.; Zhang, H.; Liu, J.; Wang, Q.; Wang, J. Adaptive prescribed settling time periodic event-triggered control for uncertain robotic manipulators with state constraints. Neural Netw. 2023, 166, 1–10. [Google Scholar] [CrossRef] [PubMed]
  33. Chen, L.; Tong, S. Observer-Based Adaptive Fuzzy Consensus Control of Nonlinear Multi-Agent Systems Encountering Deception Attacks. IEEE Trans. Ind. Inform. 2023, 1–9. [Google Scholar] [CrossRef]
  34. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420. [Google Scholar] [CrossRef]
  35. Radwan, A.G.; Taher Azar, A.; Vaidyanathan, S.; Munoz-Pacheco, J.M.; Ouannas, A. Fractional-order and memristive nonlinear systems: Advances and applications. Complexity 2017, 2017, 3760121. [Google Scholar] [CrossRef]
  36. Yuan, J.; Chen, T. Switched fractional order multiagent systems containment control with event-triggered mechanism and input quantization. Fractal Fract. 2022, 6, 77. [Google Scholar] [CrossRef]
  37. Yuan, J.; Chen, T. Observer-based adaptive neural network dynamic surface bipartite containment control for switched fractional order multi-agent systems. Int. J. Adapt. Control Signal Process. 2022, 36, 1619–1646. [Google Scholar] [CrossRef]
  38. Chen, T.; Yuan, J.; Yang, H. Event-triggered adaptive neural network backstepping sliding mode control of fractional-order multi-agent systems with input delay. J. Vib. Control 2022, 28, 3740–3766. [Google Scholar] [CrossRef]
  39. Chen, T.; Cao, D.; Yuan, J.; Yang, H. Observer-based adaptive neural network backstepping sliding mode control for switched fractional order uncertain nonlinear systems with unmeasured states. Meas. Control 2021, 54, 1245–1258. [Google Scholar] [CrossRef]
  40. Hu, L.; Yu, H.; Xia, X. Fuzzy adaptive tracking control of fractional-order multi-agent systems with partial state constraints and input saturation via event-triggered strategy. Inf. Sci. 2023, 646, 119396. [Google Scholar] [CrossRef]
  41. Xia, X.; Bai, J.; Li, X.; Wen, G. Containment control for fractional order MASs with nonlinearity and time delay via pull-based event-triggered mechanism. Appl. Math. Comput. 2023, 454, 128094. [Google Scholar] [CrossRef]
  42. Li, Y.; Chen, Y.; Podlubny, I. Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 2009, 45, 1965–1969. [Google Scholar] [CrossRef]
  43. Podlubny, I. An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Mathematics in Science and Engineering; Academic Press: San Diego, CA, USA, 1999; Volume 198, p. 340. [Google Scholar]
  44. Wang, X.; Chen, Z.; Yang, G. Finite-time-convergent differentiator based on singular perturbation technique. IEEE Trans. Autom. Control 2007, 52, 1731–1737. [Google Scholar] [CrossRef]
  45. Liu, W.; Lim, C.C.; Shi, P.; Xu, S. Backstepping fuzzy adaptive control for a class of quantized nonlinear systems. IEEE Trans. Fuzzy Syst. 2016, 25, 1090–1101. [Google Scholar] [CrossRef]
  46. Sun, W.; Wu, J.; Su, S.F.; Zhao, X. Neural network-based fixed-time tracking control for input-quantized nonlinear systems with actuator faults. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–11. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, X.; Wang, G.; Li, S. A distributed fixed-time optimization algorithm for multi-agent systems. Automatica 2020, 122, 109289. [Google Scholar] [CrossRef]
  48. Wang, D.; Huang, J. Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. IEEE Trans. Neural Netw. 2005, 16, 195–202. [Google Scholar] [CrossRef] [PubMed]
  49. Yu, J.; Shi, P.; Dong, W.; Chen, B.; Lin, C. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 640–645. [Google Scholar] [CrossRef]
  50. Liu, D.; Shen, M.; Jing, Y.; Wang, Q.G. Distributed Optimization of Nonlinear Multiagent Systems via Event-Triggered Communication. IEEE Trans. Circuits Syst. II Express Briefs 2022, 70, 2092–2096. [Google Scholar] [CrossRef]
Figure 2. Outputs of subsystem 1.
Figure 3. Outputs of subsystem 2.
Figure 4. Controller $u_{1,1}$.
Figure 5. Controller $u_{1,2}$.
Figure 6. Controller $u_{2,1}$.
Figure 7. Controller $u_{2,2}$.
Figure 8. Controller $u_{3,1}$.
Figure 9. Controller $u_{3,2}$.
Figure 10. Controller $u_{4,1}$.
Figure 11. Controller $u_{4,2}$.
Figure 12. Controller $u_{5,1}$.
Figure 13. Controller $u_{5,2}$.
Figure 14. Value of the global objective function $f(y)$.
Figure 15. Errors between agents' outputs and the optimal solution.
Figure 16. Trajectories of the RBFNN and $h_{1,1,1}(X_{1,2})-D^{\alpha}y_{1,1}^{*}$.