Article

Distributed Optimization for Fractional-Order Multi-Agent Systems Based on Adaptive Backstepping Dynamic Surface Control Technology

1
School of Air Transportation, Shanghai University of Engineering Science, Shanghai 201620, China
2
Low Speed Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang 621000, China
3
College of Engineering, China Agricultural University, Beijing 169334, China
4
School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Room 2410, Dongchuan Road No. 800, Shanghai 200240, China
*
Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(11), 642; https://doi.org/10.3390/fractalfract6110642
Submission received: 3 September 2022 / Revised: 28 October 2022 / Accepted: 29 October 2022 / Published: 3 November 2022

Abstract: In this article, the distributed optimization problem is studied for a class of fractional-order nonlinear uncertain multi-agent systems (MASs) with unmeasured states. Each agent is represented by a system with unknown nonlinearities, unmeasurable states and a local objective function described by a quadratic polynomial function. A penalty function is constructed as the sum of the local objective functions combined with the consensus conditions of the MASs. Radial basis function neural networks (RBFNNs) and a neural network (NN) state observer are applied to approximate the unknown nonlinear dynamics and estimate the unmeasured states, respectively. By combining the NN state observer, the penalty function and Lyapunov stability theory, a distributed observer-based adaptive optimized backstepping dynamic surface control protocol is proposed to ensure that the outputs of all agents asymptotically reach consensus at the optimal solution of the global objective function. Simulations demonstrate the effectiveness of the proposed control scheme.

1. Introduction

In the past few years, the distributed cooperative control of MASs has received a great deal of interest and has become one of the research hotspots due to its potential applications in various fields, including formation control [1], smart grids [2], sensor networks [3], distributed energy resources [4], robotic systems [5], fractional-order systems [6,7,8], multi-satellite systems [9], multiple spacecraft [10,11] and so on. As one of the fundamental issues in cooperative control, consensus refers to the construction of appropriate decentralized algorithms that make the states of all agents ultimately come to an agreement. As an extension of the MASs' consensus issue, the distributed optimization consensus problem considers optimization on the basis of consensus. For instance, in an aviation mission completed cooperatively by multiple aircraft, optimization problems arise in tasks such as path planning, optimal coverage, and minimum fuel. In the distributed optimization problem, the global optimization objective is a sum of all agents' local optimization objectives.
The main objective of the distributed optimization consensus of MASs is to design appropriate controllers that cooperatively drive all agents to converge to the optimal solution of the optimization problem. For different problems, various control protocols have been developed in the past several years. In [12], a distributed active antidisturbance control algorithm was developed to solve the distributed optimization problem of second-order MASs with both mismatched and matched disturbances. Based on the state-integral feedback and adaptive control method, a two-layer control framework was developed for the distributed optimization problem of second-order MASs with unmatched constant disturbances in [13]. In [14], an integral sliding mode controller was designed to give robustness to MASs affected by perturbations and uncertain agent dynamics. An adaptive distributed method was designed to handle the distributed optimization problem for a class of heterogeneous nonlinear MASs on a weight-balanced directed graph in [15]. In [16], continuous distributed algorithms were proposed for the finite-time distributed convex optimization problems of MASs with local disturbance signals. Specifically, an adaptive backstepping method was presented in [17] to handle the distributed optimization problem of nonlinear MASs, in which each agent is modeled in a high-order nonlinear strict-feedback form subject to mismatched parametric uncertainties.
However, the aforementioned optimization algorithms may not work when there are nonlinear uncertain functions in the MASs. Due to modeling inaccuracy or the existence of unknown disturbances, nonlinear uncertainty exists objectively. Fortunately, NNs and fuzzy logic systems (FLSs) can approximate arbitrary nonlinear functions to arbitrary accuracy. Combined with NNs or FLSs, a variety of adaptive intelligent control methods have been developed for nonlinear MASs [18,19,20,21,22,23,24]. The distributed dynamic surface technique was developed to design the local consensus controller, and NNs were employed for function approximation in [25]. Distributed adaptive NN backstepping controllers were constructed for MASs with nonlinear input in [26]. In [27], a finite-time adaptive NN controller was developed by using the command filter approach for uncertain nonlinear MASs with prescribed performance and input saturation. In order to deal with unmeasured states, several observer-based fuzzy or NN adaptive distributed control schemes were developed for uncertain nonlinear MASs [28,29,30,31,32]. Based on high-gain observer theory and the fuzzy technique, a decentralized control approach for double-integrator uncertain MASs with disturbances, unmeasured states and unknown nonlinear dynamics was presented in [33]. In [34], a distributed adaptive control protocol based on the command filtered backstepping method was developed for nonlinear MASs with input saturation. It is worth mentioning that, to the best of our literature investigation, adaptive intelligent control has not yet been applied to multi-agent optimization problems. Therefore, how to develop an adaptive intelligent output feedback control method that enables uncertain MASs with nonlinear dynamics to satisfy the consensus conditions and the optimization objective function at the same time is a challenging and meaningful scientific problem.
Furthermore, the above research is limited to integer-order MASs. Actually, it is reasonable to regard fractional-order MASs (FOMASs) as a generalization of integer-order MASs. FOMASs have a wide range of potential applications in reality [35], such as robotic systems operating on muddy roads, swamp flotation devices and supercoils, and so on. The consensus control of FOMASs has drawn much attention and has become another research hotspot [36,37,38,39,40]. The consensus of FOMASs with linear models using the observer technique was proposed in [41]. In [42], an observer-based Linear Matrix Inequality controller was designed for the consensus of FOMASs described by general linear dynamics with a positivity constraint. However, to the best of our knowledge, the distributed optimization consensus problem of FOMASs with unmeasured states and nonlinear uncertain dynamics has not been investigated in existing studies, which gives us great motivation for the research presented in this article.
Motivated by the above observations, the purpose of this paper lies in the design of an adaptive backstepping DSC algorithm to solve a distributed constrained optimization problem for FOMASs with unmeasured states and unknown nonlinearities. In the MASs, it is assumed that each agent accesses only its local objective function, which is described by a quadratic polynomial function. By combining the global objective function and the consensus conditions, a penalty function is established. The RBFNNs are used to approximate the unknown nonlinear dynamics, and an NN state observer is utilized to estimate the unmeasured states of each agent. A distributed optimized controller for each agent is then derived by applying the adaptive backstepping DSC technology to accomplish consensus tracking of the optimal solution, which is obtained by minimizing the penalty function. Compared with the aforementioned work, the novel contributions of this paper are:
(1)
Compared with [19,20,21,22,23], where NN-based adaptive backstepping DSC design algorithms were proposed for the consensus problem of MASs, we solve the distributed optimization problem for FOMASs with unmeasured states. To accomplish this difficult task, we construct a Lyapunov function for the control strategy based on the penalty function and the negative gradient method. The distributed control method proposed in this paper ensures that all agents' outputs asymptotically reach consensus at the optimal solution of the global objective function instead of merely tracking a reference trajectory.
(2)
Different from [12,16], in which distributed optimization algorithms are developed for first-order and second-order MASs, this article mainly focuses on the distributed optimization problem for fractional high-order MASs with unmeasured states and unknown nonlinear functions. To overcome this challenge, we propose a distributed optimized adaptive backstepping controller based on the fractional DSC method, which is well suited to fractional high-order MASs. The excellent performance of the proposed controller in fractional-order MASs is demonstrated in the Simulation section through a comparison with [16].
(3)
In contrast to [17], which investigates an adaptive backstepping protocol for the distributed optimization of integer-order MASs with each agent modeled in the strict-feedback form, this article addresses FOMASs with unmeasured states. Unknown nonlinear functions are also considered in our article and are approximated by the NNs.
The remainder of this study is organized as follows. Section 2 gives the formulation and preliminaries of this paper. In Section 3, by employing the adaptive DSC technology, an NN state observer-based distributed controller is proposed. Section 4 provides numerical simulations to elaborate on the feasibility of the proposed control algorithms. Section 5 gives the conclusions.

2. Preliminaries

2.1. Fractional Calculus

Define the R-L fractional derivative as
$$ {}_{0}^{RL}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_0^t \frac{f(\tau)}{(t-\tau)^{1+\alpha-n}} \, d\tau $$
where $n \in \mathbb{N}$ and $n-1 < \alpha \le n$, $\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t} \, dt$ is the Gamma function, and $f(t)$ is an arbitrary integrable smooth function on $[0, t]$.
In this paper, we mainly adopt the Caputo fractional derivative [43], defined as
$$ {}_{0}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{1+\alpha-n}} \, d\tau. $$
Remark 1.
To simplify the notation, we set ${}_{0}^{C}D_t^{\alpha} f(t) = D^{\alpha} f(t)$.
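As a quick numerical illustration (not part of the original text), the Caputo derivative can be approximated with the Grünwald–Letnikov scheme, which coincides with the Caputo derivative for smooth functions with $f(0) = 0$; the step size and the test function below are assumptions chosen for the example:

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of D^alpha f at time t.

    For smooth f with f(0) = 0 this coincides with the Caputo
    derivative as h -> 0 (the step size h is an assumption here).
    """
    n = int(t / h)
    w = 1.0          # GL binomial weight w_0
    acc = f(t)       # j = 0 term
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j   # recurrence for (-1)^j C(alpha, j)
        acc += w * f(t - j * h)
    return acc / h**alpha

# Example: D^0.5 of f(t) = t is t^{1/2} / Gamma(3/2).
approx = gl_fractional_derivative(lambda s: s, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

The first-order accuracy of the plain GL scheme makes the error shrink roughly linearly with the step size.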
For the two-parameter Mittag-Leffler function
$$ E_{\alpha,\beta}(\varsigma) = \sum_{k=0}^{\infty} \frac{\varsigma^k}{\Gamma(\alpha k + \beta)}, \quad \alpha > 0, \ \beta > 0, $$
we have the next Lemma.
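Since the Mittag-Leffler function appears throughout the stability results below, a minimal sketch of evaluating its truncated series may be helpful; the truncation length is an assumption suitable only for small arguments:

```python
import math

def mittag_leffler(alpha, beta, z, terms=100):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z). The term count is an assumption chosen for
    small |z|; the series converges slowly for large arguments."""
    total = 0.0
    for k in range(terms):
        x = alpha * k + beta
        if x > 170:          # math.gamma overflows beyond ~171
            break
        total += z**k / math.gamma(x)
    return total

# Sanity checks from known special cases:
# E_{1,1}(z) = e^z and E_{2,1}(-z^2) = cos(z).
e1 = mittag_leffler(1.0, 1.0, 1.0)
c1 = mittag_leffler(2.0, 1.0, -(0.5 ** 2))
```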
Lemma 1
([44]). For real numbers β, α and v satisfying $\alpha \in (0, 1)$ and $\frac{\pi \alpha}{2} < v < \pi \alpha$, and for integers $n \ge 1$, it is obtained that
$$ E_{\alpha,\beta}(\varsigma) = -\sum_{j=1}^{n} \frac{\varsigma^{-j}}{\Gamma(\beta - \alpha j)} + o\left( |\varsigma|^{-n-1} \right) $$
when $|\varsigma| \to \infty$ and $v \le |\arg(\varsigma)| \le \pi$.
Lemma 2
([44]). Let $\alpha \in (0, 2)$ and β be an arbitrary real number. For $\pi\alpha/2 < \upsilon \le \min\{\pi, \pi\alpha\}$, it is proven that
$$ |E_{\alpha,\beta}(\varsigma)| \le \frac{\mu}{1 + |\varsigma|} $$
where $\mu > 0$, $\upsilon \le |\arg(\varsigma)| \le \pi$, and $|\varsigma| \ge 0$.
Lemma 3
([45]). Define a vector of continuous and differentiable functions $x(t) = [x_1(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$. Then, the following relationship holds
$$ \frac{1}{2} D^{\alpha} \left( x^T(t) P x(t) \right) \le x^T(t) P D^{\alpha} x(t), \quad \forall \alpha \in (0, 1), \ \forall t > t_0 $$
where $t_0 = 0$, $P = \mathrm{diag}\{p_1, p_2, \ldots, p_n\}$ and $p_i > 0$, $i = 1, 2, \ldots, n$.
Lemma 4
([46]). For any $x, y \in \mathbb{R}^n$, the following inequality holds
$$ x^T y \le \frac{c^a}{a} \|x\|^a + \frac{1}{b c^b} \|y\|^b $$
where $a > 1$, $b > 1$, $c > 0$, and $(a - 1)(b - 1) = 1$.
Lemma 5
([43]). For a fractional-order nonlinear system, if the α-order derivative of the Lyapunov function $V(t, x)$ satisfies
$$ D^{\alpha} V(t, x) \le -C V(t, x) + \zeta $$
then
$$ V(t, x) \le V(0) E_{\alpha}(-C t^{\alpha}) + \frac{\zeta \mu}{C}, \quad \forall t \ge 0 $$
where $0 < \alpha < 1$, $C > 0$, $\zeta \ge 0$, and μ is defined in Lemma 2. Then, $V(t, x)$ is bounded on $[0, t]$ and the fractional-order system is stable.

2.2. Graph Theory

Suppose that there exist N agents. A directed graph $\mathcal{G} = (\mathcal{W}, \mathcal{E}, \bar{A})$ is used to model the information exchange between agents, where $\mathcal{W} = \{1, \ldots, N\}$ is the node set and $\mathcal{E} \subseteq \mathcal{W} \times \mathcal{W}$ is the edge set. $\bar{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is the adjacency matrix, where $a_{ij} = 1$ if and only if $(i, j) \in \mathcal{E}$, meaning that there is information exchange between node i and node j. Assume that there is no self-loop in the digraph; therefore, $a_{ii} = 0$ for $i = 1, \ldots, N$. Denote $N_i = \{ j \mid (i, j) \in \mathcal{E} \}$ as the neighbor set of node i. Define the matrix $D = \mathrm{diag}\{d_1, d_2, \ldots, d_N\}$ as the degree matrix of the directed graph $\mathcal{G}$, in which $d_i = \sum_{j \in N_i} a_{ij}$. Define the Laplacian matrix as $L = D - \bar{A}$.
Lemma 6
([47]). Define $\mathbf{1}_N$ as the column vector with N elements, all equal to one. Denote the symmetric matrix $L \in \mathbb{R}^{N \times N}$ as the Laplacian matrix of a directed graph $\mathcal{G}$. Then $\mathbf{1}_N$ is an eigenvector of L associated with the eigenvalue 0.
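A minimal sketch of building $L = D - \bar{A}$ and checking Lemma 6 for a small assumed graph (the path graph below is an illustrative choice, not from the paper):

```python
def laplacian(adj):
    """Graph Laplacian L = D - A for an adjacency matrix given as a
    list of lists (a small illustrative helper)."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

# Undirected path graph 1 - 2 - 3 (symmetric adjacency, no self-loops).
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
L = laplacian(A)
# L * 1_N = 0: every row of L sums to zero (Lemma 6).
row_sums = [sum(row) for row in L]
```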

2.3. Fractional-Order Nonlinear Multi-Agent System

In this paper, the following FOMAS with nonlinear uncertain dynamics is considered for agent i:
$$ \begin{cases} D^{\alpha} x_{i,1}(t) = x_{i,2}(t) + g_{i,1}(x_{i,1}(t)) \\ D^{\alpha} x_{i,l}(t) = x_{i,l+1}(t) + g_{i,l}(x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,l}(t)) \\ D^{\alpha} x_{i,n}(t) = u_i(t) + g_{i,n}(x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,n}(t)) \\ y_i = x_{i,1}(t) \end{cases} $$
where $l = 2, \ldots, n-1$, $u_i$ is the control input, $y_i$ is the system output and $g_{i,l}(x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,l}(t))$ is an unknown nonlinear function of the system states. Define $X_{i,l} = (x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,l}(t))^T \in \mathbb{R}^l$ as the system state vector for agent i.
Rewrite the system of agent i in compact form:
$$ \begin{cases} D^{\alpha} X_{i,n} = A_i X_{i,n} + K_i y_i + \sum_{l=1}^{n} B_{i,l} g_{i,l}(X_{i,l}) + B_i u_i(t) \\ y_i = C_i X_{i,n} \end{cases} $$
where
$$ A_i = \begin{bmatrix} -k_{i,1} & & \\ \vdots & \multicolumn{2}{c}{I_{n-1}} \\ -k_{i,n} & 0 \ \cdots & 0 \end{bmatrix}, \quad K_i = \begin{bmatrix} k_{i,1} \\ \vdots \\ k_{i,n} \end{bmatrix}, \quad B_i = \begin{bmatrix} 0 \\ \vdots \\ 1 \end{bmatrix}, \quad B_{i,l} = \begin{bmatrix} 0 & \cdots & 1 & \cdots & 0 \end{bmatrix}^T, \quad C_i = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}, $$
with the single 1 in $B_{i,l}$ located at the l-th entry.
For a given positive definite matrix $Q_i^T = Q_i$, there exists a positive definite matrix $P_i^T = P_i$ satisfying
$$ A_i^T P_i + P_i A_i = -2 Q_i. $$
Remark 2.
To simplify the notation, we set $x_{i,j} = x_{i,j}(t)$.

2.4. Convex Analysis

A function $f(\cdot): \mathbb{R}^n \to \mathbb{R}$ is convex if
$$ f(\alpha x + (1 - \alpha) y) \le \alpha f(x) + (1 - \alpha) f(y), \quad \forall x, y \in \mathbb{R}^n, \ \forall \alpha \in [0, 1]. $$
A differentiable function $f(\cdot): \mathbb{R}^n \to \mathbb{R}$ is strongly convex on $\mathbb{R}^n$ if
$$ (x - y)^T \left( \nabla f(x) - \nabla f(y) \right) \ge \omega \| x - y \|^2, \quad \forall x, y \in \mathbb{R}^n, \ \omega > 0. $$
A function $f(\cdot): \mathbb{R}^n \to \mathbb{R}$ is $\Lambda$-Lipschitz ($\Lambda > 0$) on $\mathbb{R}^n$ if
$$ \| f(x) - f(y) \| \le \Lambda \| x - y \|, \quad \forall x, y \in \mathbb{R}^n. $$
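These definitions can be checked numerically on a simple example; the function $f(x) = \|x\|^2$, its gradient and the modulus $\omega = 2$ below are illustrative assumptions, not quantities from the paper:

```python
import random

def grad_f(x):
    """Gradient of f(x) = ||x||^2, a simple strongly convex example."""
    return [2.0 * xi for xi in x]

def strong_convexity_gap(x, y, omega):
    """(x - y)^T (grad f(x) - grad f(y)) - omega * ||x - y||^2,
    which is non-negative when f is omega-strongly convex."""
    d = [a - b for a, b in zip(x, y)]
    g = [a - b for a, b in zip(grad_f(x), grad_f(y))]
    return sum(di * gi for di, gi in zip(d, g)) - omega * sum(di * di for di in d)

# f(x) = ||x||^2 satisfies the definition with omega = 2 exactly.
random.seed(0)
gaps = [strong_convexity_gap([random.uniform(-5, 5) for _ in range(3)],
                             [random.uniform(-5, 5) for _ in range(3)], 2.0)
        for _ in range(100)]
```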

2.5. Problem Formulation

In this paper, we solve the distributed optimization problem for N agents, taking path tracking into account. The local objective function $f_i: \mathbb{R} \to \mathbb{R}$ for each agent is defined as
$$ f_i(x_{i,1}) = a_i (x_{i,1} - x_d)^2 + c = a_i x_{i,1}^2 + b_i x_{i,1} + c_i $$
where $x_d$ is the reference signal for the agents to track, $a_i > 0$, $b_i = -2 a_i x_d$, $c_i = a_i x_d^2 + c$, $1 \le i \le N$, and $a_i$, c are scalars. Define the global objective function $f: \mathbb{R}^N \to \mathbb{R}$ as
$$ f(x_1) = \sum_{i=1}^{N} f_i(x_{i,1}). $$
Considering that each local objective function $f_i$ is differentiable and strictly convex, the global objective function f is differentiable and strictly convex as well. Define $x_1 = [x_{1,1}, x_{2,1}, \ldots, x_{N,1}]^T$. According to Lemma 6, for some $\alpha \in \mathbb{R}$, if $x_1 = \alpha \cdot \mathbf{1}_N$, we obtain
$$ L x_1 = 0. $$
Then, we design the penalty term as follows
$$ x_1^T L x_1 = 0. $$
The penalty function is defined as
$$ P(x_1) = \sum_{i=1}^{N} f_i(x_{i,1}) + x_1^T L x_1. $$
Due to the global objective function being strictly convex, we draw the conclusion that the penalty function is convex as well.
In this paper, we aim at designing controllers $(u_1, \ldots, u_N)$ such that for every $i = 1, \ldots, N$, $\lim_{t \to \infty} x_{i,1} = x_{i,1}^*$. Define $x_1^* = [x_{1,1}^*, \ldots, x_{N,1}^*]^T$. The optimal solution $x_{i,1}^*$ is defined by
$$ (x_{1,1}^*, \ldots, x_{N,1}^*) = \arg\min_{(x_{1,1}, \ldots, x_{N,1})} P(x_1). $$
Remark 3.
According to (19), the penalty function consists of two parts. The first part, $\sum_{i=1}^{N} f_i(x_{i,1})$, drives the system to minimize the global objective function and track the reference signal. The second part, $x_1^T L x_1$, drives all agents to reach consensus. By minimizing the penalty function, the distributed optimization consensus problem can be solved.
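As a quick numerical illustration of this remark (a sketch with assumed values for $a_i$, $x_d$ and the communication graph, not the paper's controller), minimizing the penalty function by plain gradient descent drives every agent's output to the common optimum $x_d$:

```python
# Gradient descent on P(x1) = sum_i a_i (x_i - x_d)^2 + x1^T L x1
# for an assumed 3-agent path graph. The gradient of the quadratic
# penalty term x1^T L x1 is 2 * L * x1 since L is symmetric.
a = [1.0, 2.0, 3.0]          # local objective weights (assumed)
x_d = 2.0                    # common reference signal (assumed)
L = [[1, -1, 0],             # Laplacian of the path graph 1 - 2 - 3
     [-1, 2, -1],
     [0, -1, 1]]

x = [0.0, 5.0, -1.0]         # arbitrary initial outputs
eta = 0.05                   # step size
for _ in range(2000):
    Lx = [sum(L[i][j] * x[j] for j in range(3)) for i in range(3)]
    grad = [2.0 * a[i] * (x[i] - x_d) + 2.0 * Lx[i] for i in range(3)]
    x = [x[i] - eta * grad[i] for i in range(3)]

# All agents reach consensus at the minimizer x_d of the global objective.
```

Because every $f_i$ here shares the minimizer $x_d$, the consensus term vanishes at the optimum and all agents agree on $x_d$.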
Control objectives: This paper focuses on proposing a distributed optimized controller based on observer-based adaptive neural network DSC technology to ensure that all signals of the agents remain bounded, the output errors between all agents converge to near zero and the tracking error is as small as possible in the closed-loop system, while the outputs reach consensus at the optimal solution of the global objective function.

3. Main Results

3.1. State Observer Design

Because the nonlinear functions $g_{i,l}(X_{i,l})$ are unknown, the RBFNN technique is utilized to approximate them. The following assumptions are needed.
Assumption 1.
The unknown functions $g_{i,l}(X_{i,l})$, $l = 1, \ldots, n$, can be expressed as
$$ g_{i,l}(X_{i,l} \mid \theta_{i,l}) = \theta_{i,l}^T \psi_{i,l}(X_{i,l}), \quad 1 \le l \le n $$
where q is the number of NN nodes, $\theta_{i,l}$ is an unknown constant vector and $\psi_{i,l}(X_{i,l}) = [\psi_{i,l}^1(X_{i,l}), \psi_{i,l}^2(X_{i,l}), \ldots, \psi_{i,l}^q(X_{i,l})]^T$ is the radial basis function vector. A typical Gaussian basis function is given by
$$ \psi_{i,l}^p(X_{i,l}) = \exp\left( -\frac{\| X_{i,l} - c_{i,l}^p \|^2}{b_{i,l}^2} \right), \quad p = 1, \ldots, q $$
where $c_{i,l}^p \in \mathbb{R}^l$ is the center of the receptive field and $b_{i,l} \in \mathbb{R}$ is the width of the Gaussian function. Define $c_{i,l} = [c_{i,l}^1, c_{i,l}^2, \ldots, c_{i,l}^q]$. According to [48,49], more nodes yield a more accurate approximation.
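A minimal sketch of evaluating the radial basis function vector and the resulting approximation $\theta^T \psi(x)$; the centers, width and weights below are assumed illustrative values, not design choices from the paper:

```python
import math

def rbf_vector(x, centers, width):
    """Gaussian radial basis vector psi(x) for a state vector x."""
    out = []
    for c in centers:
        sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out.append(math.exp(-sq / width ** 2))
    return out

# q = 3 receptive fields on a 2-dimensional state (assumed layout).
centers = [[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]]
psi = rbf_vector([0.0, 0.0], centers, width=1.0)

# The function approximation is the inner product theta^T psi(x).
theta = [0.5, -0.2, 0.1]
g_hat = sum(t * p for t, p in zip(theta, psi))
```

Each basis value lies in $(0, 1]$ and equals 1 exactly at its own center, so denser centers give a finer covering of the state region.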
In this paper, we assume that the state variables of system (9) are not available. In this case, we need to estimate the system states, and the state observer for agent i is designed as follows
$$ \begin{cases} D^{\alpha} \hat{X}_{i,n} = A_i \hat{X}_{i,n} + K_i y_i + \sum_{l=1}^{n} B_{i,l} \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}) + B_i u_i(t) \\ \hat{y}_i = C_i \hat{X}_{i,n} \end{cases} $$
where $C_i = [1, 0, \ldots, 0]$ and $\hat{X}_{i,l} = [\hat{x}_{i,1}, \hat{x}_{i,2}, \ldots, \hat{x}_{i,l}]^T$ is the estimate of $X_{i,l}$.
Define the state observation error vector as $e_i = X_{i,n} - \hat{X}_{i,n}$. According to Equations (10) and (23), we obtain
$$ D^{\alpha} e_i = A_i e_i + \sum_{l=1}^{n} B_{i,l} \left[ g_{i,l}(\hat{X}_{i,l}) - \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}) + \Delta g_{i,l} \right] $$
where $\Delta g_{i,l} = g_{i,l}(X_{i,l}) - g_{i,l}(\hat{X}_{i,l})$.
By Assumption 1, we obtain
$$ \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}) = \theta_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}). $$
Then, the vector of optimal parameters is defined as
$$ \theta_{i,l}^* = \arg\min_{\theta_{i,l} \in \Omega_{i,l}} \left[ \sup_{\hat{X}_{i,l} \in U_{i,l}} \left| \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}) - g_{i,l}(\hat{X}_{i,l}) \right| \right] $$
where $1 \le l \le n$, and $\Omega_{i,l}$ and $U_{i,l}$ are compact regions for $\theta_{i,l}$, $X_{i,l}$ and $\hat{X}_{i,l}$.
Define the optimal approximation error $\delta_{i,l}$ and the parameter estimation error $\tilde{\theta}_{i,l}$ as
$$ \delta_{i,l} = g_{i,l}(\hat{X}_{i,l}) - \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}^*), \quad \tilde{\theta}_{i,l} = \theta_{i,l}^* - \theta_{i,l}, \quad l = 1, 2, \ldots, n. $$
Assumption 2.
The optimal approximation errors remain bounded; that is, there exist positive constants $\delta_{i0}$ satisfying $|\delta_{i,l}| \le \delta_{i0}$.
Assumption 3.
There exists a set of known constants $\gamma_{i,l}$ such that the following relationship holds
$$ \left| g_{i,l}(X_{i,l}) - g_{i,l}(\hat{X}_{i,l}) \right| \le \gamma_{i,l} \left\| X_{i,l} - \hat{X}_{i,l} \right\|. $$
By Equations (24) and (27), we have
$$ \begin{aligned} D^{\alpha} e_i &= A_i e_i + \sum_{l=1}^{n} B_{i,l} \left[ g_{i,l}(\hat{X}_{i,l}) - \hat{g}_{i,l}(\hat{X}_{i,l} \mid \theta_{i,l}) + \Delta g_{i,l} \right] \\ &= A_i e_i + \sum_{l=1}^{n} B_{i,l} \left[ \delta_{i,l} + \Delta g_{i,l} + \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \right] \\ &= A_i e_i + \Delta g_i + \delta_i + \sum_{l=1}^{n} B_{i,l} \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \end{aligned} $$
where $\delta_i = [\delta_{i,1}, \ldots, \delta_{i,n}]^T$ and $\Delta g_i = [\Delta g_{i,1}, \ldots, \Delta g_{i,n}]^T$.
Construct the first Lyapunov function:
$$ V_0 = \sum_{i=1}^{N} V_{i,0} = \sum_{i=1}^{N} \frac{1}{2} e_i^T P_i e_i. $$
According to Lemma 3 and (29), we have
$$ \begin{aligned} D^{\alpha} V_0 &\le \sum_{i=1}^{N} \left[ \frac{1}{2} e_i^T \left( P_i A_i + A_i^T P_i \right) e_i + e_i^T P_i \left( \delta_i + \Delta g_i \right) + \sum_{l=1}^{n} e_i^T P_i B_{i,l} \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \right] \\ &\le \sum_{i=1}^{N} \left[ -e_i^T Q_i e_i + e_i^T P_i \left( \delta_i + \Delta g_i \right) + e_i^T P_i \sum_{l=1}^{n} B_{i,l} \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \right]. \end{aligned} $$
By Lemma 4 and Assumption 3, we have
$$ \begin{aligned} e_i^T P_i \left( \delta_i + \Delta g_i \right) &\le \left| e_i^T P_i \delta_i \right| + \left| e_i^T P_i \Delta g_i \right| \\ &\le \frac{1}{2} \| e_i \|^2 + \frac{1}{2} \| P_i \delta_i \|^2 + \frac{1}{2} \| e_i \|^2 + \frac{1}{2} \| P_i \|^2 \| \Delta g_i \|^2 \\ &\le \| e_i \|^2 + \frac{1}{2} \| P_i \delta_i \|^2 + \frac{1}{2} \| P_i \|^2 \sum_{l=1}^{n} \Delta g_{i,l}^2 \\ &\le \| e_i \|^2 + \frac{1}{2} \| e_i \|^2 \| P_i \|^2 \sum_{l=1}^{n} \gamma_{i,l}^2 + \frac{1}{2} \| P_i \delta_i \|^2 \\ &\le \| e_i \|^2 \left( 1 + \frac{1}{2} \| P_i \|^2 \sum_{l=1}^{n} \gamma_{i,l}^2 \right) + \frac{1}{2} \| P_i \delta_i \|^2 \end{aligned} $$
and
$$ e_i^T P_i \sum_{l=1}^{n} B_{i,l} \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \le \frac{1}{2} e_i^T P_i^T P_i e_i + \frac{1}{2} \sum_{l=1}^{n} \tilde{\theta}_{i,l}^T \psi_{i,l}(\hat{X}_{i,l}) \psi_{i,l}^T(\hat{X}_{i,l}) \tilde{\theta}_{i,l} \le \frac{1}{2} \lambda_{i,\max}^2 (P_i) \| e_i \|^2 + \frac{1}{2} \sum_{l=1}^{n} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l}, $$
where $0 < \psi_{i,l}^T(\cdot) \psi_{i,l}(\cdot) \le 1$ and $\lambda_{i,\max}(P_i)$ is the maximum eigenvalue of the positive definite matrix $P_i$. By Equations (30)–(32), we obtain
$$ D^{\alpha} V_0 \le \sum_{i=1}^{N} \left[ -q_{i,0} \| e_i \|^2 + \frac{1}{2} \| P_i \delta_i^* \|^2 + \frac{1}{2} \sum_{l=1}^{n} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l} \right], $$
where $q_{i,0} = \lambda_{i,\min}(Q_i) - 1 - \frac{1}{2} \| P_i \|^2 \sum_{l=1}^{n} \gamma_{i,l}^2 - \frac{1}{2} \lambda_{i,\max}^2 (P_i)$.
Then, we can obtain
$$ D^{\alpha} V_0 \le -q_0 \| e \|^2 + \frac{1}{2} \| P \varepsilon \|^2 + \sum_{i=1}^{N} \sum_{l=1}^{n} \frac{1}{2} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l}, $$
where $q_0 = \min_{1 \le i \le N} q_{i,0}$.

3.2. Controller Design

Theorem 1.
Consider the uncertain nonlinear FOMAS (9) under Assumptions 1–3. With the state observer (23), the virtual control laws (52), (67), (83), the adaptive laws (53), (68), (84), (104) and the observer-based adaptive optimized NN dynamic surface controller (103), it holds that: (1) the signals $x_{i,1}$ in the closed-loop system remain semi-globally uniformly ultimately bounded; (2) the signals $x_{i,1}$ converge to the optimal solution $x_{i,1}^*$ of the distributed optimization problem.
Proof. Step 1.
Define the error variables as follows:
$$ s_{i,1} = x_{i,1} - x_{i,1}^*, \quad s_{i,l} = \hat{x}_{i,l} - v_{i,l}, \quad w_{i,l} = v_{i,l} - x_{i,l}^*, \quad l = 2, \ldots, n $$
where $s_{i,l}$ is the tracking error, $v_{i,l}$ is the output of a filter driven by the virtual controller $x_{i,l}^*$, $w_{i,l}$ is the output error between the filter output $v_{i,l}$ and the virtual controller $x_{i,l}^*$, and $\hat{x}_{i,l}$ is the estimate of $x_{i,l}$.
First, calculate the gradient of the penalty function given in (19):
$$ \frac{\partial P(x_1)}{\partial x_1} = \mathrm{vec}\left( \frac{\partial f_i(x_{i,1}(t))}{\partial x_{i,1}} \right) + L x_1 $$
where $\mathrm{vec}\left( \frac{\partial f_i(x_{i,1}(t))}{\partial x_{i,1}} \right)$ is a column vector.
Considering that the penalty function $P(x_1)$ is strictly convex, the necessary condition for the optimal solution of the distributed optimization problem is
$$ \frac{\partial P(x_1^*)}{\partial x_1^*} = 0. $$
Then, from (19) and (36), for agent i, we have
$$ \frac{\partial f_i(x_{i,1}^*(t))}{\partial x_{i,1}^*} + \sum_{j \in N_i} a_{ij} \left( x_{i,1}^* - x_{j,1}^* \right) = 0. $$
According to (15) and (37), we have
$$ 2 a_i \left( x_{i,1}^* - x_d \right) + \sum_{j \in N_i} a_{ij} \left( x_{i,1}^* - x_{j,1}^* \right) = 0. $$
Then, according to (35) and (38), we have
$$ \begin{aligned} \frac{\partial P(x_1)}{\partial x_{i,1}} &= \frac{\partial f_i(x_{i,1}(t))}{\partial x_{i,1}} + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \\ &= 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \\ &= 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) - \left[ 2 a_i (x_{i,1}^* - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1}^* - x_{j,1}^*) \right] \\ &= 2 a_i s_{i,1} + \sum_{j \in N_i} a_{ij} (s_{i,1} - s_{j,1}). \end{aligned} $$
Let $s_1 = [s_{1,1}, \ldots, s_{N,1}]^T$. According to (39), we have
$$ \frac{\partial P(x_1)}{\partial x_1} = H s_1 $$
where $H = A + L$ and $A = \mathrm{diag}\{2 a_i\}$.
Construct the Lyapunov function:
$$ V_1 = V_0 + \frac{1}{2} \left( \frac{\partial P(x_1)}{\partial x_1} \right)^T H^{-1} \frac{\partial P(x_1)}{\partial x_1} + \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \tilde{\theta}_{i,1} = V_0 + \frac{1}{2} s_1^T H s_1 + \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \tilde{\theta}_{i,1} $$
where $s_1 = [s_{1,1}, \ldots, s_{N,1}]^T$ and $\sigma_{i,1}$ is a design parameter. According to (9), (27) and (35), we have
$$ D^{\alpha} s_{i,1} = \hat{x}_{i,2} + \theta_{i,1}^T \psi_{i,1} + \tilde{\theta}_{i,1}^T \psi_{i,1} + \Delta g_{i,1} + \delta_{i,1} + e_{i,2}. $$
Then, according to (40) and (41), we can obtain
$$ \begin{aligned} D^{\alpha} V_1 &= D^{\alpha} V_0 + s_1^T H D^{\alpha} s_1 + \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T D^{\alpha} \tilde{\theta}_{i,1} \\ &= D^{\alpha} V_0 + s_1^T H \left[ \hat{x}_2 + \mathrm{vec}(\theta_{i,1}^T \psi_{i,1}) + \mathrm{vec}(\tilde{\theta}_{i,1}^T \psi_{i,1}) + \Delta g_1 + \delta_1 + e_2 \right] + \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T D^{\alpha} \tilde{\theta}_{i,1} \\ &= D^{\alpha} V_0 + s_1^T H \left[ s_2 + w_2 + x_2^* + \mathrm{vec}(\theta_{i,1}^T \psi_{i,1}) + \mathrm{vec}(\tilde{\theta}_{i,1}^T \psi_{i,1}) + \Delta g_1 + \delta_1 + e_2 \right] + \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T D^{\alpha} \tilde{\theta}_{i,1} \\ &= D^{\alpha} V_0 + s_1^T H s_2 + s_1^T H w_2 + s_1^T H \left[ x_2^* + \mathrm{vec}(\theta_{i,1}^T \psi_{i,1}) + \mathrm{vec}(\tilde{\theta}_{i,1}^T \psi_{i,1}) \right] + s_1^T H \Delta g_1 + s_1^T H \delta_1 + s_1^T H e_2 - \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T D^{\alpha} \theta_{i,1} \end{aligned} $$
where $s_2 = [s_{1,2}, s_{2,2}, \ldots, s_{N,2}]^T$, $w_2 = [w_{1,2}, w_{2,2}, \ldots, w_{N,2}]^T$, $x_2^* = [x_{1,2}^*, x_{2,2}^*, \ldots, x_{N,2}^*]^T$, $\Delta g_1 = [\Delta g_{1,1}, \Delta g_{2,1}, \ldots, \Delta g_{N,1}]^T$, $\delta_1 = [\delta_{1,1}, \delta_{2,1}, \ldots, \delta_{N,1}]^T$, $e_2 = [e_{1,2}, e_{2,2}, \ldots, e_{N,2}]^T$, and $\mathrm{vec}(\theta_{i,1}^T \psi_{i,1})$ and $\mathrm{vec}(\tilde{\theta}_{i,1}^T \psi_{i,1})$ are column vectors.
According to Lemma 4, we have
$$ s_1^T H s_2 \le \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} s_2^T s_2 $$
$$ s_1^T H w_2 \le \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} w_2^T w_2 $$
$$ s_1^T H \Delta g_1 \le s_1^T H \gamma_1 e_1 \le \frac{1}{2} s_1^T H \gamma_1 \gamma_1^T H^T s_1 + \frac{1}{2} e_1^T e_1 $$
$$ s_1^T H \delta_1 \le \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} \delta_1^T \delta_1 $$
$$ s_1^T H e_2 \le \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} e_2^T e_2 $$
where $\gamma_1 = \mathrm{diag}\{\gamma_{i,1}\}$ and $e_1 = [e_{1,1}, e_{2,1}, \ldots, e_{N,1}]^T$. Substituting (43)–(47) into (42), we have
$$ \begin{aligned} D^{\alpha} V_1 \le{} & D^{\alpha} V_0 + s_1^T H \left[ x_2^* + \mathrm{vec}(\theta_{i,1}^T \psi_{i,1}) + \mathrm{vec}(\tilde{\theta}_{i,1}^T \psi_{i,1}) \right] + \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} w_2^T w_2 \\ & + \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} s_2^T s_2 + \frac{1}{2} s_1^T H \gamma_1 \gamma_1^T H^T s_1 + \frac{1}{2} e_1^T e_1 + \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} \delta_1^T \delta_1 \\ & + \frac{1}{2} s_1^T H H^T s_1 + \frac{1}{2} e_2^T e_2 - \sum_{i=1}^{N} \frac{1}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T D^{\alpha} \theta_{i,1}. \end{aligned} $$
According to the definition of H, we have
$$ \begin{aligned} s_1^T H &= s_1^T A + s_1^T L \\ &= \left[ 2 a_1 s_{1,1}, \ \ldots, \ 2 a_N s_{N,1} \right] + \left[ \sum_{j \in N_1} a_{1j} (s_{1,1} - s_{j,1}), \ \ldots, \ \sum_{j \in N_N} a_{Nj} (s_{N,1} - s_{j,1}) \right] \\ &= \left[ 2 a_1 s_{1,1} + \sum_{j \in N_1} a_{1j} (s_{1,1} - s_{j,1}), \ \ldots, \ 2 a_N s_{N,1} + \sum_{j \in N_N} a_{Nj} (s_{N,1} - s_{j,1}) \right]. \end{aligned} $$
Then, we have
$$ s_1^T H H^T s_1 = \sum_{i=1}^{N} \left[ 2 a_i s_{i,1} + \sum_{j \in N_i} a_{ij} (s_{i,1} - s_{j,1}) \right]^2 = \sum_{i=1}^{N} \left[ 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \right]^2, $$
$$ s_1^T H \gamma_1 \gamma_1^T H^T s_1 = \sum_{i=1}^{N} \gamma_{i,1}^2 \left[ 2 a_i s_{i,1} + \sum_{j \in N_i} a_{ij} (s_{i,1} - s_{j,1}) \right]^2 = \sum_{i=1}^{N} \gamma_{i,1}^2 \left[ 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \right]^2. $$
According to (48), (50) and (51), design the first virtual controller $x_{i,2}^*$ and the update law for $\theta_{i,1}$ as
$$ x_{i,2}^* = -c_{i,1} \left[ 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \right] - \theta_{i,1}^T \psi_{i,1} $$
$$ D^{\alpha} \theta_{i,1} = \sigma_{i,1} \psi_{i,1} \left[ 2 a_i (x_{i,1} - x_d) + \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1}) \right] - \rho_{i,1} \theta_{i,1} $$
where $c_{i,1} = 2 + \frac{\gamma_{i,1}^2}{2}$ and $\rho_{i,1}$ is a design parameter. Substituting (52) and (53) into (48), and using (34), we obtain
$$ \begin{aligned} D^{\alpha} V_1 \le{} & -q_0 \| e \|^2 + \frac{1}{2} \| P \varepsilon \|^2 + \sum_{i=1}^{N} \sum_{l=1}^{n} \frac{1}{2} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l} + \frac{1}{2} e_1^T e_1 + \frac{1}{2} e_2^T e_2 + \frac{1}{2} \delta_1^T \delta_1 \\ & + \sum_{i=1}^{N} \frac{\rho_{i,1}}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \theta_{i,1} + \sum_{i=1}^{N} \frac{1}{2} s_{i,2}^2 + \sum_{i=1}^{N} \frac{1}{2} w_{i,2}^2 \\ \le{} & -q_1 \| e \|^2 + \eta_1 + \sum_{i=1}^{N} \sum_{l=1}^{n} \frac{1}{2} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l} + \sum_{i=1}^{N} \frac{\rho_{i,1}}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \theta_{i,1} + \sum_{i=1}^{N} \frac{1}{2} s_{i,2}^2 + \sum_{i=1}^{N} \frac{1}{2} w_{i,2}^2 \end{aligned} $$
where $q_1 = q_0 - N$ and $\eta_1 = \frac{1}{2} \| P \varepsilon \|^2 + \frac{1}{2} \delta_1^T \delta_1$.
Based on the DSC technique, the state variable $v_{i,2}$ is obtained as the solution of the fractional differential equation
$$ \lambda_{i,2} D^{\alpha} v_{i,2} + v_{i,2} = x_{i,2}^*, \quad v_{i,2}(0) = x_{i,2}^*(0). $$
According to Equations (35) and (55), we obtain
$$ D^{\alpha} w_{i,2} = D^{\alpha} v_{i,2} - D^{\alpha} x_{i,2}^* = -\frac{v_{i,2} - x_{i,2}^*}{\lambda_{i,2}} - D^{\alpha} x_{i,2}^* = -\frac{w_{i,2}}{\lambda_{i,2}} + B_{i,2} $$
where $\lambda_{i,2}$ is a design parameter and $B_{i,2}$ is a continuous function depending on the variables $x_{i,1}$, $x_{j,1}$, $s_{i,2}$, $s_{j,2}$, $w_{i,2}$, $w_{j,2}$, $\theta_{i,1}$, $\theta_{j,1}$, $x_d$, $D^{\alpha} x_d$. According to [50,51], there exist constants $M_{i,2} > 0$, $i = 1, \ldots, N$, such that $|B_{i,2}| \le M_{i,2}$ holds. □
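To illustrate the DSC filter numerically (a sketch with assumed parameter values, using a Grünwald–Letnikov discretization that is not specified in the paper; the zero initial condition deviates from the $v(0) = x^*(0)$ initialization above so the transient is visible), the step response of $\lambda D^{\alpha} v + v = x^*$ can be simulated as follows:

```python
def dsc_filter_step(alpha, lam, h, steps, x_star=1.0, v0=0.0):
    """Simulate lam * D^alpha v + v = x_star with a Grunwald-Letnikov
    discretization of the fractional derivative (h, lam, alpha are
    assumed illustrative values)."""
    # GL weights w_j = (-1)^j C(alpha, j), computed by recurrence.
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    v = [v0]
    c = lam / h**alpha
    for k in range(1, steps + 1):
        # history term: sum_{j >= 1} w_j * v_{k-j}
        hist = sum(w[j] * v[k - j] for j in range(1, k + 1))
        # solve c * (v_k + hist) + v_k = x_star for v_k
        v.append((x_star - c * hist) / (c + 1.0))
    return v

v = dsc_filter_step(alpha=0.8, lam=0.1, h=0.01, steps=1000)
# The filter output rises toward the constant input x_star = 1,
# with the slow power-law tail typical of fractional dynamics.
```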
Remark 4.
In Equation (52), the designed virtual controller contains three parts. The first part, $-2 c_{i,1} a_i (x_{i,1} - x_d)$, ensures that the system can track the reference signal. The second part, $-c_{i,1} \sum_{j \in N_i} a_{ij} (x_{i,1} - x_{j,1})$, makes sure that all agents achieve consensus. The third part, $-\theta_{i,1}^T \psi_{i,1}$, is used to approximate the unknown nonlinear function $g_{i,1}(x_{i,1})$. Note that there is no conflict among the three parts.
Step 2.
Define the error variable $s_{i,2} = \hat{x}_{i,2} - v_{i,2}$. Then, from (21) and (25), we obtain
$$ D^{\alpha} s_{i,2} = D^{\alpha} \hat{x}_{i,2} - D^{\alpha} v_{i,2} = \hat{x}_{i,3} + k_{i,2} e_{i,1} + \theta_{i,2}^T \psi_{i,2} + \tilde{\theta}_{i,2}^T \psi_{i,2} + \delta_{i,2} + \Delta g_{i,2} - D^{\alpha} v_{i,2}. $$
According to (35), we have
$$ D^{\alpha} s_{i,2} = s_{i,3} + x_{i,3}^* + w_{i,3} + k_{i,2} e_{i,1} + \theta_{i,2}^T \psi_{i,2} + \tilde{\theta}_{i,2}^T \psi_{i,2} + \delta_{i,2} + \Delta g_{i,2} - D^{\alpha} v_{i,2}. $$
Construct the Lyapunov function
$$ V_2 = V_1 + \sum_{i=1}^{N} V_{i,2} = V_1 + \frac{1}{2} \sum_{i=1}^{N} \left[ s_{i,2}^2 + \frac{1}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T \tilde{\theta}_{i,2} + w_{i,2}^2 \right] $$
where $\sigma_{i,2}$ is a design parameter. Then, we have
$$ D^{\alpha} V_2 = D^{\alpha} V_1 + \sum_{i=1}^{N} \left[ s_{i,2} D^{\alpha} s_{i,2} + \frac{1}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T D^{\alpha} \tilde{\theta}_{i,2} + w_{i,2} D^{\alpha} w_{i,2} \right]. $$
Substituting (58) into (60), we have
$$ D^{\alpha} V_2 = D^{\alpha} V_1 + \sum_{i=1}^{N} \left[ s_{i,2} \left( s_{i,3} + x_{i,3}^* + w_{i,3} + k_{i,2} e_{i,1} + \theta_{i,2}^T \psi_{i,2} + \tilde{\theta}_{i,2}^T \psi_{i,2} + \delta_{i,2} + \Delta g_{i,2} - D^{\alpha} v_{i,2} \right) + \frac{1}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T D^{\alpha} \tilde{\theta}_{i,2} + w_{i,2} D^{\alpha} w_{i,2} \right]. $$
According to Lemma 4, we have
$$ s_{i,2} k_{i,2} e_{i,1} \le \frac{1}{2} s_{i,2}^2 + \frac{1}{2} k_{i,2}^2 e_{i,1}^2 $$
$$ s_{i,2} \left( s_{i,3} + w_{i,3} \right) \le s_{i,2}^2 + \frac{1}{2} \left( s_{i,3}^2 + w_{i,3}^2 \right) $$
$$ s_{i,2} \delta_{i,2} \le \frac{1}{2} s_{i,2}^2 + \frac{1}{2} \delta_{i,2}^2 $$
$$ s_{i,2} \Delta g_{i,2} \le \frac{1}{2} s_{i,2}^2 + \frac{1}{2} \gamma_{i,2}^2 e_{i,2}^2. $$
Substituting (62)–(65) into (61), we have
$$ \begin{aligned} D^{\alpha} V_2 \le{} & D^{\alpha} V_1 + \sum_{i=1}^{N} \Big[ s_{i,2} \left( x_{i,3}^* + \theta_{i,2}^T \psi_{i,2} + \tilde{\theta}_{i,2}^T \psi_{i,2} - D^{\alpha} v_{i,2} \right) + \frac{5}{2} s_{i,2}^2 + \frac{1}{2} \left( s_{i,3}^2 + w_{i,3}^2 \right) \\ & + \frac{1}{2} k_{i,2}^2 e_{i,1}^2 + \frac{1}{2} \delta_{i,2}^2 + \frac{1}{2} \gamma_{i,2}^2 e_{i,2}^2 - \frac{1}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T D^{\alpha} \theta_{i,2} + w_{i,2} D^{\alpha} w_{i,2} \Big]. \end{aligned} $$
According to Theorem 1, the second virtual controller $x_{i,3}^*$ and the update law for $\theta_{i,2}$ are designed as follows
$$ x_{i,3}^* = -c_{i,2} s_{i,2} - 3 s_{i,2} - \theta_{i,2}^T \psi_{i,2} + \frac{x_{i,2}^* - v_{i,2}}{\lambda_{i,2}} $$
$$ D^{\alpha} \theta_{i,2} = \sigma_{i,2} \psi_{i,2}(\hat{X}_{i,2}) s_{i,2} - \rho_{i,2} \theta_{i,2} $$
where $\rho_{i,2}$ is a design parameter.
Substituting Equations (67), (68), (54) and (56) into (66), the following inequality holds
$$ \begin{aligned} D^{\alpha} V_2 \le{} & -q_1 \| e \|^2 + \eta_1 + \sum_{i=1}^{N} \sum_{l=1}^{n} \frac{1}{2} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l} + \sum_{i=1}^{N} \frac{\rho_{i,1}}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \theta_{i,1} + \sum_{i=1}^{N} \frac{1}{2} s_{i,2}^2 + \sum_{i=1}^{N} \frac{1}{2} w_{i,2}^2 \\ & + \sum_{i=1}^{N} \Big[ s_{i,2} \left( -c_{i,2} s_{i,2} - 3 s_{i,2} - \theta_{i,2}^T \psi_{i,2} + \frac{x_{i,2}^* - v_{i,2}}{\lambda_{i,2}} + \theta_{i,2}^T \psi_{i,2} + \tilde{\theta}_{i,2}^T \psi_{i,2} - D^{\alpha} v_{i,2} \right) \\ & + \frac{5}{2} s_{i,2}^2 + \frac{1}{2} \left( s_{i,3}^2 + w_{i,3}^2 \right) + \frac{1}{2} k_{i,2}^2 e_{i,1}^2 + \frac{1}{2} \delta_{i,2}^2 + \frac{1}{2} \gamma_{i,2}^2 e_{i,2}^2 \\ & - \frac{1}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T \left( \sigma_{i,2} \psi_{i,2}(\hat{X}_{i,2}) s_{i,2} - \rho_{i,2} \theta_{i,2} \right) + w_{i,2} \left( -\frac{w_{i,2}}{\lambda_{i,2}} + B_{i,2} \right) \Big]. \end{aligned} $$
According to Lemma 4, we have $w_{i,2} B_{i,2} \le \frac{1}{2} w_{i,2}^2 + \frac{1}{2} M_{i,2}^2$. Then, we obtain
$$ \begin{aligned} D^{\alpha} V_2 \le{} & -q_2 \| e \|^2 + \eta_2 + \sum_{i=1}^{N} \sum_{l=1}^{n} \frac{1}{2} \tilde{\theta}_{i,l}^T \tilde{\theta}_{i,l} + \sum_{i=1}^{N} \frac{\rho_{i,1}}{\sigma_{i,1}} \tilde{\theta}_{i,1}^T \theta_{i,1} + \sum_{i=1}^{N} \frac{\rho_{i,2}}{\sigma_{i,2}} \tilde{\theta}_{i,2}^T \theta_{i,2} \\ & - \sum_{i=1}^{N} c_{i,2} s_{i,2}^2 - \sum_{i=1}^{N} \left( \frac{1}{\lambda_{i,2}} - 1 \right) w_{i,2}^2 + \frac{1}{2} \sum_{i=1}^{N} M_{i,2}^2 + \frac{1}{2} \sum_{i=1}^{N} \left( s_{i,3}^2 + w_{i,3}^2 \right) \end{aligned} $$
where
$$ q_2 = q_1 - \frac{1}{2} \sum_{i=1}^{N} \left( k_{i,2}^2 + \gamma_{i,2}^2 \right), \quad \eta_2 = \eta_1 + \frac{1}{2} \sum_{i=1}^{N} \delta_{i,2}^2. $$
By using the DSC technique, we have the next fractional differential equation
$$ \lambda_{i,3} D^{\alpha} v_{i,3} + v_{i,3} = x_{i,3}^*, \quad v_{i,3}(0) = x_{i,3}^*(0). $$
According to (71), we can obtain
$$ D^{\alpha} w_{i,3} = D^{\alpha} v_{i,3} - D^{\alpha} x_{i,3}^* = -\frac{v_{i,3} - x_{i,3}^*}{\lambda_{i,3}} - D^{\alpha} x_{i,3}^* = -\frac{w_{i,3}}{\lambda_{i,3}} + B_{i,3} $$
where $\lambda_{i,3}$ is a design parameter and $B_{i,3} = -D^{\alpha} x_{i,3}^*$.
Step k.
Defining the k-th error variable s i , k = x ^ i , k v i , k , we have
D α s i , k = D α x ^ i , k D α v i , k = x ^ i , k + 1 + k i , k e i , 1 + θ i , k T ψ i , k + θ ˜ i , k T ψ i , k + δ i , k + Δ g i , k D α v i , k .
Substituting (35) into (73), we can obtain
$$D^{\alpha}s_{i,k}=s_{i,k+1}+x_{i,k+1}^{*}+w_{i,k+1}+k_{i,k}e_{i,1}+\theta_{i,k}^{T}\psi_{i,k}+\tilde{\theta}_{i,k}^{T}\psi_{i,k}+\delta_{i,k}+\Delta g_{i,k}-D^{\alpha}v_{i,k}.$$
Construct the Lyapunov function
$$V_{k}=V_{k-1}+\sum_{i=1}^{N}V_{i,k}=V_{k-1}+\frac{1}{2}\sum_{i=1}^{N}\Big(s_{i,k}^{2}+\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}\tilde{\theta}_{i,k}+w_{i,k}^{2}\Big)$$
where $\sigma_{i,k}$ is a design parameter. Then, we have
$$D^{\alpha}V_{k}\le D^{\alpha}V_{k-1}+\sum_{i=1}^{N}\Big(s_{i,k}D^{\alpha}s_{i,k}+\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}D^{\alpha}\tilde{\theta}_{i,k}+w_{i,k}D^{\alpha}w_{i,k}\Big).$$
Substituting (74) into (76), we have
$$D^{\alpha}V_{k}\le D^{\alpha}V_{k-1}+\sum_{i=1}^{N}\Big[s_{i,k}\big(s_{i,k+1}+x_{i,k+1}^{*}+w_{i,k+1}+k_{i,k}e_{i,1}+\theta_{i,k}^{T}\psi_{i,k}+\tilde{\theta}_{i,k}^{T}\psi_{i,k}+\delta_{i,k}+\Delta g_{i,k}-D^{\alpha}v_{i,k}\big)+\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}D^{\alpha}\tilde{\theta}_{i,k}+w_{i,k}D^{\alpha}w_{i,k}\Big].$$
According to Lemma 4, we have
$$s_{i,k}k_{i,k}e_{i,1}\le\frac{1}{2}s_{i,k}^{2}+\frac{1}{2}k_{i,k}^{2}e_{i,1}^{2}$$
$$s_{i,k}\big(s_{i,k+1}+w_{i,k+1}\big)\le s_{i,k}^{2}+\frac{1}{2}\big(s_{i,k+1}^{2}+w_{i,k+1}^{2}\big)$$
$$s_{i,k}\delta_{i,k}\le\frac{1}{2}s_{i,k}^{2}+\frac{1}{2}\delta_{i,k}^{2}$$
$$s_{i,k}\Delta g_{i,k}\le\frac{1}{2}s_{i,k}^{2}+\frac{1}{2}\gamma_{i,k}^{2}e_{i,k}^{2}.$$
Substituting (78)–(81) into (77), we can obtain
$$D^{\alpha}V_{k}\le D^{\alpha}V_{k-1}+\sum_{i=1}^{N}\Big[s_{i,k}\big(x_{i,k+1}^{*}+\theta_{i,k}^{T}\psi_{i,k}+\tilde{\theta}_{i,k}^{T}\psi_{i,k}-D^{\alpha}v_{i,k}\big)+\frac{5}{2}s_{i,k}^{2}+\frac{1}{2}\big(s_{i,k+1}^{2}+w_{i,k+1}^{2}\big)+\frac{1}{2}k_{i,k}^{2}e_{i,1}^{2}+\frac{1}{2}\delta_{i,k}^{2}+\frac{1}{2}\gamma_{i,k}^{2}e_{i,k}^{2}-\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}D^{\alpha}\theta_{i,k}+w_{i,k}D^{\alpha}w_{i,k}\Big].$$
According to Theorem 1, the $k$-th virtual controller $x_{i,k+1}^{*}$ and the update law for $\theta_{i,k}$ are designed as follows
$$x_{i,k+1}^{*}=-c_{i,k}s_{i,k}-3s_{i,k}-\theta_{i,k}^{T}\psi_{i,k}+\frac{x_{i,k}^{*}-v_{i,k}}{\lambda_{i,k}}$$
$$D^{\alpha}\theta_{i,k}=\sigma_{i,k}\psi_{i,k}(\hat{X}_{i,k})s_{i,k}-\rho_{i,k}\theta_{i,k}$$
where $\rho_{i,k}$ is a design parameter. As before, by using the DSC technique, we obtain the following fractional differential equation
$$\lambda_{i,k}D^{\alpha}v_{i,k}+v_{i,k}=x_{i,k}^{*},\qquad v_{i,k}(0)=x_{i,k}^{*}(0).$$
According to Equation (85), we have
$$D^{\alpha}w_{i,k}=D^{\alpha}v_{i,k}-D^{\alpha}x_{i,k}^{*}=-\frac{v_{i,k}-x_{i,k}^{*}}{\lambda_{i,k}}-D^{\alpha}x_{i,k}^{*}=-\frac{w_{i,k}}{\lambda_{i,k}}+B_{i,k}$$
where $\lambda_{i,k}$ is a design parameter and $B_{i,k}=-D^{\alpha}x_{i,k}^{*}$.
Substituting Equations (83), (84) and (86) into (82) yields
$$D^{\alpha}V_{k}\le D^{\alpha}V_{k-1}+\sum_{i=1}^{N}\Big[s_{i,k}\Big(-c_{i,k}s_{i,k}-3s_{i,k}-\theta_{i,k}^{T}\psi_{i,k}+\frac{x_{i,k}^{*}-v_{i,k}}{\lambda_{i,k}}+\theta_{i,k}^{T}\psi_{i,k}+\tilde{\theta}_{i,k}^{T}\psi_{i,k}-D^{\alpha}v_{i,k}\Big)+\frac{5}{2}s_{i,k}^{2}+\frac{1}{2}\big(s_{i,k+1}^{2}+w_{i,k+1}^{2}\big)+\frac{1}{2}k_{i,k}^{2}e_{i,1}^{2}+\frac{1}{2}\delta_{i,k}^{2}+\frac{1}{2}\gamma_{i,k}^{2}e_{i,k}^{2}-\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}\big(\sigma_{i,k}\psi_{i,k}s_{i,k}-\rho_{i,k}\theta_{i,k}\big)+w_{i,k}\Big(-\frac{w_{i,k}}{\lambda_{i,k}}+B_{i,k}\Big)\Big].$$
According to Lemma 4, we have $w_{i,k}B_{i,k}\le\frac{1}{2}w_{i,k}^{2}+\frac{1}{2}M_{i,k}^{2}$. The following inequality holds
$$D^{\alpha}V_{k}\le D^{\alpha}V_{k-1}+\sum_{i=1}^{N}\Big[s_{i,k}\Big(-c_{i,k}s_{i,k}-3s_{i,k}-\theta_{i,k}^{T}\psi_{i,k}+\frac{x_{i,k}^{*}-v_{i,k}}{\lambda_{i,k}}+\theta_{i,k}^{T}\psi_{i,k}+\tilde{\theta}_{i,k}^{T}\psi_{i,k}-D^{\alpha}v_{i,k}\Big)+\frac{5}{2}s_{i,k}^{2}+\frac{1}{2}\big(s_{i,k+1}^{2}+w_{i,k+1}^{2}\big)+\frac{1}{2}k_{i,k}^{2}e_{i,1}^{2}+\frac{1}{2}\delta_{i,k}^{2}+\frac{1}{2}\gamma_{i,k}^{2}e_{i,k}^{2}-\frac{1}{\sigma_{i,k}}\tilde{\theta}_{i,k}^{T}\big(\sigma_{i,k}\psi_{i,k}s_{i,k}-\rho_{i,k}\theta_{i,k}\big)-\frac{w_{i,k}^{2}}{\lambda_{i,k}}+\frac{1}{2}w_{i,k}^{2}+\frac{1}{2}M_{i,k}^{2}\Big].$$
Combining (34), (54) and (70), we can obtain
$$D^{\alpha}V_{k-1}\le -q_{k-1}\|e\|^{2}+\eta_{k-1}+\sum_{i=1}^{N}\sum_{l=1}^{n}\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{i=1}^{N}\Big[\sum_{l=1}^{k-1}\frac{\rho_{i,l}}{\sigma_{i,l}}\tilde{\theta}_{i,l}^{T}\theta_{i,l}-\sum_{l=2}^{k-1}c_{i,l}s_{i,l}^{2}-\sum_{l=2}^{k-1}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}+\frac{1}{2}\sum_{l=2}^{k-1}M_{i,l}^{2}+\frac{1}{2}\big(s_{i,k}^{2}+w_{i,k}^{2}\big)\Big].$$
Substituting (89) into (88), we have
$$D^{\alpha}V_{k}\le -q_{k}\|e\|^{2}+\eta_{k}+\sum_{i=1}^{N}\sum_{l=1}^{n}\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{i=1}^{N}\Big[\sum_{l=1}^{k}\frac{\rho_{i,l}}{\sigma_{i,l}}\tilde{\theta}_{i,l}^{T}\theta_{i,l}-\sum_{l=2}^{k}c_{i,l}s_{i,l}^{2}-\sum_{l=2}^{k}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}+\frac{1}{2}\sum_{l=2}^{k}M_{i,l}^{2}+\frac{1}{2}\big(s_{i,k+1}^{2}+w_{i,k+1}^{2}\big)\Big]$$
where
$$q_{k}=q_{k-1}-\frac{1}{2}\sum_{i=1}^{N}\big(k_{i,k}^{2}+\gamma_{i,k}^{2}\big),\qquad \eta_{k}=\eta_{k-1}+\frac{1}{2}\sum_{i=1}^{N}\delta_{i,k}^{2}.$$
Step n.
Define the n-th error variable and the output error of the filter, as follows
$$s_{i,n}=\hat{x}_{i,n}-v_{i,n}$$
$$w_{i,n}=v_{i,n}-x_{i,n}^{*}.$$
Then, we have
$$D^{\alpha}s_{i,n}=D^{\alpha}\hat{x}_{i,n}-D^{\alpha}v_{i,n}=u_{i}+k_{i,n}e_{i,1}+\theta_{i,n}^{T}\psi_{i,n}+\tilde{\theta}_{i,n}^{T}\psi_{i,n}+\delta_{i,n}+\Delta g_{i,n}-D^{\alpha}v_{i,n}.$$
By using the DSC technique, we obtain the following fractional differential equation
$$\lambda_{i,n}D^{\alpha}v_{i,n}+v_{i,n}=x_{i,n}^{*},\qquad v_{i,n}(0)=x_{i,n}^{*}(0).$$
By Equation (92), we have
$$D^{\alpha}w_{i,n}=D^{\alpha}v_{i,n}-D^{\alpha}x_{i,n}^{*}=-\frac{w_{i,n}}{\lambda_{i,n}}+B_{i,n}$$
where $\lambda_{i,n}$ is a design parameter and $B_{i,n}=-D^{\alpha}x_{i,n}^{*}$.
Construct the Lyapunov function
$$V_{n}=V_{n-1}+\sum_{i=1}^{N}V_{i,n}=V_{n-1}+\frac{1}{2}\sum_{i=1}^{N}\Big(s_{i,n}^{2}+\frac{1}{\sigma_{i,n}}\tilde{\theta}_{i,n}^{T}\tilde{\theta}_{i,n}+w_{i,n}^{2}\Big)$$
where $\sigma_{i,n}$ is a design parameter. Then, we have
$$D^{\alpha}V_{n}\le D^{\alpha}V_{n-1}+\sum_{i=1}^{N}\Big(s_{i,n}D^{\alpha}s_{i,n}+\frac{1}{\sigma_{i,n}}\tilde{\theta}_{i,n}^{T}D^{\alpha}\tilde{\theta}_{i,n}+w_{i,n}D^{\alpha}w_{i,n}\Big).$$
Substituting (93) into (97), we have
$$D^{\alpha}V_{n}\le D^{\alpha}V_{n-1}+\sum_{i=1}^{N}\Big[s_{i,n}\big(u_{i}+k_{i,n}e_{i,1}+\theta_{i,n}^{T}\psi_{i,n}+\tilde{\theta}_{i,n}^{T}\psi_{i,n}+\delta_{i,n}+\Delta g_{i,n}-D^{\alpha}v_{i,n}\big)+\frac{1}{\sigma_{i,n}}\tilde{\theta}_{i,n}^{T}D^{\alpha}\tilde{\theta}_{i,n}+w_{i,n}D^{\alpha}w_{i,n}\Big].$$
According to Lemma 4, we have
$$s_{i,n}k_{i,n}e_{i,1}\le\frac{1}{2}s_{i,n}^{2}+\frac{1}{2}k_{i,n}^{2}e_{i,1}^{2}$$
$$s_{i,n}\delta_{i,n}\le\frac{1}{2}s_{i,n}^{2}+\frac{1}{2}\delta_{i,n}^{2}$$
$$s_{i,n}\Delta g_{i,n}\le\frac{1}{2}s_{i,n}^{2}+\frac{1}{2}\gamma_{i,n}^{2}e_{i,n}^{2}.$$
From (99)–(101), Equation (98) can be written as
$$D^{\alpha}V_{n}\le D^{\alpha}V_{n-1}+\sum_{i=1}^{N}\Big[s_{i,n}\big(u_{i}+\theta_{i,n}^{T}\psi_{i,n}+\tilde{\theta}_{i,n}^{T}\psi_{i,n}-D^{\alpha}v_{i,n}\big)+\frac{3}{2}s_{i,n}^{2}+\frac{1}{2}k_{i,n}^{2}e_{i,1}^{2}+\frac{1}{2}\delta_{i,n}^{2}+\frac{1}{2}\gamma_{i,n}^{2}e_{i,n}^{2}-\frac{1}{\sigma_{i,n}}\tilde{\theta}_{i,n}^{T}D^{\alpha}\theta_{i,n}+w_{i,n}D^{\alpha}w_{i,n}\Big].$$
Design the controller $u_{i}$ and the update law for $\theta_{i,n}$ as follows
$$u_{i}=-c_{i,n}s_{i,n}-2s_{i,n}-\theta_{i,n}^{T}\psi_{i,n}+\frac{x_{i,n}^{*}-v_{i,n}}{\lambda_{i,n}}$$
$$D^{\alpha}\theta_{i,n}=\sigma_{i,n}\psi_{i,n}(\hat{X}_{i,n})s_{i,n}-\rho_{i,n}\theta_{i,n}$$
where $\rho_{i,n}$ is a design parameter. According to (90), substituting Equations (95), (103) and (104) into (102), the following inequality holds
$$D^{\alpha}V_{n}\le -q_{n-1}\|e\|^{2}+\eta_{n-1}+\sum_{i=1}^{N}\sum_{l=1}^{n}\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{i=1}^{N}\Big[\sum_{l=1}^{n-1}\frac{\rho_{i,l}}{\sigma_{i,l}}\tilde{\theta}_{i,l}^{T}\theta_{i,l}-\sum_{l=2}^{n-1}c_{i,l}s_{i,l}^{2}-\sum_{l=2}^{n-1}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}+\frac{1}{2}\sum_{l=2}^{n-1}M_{i,l}^{2}+\frac{1}{2}\big(s_{i,n}^{2}+w_{i,n}^{2}\big)\Big]+\sum_{i=1}^{N}\Big[s_{i,n}\Big(-c_{i,n}s_{i,n}-2s_{i,n}-\theta_{i,n}^{T}\psi_{i,n}+\frac{x_{i,n}^{*}-v_{i,n}}{\lambda_{i,n}}+\theta_{i,n}^{T}\psi_{i,n}+\tilde{\theta}_{i,n}^{T}\psi_{i,n}-D^{\alpha}v_{i,n}\Big)+\frac{3}{2}s_{i,n}^{2}+\frac{1}{2}k_{i,n}^{2}e_{i,1}^{2}+\frac{1}{2}\delta_{i,n}^{2}+\frac{1}{2}\gamma_{i,n}^{2}e_{i,n}^{2}-\frac{1}{\sigma_{i,n}}\tilde{\theta}_{i,n}^{T}\big(\sigma_{i,n}\psi_{i,n}s_{i,n}-\rho_{i,n}\theta_{i,n}\big)+w_{i,n}\Big(-\frac{w_{i,n}}{\lambda_{i,n}}+B_{i,n}\Big)\Big].$$
According to Lemma 4, we obtain $w_{i,n}B_{i,n}\le\frac{1}{2}w_{i,n}^{2}+\frac{1}{2}M_{i,n}^{2}$; then, we have
$$D^{\alpha}V_{n}\le -q_{n}\|e\|^{2}+\eta_{n}+\sum_{i=1}^{N}\sum_{l=1}^{n}\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{i=1}^{N}\Big[\sum_{l=1}^{n}\frac{\rho_{i,l}}{\sigma_{i,l}}\tilde{\theta}_{i,l}^{T}\theta_{i,l}-\sum_{l=2}^{n}c_{i,l}s_{i,l}^{2}-\sum_{l=2}^{n}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}+\frac{1}{2}\sum_{l=2}^{n}M_{i,l}^{2}\Big]$$
where
$$q_{n}=q_{n-1}-\frac{1}{2}\sum_{i=1}^{N}\big(k_{i,n}^{2}+\gamma_{i,n}^{2}\big),\qquad \eta_{n}=\eta_{n-1}+\frac{1}{2}\sum_{i=1}^{N}\delta_{i,n}^{2}.$$
According to Lemma 4, we can obtain
$$\tilde{\theta}_{i,l}^{T}\theta_{i,l}^{*}\le\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\frac{1}{2}\theta_{i,l}^{*T}\theta_{i,l}^{*}.$$
Then, we have
$$D^{\alpha}V_{n}\le -q_{n}\|e\|^{2}+\eta_{n}+\sum_{i=1}^{N}\sum_{l=1}^{n}\frac{1}{2}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{i=1}^{N}\Big[-\sum_{l=1}^{n}\frac{\rho_{i,l}}{2\sigma_{i,l}}\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{l=1}^{n}\frac{\rho_{i,l}}{2\sigma_{i,l}}\theta_{i,l}^{*T}\theta_{i,l}^{*}-\sum_{l=2}^{n}c_{i,l}s_{i,l}^{2}-\sum_{l=2}^{n}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}+\frac{1}{2}\sum_{l=2}^{n}M_{i,l}^{2}\Big].$$
Define
$$\zeta=\eta_{n}+\sum_{i=1}^{N}\Big(\sum_{l=1}^{n}\frac{\rho_{i,l}}{2\sigma_{i,l}}\theta_{i,l}^{*T}\theta_{i,l}^{*}+\frac{1}{2}\sum_{l=2}^{n}M_{i,l}^{2}\Big).$$
Then, according to Equation (108), the following inequalities hold
$$D^{\alpha}V_{n}\le -q_{n}\|e\|^{2}-\sum_{i=1}^{N}\Big[\sum_{l=2}^{n}c_{i,l}s_{i,l}^{2}+\sum_{l=1}^{n}\Big(\frac{\rho_{i,l}}{2\sigma_{i,l}}-\frac{1}{2}\Big)\tilde{\theta}_{i,l}^{T}\tilde{\theta}_{i,l}+\sum_{l=2}^{n}\Big(\frac{1}{\lambda_{i,l}}-1\Big)w_{i,l}^{2}\Big]+\zeta$$
where $c_{i,l}>0\ (l=2,\dots,n)$, $\frac{\rho_{i,l}}{2\sigma_{i,l}}-\frac{1}{2}>0\ (l=1,\dots,n)$ and $\frac{1}{\lambda_{i,l}}-1>0\ (l=2,\dots,n)$.
Define
$$C=\min\Big\{\frac{2q_{n}}{\lambda_{\min}(P)},\ 2c_{i,l},\ 2\Big(\frac{\rho_{i,l}}{2\sigma_{i,l}}-\frac{1}{2}\Big),\ 2\Big(\frac{1}{\lambda_{i,l}}-1\Big)\Big\}.$$
Then, Equation (110) becomes
$$D^{\alpha}V_{n}(t,x)\le -CV_{n}(t,x)+\zeta.$$
According to (112) and Lemma 5, we have
$$V_{n}\le V(0)E_{\alpha}\big(-Ct^{\alpha}\big)+\frac{\zeta\mu}{C},\qquad t\ge 0.$$
Then, we have
$$\lim_{t\to\infty}V_{n}(t)\le\frac{\zeta\mu}{C}.$$
Since $\frac{1}{2}s_{i,1}^{2}\le V_{n}(t)$, we have
$$\lim_{t\to\infty}s_{i,1}^{2}\le\frac{2\zeta\mu}{C}.$$
Then, we can conclude that all the signals of the closed-loop system (9) remain bounded and that the outputs converge to the optimal solution $x^{*}$. The output errors and the consensus tracking errors converge to a small neighborhood of zero.
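The Mittag-Leffler decay of the Lyapunov bound above can be checked numerically. The sketch below is illustrative only: `mittag_leffler` evaluates the truncated power series $E_{\alpha}(z)=\sum_{k\ge 0}z^{k}/\Gamma(\alpha k+1)$, which is adequate for moderate $|z|$ but not for large arguments, and the sample values of $V(0)$, $C$, $\zeta$ and $\mu$ are arbitrary.

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """One-parameter Mittag-Leffler E_alpha(z) by truncated power series.

    Accurate for moderate |z|; large negative arguments need asymptotic methods."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))

def lyapunov_bound(v0, alpha, C, zeta, mu, t):
    """Evaluate the right-hand side V(0) E_alpha(-C t^alpha) + zeta*mu/C."""
    return v0 * mittag_leffler(alpha, -C * t ** alpha) + zeta * mu / C
```

Since $E_{\alpha}(-x)$ is positive and decreasing for $0<\alpha\le 1$, the bound decays monotonically from $V(0)+\zeta\mu/C$ toward the ultimate bound $\zeta\mu/C$.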
Remark 5.
The algorithm proposed in this paper covers two kinds of protocols in the existing literature. If the reference signal is time-invariant, it reduces to an algorithm solving the distributed time-invariant convex optimization problem [12,13,14,15,16,17]. If the reference signal is set as a time-varying signal, it becomes an algorithm solving the distributed time-varying convex optimization problem [52,53,54,55].

4. Simulation

In this section, two examples are given to verify the validity of the proposed method.
Example 1.
Consider the following fractional Duffing-Holmes chaotic system [56].
$$D^{\alpha}x_{i,1}=x_{i,2}+g_{i,1}(x_{i,1}),\qquad D^{\alpha}x_{i,2}=u_{i}+g_{i,2}(x_{i,1},x_{i,2}),\qquad y_{i}=x_{i,1}$$
with $i=1,2,3,4,5$. The initial states are selected as $x_{1}(0)=[0.1,0.1]$, $x_{2}(0)=[0.2,0.2]$, $x_{3}(0)=[0.3,0.3]$, $x_{4}(0)=[0.4,0.4]$, $x_{5}(0)=[0.5,0.5]$. A communication graph for the five agents is given by Figure 1. The phase portrait of the uncontrolled chaotic behavior is shown in Figure 2 and Figure 3. The reference signal is defined as $x_{d}=\sin t$. The unknown functions in system (116) are defined as
$$g_{1,1}(X_{1,1})=g_{2,1}(X_{2,1})=g_{3,1}(X_{3,1})=g_{4,1}(X_{4,1})=g_{5,1}(X_{5,1})=0$$
$$g_{1,2}(X_{1,2})=x_{1,1}-0.25x_{1,2}-x_{1,1}^{3}+0.3\cos t$$
$$g_{2,2}(X_{2,2})=x_{2,1}-0.25x_{2,2}-x_{2,1}^{3}+0.1\big(x_{2,1}^{2}+x_{2,2}^{2}\big)^{1/2}+0.3\cos t$$
$$g_{3,2}(X_{3,2})=x_{3,1}-0.25x_{3,2}-x_{3,1}^{3}+0.2\sin t\,\big(x_{3,1}^{2}+2x_{3,2}^{2}\big)^{1/2}+0.3\cos t$$
$$g_{4,2}(X_{4,2})=x_{4,1}-0.25x_{4,2}-x_{4,1}^{3}+0.2\sin t\,\big(2x_{4,1}^{2}+2x_{4,2}^{2}\big)^{1/2}+0.3\cos t$$
$$g_{5,2}(X_{5,2})=x_{5,1}-0.1x_{5,2}-x_{5,1}^{3}+0.2\sin t\,\big(x_{5,1}^{2}+x_{5,2}^{2}\big)^{1/2}+0.3\cos t.$$
The local objective functions of (15) of each of the five agents are given as follows
$$f_{1}(x_{1,1})=3x_{1,1}^{2}-6x_{d}x_{1,1}+3x_{d}^{2}+0.1$$
$$f_{2}(x_{2,1})=4.6x_{2,1}^{2}-9.2x_{d}x_{2,1}+4.6x_{d}^{2}+0.2$$
$$f_{3}(x_{3,1})=3.5x_{3,1}^{2}-7x_{d}x_{3,1}+3.5x_{d}^{2}+1$$
$$f_{4}(x_{4,1})=2.5x_{4,1}^{2}-5x_{d}x_{4,1}+2.5x_{d}^{2}+0.3$$
$$f_{5}(x_{5,1})=2.3x_{5,1}^{2}-4.6x_{d}x_{5,1}+2.3x_{d}^{2}+0.4.$$
Then, define the penalty function
$$P(x_{1})=\sum_{i=1}^{5}f_{i}(x_{i,1})+x_{1}^{T}Lx_{1}.$$
Based on the penalty function (117), the necessary condition for the optimal solution $x_{1}^{*}$ of the distributed optimization problem is as follows
$$\frac{\partial P(x_{1}^{*})}{\partial x_{1}^{*}}=0$$
where $x_{1}^{*}=[x_{1,1}^{*},x_{2,1}^{*},\dots,x_{5,1}^{*}]^{T}$.
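Because every local objective above has the quadratic form $f_{i}=a_{i}(x_{i,1}-x_{d})^{2}+\text{const}$, the stationarity condition is the linear system $(\operatorname{diag}(a)+L)x_{1}^{*}=a\,x_{d}$, whose solution is $x_{1}^{*}=x_{d}\mathbf{1}$ since $L\mathbf{1}=0$: all agents agree on the reference. A minimal sketch of this computation follows; the ring topology stands in for Figure 1's graph, which is an assumption for illustration (the solution is independent of the connected topology chosen).

```python
def laplacian(edges, n):
    """Graph Laplacian L = D - A of an undirected, unit-weight graph."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0; L[j][j] += 1.0
        L[i][j] -= 1.0; L[j][i] -= 1.0
    return L

def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def optimal_solution(a, edges, x_d):
    """Solve grad P = 0, i.e. (diag(a) + L) x = a * x_d, for quadratic objectives."""
    n = len(a)
    L = laplacian(edges, n)
    M = [[L[i][j] + (a[i] if i == j else 0.0) for j in range(n)] for i in range(n)]
    return solve(M, [ai * x_d for ai in a])

# coefficients a_i of the five local objectives f_i = a_i (x_{i,1} - x_d)^2 + const
a = [3.0, 4.6, 3.5, 2.5, 2.3]
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # assumed topology, for illustration
x_opt = optimal_solution(a, ring, 0.5)           # all entries equal x_d = 0.5
```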
The virtual controllers, the parameter update laws and the control input are developed based on the observer. For the observer, the design parameters are selected as $k_{1,1}=k_{2,1}=k_{3,1}=k_{4,1}=k_{5,1}=50$, $k_{1,2}=k_{2,2}=k_{3,2}=k_{4,2}=k_{5,2}=1000$, and the initial observer states for the five agents are selected as $\hat{x}_{1}=[0.2,0.2]$, $\hat{x}_{2}=[0.3,0.3]$, $\hat{x}_{3}=[0.4,0.4]$, $\hat{x}_{4}=[0.5,0.5]$, $\hat{x}_{5}=[0.6,0.6]$. For the RBFNNs, eleven nodes are selected, i.e., $q=11$. Two different sets of receptive-field centers are designed: $c_{1,1}$ and $c_{2,1}$ are evenly spaced in $[-1,1]$, and $c_{1,2}$ and $c_{2,2}$ are selected as $\begin{bmatrix}-5&-4&-3&-2&-1&0&1&2&3&4&5\\-6&-4.8&-3.6&-2.4&-1.2&0&1.2&2.4&3.6&4.8&6\end{bmatrix}$. The width of the Gaussian functions is selected as $b_{1,1}=b_{1,2}=b_{2,1}=b_{2,2}=5$.
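The Gaussian basis vector $\psi(X)$ used by the RBFNN approximators can be formed directly from these centers and widths. The sketch below is illustrative; in particular, pairing the rows of centers into two-dimensional points is an assumption made here, not a detail taken from the paper.

```python
import math

def rbf_basis(X, centers, width):
    """Gaussian RBF vector: psi_l(X) = exp(-||X - c_l||^2 / width^2)."""
    return [math.exp(-sum((x - c) ** 2 for x, c in zip(X, cl)) / width ** 2)
            for cl in centers]

# eleven centers evenly spaced in [-1, 1] per input dimension, Gaussian width b = 5
grid = [-1.0 + 0.2 * k for k in range(11)]
centers = [(g, g) for g in grid]          # assumed pairing into 2-D centers
psi = rbf_basis((0.0, 0.0), centers, 5.0)  # basis vector evaluated at the origin
```

The NN output used in the control laws is then the inner product $\theta^{T}\psi(X)$, with $\theta$ driven by the adaptation laws.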
According to Theorem 1 and Equations (52), (53), (103) and (104), the virtual control law, the parameters update laws and the control input are designed, as follows
$$x_{i,2}^{*}=-c_{i,1}\Big[2a_{i}(x_{i,1}-x_{d})+\sum_{j\in N_{i}}a_{ij}(x_{i,1}-x_{j,1})\Big]-\theta_{i,1}^{T}\psi_{i,1}$$
$$D^{\alpha}\theta_{i,1}=\sigma_{i,1}\psi_{i,1}\Big[2a_{i}(x_{i,1}-x_{d})+\sum_{j\in N_{i}}a_{ij}(x_{i,1}-x_{j,1})\Big]-\rho_{i,1}\theta_{i,1}$$
$$u_{i}=-c_{i,2}s_{i,2}-2s_{i,2}-\theta_{i,2}^{T}\psi_{i,2}+\frac{x_{i,2}^{*}-v_{i,2}}{\lambda_{i,2}}$$
$$D^{\alpha}\theta_{i,2}=\sigma_{i,2}\psi_{i,2}s_{i,2}-\rho_{i,2}\theta_{i,2}.$$
Choose design parameters as c 1 , 1 = c 3 , 1 = c 4 , 1 = c 5 , 1 = 5 , c 2 , 1 = 4 , c i , 2 = 2 , σ i , 1 = σ i , 2 = 1 , ρ i , 1 = 40 , ρ i , 2 = 80 , λ i , 2 = 0.05 .
Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the simulation results. Figure 2 shows the trajectory of the uncontrolled system, which is clearly unstable; the same behavior can be observed in Figure 3. Figure 4 shows the trajectories of $x_{d}$ and $x_{i,1}$. Figure 5 displays the trajectories of the tracking error $s_{i,1}$, which demonstrates that the tracking error quickly converges to near zero. Figure 6 and Figure 7 give the trajectories of $\hat{x}_{i,1}$ and $\hat{x}_{i,2}$. Figure 8 shows the trajectories of $x_{i,2}$. In Figure 9 and Figure 10, $x_{1,1}$ and $x_{1,2}$ are taken as examples to compare the true values with the estimated values. Figure 11 gives the trajectories of $u_{i}$, from which it can be clearly observed that the control input converges quickly. The value of the penalty function is shown in Figure 12, from which we conclude that the algorithm successfully minimizes the penalty function.
Based on the above simulation results, it is demonstrated that the proposed control method ensures that the consensus tracking errors quickly converge to a small neighborhood of the origin, and that all agents synchronize to the reference trajectory with good control performance. Meanwhile, the value of the penalty function successfully converges to its minimum. The controller designed by this method not only guarantees good tracking performance of all agents, but also solves the distributed optimization problem.
Example 2.
Consider the following agent dynamics [16]
$$D^{\alpha}x_{i}(t)=u_{i}(t)+g_{i,1}(t),\qquad i=1,2,\dots,N$$
with $\alpha=0.98$ and $i=1,2,3,4,5$. The initial states are selected as $x_{1}(0)=0.1$, $x_{2}(0)=0.2$, $x_{3}(0)=0.3$, $x_{4}(0)=0.4$ and $x_{5}(0)=0.5$. The reference signal is defined as $x_{d}=\sin t$. The unknown functions $g_{1,1}(t)=0.1\sin t$, $g_{2,1}(t)=0.2\sin t$, $g_{3,1}(t)=0.1\sin t$, $g_{4,1}(t)=0.2\cos t$ and $g_{5,1}(t)=0.1\cos t$ in system (123) are defined as the local disturbance signals. A communication graph for the five agents is given by Figure 1.
Define the local objective functions of each of the five agents, as follows
$$f_{1}(x_{1,1})=4x_{1,1}^{2}-8x_{d}x_{1,1}+4x_{d}^{2}+1.8$$
$$f_{2}(x_{2,1})=4.6x_{2,1}^{2}-9.2x_{d}x_{2,1}+4.6x_{d}^{2}+2.2$$
$$f_{3}(x_{3,1})=3.5x_{3,1}^{2}-7x_{d}x_{3,1}+3.5x_{d}^{2}+1.4$$
$$f_{4}(x_{4,1})=2.5x_{4,1}^{2}-5x_{d}x_{4,1}+2.5x_{d}^{2}+6.6$$
$$f_{5}(x_{5,1})=2.2x_{5,1}^{2}-4.4x_{d}x_{5,1}+2.2x_{d}^{2}+9.$$
Define the penalty function
$$P(x_{1})=\sum_{i=1}^{5}f_{i}(x_{i,1})+x_{1}^{T}Lx_{1}.$$
Design the necessary condition for optimal solution to the distributed optimization problem, as follows
$$\frac{\partial P(x_{1}^{*})}{\partial x_{1}^{*}}=0$$
where $x_{1}^{*}=[x_{1,1}^{*},x_{2,1}^{*},\dots,x_{5,1}^{*}]^{T}$.
According to Theorem 1 and Equations (52) and (53), the control input and the parameters update laws are developed, as follows
$$u_{i}=-c_{i,1}\Big[2a_{i}(x_{i,1}-x_{d})+\sum_{j\in N_{i}}a_{ij}(x_{i,1}-x_{j,1})\Big]-\theta_{i,1}^{T}\psi_{i,1}$$
$$D^{\alpha}\theta_{i,1}=\sigma_{i,1}\psi_{i,1}\Big[2a_{i}(x_{i,1}-x_{d})+\sum_{j\in N_{i}}a_{ij}(x_{i,1}-x_{j,1})\Big]-\rho_{i,1}\theta_{i,1}.$$
Choose design parameters as c i , 1 = 3 , σ 1 , 1 = 9.5 , σ 2 , 1 = 40 , σ 3 , 1 = 8 , σ 4 , 1 = 28 , σ 5 , 1 = 5 , ρ 1 , 1 = ρ 4 , 1 = 20 and ρ 2 , 1 = ρ 3 , 1 = ρ 5 , 1 = 40 .
Figure 13 shows the simulation results of the protocol designed in this paper, from which we can see that the output signals $x_{i,1}$ track the reference signal $x_{d}$. Figure 14 displays the trajectories of the tracking error $s_{i,1}$. Figure 15 shows the trajectories of the control input $u_{i}$, which remain smooth over the entire time horizon. The value of the penalty function is shown in Figure 16.
In [16], a finite-time distributed optimization algorithm is proposed, as follows
$$u_{i}=u_{i}^{o}+u_{i}^{r},\qquad i=1,2,\dots,N$$
$$u_{i}^{o}=-\operatorname{sig}^{\alpha}\Big(\nabla f_{i}(x_{i})+\gamma\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})\Big)$$
$$u_{i}^{r}=-k_{1i}\operatorname{sig}^{\frac{1}{2}}(s_{i})+\phi_{i}$$
$$\dot{\phi}_{i}=-k_{2i}\operatorname{sign}(s_{i})$$
$$s_{i}=x_{i}-x_{i}(0)-\int_{0}^{t}u_{i}^{o}(\tau)\,d\tau.$$
Choose design parameters as γ = 15 , α = 0.13 , k 1 i = 3 and k 2 i = 5 .
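For reference, the signed power function $\operatorname{sig}^{\alpha}(\cdot)$ used throughout this comparison protocol, together with the optimization term $u_{i}^{o}$, can be sketched as follows. This is a hypothetical illustration of the reconstructed protocol above, not code from [16]; the function and variable names are ours.

```python
import math

def sig(x, a):
    """Signed power: sig^a(x) = |x|^a * sign(x); continuous at 0 for a > 0."""
    return math.copysign(abs(x) ** a, x)

def u_optimal(grad_fi, x_i, neighbors, gamma=15.0, alpha=0.13):
    """Optimization term u_i^o = -sig^alpha(grad f_i(x_i) + gamma * sum (x_i - x_j))."""
    return -sig(grad_fi + gamma * sum(x_i - x_j for x_j in neighbors), alpha)
```

Because $\alpha<1$, $\operatorname{sig}^{\alpha}$ has unbounded slope near the origin, which explains the non-smooth control inputs observed for this protocol in the simulations below.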
The simulation results for this algorithm are displayed in Figure 17. The trajectories of the tracking error $s_{i,1}$ are shown in Figure 18. In Figure 19, all the trajectories of the control input $u_{i}$ are displayed. The value of the penalty function is shown in Figure 20, from which we can conclude that the algorithm successfully solves the distributed optimization problem.
From Figure 13 and Figure 17, we find that both the method proposed in this paper and that of [16] achieve the goal of tracking the reference signal $x_{d}$. In Figure 14 and Figure 18, the tracking errors of the two algorithms are very close, lying in the range of −0.1 to 0.1. Figure 15 clearly shows that, with the algorithm proposed in this paper, the control input remains smooth throughout the simulation. However, the control input shown in Figure 19 is not smooth at some instants, from which we conclude that the method proposed in [16] is not particularly well suited to the fractional-order distributed optimization consensus problem. Based on the simulation results, it is demonstrated that the algorithm designed in this paper ensures that all agents synchronize to the optimal solution $x^{*}$ while rejecting the local disturbance signals $g_{i,1}(t)$.

5. Conclusions

This article has investigated the distributed optimization problem of FOMASs with nonlinear uncertain dynamics. Each agent is described by a fractional-order nonlinear system containing unmeasured states and unknown nonlinear functions, and is constrained by a local objective function described by a quadratic polynomial function. To estimate the unmeasured states, we construct the NN state observer, and we exploit RBFNNs to approximate the unknown nonlinear functions. The distributed optimization problem of FOMASs is transformed into an optimization problem with equality constraints, and a corresponding penalty function is constructed. The examples and simulation results demonstrate that all the agents' outputs are steered to the optimal solution of the global objective function by the observer-based adaptive NN backstepping DSC algorithm proposed in this paper. Compared with traditional distributed algorithms developed for integer-order MASs, our protocol is feasible and effective, producing smoother control inputs. Future research will focus on the design of adaptive distributed optimized NN control algorithms for the distributed optimization containment problem of FOMASs on the basis of this paper's results.

Author Contributions

Conceptualization, J.Y. and T.C.; methodology, X.Y., J.Y. and T.C.; software, X.Y.; validation, X.Y.; formal analysis, J.Y., T.C. and C.Z.; investigation, X.Y., W.Z., J.Y. and T.C.; resources, W.Z., J.Y., C.Z. and L.W.; writing—original draft, X.Y.; writing—review and editing, J.Y.; visualization, X.Y.; supervision, J.Y. and T.C.; project administration, W.Z. and J.Y.; funding acquisition, W.Z. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the opening project of Key Laboratory of Rotor Aerodynamics (China Aerodynamics Research and Development Center) under Grant Number: 2113RAL202103-5; in part by the National Natural Science Foundation of China under Grant Number: 5217052158.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Z.; Wang, L.; Zhang, H.; Vlacic, L.; Chen, Q. Distributed Formation Control of Nonholonomic Wheeled Mobile Robots Subject to Longitudinal Slippage Constraints. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 2992–3003. [Google Scholar] [CrossRef]
  2. Klaimi, J.; Rahim-Amoud, R.; Merghem-Boulahia, L.; Jrad, A. A novel loss-based energy management approach for smart grids using multi-agent systems and intelligent storage systems. Sustain. Cities Soc. 2018, 39, 344–357. [Google Scholar] [CrossRef]
  3. Taboun, M.S.; Brennan, R.W. An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network. Sensors 2017, 17, 2112. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Cao, D.; Zhao, J.; Hu, W.; Ding, F.; Huang, Q.; Chen, Z.; Blaabjerg, F. Data-Driven Multi-Agent Deep Reinforcement Learning for Distribution System Decentralized Voltage Control with High Penetration of PVs. IEEE Trans. Smart Grid 2021, 12, 4137–4150. [Google Scholar] [CrossRef]
  5. Jiménez, A.C.; García-Díaz, V.; Bolaños, S. A Decentralized Framework for Multi-Agent Robotic Systems. Sensors 2018, 18, 417. [Google Scholar] [CrossRef] [Green Version]
  6. Chen, T.; Yuan, J.; Yang, H. Event-triggered adaptive neural network backstepping sliding mode control of fractional-order multi-agent systems with input delay. J. Vib. Control. 2021, 10775463211036827. [Google Scholar] [CrossRef]
  7. Yuan, J.; Chen, T. Observer-based adaptive neural network dynamic surface bipartite containment control for switched fractional order multi-agent systems. Int. J. Adapt. Control Signal Process. 2022, 36, 1619–1646. [Google Scholar] [CrossRef]
  8. Yuan, J.; Chen, T. Switched Fractional Order Multiagent Systems Containment Control with Event-Triggered Mechanism and Input Quantization. Fractal Fract. 2022, 6, 77. [Google Scholar] [CrossRef]
  9. Zilun, H.; Jianying, Y. Distributed optimal formation algorithm for multi-satellites system with time-varying performance function. Int. J. Control 2018, 93, 1015–1026. [Google Scholar]
  10. Chen, T.; Shan, J. Continuous constrained attitude regulation of multiple spacecraft on SO3. Aerosp. Sci. Technol. 2020, 99, 105769.1–105769.15. [Google Scholar] [CrossRef]
  11. Chen, T.; Shan, J. Distributed spacecraft attitude tracking and synchronization under directed graphs. Aerosp. Sci. Technol. 2021, 109, 106432. [Google Scholar] [CrossRef]
  12. Wang, X.; Li, S.; Wang, G. Distributed optimization for disturbed second-order multiagent systems based on active antidisturbance control. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2104–2117. [Google Scholar] [CrossRef] [PubMed]
  13. Guo, G.; Kang, J. Distributed Optimization of Multiagent Systems Against Unmatched Disturbances: A Hierarchical Integral Control Framework. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 3556–3567. [Google Scholar] [CrossRef]
  14. Pilloni, A.; Franceschelli, M.; Pisano, A.; Usai, E. Sliding Mode-Based Robustification of Consensus and Distributed Optimization Control Protocols. IEEE Trans. Autom. Control 2021, 66, 1207–1214. [Google Scholar] [CrossRef]
  15. Liu, Y.; Yang, G.H. Distributed Robust Adaptive Optimization for Nonlinear Multiagent Systems. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1046–1053. [Google Scholar] [CrossRef]
  16. Feng, Z.; Hu, G.; Cassandras, C.G. Finite-time distributed convex optimization for continuous-time multiagent systems with disturbance rejection. IEEE Trans. Control Netw. Syst. 2019, 7, 686–698. [Google Scholar] [CrossRef]
  17. Qin, Z.; Liu, T.; Jiang, Z.P. Adaptive backstepping for distributed optimization. Automatica 2022, 141, 110304. [Google Scholar] [CrossRef]
  18. Chen, X.; Zhao, L.; Yu, J. Adaptive neural finite-time bipartite consensus tracking of nonstrict feedback nonlinear coopetition multi-agent systems with input saturation. Neurocomputing 2020, 397, 168–178. [Google Scholar] [CrossRef]
  19. Shen, Q.; Shi, P. Distributed command filtered backstepping consensus tracking control of nonlinear multiple-agent systems in strict-feedback form. Automatica 2015, 53, 120–124. [Google Scholar] [CrossRef]
  20. Lin, Z.; Liu, Z.; Zhang, Y.; Chen, C. Command filtered neural control of multi-agent systems with input quantization and unknown control direction. Neurocomputing 2021, 430, 47–57. [Google Scholar] [CrossRef]
  21. Liu, Y.; Zhang, H.; Li, Q.; Liang, H. Practical fixed-time bipartite consensus control for nonlinear multi-agent systems: A barrier Lyapunov function-based approach. Inf. Sci. 2022, 607, 519–536. [Google Scholar] [CrossRef]
  22. Li, P.; Wu, X.; Chen, X.; Qiu, J. Distributed adaptive finite-time tracking for multi-agent systems and its application. Neurocomputing 2022, 481, 46–54. [Google Scholar] [CrossRef]
  23. Zhao, L.; Yu, J.; Lin, C. Command filter based adaptive fuzzy bipartite output consensus tracking of nonlinear coopetition multi-agent systems with input saturation. ISA Trans. 2018, 80, 187–194. [Google Scholar] [CrossRef]
  24. Mousavi, A.; Markazi, A.H.; Khanmirza, E. Adaptive fuzzy sliding-mode consensus control of nonlinear under-actuated agents in a near-optimal reinforcement learning framework. J. Frankl. Inst. 2022, 359, 4804–4841. [Google Scholar] [CrossRef]
  25. Yoo, S.J. Distributed Consensus Tracking for Multiple Uncertain Nonlinear Strict-Feedback Systems Under a Directed Graph. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 666–672. [Google Scholar] [CrossRef]
  26. Distributed adaptive coordination control for uncertain nonlinear multi-agent systems with dead-zone input. J. Frankl. Inst. 2016, 353, 2270–2289. [CrossRef]
  27. Wu, Z.; Zhang, T.; Xia, X.; Hua, Y. Finite-time adaptive neural command filtered control for non-strict feedback uncertain multi-agent systems including prescribed performance and input nonlinearities. Appl. Math. Comput. 2022, 421, 126953. [Google Scholar] [CrossRef]
  28. Qu, F.; Tong, S. Observer-based fuzzy adaptive quantized control for uncertain nonlinear multiagent systems. Int. J. Adapt. Control Signal Process. 2019, 33, 567–585. [Google Scholar] [CrossRef]
  29. Li, Y.m.; Li, K.; Tong, S. An Observer-Based Fuzzy Adaptive Consensus Control Method for Nonlinear Multi-Agent Systems. IEEE Trans. Fuzzy Syst. 2022, 30, 4667–4678. [Google Scholar] [CrossRef]
  30. Wang, W.; Tong, S. Observer-Based Adaptive Fuzzy Containment Control for Multiple Uncertain Nonlinear Systems. IEEE Trans. Fuzzy Syst. 2019, 27, 2079–2089. [Google Scholar] [CrossRef]
  31. Li, Y.; Qu, F.; Tong, S. Observer-Based Fuzzy Adaptive Finite-Time Containment Control of Nonlinear Multiagent Systems with Input Delay. IEEE Trans. Cybern. 2021, 51, 126–137. [Google Scholar] [CrossRef] [PubMed]
  32. Wu, Y.; Ma, H.; Chen, M.; Li, H. Observer-Based Fixed-Time Adaptive Fuzzy Bipartite Containment Control for Multiagent Systems with Unknown Hysteresis. IEEE Trans. Fuzzy Syst. 2022, 30, 1302–1312. [Google Scholar] [CrossRef]
  33. Chen, C.; Ren, C.E.; Tao, D. Fuzzy Observed-Based Adaptive Consensus Tracking Control for Second-Order Multiagent Systems with Heterogeneous Nonlinear Dynamics. IEEE Trans. Fuzzy Syst. 2016, 24, 906–915. [Google Scholar] [CrossRef]
  34. Zhao, L.; Yu, J.; Lin, C. Distributed adaptive output consensus tracking of nonlinear multi-agent systems via state observer and command filtered backstepping. Inf. Sci. 2019, 478, 355–374. [Google Scholar] [CrossRef]
  35. Gao, Z.; Zhang, H.; Wang, Y.; Mu, Y. Time-varying output formation-containment control for homogeneous/heterogeneous descriptor fractional-order multi-agent systems. Inf. Sci. 2021, 567, 146–166. [Google Scholar] [CrossRef]
  36. Lin, W.; Peng, S.; Fu, Z.; Chen, T.; Gu, Z. Consensus of fractional-order multi-agent systems via event-triggered pinning impulsive control. Neurocomputing 2022, 494, 409–417. [Google Scholar] [CrossRef]
  37. Zhang, X.; Chen, S.; Zhang, J.X. Adaptive sliding mode consensus control based on neural network for singular fractional order multi-agent systems. Appl. Math. Comput. 2022, 434, 127442. [Google Scholar] [CrossRef]
  38. Gong, P.; Lan, W.; Han, Q.L. Robust adaptive fault-tolerant consensus control for uncertain nonlinear fractional-order multi-agent systems with directed topologies. Automatica 2020, 117, 109011. [Google Scholar] [CrossRef]
  39. Cheng, Y.; Hu, T.; Li, Y.; Zhong, S. Consensus of fractional-order multi-agent systems with uncertain topological structure: A Takagi-Sugeno fuzzy event-triggered control strategy. Fuzzy Sets Syst. 2021, 416, 64–85. [Google Scholar] [CrossRef]
  40. Shahvali, M.; Azarbahram, A.; Naghibi-Sistani, M.B.; Askari, J. Bipartite consensus control for fractional-order nonlinear multi-agent systems: An output constraint approach. Neurocomputing 2020, 397, 212–223. [Google Scholar] [CrossRef]
  41. Zhu, W.; Li, W.; Zhou, P.; Yang, C. Consensus of fractional-order multi-agent systems with linear models via observer-type protocol. Neurocomputing 2017, 230, 60–65. [Google Scholar] [CrossRef]
  42. Chen, S.; An, Q.; Zhou, H.; Su, H. Observer-based consensus for fractional-order multi-agent systems with positive constraint. Neurocomputing 2022, 501, 489–498. [Google Scholar] [CrossRef]
  43. Li, Y.; Chen, Y.; Podlubny, I. Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 2009, 45, 1965–1969. [Google Scholar] [CrossRef]
  44. Podlubny, I. An introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications. Math. Sci. Eng 1999, 198, 340. [Google Scholar]
  45. Duarte-Mermoud, M.A.; Aguila-Camacho, N.; Gallegos, J.A.; Castro-Linares, R. Using general quadratic Lyapunov functions to prove Lyapunov uniform stability for fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 650–659. [Google Scholar] [CrossRef]
  46. Wang, Y.; Xie, L.; De Souza, C.E. Robust control of a class of uncertain nonlinear systems. Syst. Control Lett. 1992, 19, 139–149. [Google Scholar] [CrossRef]
  47. Li, Z.; Duan, Z. Cooperative Control of Multi-Agent Systems: A Consensus Region Approach; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  48. Zou, Y.; Zheng, Z. A robust adaptive RBFNN augmenting backstepping control approach for a model-scaled helicopter. IEEE Trans. Control Syst. Technol. 2015, 23, 2344–2352. [Google Scholar]
  49. Huang, J.T. Global tracking control of strict-feedback systems using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1714–1725. [Google Scholar] [CrossRef]
  50. Wang, D.; Huang, J. Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. IEEE Trans. Neural Netw. 2005, 16, 195–202. [Google Scholar] [CrossRef]
  51. Yu, J.; Shi, P.; Dong, W.; Chen, B.; Lin, C. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors. IEEE Trans. Neural Netw. Learn. Syst. 2014, 26, 640–645. [Google Scholar]
  52. Bernstein, A.; Dall’Anese, E.; Simonetto, A. Online primal-dual methods with measurement feedback for time-varying convex optimization. IEEE Trans. Signal Process. 2019, 67, 1978–1991. [Google Scholar] [CrossRef]
  53. Huang, B.; Zou, Y.; Meng, Z.; Ren, W. Distributed time-varying convex optimization for a class of nonlinear multiagent systems. IEEE Trans. Autom. Control 2019, 65, 801–808. [Google Scholar] [CrossRef]
  54. Yi, X.; Li, X.; Xie, L.; Johansson, K.H. Distributed online convex optimization with time-varying coupled inequality constraints. IEEE Trans. Signal Process. 2020, 68, 731–746. [Google Scholar] [CrossRef] [Green Version]
  55. Hu, Z.; Yang, J. Distributed finite-time optimization for second order continuous-time multiple agents systems with time-varying cost function. Neurocomputing 2018, 287, 173–184. [Google Scholar] [CrossRef]
  56. Deepika, D.; Kaur, S.; Narayan, S. Uncertainty and disturbance estimator based robust synchronization for a class of uncertain fractional chaotic system via fractional order sliding mode control. Chaos Solitons Fractals 2018, 115, 196–203. [Google Scholar] [CrossRef]
Figure 1. Communication graph.
Figure 2. Fractional order Duffing-Holmes chaotic systems trajectories.
Figure 3. Trajectories of system states without control input.
Figure 3. Trajectories of system states without control input.
Fractalfract 06 00642 g003
Figure 4. The trajectories of x d and x i , 1 i = 1 , , 5 .
Figure 4. The trajectories of x d and x i , 1 i = 1 , , 5 .
Fractalfract 06 00642 g004
Figure 5. The trajectories of error s i , 1 i = 1 , , 5 .
Figure 5. The trajectories of error s i , 1 i = 1 , , 5 .
Fractalfract 06 00642 g005
Figure 6. The trajectories of x i , 1 i = 1 , , 5 estimation values.
Figure 6. The trajectories of x i , 1 i = 1 , , 5 estimation values.
Fractalfract 06 00642 g006
Figure 7. The trajectories of x i , 2 i = 1 , , 5 estimation values.
Figure 7. The trajectories of x i , 2 i = 1 , , 5 estimation values.
Fractalfract 06 00642 g007
Figure 8. The trajectories of x i , 2 i = 1 , , 5 .
Figure 8. The trajectories of x i , 2 i = 1 , , 5 .
Fractalfract 06 00642 g008
Figure 9. The trajectories of x 1 , 1 and its estimation.
Figure 9. The trajectories of x 1 , 1 and its estimation.
Fractalfract 06 00642 g009
Figure 10. The trajectories of x 1 , 2 and its estimation.
Figure 10. The trajectories of x 1 , 2 and its estimation.
Fractalfract 06 00642 g010
Figure 11. The trajectories of control input u i .
Figure 11. The trajectories of control input u i .
Fractalfract 06 00642 g011
Figure 12. The value of penalty function.
Figure 12. The value of penalty function.
Fractalfract 06 00642 g012
Figure 13. The trajectories of x d and x i , 1 i = 1 , , 5 based on the algorithm proposed in this paper.
Figure 13. The trajectories of x d and x i , 1 i = 1 , , 5 based on the algorithm proposed in this paper.
Fractalfract 06 00642 g013
Figure 14. The trajectories of tracking error s i , 1 i = 1 , , 5 based on the algorithm proposed in this paper.
Figure 14. The trajectories of tracking error s i , 1 i = 1 , , 5 based on the algorithm proposed in this paper.
Fractalfract 06 00642 g014
Figure 15. The trajectories of control input u i based on the algorithm proposed in this paper.
Figure 15. The trajectories of control input u i based on the algorithm proposed in this paper.
Fractalfract 06 00642 g015
Figure 16. The value of penalty function based on the algorithm proposed in this paper.
Figure 16. The value of penalty function based on the algorithm proposed in this paper.
Fractalfract 06 00642 g016
Figure 17. The trajectories of x d and x i , 1 i = 1 , , 5 based on the finite-time distributed algorithm.
Figure 17. The trajectories of x d and x i , 1 i = 1 , , 5 based on the finite-time distributed algorithm.
Fractalfract 06 00642 g017
Figure 18. The trajectories of error s i , 1 i = 1 , , 5 based on the finite-time distributed algorithm.
Figure 18. The trajectories of error s i , 1 i = 1 , , 5 based on the finite-time distributed algorithm.
Fractalfract 06 00642 g018
Figure 19. The trajectories of control input u i based on the finite-time distributed algorithm.
Figure 19. The trajectories of control input u i based on the finite-time distributed algorithm.
Fractalfract 06 00642 g019
Figure 20. The value of penalty function based on the finite-time distributed algorithm.
Figure 20. The value of penalty function based on the finite-time distributed algorithm.
Fractalfract 06 00642 g020
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yang, X.; Zhao, W.; Yuan, J.; Chen, T.; Zhang, C.; Wang, L. Distributed Optimization for Fractional-Order Multi-Agent Systems Based on Adaptive Backstepping Dynamic Surface Control Technology. Fractal Fract. 2022, 6, 642. https://doi.org/10.3390/fractalfract6110642

