Article

Fixed-Time Distributed Time-Varying Optimization for Nonlinear Fractional-Order Multiagent Systems with Unbalanced Digraphs

1 School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
2 School of Mathematics and Statistics, Guangdong University of Foreign Studies, Guangzhou 510006, China
3 College of Science, Liaoning University of Technology, Jinzhou 121001, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(11), 813; https://doi.org/10.3390/fractalfract7110813
Submission received: 1 October 2023 / Revised: 21 October 2023 / Accepted: 8 November 2023 / Published: 9 November 2023

Abstract: This paper investigates the problem of fixed-time distributed time-varying optimization of a nonlinear fractional-order multiagent system (FOMAS) over a weight-unbalanced directed graph (digraph), where heterogeneous unknown nonlinear functions and disturbances are involved. The aim is to cooperatively minimize a convex time-varying global cost function, given by the sum of time-varying local cost functions, within a fixed time, where each time-varying local cost function does not have to be convex. Using a three-step design procedure, a fully distributed fixed-time optimization algorithm is constructed to achieve the objective. The first step is to design a fully distributed fixed-time estimator that estimates some centralized optimization terms within a fixed time $T_0$. The second step is to develop a novel discontinuous fixed-time sliding-mode algorithm with a nominal controller to drive all the agents to the sliding-mode surface within a fixed time $T_1$, after which the dynamics of each agent is described by a single-integrator MAS with the nominal controller. In the third step, a novel estimator-based fully distributed fixed-time nominal controller for the single-integrator MAS is presented to guarantee that all agents reach consensus within a fixed time $T_2$ and afterwards minimize the convex time-varying global cost function within a fixed time $T_3$. The upper bound of each fixed time $T_m$ ($m=0,1,2,3$) is given explicitly and is independent of the initial states. Finally, a numerical example is provided to validate the results.

1. Introduction

Recently, distributed optimization in groups of multiagent systems (MASs) has received considerable attention owing to its broad applications, including but not limited to sensor networks [1], economic dispatch of power grids [2,3] and wireless resource management [4]. The primary goal of distributed optimization in MASs is to cooperatively minimize the sum of local cost functions (termed the global cost function), where each agent has a local cost function known only to itself. Existing results mainly focus on implementing distributed optimization of MASs in a discrete-time manner [1,5,6], which differs from the case where each agent has continuous-time dynamics. Since many physical systems have continuous-time dynamics [7], the distributed optimization problem of MASs with various continuous-time dynamics has been extensively studied, including first-order MASs [8,9], second-order MASs [10,11] and higher-order MASs [12,13]. Normally, the implementation of distributed optimization relies upon several factors, namely algorithm performance, the information-exchange network topology, the agents' inherent dynamics, and the characteristics of the local cost functions. Existing works on these four factors are discussed below, respectively, to point out their corresponding limitations and thereby bring out the research motivation and contribution of this paper.

1.1. Related Work and Its Limitations

(1) Fixed-Time Optimal Convergence: One of the paramount criteria for evaluating the performance of a distributed optimization algorithm is its optimal convergence rate. Nonetheless, in the aforementioned discrete-time and continuous-time optimization algorithms, all agents reach an agreement and converge to the global optimal solution only as time approaches infinity. To satisfy the finite-time convergence requirement of some practical tasks, finite-time distributed optimization algorithms are designed in [3,14,15]. For these finite-time algorithms, however, the convergence time depends directly on the initial states, so a predetermined convergence time cannot be guaranteed if the initial state data are not supplied in advance. It is therefore necessary to design fixed-time distributed optimization algorithms whose convergence time is independent of any initial states. So far, only a little work [16,17,18,19] has addressed the fixed-time distributed optimization problem of MASs by utilizing the idea of fixed-time stability [20]. Note that all the aforementioned finite- and fixed-time optimization algorithms have some constraints: each local cost function and its gradient are respectively required to be (strongly) convex and/or Lipschitz, and the agents' topology is assumed to be undirected.
(2) Weight-Unbalanced Directed Topology: The aforesaid optimal algorithms are designed under the assumption that the information-exchange network topology among agents is either undirected or directed but weight-balanced [21], and they generally fail when relocated to unbalanced digraphs. Unbalanced digraphs present a unique challenge that has received widespread recognition in the optimization community [5]. Existing attempts to handle weight-unbalanced digraphs have used certain additional information, such as the in-neighbors and out-degree information [22,23], out-neighbors and in-degree information [24], and the left eigenvector of the Laplacian matrix corresponding to the zero eigenvalue [10,25,26], which might not be feasible in practice [5,27]. Without employing the aforementioned information, the distributed optimization problem of MASs with unbalanced digraphs is studied by designing a scaling-function-based discrete-time algorithm in [5] and a continuous-time coordination algorithm in [27]. But the algorithms designed in [5,27] depend on certain global information, are based on the assumption that each local cost function is strongly convex, and can only achieve asymptotic optimal convergence. These limitations hinder the algorithms' implementation in real applications.
(3) Heterogeneous Nonlinear Fractional-Order Dynamics: Additionally, existing works on distributed optimization usually assume all agents have homogeneous linear dynamics and/or share a unique dynamical mode, e.g., single-integrator dynamics [28,29], double-integrator dynamics [10,11] and general linear dynamics [12]. But oftentimes, many physical systems are inherently nonlinear and susceptible to heterogeneous disturbances. The tools used in the homogeneous and linear frameworks cannot be applied to heterogeneous and nonlinear ones, and the optimal algorithms designed for homogeneous linear MASs are generally not suitable for heterogeneous nonlinear MASs. The distributed optimization problems for MASs with heterogeneous disturbances and for heterogeneous nonlinear MASs are respectively studied in [8,14,18] and in [13,30], where the considered topologies are either undirected [8,13,14] or directed but weight-balanced [30]. Note that all the MASs and controllers considered above have integer-order dynamics. A fractional-order (FO) system can correctly describe the dynamics of anomalous systems with memory or hereditary features [31,32]. In addition, FO controllers are more reliable and offer more design flexibility than integer-order controllers [33]. FO systems are thus attracting increasing attention and offer a wide range of practical applications [34]. Recently, the authors in [35] studied the distributed optimization problem of nonlinear uncertain fractional-order MASs (FOMASs) by designing an adaptive surface control protocol with an asymptotic optimal convergence rate. To the best of our knowledge, the fixed-time distributed optimization of FOMASs with heterogeneous nonlinear functions and disturbances over a weight-unbalanced digraph has not been reported.
(4) Time-Varying Local Cost Functions: Furthermore, while time-varying local cost functions are frequently used in applications like tracking a time-varying optimal solution, the studies listed above primarily focus on the distributed optimization problem with time-invariant local cost functions. The authors of [11,13,15,19,28,36,37] study the distributed optimization problem with time-varying local cost functions, also known as the distributed time-varying optimization problem, which is more challenging to solve because its optimal point (trajectory) may be time-varying. A connectivity-preserving optimization algorithm is developed in [11] to make all agents achieve consensus within a finite time, with the consensus values converging to a time-varying optimal trajectory asymptotically. Via distributed average tracking, optimization algorithms are designed to cooperatively minimize the sum of time-varying local cost functions within a finite time in [15] and within a fixed time in [19]. At present, there are several limitations in the aforementioned distributed time-varying optimization works. For example, the considered network topologies in [11,13,15,19,28,37] are undirected, and the designed algorithms in [11,13,28,37] can only achieve asymptotic convergence. Moreover, the authors in [11,15,19,28,37] only consider integrator-type agent dynamics, and each time-varying local cost function and its Hessian are forced to be convex and invertible, respectively, in [11,13,37]. This paper intends to overcome the aforementioned limitations.

1.2. Research Motivation

Motivated by the discussion and observations on the aforementioned four factors, this paper aims to solve the fixed-time distributed time-varying optimization problem of an FOMAS with time-varying local cost functions, heterogeneous unknown nonlinear functions and disturbances over a weight-unbalanced directed network. In this paper, each time-varying local cost function and its Hessian are not necessarily forced to be convex and invertible, respectively. As previously mentioned, the aforesaid four factors (i.e., fixed-time optimal convergence, weight-unbalanced directed topology, heterogeneous nonlinear fractional-order dynamics, and time-varying local cost functions) have their corresponding motivations, and existing related works on those four factors have their corresponding limitations or constraints. Fixed-time optimal convergence offers a fast convergence rate and meets the demand for a predefined convergence time independent of any initial states. The benefit of a weight-unbalanced directed topology is that it requires fewer communication channels and less equipment. Heterogeneous nonlinear dynamics and time-varying local cost functions are common in many physical systems and applications. From a practical perspective, the above four factors are comprehensively considered in a unified framework and the existing limitations of some related works are removed in this paper. The studied problem is very challenging, and the derived results are of great significance in theory and practice.

1.3. Research Contribution

This paper focuses on studying the problem of distributed optimization of an FOMAS through comprehensive consideration of the following four factors: fixed-time optimal convergence, weight-unbalanced directed topology, heterogeneous nonlinear agent inherent dynamics, and time-varying local cost functions. By integrating the idea of fixed-time stability [20], the distributed estimator (or tracking control) method [14], the sliding-mode control technique and the distributed leaderless consensus control method, this paper proposes an estimator-based fully distributed fixed-time optimization algorithm to solve such a problem. Compared with the existing related works, the research contribution of this work is threefold.
(1) A fixed-time optimal convergence protocol independent of any initial states is designed; this differs from the asymptotic optimal convergence protocols designed in [5,30,37] and the finite-time optimal convergence protocols in [3,14,15], which are dependent on initial states. Fixed-time optimal convergence protocols are also designed in [16,17,18,19], but the considered topologies among agents there are undirected.
(2) A weight-unbalanced directed topology without employing certain additional information is considered, which includes the undirected topologies considered in [8,16,18], the weight-balanced directed topologies in [19,21,30], and the weight-unbalanced directed topologies employing certain additional information in [23,24,25,26] as its special cases. The weight-unbalanced directed topology without such additional information is also considered in [5,27], but the protocols designed there achieve only asymptotic optimal convergence.
(3) An FOMAS with time-varying local cost functions, heterogeneous unknown nonlinear functions and disturbances is investigated; this is in contrast to the MASs with linear and homogeneous integer-order dynamics studied in [9,12,28,29]. Note that each local cost function is required to be convex in [8,11,12,13,29] or strongly convex in [5,17,27,37], and the Hessian of each local cost function is forced to be invertible and equal in [9,10,11,13,37]. In this paper, however, only the global cost function is required to be convex, not necessarily each local cost function, and only the Hessian of the global cost function is required to be invertible, not necessarily the Hessian of each local cost function.
In sum, the work of this paper is an extension of and/or an improvement on the above-mentioned works.

2. Preliminaries

2.1. Notations

Let $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ be, respectively, the sets of all real numbers, nonnegative real numbers, $n$-dimensional real column vectors and $n\times m$ real matrices. The symbols $\otimes$, $\mathcal{I}_N$ and $I_N$ respectively represent the Kronecker product, the index set $\{1,2,\ldots,N\}$ and the $N\times N$ identity matrix. For a real number $p>0$ and a vector $x=[x_1,x_2,\ldots,x_n]^T\in\mathbb{R}^n$, $x^p=[x_1^p,x_2^p,\ldots,x_n^p]^T$, $\mathrm{sig}^p(x)=[\mathrm{sig}^p(x_1),\mathrm{sig}^p(x_2),\ldots,\mathrm{sig}^p(x_n)]^T$ with $\mathrm{sig}^p(x_i)=|x_i|^p\,\mathrm{sgn}(x_i)$, and $\mathrm{sgn}(x)=[\mathrm{sgn}(x_1),\mathrm{sgn}(x_2),\ldots,\mathrm{sgn}(x_n)]^T$, where $\mathrm{sgn}(x_i)$ represents the signum function of $x_i$. $\frac{d}{dt}$ and $\frac{\partial}{\partial x_i}$ represent the differential operator and the partial differential operator with respect to $t$ and $x_i$, respectively. For a twice-differentiable function $f(x,t):\mathbb{R}^n\times\mathbb{R}_+\to\mathbb{R}$, its gradient is denoted by $\nabla f(x,t)=\left[\frac{\partial f(x,t)}{\partial x_i}\right]_{i\in\mathcal{I}_n}$ (as a column vector), and its Hessian is denoted by $\nabla^2 f(x,t)=\left[\frac{\partial^2 f(x,t)}{\partial x_i\partial x_j}\right]_{i,j\in\mathcal{I}_n}$ (as an $n\times n$ matrix). For $b_i,c\in\mathbb{R}$, $i=1,2,\ldots,m$, a diagonal matrix with diagonal entries $b_1,b_2,\ldots,b_m$ is denoted as $\mathrm{diag}(b_1,b_2,\ldots,b_m)\in\mathbb{R}^{m\times m}$, and $c_n=[c,c,\ldots,c]^T\in\mathbb{R}^n$.

2.2. Fractional Integral and Derivative

In this paper, let $p\in(0,1]$, $p_i\in(0,1]$, $\Gamma(z)=\int_0^{\infty} t^{z-1}e^{-t}\,dt$ and $h(t)\in C([t_0,\infty),\mathbb{R})$. The following definitions and property are available in [38].
Definition 1. 
The p-order (Riemann-Liouville fractional) integral of h ( t ) is defined as
$I^p h(t) = \frac{1}{\Gamma(p)}\int_{t_0}^{t}\frac{h(s)}{(t-s)^{1-p}}\,ds.$
Definition 2. 
Let D p h ( t ) be the p-order (Caputo fractional) derivative of h ( t ) , defined as
$D^p h(t) = I^{1-p}\dot h(t) = \frac{1}{\Gamma(1-p)}\int_{t_0}^{t}\frac{\dot h(s)}{(t-s)^{p}}\,ds.$
Property 1. 
It holds that $D^p D^{1-p} h(t) = D^{1-p} D^p h(t) = \dot h(t)$ for $h(t)\in C([t_0,\infty),\mathbb{R})$. Specifically, $D^p h = 0$ if $h$ is a constant.
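As an illustration, the Caputo derivative in Definition 2 can be checked numerically. The sketch below is a simple midpoint quadrature (step count and test function are assumptions for illustration) approximating $D^p h(t)$ for $h(t)=t$, whose Caputo derivative is known in closed form as $t^{1-p}/\Gamma(2-p)$.

```python
import math

def caputo_derivative(h_dot, p, t, t0=0.0, n=20000):
    """Approximate D^p h(t) = (1/Gamma(1-p)) * int_{t0}^{t} h'(s)/(t-s)^p ds
    by midpoint quadrature (midpoints avoid the weak singularity at s = t)."""
    ds = (t - t0) / n
    total = 0.0
    for k in range(n):
        s = t0 + (k + 0.5) * ds
        total += h_dot(s) / (t - s) ** p
    return total * ds / math.gamma(1.0 - p)

# For h(t) = t (so h'(s) = 1), the Caputo derivative is t^(1-p)/Gamma(2-p).
p, t = 0.5, 2.0
num = caputo_derivative(lambda s: 1.0, p, t)
exact = t ** (1 - p) / math.gamma(2 - p)
```

This naive quadrature is only for intuition; dedicated schemes (e.g., the L1 discretization) handle the weakly singular kernel with better convergence rates.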

2.3. Directed Graph Theories

The following directed graph theories can be found in [5,27]. A weighted directed graph (digraph) among $N$ nodes (agents) is modeled as $\mathcal{G}$. Let $\mathcal{V} = \{V_1, V_2,\ldots,V_N\}$ be the node set, and $A = [a_{ij}]_{N\times N}$ with weights $a_{ij}\ge 0$ be the adjacency matrix, where $a_{ij}>0$ if and only if agent $j$'s information is available to agent $i$, and $a_{ij}=0$ otherwise. The Laplacian matrix $L = [l_{ij}]_{N\times N}$ is defined as $L = \mathrm{diag}(|\mathcal{N}_1^{in}|, |\mathcal{N}_2^{in}|,\ldots,|\mathcal{N}_N^{in}|) - A$, where $\mathcal{N}_i^{in} = \{j\in\mathcal{V} \mid a_{ij}>0\}$, $\mathcal{N}_i^{out} = \{j\in\mathcal{V} \mid a_{ji}>0\}$, and the weighted cardinalities $|\mathcal{N}_i^{in}| = \sum_{j\in\mathcal{N}_i^{in}} a_{ij}$ and $|\mathcal{N}_i^{out}| = \sum_{j\in\mathcal{N}_i^{out}} a_{ji}$ are called the in-degree and out-degree of agent $i$, respectively. A digraph is called weight-balanced if and only if $|\mathcal{N}_i^{in}| = |\mathcal{N}_i^{out}|$, $\forall i\in\mathcal{V}$. A digraph is termed strongly connected if there exists a directed path between any two nodes. A digraph has a directed spanning tree if and only if there exists a node (termed the root) with a directed path to every other node.
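The Laplacian and the weight-balance test above can be stated compactly in code. A minimal sketch (the 3-node digraph and its weights are hypothetical):

```python
import numpy as np

def laplacian(A):
    """L = diag(|N_1^in|, ..., |N_N^in|) - A, with in-degree |N_i^in| = sum_j a_ij."""
    return np.diag(A.sum(axis=1)) - A

def is_weight_balanced(A):
    """Weight-balanced iff every node's in-degree equals its out-degree."""
    return np.allclose(A.sum(axis=1), A.sum(axis=0))

# Hypothetical strongly connected but weight-unbalanced 3-node directed cycle.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
L = laplacian(A)          # every row of L sums to zero by construction
```

By construction $L\,1_N = 0_N$, while the example's in-degrees $(2,1,1)$ and out-degrees $(1,2,1)$ differ, so the digraph is weight-unbalanced.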
Assumption 1. 
The digraph G is time-invariant and strongly connected.
It is worth mentioning that the digraph $\mathcal{G}$ is allowed to be weight-unbalanced under Assumption 1. Denote $\xi = [\xi_1,\xi_2,\ldots,\xi_N]^T$, $\Xi = \mathrm{diag}(\xi_1,\xi_2,\ldots,\xi_N)$, and the matrix $Q = (\Xi L + L^T\Xi)/2$, where $\xi_i>0$ and $\sum_{i=1}^N \xi_i = 1$. If Assumption 1 holds, we have the following important lemma.
Lemma 1 
([39]). Let $X = [X_1,X_2,\ldots,X_N]^T \in \mathbb{R}^N$, $Y = [Y_1,Y_2,\ldots,Y_N]^T \in \mathbb{R}^N$, and $\Omega(a,b,\alpha,\beta) = \{Y : Y = a\,\mathrm{sig}^\alpha(X) + b\,\mathrm{sig}^\beta(X),\ X \neq c_N\}$, where $a,b\ge 0$ (except $a=b=0$), $0\le\alpha<\beta$, and $a,b,c,\alpha,\beta$ are constants. Based on Assumption 1, there exists a constant $k_0>0$ such that
$k_0 = \min_{Y\in\Omega(a,b,\alpha,\beta)} \frac{Y^T Q Y}{Y^T Y}. \qquad (1)$
Lemma 2 
([40]). If $H$ is a nonsingular M-matrix, there exist an $N\times N$ positive diagonal matrix $\Theta = \mathrm{diag}(\theta_1,\theta_2,\ldots,\theta_N)$ and a positive constant $\eta = \underline{\lambda}(\tilde H)$ such that $\tilde H = \Theta H + H^T\Theta \ge \eta I_N$, where $[\theta_1,\theta_2,\ldots,\theta_N]^T = (H^T)^{-1} 1_N$ and $\underline{\lambda}(\tilde H)$ represents the minimal eigenvalue of $\tilde H$.
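Lemma 2 is constructive: $\Theta$ is obtained by solving $H^T\theta = 1_N$. A small numerical check on a hypothetical nonsingular M-matrix $H$ (a directed-cycle Laplacian plus a single positive diagonal entry, chosen for illustration):

```python
import numpy as np

# Hypothetical nonsingular M-matrix H = L + diag(0.5, 0, 0) of a directed cycle.
H = np.array([[ 1.5, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0]])
theta = np.linalg.solve(H.T, np.ones(3))   # [theta_1,...,theta_N]^T = (H^T)^{-1} 1_N
H_tilde = np.diag(theta) @ H + H.T @ np.diag(theta)
eta = np.linalg.eigvalsh(H_tilde).min()    # minimal eigenvalue of symmetric H_tilde
```

For this $H$, the solve gives strictly positive $\theta$ and the symmetrized $\tilde H$ is positive definite, so $\eta > 0$ as the lemma asserts.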

2.4. Some Supporting Lemmas

Lemma 3 
([41]). Let $x_1, x_2,\ldots,x_n \ge 0$. Then
$\left(\sum_{i=1}^n x_i\right)^q \le \sum_{i=1}^n x_i^q, \quad 0<q\le 1, \qquad (2)$
$\left(\sum_{i=1}^n x_i\right)^q \le n^{q-1}\sum_{i=1}^n x_i^q, \quad 1<q<\infty. \qquad (3)$
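Both power-sum inequalities of Lemma 3 are easy to spot-check numerically; the data and exponents below are arbitrary hypothetical values:

```python
# Numerical spot-check of the two power-sum inequalities in Lemma 3.
xs = [0.3, 1.7, 2.5, 0.01]      # arbitrary nonnegative test data
n, s = len(xs), sum(xs)

q1 = 0.6                        # case 0 < q <= 1: (sum x_i)^q <= sum x_i^q
lhs1, rhs1 = s ** q1, sum(x ** q1 for x in xs)

q2 = 2.4                        # case q > 1: (sum x_i)^q <= n^(q-1) * sum x_i^q
lhs2, rhs2 = s ** q2, n ** (q2 - 1) * sum(x ** q2 for x in xs)
```

Both inequalities are used repeatedly in the fixed-time Lyapunov estimates of Sections 4 and 5, always in the direction $\sum_i x_i^q \ge (\sum_i x_i)^q$ or $\sum_i x_i^q \ge n^{1-q}(\sum_i x_i)^q$.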
Lemma 4 
([20]). For the differential equation
$\dot y = -a y^{1-\frac{1}{\varepsilon}} - b y^{1+\frac{1}{\varepsilon}}, \quad y(0) = y_0, \qquad (4)$
in which $y \in \mathbb{R}_+ \cup \{0\}$, $a, b > 0$ and $\varepsilon > 1$, the equilibrium $y = 0$ of (4) is globally fixed-time stable; i.e., $y(t) = 0$ for all $t \ge T$, where the settling time $T$ satisfies $T \le T_{\max} = \frac{\pi\varepsilon}{2\sqrt{ab}}$.
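The settling-time bound of Lemma 4 can be checked by direct simulation. The sketch below integrates (4) with forward Euler under assumed parameters $a=b=1$, $\varepsilon=2$, for which $T_{\max}=\pi$:

```python
import math

def settle_time(a, b, eps, y0, dt=1e-4, tol=1e-6, t_max=100.0):
    """Forward-Euler integration of y' = -a*y^(1-1/eps) - b*y^(1+1/eps);
    returns the first time y falls below tol (clamped at 0)."""
    y, t = y0, 0.0
    while y > tol and t < t_max:
        y = max(y - dt * (a * y ** (1 - 1 / eps) + b * y ** (1 + 1 / eps)), 0.0)
        t += dt
    return t

a, b, eps = 1.0, 1.0, 2.0
bound = math.pi * eps / (2 * math.sqrt(a * b))   # T_max = pi*eps/(2*sqrt(ab))
t_settle = settle_time(a, b, eps, y0=10.0)
```

For $y_0=10$ the simulated settling time lands close to the exact value $\frac{\varepsilon}{\sqrt{ab}}\arctan(\sqrt{b/a}\,y_0^{1/\varepsilon}) = 2\arctan(\sqrt{10}) \approx 2.53$, inside the bound $\pi \approx 3.14$; the bound holds uniformly over all $y_0$.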
Lemma 5 
([42]). For a continuously differentiable convex function $f(x): \mathbb{R}^n \to \mathbb{R}$, $x^* \in \mathbb{R}^n$ is a global minimum of $f(x)$ if and only if $\nabla f(x^*) = 0_n$.

3. Problem Statement

Consider an FOMAS composed of N agents under a digraph G . Inspired by [18,30], the FOMAS is assumed to satisfy the following continuous-time heterogeneous nonlinear dynamics:
$D^{p_i} x_i = h_i(x_i,t) + \tau_i + u_i, \quad i \in \mathcal{I}_N = \{1,2,\ldots,N\}, \qquad (5)$
where $0 < p_i \le 1$, $x_i \in \mathbb{R}$ and $\tau_i \in \mathbb{R}$ are, respectively, agent $i$'s state and disturbance, $h_i(x_i,t) \in \mathbb{R}$ is a nonlinear function, and $u_i \in \mathbb{R}$ is the control input. Let $J = \sum_{i=1}^N f_i(x_i,t)$ be a convex global cost function, where $f_i(x_i,t): \mathbb{R}\times\mathbb{R}_+\to\mathbb{R}$ is a time-varying local cost function known only to agent $i$.
This study aims to find a fully distributed optimization protocol to solve the fixed-time distributed time-varying optimization problem formulated below.
Fixed-time distributed time-varying optimization problem: Find a fully distributed controller $u_i$ in (5) for each agent such that, for any given initial states $x_i(0)$, there exists a fixed time $T$ independent of the initial states and $x_i$ converges to the optimization point $x_i^*$ within the fixed time $T$, i.e., $x_i = x_i^*$ for $t \ge T$ and each $i\in\mathcal{I}_N$, where $x_i^*\in\mathbb{R}$ is the minimizer of the following distributed time-varying optimization problem:
$\min_{x_i\in\mathbb{R},\, i\in\mathcal{I}_N} \ J = \sum_{i=1}^N f_i(x_i,t) \quad \text{subject to} \quad x_i-\delta_i = x_j-\delta_j, \ \forall i,j\in\mathcal{I}_N, \qquad (6)$
where the constant $\delta_i$ represents the final consensus configuration (or expected formation) such that $x_i-\delta_i = x_j-\delta_j$, $\forall i,j\in\mathcal{I}_N$. In fact, $x_i^*\in\mathbb{R}$ is the minimizer of the optimization problem (6) if and only if $x^*\in\mathbb{R}$ is the minimizer of the optimization problem $\min_{x_0\in\mathbb{R}} f_0(x_0,t)$, where $f_0(x_0,t) = \sum_{i=1}^N f_i(x_0+\delta_i,t)$ is the time-varying global cost function and $x_i^* = x^*+\delta_i$, $i\in\mathcal{I}_N$.
Remark 1. 
It is worth pointing out that existing optimization problems usually require all $x_i$ to reach exact consensus (see [8,9,10,11,12,13,14,15,16,17,18,19]), i.e., $x_i = x_j$, $\forall i,j\in\mathcal{I}_N$, and to converge to a common optimization point $x^*$. But achieving perfect consensus of all $x_i$ in real applications would be incredibly difficult; in other words, there is always an offset between $x_i$ and $x_j$ for $i\neq j$. Thus, in the optimization problem (6), it is only required that $x_i - x_j = \delta_i - \delta_j$ for $i,j\in\mathcal{I}_N$, and each $x_i$ converges to its own optimization point $x_i^*$, where $x_i^*-\delta_i = x_j^*-\delta_j = x^*$, $\forall i,j\in\mathcal{I}_N$. For $i,j\in\mathcal{I}_N$, if $\delta_i=\delta_j$, $x_i-x_j = \delta_i-\delta_j$ reduces to $x_i=x_j$. Thus, the studied time-varying optimization problem (6) is more generic and practical, and has a wider range of applications in resource management/allocation problems [4], economic dispatch problems [18] and optimal rendezvous formation problems [29].
The following assumptions are required to solve the fixed-time distributed time-varying optimization problem of the nonlinear FOMAS (5).
Assumption 2. 
There exists a positive scalar function $\bar h_i(x_i,t)$ such that $|h_i(x_i,t)+\tau_i| \le \bar h_i(x_i,t)$ for each $i\in\mathcal{I}_N$.
Assumption 3. 
The time-varying global cost function $f_0(x_0,t)$ is twice continuously differentiable with respect to $x_0$, and its Hessian satisfies $\nabla^2 f_0(x_0,t) = \frac{\partial^2 f_0(x_0,t)}{\partial x_0^2} > 0$ for all $x_0\in\mathbb{R}$ and $t\in\mathbb{R}_+$.
Remark 2. 
From Assumption 2, each agent's nonlinear function and disturbance are converted into a simplified relation that only involves an available (computable) scalar function. Assumption 2 is mild and has been used for consensus algorithm design of uncertain nonidentical MASs in [43,44,45]. Additionally, note from Assumption 2 that both $h_i(x_i,t)$ and $\tau_i$ are unavailable, while $\bar h_i(x_i,t)$ is available only to agent $i\in\mathcal{I}_N$ and can be used in the algorithm design.
Remark 3. 
Note that each local cost function is required to be convex in [8,11,12,13,29] or strongly convex in [5,17,27,37], and the Hessian of each local cost function is required to be invertible and equal in [9,10,11,13,37]. However, in Assumption 3, only the global cost function is required to be convex, not necessarily each local cost function, and only the Hessian of the global cost function is required to be invertible, not necessarily the Hessian of each local cost function. Thus, Assumption 3 is mild. The invertibility of $\nabla^2 f_0(x_0,t)$ in Assumption 3 implies that $f_0(x_0,t) = \sum_{i=1}^N f_i(x_0+\delta_i,t)$ is strictly convex; thus, there exists a unique solution to the time-varying optimization problem (6).

4. Fixed-Time Sliding Mode Control

A sliding-mode-based optimization controller is first proposed as
$u_i = w_i - \bar h_i(x_i,t)\,\mathrm{sgn}(r_i) - a_1\,\mathrm{sig}^{1-\varepsilon_1}(r_i) - b_1\,\mathrm{sig}^{1+\varepsilon_1}(r_i), \quad i \in \mathcal{I}_N, \qquad (7)$
where the functions r i R and w i R satisfy the following fractional-order dynamics:
$D^{1-p_i} r_i = x_i - z_i, \quad D^{p_i} z_i = w_i, \quad D^{1-p_i} w_i = u_i^*, \qquad (8)$
$0 < \varepsilon_1 < 1$ and $a_1, b_1 > 0$ are constants, and $u_i^*$ is the nominal controller to be designed later.
Remark 4. 
Specifically, if $p_i = 1$, $\forall i\in\mathcal{I}_N$, the fractional-order dynamics (8) reduce to
$r_i = x_i - z_i, \quad \dot z_i = w_i = u_i^*. \qquad (9)$
Theorem 1. 
Under Assumption 2, the nonlinear FOMAS (5) with the protocol (7) consisting of the sliding-mode manifold (8) reaches the sliding-mode surface $r_i = 0$ within a fixed time $T_1$, satisfying
$T_1 \le \frac{\pi N^{\varepsilon_1/4}}{2\varepsilon_1\sqrt{a_1 b_1}}. \qquad (10)$
Proof. 
Let $V_r(t) = \sum_{i=1}^N r_i^2$ be the Lyapunov function. By using Property 1, (5), (7) and (8), for each $i\in\mathcal{I}_N$ we have
$\dot r_i = D^{p_i}(D^{1-p_i} r_i) = D^{p_i}x_i - D^{p_i}z_i = h_i(x_i,t)+\tau_i+u_i-w_i = h_i(x_i,t)+\tau_i-\bar h_i(x_i,t)\,\mathrm{sgn}(r_i) - a_1\,\mathrm{sig}^{1-\varepsilon_1}(r_i) - b_1\,\mathrm{sig}^{1+\varepsilon_1}(r_i). \qquad (11)$
From (11), Assumption 2 and Lemma 3, $\dot V_r(t)$ satisfies
$\dot V_r(t) \le 2\sum_{i=1}^N r_i\left(-a_1\,\mathrm{sig}^{1-\varepsilon_1}(r_i) - b_1\,\mathrm{sig}^{1+\varepsilon_1}(r_i)\right) = -2\sum_{i=1}^N\left(a_1|r_i|^{2-\varepsilon_1}+b_1|r_i|^{2+\varepsilon_1}\right) \le -2a_1\left(\sum_{i=1}^N r_i^2\right)^{\frac{2-\varepsilon_1}{2}} - 2b_1 N^{-\frac{\varepsilon_1}{2}}\left(\sum_{i=1}^N r_i^2\right)^{\frac{2+\varepsilon_1}{2}} = -a_r V_r^{1-\frac{1}{\varepsilon_r}}(t) - b_r V_r^{1+\frac{1}{\varepsilon_r}}(t), \qquad (12)$
where $a_r = 2a_1$, $b_r = 2b_1 N^{-\varepsilon_1/2}$ and $\varepsilon_r = \frac{2}{\varepsilon_1} > 2$. By invoking Lemma 4, we have $V_r(t) = 0$ within a fixed time $T_1$, satisfying (10). Therefore, the sliding-mode surface $r_i = 0$ for each $i\in\mathcal{I}_N$ is reached within the fixed time $T_1$.    □
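For the integer-order case $p_i = 1$ of Remark 4, the reaching phase (11) can be simulated directly. In the sketch below the drift $h_i + \tau_i$ is a hypothetical bounded signal with $\bar h_i = 0.5$, and the gains, initial states and agent count are assumptions for illustration; the measured reaching time is compared against the bound (10):

```python
import math

def sig(x, p):
    """sig^p(x) = |x|^p * sgn(x)."""
    return abs(x) ** p * (1.0 if x > 0 else -1.0 if x < 0 else 0.0)

def reach_time(r0, a1=1.0, b1=1.0, e1=0.5, hbar=0.5, dt=1e-4, t_max=20.0):
    """Euler simulation of the reaching dynamics (11) with p_i = 1:
    r_i' = (h_i + tau_i) - hbar*sgn(r_i) - a1*sig^(1-e1)(r_i) - b1*sig^(1+e1)(r_i),
    stopped once max_i |r_i| < 1e-3 (sgn-induced chatter is ~dt, below this)."""
    r, t = list(r0), 0.0
    while max(abs(v) for v in r) > 1e-3 and t < t_max:
        for i in range(len(r)):
            drift = hbar * math.sin(t + i)          # unknown drift, |drift| <= hbar
            r[i] += dt * (drift - hbar * math.copysign(1.0, r[i])
                          - a1 * sig(r[i], 1 - e1) - b1 * sig(r[i], 1 + e1))
        t += dt
    return t

N, e1, a1, b1 = 4, 0.5, 1.0, 1.0
T1_bound = math.pi * N ** (e1 / 4) / (2 * e1 * math.sqrt(a1 * b1))   # bound (10)
t_reach = reach_time([2.0, -1.0, 0.5, -3.0])
```

The $\bar h_i\,\mathrm{sgn}(r_i)$ term dominates the unknown drift, so each $|r_i|$ decays at least as fast as the scalar comparison system of Lemma 4, and the sliding surface is reached well inside the bound.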
As proved in Theorem 1, $r_i = 0$ for $t \ge T_1$; thus $\dot r_i = D^{p_i}x_i - D^{p_i}z_i = 0$, i.e., $D^{p_i}x_i = D^{p_i}z_i = w_i$ according to (8). Hence, for $t \ge T_1$, the dynamics of each agent can be described by the following single-integrator MAS:
$\dot x_i = D^{1-p_i}(D^{p_i}x_i) = D^{1-p_i}w_i = u_i^*, \quad i\in\mathcal{I}_N, \qquad (13)$
where Property 1 and (8) are used in the first and last equalities, respectively.
Remark 5. 
According to Theorem 1, for $t \ge T_1$ the nonlinear FOMAS (5) with the proposed protocol (7) is equivalent to the single-integrator MAS (13) with the nominal controller. In view of (8), for each $i\in\mathcal{I}_N$, $r_i$ is independent of the unknown information $h_i(x_i,t)$ and $\tau_i$, and depends only on the agent's own state $x_i$ and the nominal controller $u_i^*$; the same holds for the protocol (7). In the following, it only remains to design a fully distributed nominal controller $u_i^*$ such that the fixed-time distributed time-varying optimization problem of the single-integrator MAS (13) is solved.

5. Main Results

In this section, over a strongly connected (possibly weight-unbalanced) digraph, we first design a centralized fixed-time optimization protocol by embedding some centralized optimization terms into the fixed-time optimization control scheme in Section 5.1. In Section 5.2, the centralized optimization protocol is transformed into a distributed optimization protocol by designing a distributed fixed-time estimator that estimates the centralized optimization terms.

5.1. Centralized Fixed-Time Optimization Protocol Design

Before designing the centralized fixed-time optimization protocol, three centralized optimization terms about the time-varying global cost function are denoted as
$F_1 = \sum_{j=1}^N \nabla f_j(x_j,t), \quad F_2 = \sum_{j=1}^N \frac{\partial}{\partial t}\nabla f_j(x_j,t), \quad F_3 = \sum_{j=1}^N \nabla^2 f_j(x_j,t). \qquad (14)$
A neighborhood position error variable is designed as
$e_i^x = \sum_{j\in\mathcal{N}_i^{in}} a_{ij}\big(x_i-\delta_i-(x_j-\delta_j)\big), \quad i\in\mathcal{I}_N.$
Based on Assumption 3, we design the following nominal controller (for each i I N ):
$u_i^* = -a_2\,\mathrm{sig}^{1-\varepsilon_2}(e_i^x) - b_2\,\mathrm{sig}^{1+\varepsilon_2}(e_i^x) - \phi_i(t), \qquad (15)$
where the optimization term ϕ i ( t ) is defined as
$\phi_i(t) = \phi(t) = \frac{1}{F_3}\left(a_3\,\mathrm{sig}^{1-\varepsilon_3}(F_1) + b_3\,\mathrm{sig}^{1+\varepsilon_3}(F_1) + F_2\right), \qquad (16)$
where $0 < \varepsilon_k < 1$ and $a_k, b_k > 0$ are constants, $k = 2, 3$.
Theorem 2. 
Under Assumptions 1–3, consider the nonlinear FOMAS (5) controlled by the fixed-time optimization controller (7), consisting of the sliding-mode manifold (8), nominal controller (15) and optimization term (16). Then $x_i = x_i^*$, $\forall i\in\mathcal{I}_N$, is achieved within a fixed time $T$, satisfying
$T \le \frac{\pi N^{\varepsilon_1/4}}{2\varepsilon_1\sqrt{a_1 b_1}} + \frac{2\pi(2+\varepsilon_2)(2N)^{\frac{\varepsilon_2}{2(2+\varepsilon_2)}}}{\rho\sigma\varepsilon_2} + \frac{\pi}{2\varepsilon_3\sqrt{a_3 b_3}}, \qquad (17)$
where $\rho = k_0\min\{a_2^2, b_2^2\}$ and $\sigma = \min\left\{\frac{2-\varepsilon_2}{a_2}, \frac{2+\varepsilon_2}{b_2}\right\}$.
Proof. 
The proof consists of two steps. Since $r_i = 0$ within the fixed time $T_1$, we first show in Step 1 that $x_i - \delta_i = x_j - \delta_j$, $\forall i,j\in\mathcal{I}_N$, is reached along the sliding-mode surface $r_i = 0$ within a fixed time $T_2$; afterwards, in Step 2, we show that $x_i = x_i^*$ within a fixed time $T_3$.
Step 1 (Fixed-time consensus): As $t \ge T_1$, substituting (15) with (16) into (13) yields
$\dot x_i = -a_2\,\mathrm{sig}^{1-\varepsilon_2}(e_i^x) - b_2\,\mathrm{sig}^{1+\varepsilon_2}(e_i^x) - \phi(t) \qquad (18)$
for $i\in\mathcal{I}_N$. For $t \ge T_1$, introducing the new variable $\tilde x_i = x_i + \int_{T_1}^{t}\phi(s)\,ds$ in (18), one has
$\dot{\tilde x}_i = -a_2\,\mathrm{sig}^{1-\varepsilon_2}(e_i^{\tilde x}) - b_2\,\mathrm{sig}^{1+\varepsilon_2}(e_i^{\tilde x}) \qquad (19)$
for $i\in\mathcal{I}_N$, where $e_i^{\tilde x} = \sum_{j=1}^{N} l_{ij}(\tilde x_j - \delta_j)$. Denote $Y_i = a_2\,\mathrm{sig}^{1-\varepsilon_2}(e_i^{\tilde x}) + b_2\,\mathrm{sig}^{1+\varepsilon_2}(e_i^{\tilde x})$ for $i\in\mathcal{I}_N$. Then $\dot e_i^{\tilde x} = \sum_{j=1}^{N} l_{ij}\dot{\tilde x}_j = -\sum_{j=1}^{N} l_{ij} Y_j$. Consider the Lyapunov function
$V_{\tilde x}(t) = \sum_{i=1}^N \xi_i\left(\frac{a_2}{2-\varepsilon_2}|e_i^{\tilde x}|^{2-\varepsilon_2} + \frac{b_2}{2+\varepsilon_2}|e_i^{\tilde x}|^{2+\varepsilon_2}\right). \qquad (20)$
The time derivative $\dot V_{\tilde x}(t)$ satisfies
$\dot V_{\tilde x}(t) = \sum_{i=1}^N \xi_i\left(a_2|e_i^{\tilde x}|^{1-\varepsilon_2}\mathrm{sgn}(e_i^{\tilde x}) + b_2|e_i^{\tilde x}|^{1+\varepsilon_2}\mathrm{sgn}(e_i^{\tilde x})\right)\dot e_i^{\tilde x} = -\sum_{i=1}^N\sum_{j=1}^N \xi_i Y_i l_{ij} Y_j \le -k_0\sum_{i=1}^N \mathrm{sgn}^2(e_i^{\tilde x})\left(a_2|e_i^{\tilde x}|^{1-\varepsilon_2} + b_2|e_i^{\tilde x}|^{1+\varepsilon_2}\right)^2 \le -\rho\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}\right), \qquad (21)$
where $\rho = k_0\min\{a_2^2, b_2^2\}$ and the first inequality holds since $\sum_{i=1}^N\sum_{j=1}^N \xi_i Y_i l_{ij} Y_j = Y^T Q Y \ge k_0 Y^T Y$ with $Y = [Y_1, Y_2,\ldots,Y_N]^T \in \Omega(a_2, b_2, 1-\varepsilon_2, 1+\varepsilon_2)$, according to (1).
Note that $0 < 1-\varepsilon_2 \le \frac{2-\varepsilon_2}{2+\varepsilon_2} < 1+\varepsilon_2$ and $0 < 1-\varepsilon_2 < 1 < 1+\varepsilon_2$, due to the fact that $0<\varepsilon_2<1$. For $i\in\mathcal{I}_N$, it holds that
$|e_i^{\tilde x}|^{2\cdot\frac{2-\varepsilon_2}{2+\varepsilon_2}} \le |e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}, \quad |e_i^{\tilde x}|^{2} \le |e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}. \qquad (22)$
Using (22) and inequality (2) with $n = 2N$ and $q = \frac{2}{2+\varepsilon_2} < 1$ yields
$\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}\right) \ge \frac{1}{2}\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2\cdot\frac{2-\varepsilon_2}{2+\varepsilon_2}} + |e_i^{\tilde x}|^{2}\right) = \frac{1}{2}\sum_{i=1}^N\left(\left(|e_i^{\tilde x}|^{2-\varepsilon_2}\right)^{\frac{2}{2+\varepsilon_2}} + \left(|e_i^{\tilde x}|^{2+\varepsilon_2}\right)^{\frac{2}{2+\varepsilon_2}}\right) \ge \frac{1}{2}\left(\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2-\varepsilon_2} + |e_i^{\tilde x}|^{2+\varepsilon_2}\right)\right)^{\frac{2}{2+\varepsilon_2}}. \qquad (23)$
Also note that $0 < 1-\varepsilon_2 \le \frac{(1+\varepsilon_2)(2-\varepsilon_2)}{2+\varepsilon_2} < 1+\varepsilon_2$. For $i\in\mathcal{I}_N$, it holds that
$|e_i^{\tilde x}|^{2\cdot\frac{(1+\varepsilon_2)(2-\varepsilon_2)}{2+\varepsilon_2}} \le |e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}, \quad |e_i^{\tilde x}|^{2(1+\varepsilon_2)} \le |e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}. \qquad (24)$
Using (24) and inequality (3) with $n = 2N$ and $q = \frac{2(1+\varepsilon_2)}{2+\varepsilon_2} > 1$ yields
$\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2(1-\varepsilon_2)} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}\right) \ge \frac{1}{2}\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2\cdot\frac{(1+\varepsilon_2)(2-\varepsilon_2)}{2+\varepsilon_2}} + |e_i^{\tilde x}|^{2(1+\varepsilon_2)}\right) = \frac{1}{2}\sum_{i=1}^N\left(\left(|e_i^{\tilde x}|^{2-\varepsilon_2}\right)^{\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}} + \left(|e_i^{\tilde x}|^{2+\varepsilon_2}\right)^{\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}}\right) \ge \frac{(2N)^{1-\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}}}{2}\left(\sum_{i=1}^N\left(|e_i^{\tilde x}|^{2-\varepsilon_2} + |e_i^{\tilde x}|^{2+\varepsilon_2}\right)\right)^{\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}}. \qquad (25)$
Denote $V_e(t) = \sum_{i=1}^N\left(|e_i^{\tilde x}|^{2-\varepsilon_2} + |e_i^{\tilde x}|^{2+\varepsilon_2}\right)$. According to (20), one has $V_e(t) \ge \sigma V_{\tilde x}(t)$ with $\sigma = \min\left\{\frac{2-\varepsilon_2}{a_2}, \frac{2+\varepsilon_2}{b_2}\right\}$. It then follows from (21), (23) and (25) that
$\dot V_{\tilde x}(t) \le -\frac{\rho}{4}V_e^{\frac{2}{2+\varepsilon_2}}(t) - \frac{\rho(2N)^{1-\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}}}{4}V_e^{\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}}(t) \le -\frac{\rho}{4}\left(\sigma V_{\tilde x}(t)\right)^{\frac{2}{2+\varepsilon_2}} - \frac{\rho(2N)^{-\frac{\varepsilon_2}{2+\varepsilon_2}}}{4}\left(\sigma V_{\tilde x}(t)\right)^{\frac{2(1+\varepsilon_2)}{2+\varepsilon_2}} = -a^* V_{\tilde x}^{1-\frac{1}{\varepsilon^*}}(t) - b^* V_{\tilde x}^{1+\frac{1}{\varepsilon^*}}(t), \qquad (26)$
where $\varepsilon^* = \frac{2+\varepsilon_2}{\varepsilon_2} > 2$, $a^* = \frac{\rho\sigma^{1-\frac{1}{\varepsilon^*}}}{4}$ and $b^* = \frac{\rho\sigma^{1+\frac{1}{\varepsilon^*}}}{4(2N)^{\frac{1}{\varepsilon^*}}}$. Using Lemma 4 yields that $V_{\tilde x}(t) = 0$ within a fixed time $T_2$, satisfying
$T_2 \le \frac{\pi\varepsilon^*}{2\sqrt{a^* b^*}} = \frac{2\pi(2+\varepsilon_2)(2N)^{\frac{\varepsilon_2}{2(2+\varepsilon_2)}}}{\rho\sigma\varepsilon_2}. \qquad (27)$
Thus, as $t \ge T_1+T_2$, $e_i^{\tilde x} = \sum_{j=1}^N l_{ij}(\tilde x_j - \delta_j) = 0$. This, together with $L 1_N = 0_N$ and $\mathrm{rank}(L) = N-1$, implies that $\tilde x_1-\delta_1 = \tilde x_2-\delta_2 = \cdots = \tilde x_N-\delta_N$; that is, $x_i-\delta_i = x_j-\delta_j$ for $t \ge T_1+T_2$, $\forall i,j\in\mathcal{I}_N$. Therefore, consensus of the nonlinear FOMAS (5) is reached within the fixed time $T_1+T_2$.
Step 2 (Fixed-time optimization): As $t\ge T_1+T_2$, $\dot{x}_i=\phi(t)$. Consider the Lyapunov function $V_f(t)=F_1^2=\big(\sum_{i=1}^{N}\nabla f_i(x_i,t)\big)^2$. Its derivative is $\dot{V}_f(t)=2\big(\sum_{i=1}^{N}\nabla f_i(x_i,t)\big)\big(\sum_{i=1}^{N}\nabla^2f_i(x_i,t)\dot{x}_i+\sum_{i=1}^{N}\frac{\partial}{\partial t}\nabla f_i(x_i,t)\big)=2F_1(\phi(t)F_3+F_2)$ as $t\ge T_1+T_2$; this, together with (16), implies that
$$\dot{V}_f(t)=2F_1\big(-a_3\,\mathrm{sig}^{1-\varepsilon_3}(F_1)-b_3\,\mathrm{sig}^{1+\varepsilon_3}(F_1)\big)=-2a_3|F_1|^{2-\varepsilon_3}-2b_3|F_1|^{2+\varepsilon_3}=-2a_3V_f^{\frac{2-\varepsilon_3}{2}}(t)-2b_3V_f^{\frac{2+\varepsilon_3}{2}}(t)=-a_fV_f^{1-\frac{1}{\varepsilon_f}}(t)-b_fV_f^{1+\frac{1}{\varepsilon_f}}(t),$$
where $a_f=2a_3$, $b_f=2b_3$ and $\varepsilon_f=\frac{2}{\varepsilon_3}>2$. By invoking Lemma 4, we have $F_1=\sum_{i=1}^{N}\nabla f_i(x_i,t)=0$ within a fixed time $T_3$, satisfying
$$T_3\le\frac{\pi\varepsilon_f}{2\sqrt{a_fb_f}}=\frac{\pi}{2\varepsilon_3\sqrt{a_3b_3}}.$$
Therefore, according to Lemma 5, we obtain that, as $t\ge T=\sum_{i=1}^{3}T_i$, $x_i=x_i^{*}$ for $i\in I_N$, where $x_i^{*}$ is the unique minimizer of problem (6).    □
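The settling-time arguments above ultimately rest on the scalar comparison dynamics of Lemma 4, whose bound $T\le\frac{\pi\varepsilon}{2\sqrt{ab}}$ is independent of the initial state. This can be spot-checked numerically; the following is a minimal sketch with illustrative gains $a=b=1$ and $\varepsilon=3$ (example values, not taken from the paper):

```python
import math

# Spot-check of the fixed-time bound T <= pi*eps/(2*sqrt(a*b)) (Lemma 4)
# for the comparison dynamics dV/dt = -a*V^(1-1/eps) - b*V^(1+1/eps).

def settle_time(V0, a, b, eps, dt=1e-4, t_max=20.0):
    """Forward-Euler integration; returns the first time V (numerically) reaches zero."""
    V, t = V0, 0.0
    while t < t_max:
        if V < 1e-9:
            return t
        # clamp at zero so fractional powers stay well-defined
        V = max(V + dt * (-a * V ** (1 - 1 / eps) - b * V ** (1 + 1 / eps)), 0.0)
        t += dt
    return None

a, b, eps = 1.0, 1.0, 3.0                          # illustrative gains, eps > 2
T_bound = math.pi * eps / (2 * math.sqrt(a * b))   # ~4.712, initial-state free
times = [settle_time(V0, a, b, eps) for V0 in (1.0, 100.0, 1000.0)]
```

The measured settling times grow with $V(0)$ but never exceed the fixed bound, which reflects the initial-state independence claimed in the theorem.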
Remark 6. 
It follows from Theorem 2 that the proposed fixed-time protocol (7) with (8), (15) and (16) not only guarantees that the system's state converges to an optimal solution within a fixed time, but also avoids the requirements that each $f_i(x_i,t)$ be convex, as in [8,12,13,37], and that each $\nabla^2f_i(x,t)$ be invertible and identical, as in [9,10,13,37]. Unfortunately, the protocol (7) is centralized because $\phi_i(t)$ as given in (16) depends on knowledge of $F_1$, $F_2$ and $F_3$, which is global information (centralized optimization terms). In the following subsection, a class of tracking-based fixed-time estimators is designed to reconstruct the global information in (16) in a fully distributed manner.

5.2. Distributed Fixed-Time Optimization Protocol Design

To rebuild the centralized optimization terms, a distributed fixed-time estimator is developed for each agent below. The distributed fixed-time optimization protocol for the nonlinear FOMAS (5) is then constructed by combining the result of Theorem 2 with this estimator.
Let $\omega_j=[\nabla f_j(x_j,t),\frac{\partial}{\partial t}\nabla f_j(x_j,t),\nabla^2f_j(x_j,t)]^T\in\mathbb{R}^3$ for $j\in I_N$. Each local cost function $f_j(x_j,t)$ is known only to agent $j\in I_N$, and, as a result, so is $\omega_j$. In the leaderless FOMAS (5) with a digraph $\mathcal{G}$, each agent $j\in I_N$ may be regarded as a virtual leader. Then, by building a distributed fixed-time leader-following estimator, the information $\omega_j$ of each virtual leader can be tracked within a fixed time by all the agents $i\in I_N$ (treated as $N$ virtual followers). Before proceeding, denote the diagonal matrices $\bar{A}_j=\mathrm{diag}(\bar{a}_j^1,\dots,\bar{a}_j^i,\dots,\bar{a}_j^N)\in\mathbb{R}^{N\times N}$ with $\bar{a}_j^j>0$ and $\bar{a}_j^i=0$, $i\neq j$, $i,j\in I_N$. If Assumption 1 holds (the leader-following network has a directed spanning tree), then each matrix $H_j=L+\bar{A}_j$, which defines the interaction topology of the leader-following network, is a nonsingular M-matrix. Under Assumption 1, it follows from Lemma 2 that there exist a positive diagonal matrix $\Theta_j=\mathrm{diag}(\theta_1^j,\theta_2^j,\dots,\theta_N^j)$ and a positive constant $\eta_j=\underline{\lambda}(\tilde{H}_j)$ such that $\tilde{H}_j=\Theta_jH_j+H_j^T\Theta_j\ge\eta_jI_N$ for $j\in I_N$.
In this section, a distributed fixed-time estimator for tracking is constructed for each agent $i\in I_N$ as
$$\dot{\varpi}_j^i=-\Delta_j^i-l_j\,\mathrm{sgn}(\Delta_j^i),\qquad \Delta_j^i=c_j\,\mathrm{sig}^{1-\epsilon_j}(y_j^i)+d_j\,\mathrm{sig}^{1+\epsilon_j}(y_j^i),\qquad y_j^i=\sum_{k\in N_i^{in}}a_{ik}(\varpi_j^i-\varpi_j^k)+\bar{a}_j^i(\varpi_j^i-\omega_j),$$
where $\varpi_j^i=[\varpi_{j,1}^i,\varpi_{j,2}^i,\varpi_{j,3}^i]^T$ is the estimate of $\omega_j$ for each agent $i\in I_N$, $\Delta_j^i=[\Delta_{j,1}^i,\Delta_{j,2}^i,\Delta_{j,3}^i]^T$, $y_j^i=[y_{j,1}^i,y_{j,2}^i,y_{j,3}^i]^T$, $l_j\ge\sup_t\|\dot{\omega}_j\|$, $c_j,d_j>0$ are constants, and $0<\epsilon_j<1$, $\forall i,j\in I_N$. Here, $\dot{\omega}_j$ is assumed to be bounded for any $j\in I_N$ and $t\in\mathbb{R}^{+}$.
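The behavior of this tracking estimator can be illustrated with a minimal simulation. The sketch below is a forward-Euler discretization of (30) for a single scalar leader signal on a hypothetical 3-agent directed cycle pinned to the leader through agent 1; the graph, gains and tracked signal $\sin(t)$ are illustrative choices, not the paper's settings:

```python
import math

# Sketch of estimator (30): three followers track a virtual leader's
# time-varying scalar signal omega(t) = sin(t) over a directed cycle.
A = [[0.0, 0.0, 1.0],   # a_ik: agent 1 receives from agent 3
     [1.0, 0.0, 0.0],   # agent 2 receives from agent 1
     [0.0, 1.0, 0.0]]   # agent 3 receives from agent 2
a_bar = [1.0, 0.0, 0.0]            # pinning gains to the virtual leader
c, d, eps, l = 1.0, 1.0, 0.5, 2.0  # l >= sup_t |d(omega)/dt| = 1

def sgn(x):
    return (x > 0) - (x < 0)

def sig(x, p):                     # sig^p(x) = |x|^p * sgn(x)
    return abs(x) ** p * sgn(x)

def omega(t):                      # leader signal to be tracked
    return math.sin(t)

w = [2.0, -1.0, 0.5]               # followers' estimates, arbitrary start
dt, T = 1e-4, 5.0
for n in range(int(T / dt)):
    t = n * dt
    y = [sum(A[i][k] * (w[i] - w[k]) for k in range(3))
         + a_bar[i] * (w[i] - omega(t)) for i in range(3)]
    delta = [c * sig(yi, 1 - eps) + d * sig(yi, 1 + eps) for yi in y]
    # estimator (30): w_dot = -Delta - l * sgn(Delta)
    w = [w[i] + dt * (-delta[i] - l * sgn(delta[i])) for i in range(3)]

tracking_err = max(abs(wi - omega(T)) for wi in w)
```

In discrete time, the signum term produces chattering of amplitude roughly $l\cdot dt$, so the estimates settle into a small neighborhood of $\omega(t)$ rather than reaching it exactly.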
Lemma 6. 
Under Assumption 1, consider the distributed fixed-time estimator (30) for each agent $i\in I_N$. Then the centralized optimization term $\phi(t)$ given in (16) is reconstructed in a distributed manner as
$$\phi_i(t)=-\frac{1}{F_{i,3}}\Big(a_3\,\mathrm{sig}^{1-\varepsilon_3}(F_{i,1})+b_3\,\mathrm{sig}^{1+\varepsilon_3}(F_{i,1})+F_{i,2}\Big)$$
within a fixed time $T_0$, where $F_{i,1}=\sum_{j=1}^{N}\varpi_{j,1}^i$, $F_{i,2}=\sum_{j=1}^{N}\varpi_{j,2}^i$, $F_{i,3}=\sum_{j=1}^{N}\varpi_{j,3}^i$, and
$$T_0=\max_{j\in I_N}\frac{2\pi(2+\epsilon_j)(2N)^{\frac{\epsilon_j}{2(2+\epsilon_j)}}}{\rho_j\sigma_j\epsilon_j},$$
$\rho_j=\frac{\eta_j}{2}\min\{c_j^2,d_j^2\}$ and $\sigma_j=\min_{i\in I_N}\{\frac{2-\epsilon_j}{\theta_i^jc_j},\frac{2+\epsilon_j}{\theta_i^jd_j}\}$ for $j\in I_N$.
Proof. 
Denote $H_j=[h_{ik}^j]_{N\times N}=L+\bar{A}_j$ and $\chi_{j,1}^i=\varpi_{j,1}^i-\nabla f_j(x_j,t)$ for $i,j\in I_N$. From (30), $\forall i,j\in I_N$, the neighborhood-based error variable $y_{j,1}^i$ satisfies
$$\dot{y}_{j,1}^i=\sum_{k\in N_i^{in}}h_{ik}^j\dot{\chi}_{j,1}^k=-\sum_{k\in N_i^{in}}h_{ik}^j\Big(\Delta_{j,1}^k+l_j\,\mathrm{sgn}(\Delta_{j,1}^k)+\frac{d}{dt}\nabla f_j(x_j,t)\Big).$$
Recall that each matrix $H_j$ is a nonsingular M-matrix if Assumption 1 holds; it thus follows from Lemma 2 that there exists a positive diagonal matrix $\Theta_j=\mathrm{diag}(\theta_1^j,\theta_2^j,\dots,\theta_N^j)$ such that $\tilde{H}_j=\Theta_jH_j+H_j^T\Theta_j\ge\eta_jI_N$, where $\eta_j=\underline{\lambda}(\tilde{H}_j)>0$, $j\in I_N$. Choose a Lyapunov function for each $j\in I_N$ as
$$V_j(t)=\sum_{i=1}^{N}\theta_i^j\Big(\frac{c_j}{2-\epsilon_j}|y_{j,1}^i|^{2-\epsilon_j}+\frac{d_j}{2+\epsilon_j}|y_{j,1}^i|^{2+\epsilon_j}\Big).$$
Differentiating $V_j(t)$ along (33) yields, for each $j\in I_N$,
$$\dot{V}_j(t)=\sum_{i=1}^{N}\theta_i^j\Big(c_j|y_{j,1}^i|^{1-\epsilon_j}\mathrm{sgn}(y_{j,1}^i)+d_j|y_{j,1}^i|^{1+\epsilon_j}\mathrm{sgn}(y_{j,1}^i)\Big)\dot{y}_{j,1}^i=-\sum_{i=1}^{N}\theta_i^j\Delta_{j,1}^i\sum_{k\in N_i^{in}}h_{ik}^j\Big(\Delta_{j,1}^k+l_j\,\mathrm{sgn}(\Delta_{j,1}^k)+\frac{d}{dt}\nabla f_j(x_j,t)\Big)\le-\frac{\eta_j}{2}\sum_{i=1}^{N}\Delta_{j,1}^i\cdot\Delta_{j,1}^i-\sum_{i=1}^{N}\sum_{k\in N_i^{in}}\theta_i^j\Delta_{j,1}^ih_{ik}^j\Big(l_j\,\mathrm{sgn}(\Delta_{j,1}^k)+\frac{d}{dt}\nabla f_j(x_j,t)\Big),$$
where the last inequality holds since
$$-\sum_{i=1}^{N}\sum_{k\in N_i^{in}}\theta_i^j\Delta_{j,1}^ih_{ik}^j\Delta_{j,1}^k=-\frac{1}{2}\Delta_{j,1}^T\tilde{H}_j\Delta_{j,1}\le-\frac{\eta_j}{2}\Delta_{j,1}^T\Delta_{j,1}$$
and $\Delta_{j,1}=[\Delta_{j,1}^1,\Delta_{j,1}^2,\dots,\Delta_{j,1}^N]^T$. It should be noted that $H_j\mathbf{1}_N=[\bar{a}_j^1,\bar{a}_j^2,\dots,\bar{a}_j^N]^T$, $\Delta_{j,1}^i\,\mathrm{sgn}(\Delta_{j,1}^i)=|\Delta_{j,1}^i|$ and $\Delta_{j,1}^i\,\mathrm{sgn}(\Delta_{j,1}^k)\le|\Delta_{j,1}^i|$, for $i,j,k\in I_N$. Then, for each $j\in I_N$, we can use $l_j\ge\sup_t\|\dot{\omega}_j\|\ge\sup_t|\frac{d}{dt}\nabla f_j(x_j,t)|$ to deduce
$$-\sum_{i=1}^{N}\sum_{k\in N_i^{in}}\theta_i^j\Delta_{j,1}^ih_{ik}^j\Big(\frac{d}{dt}\nabla f_j(x_j,t)+l_j\,\mathrm{sgn}(\Delta_{j,1}^k)\Big)=-\sum_{i=1}^{N}\theta_i^j\Delta_{j,1}^i\bar{a}_j^i\frac{d}{dt}\nabla f_j(x_j,t)-l_j\sum_{i=1}^{N}\theta_i^j\Delta_{j,1}^i\Big(\sum_{k\in N_i^{in}}a_{ik}\big(\mathrm{sgn}(\Delta_{j,1}^i)-\mathrm{sgn}(\Delta_{j,1}^k)\big)+\bar{a}_j^i\,\mathrm{sgn}(\Delta_{j,1}^i)\Big)\le\Big(\sup_t\Big|\frac{d}{dt}\nabla f_j(x_j,t)\Big|-l_j\Big)\sum_{i=1}^{N}\theta_i^j\bar{a}_j^i|\Delta_{j,1}^i|\le 0.$$
Denote $\rho_j=\frac{\eta_j}{2}\min\{c_j^2,d_j^2\}$ for each $j\in I_N$. Substituting (36) into (35) yields
$$\dot{V}_j(t)\le-\frac{\eta_j}{2}\sum_{i=1}^{N}\Big(c_j|y_{j,1}^i|^{1-\epsilon_j}\mathrm{sgn}(y_{j,1}^i)+d_j|y_{j,1}^i|^{1+\epsilon_j}\mathrm{sgn}(y_{j,1}^i)\Big)^2\le-\frac{\eta_j}{2}\sum_{i=1}^{N}\Big(c_j^2|y_{j,1}^i|^{2(1-\epsilon_j)}+d_j^2|y_{j,1}^i|^{2(1+\epsilon_j)}\Big)\le-\rho_j\sum_{i=1}^{N}\Big(|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)}\Big).$$
Note that $0<2(1-\epsilon_j)<\frac{2(2-\epsilon_j)}{2+\epsilon_j}<2(1+\epsilon_j)$ and $2(1-\epsilon_j)<2<2(1+\epsilon_j)$, due to $0<1-\epsilon_j<1<1+\epsilon_j$, $\forall j\in I_N$. So, $\forall j\in I_N$, one has
$$|y_{j,1}^i|^{\frac{2(2-\epsilon_j)}{2+\epsilon_j}}\le|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)},\qquad|y_{j,1}^i|^{2}\le|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)}.$$
Using (38) and (2) with $n=2N$ and $q=\frac{2}{2+\epsilon_j}\in(0,1)$ yields
$$\sum_{i=1}^{N}\Big(|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)}\Big)\ge\frac{1}{2}\sum_{i=1}^{N}\Big(|y_{j,1}^i|^{\frac{2(2-\epsilon_j)}{2+\epsilon_j}}+|y_{j,1}^i|^{2}\Big)=\frac{1}{2}\sum_{i=1}^{N}\Big(\big(|y_{j,1}^i|^{2-\epsilon_j}\big)^{\frac{2}{2+\epsilon_j}}+\big(|y_{j,1}^i|^{2+\epsilon_j}\big)^{\frac{2}{2+\epsilon_j}}\Big)\ge\frac{1}{2}\Big(\sum_{i=1}^{N}\big(|y_{j,1}^i|^{2-\epsilon_j}+|y_{j,1}^i|^{2+\epsilon_j}\big)\Big)^{\frac{2}{2+\epsilon_j}}.$$
Also note that $2(1-\epsilon_j)<\frac{2(1+\epsilon_j)(2-\epsilon_j)}{2+\epsilon_j}<2(1+\epsilon_j)$, due to $0<2(1-\epsilon_j)<2(1+\epsilon_j)$, $\forall j\in I_N$. So, $\forall j\in I_N$, one has
$$|y_{j,1}^i|^{\frac{2(1+\epsilon_j)(2-\epsilon_j)}{2+\epsilon_j}}\le|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)},\qquad|y_{j,1}^i|^{2(1+\epsilon_j)}\le|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)}.$$
Using (40) and (3) with $n=2N$ and $q=\frac{2(1+\epsilon_j)}{2+\epsilon_j}>1$ yields
$$\sum_{i=1}^{N}\Big(|y_{j,1}^i|^{2(1-\epsilon_j)}+|y_{j,1}^i|^{2(1+\epsilon_j)}\Big)\ge\frac{1}{2}\sum_{i=1}^{N}\Big(|y_{j,1}^i|^{\frac{2(1+\epsilon_j)(2-\epsilon_j)}{2+\epsilon_j}}+|y_{j,1}^i|^{2(1+\epsilon_j)}\Big)=\frac{1}{2}\sum_{i=1}^{N}\Big(\big(|y_{j,1}^i|^{2-\epsilon_j}\big)^{\frac{2(1+\epsilon_j)}{2+\epsilon_j}}+\big(|y_{j,1}^i|^{2+\epsilon_j}\big)^{\frac{2(1+\epsilon_j)}{2+\epsilon_j}}\Big)\ge\frac{(2N)^{1-\frac{2(1+\epsilon_j)}{2+\epsilon_j}}}{2}\Big(\sum_{i=1}^{N}\big(|y_{j,1}^i|^{2-\epsilon_j}+|y_{j,1}^i|^{2+\epsilon_j}\big)\Big)^{\frac{2(1+\epsilon_j)}{2+\epsilon_j}}.$$
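The two inequalities (2) and (3), applied above with $n=2N$, are standard power-mean bounds: for $x_i\ge 0$, $\sum_i x_i^q\ge(\sum_i x_i)^q$ when $0<q\le 1$, and $\sum_i x_i^q\ge n^{1-q}(\sum_i x_i)^q$ when $q>1$. They can be spot-checked numerically; a sketch over random nonnegative data (independent of the paper's variables):

```python
import random

random.seed(0)

def holds(xs):
    """Check both power-mean inequalities on a nonnegative vector xs."""
    n, s = len(xs), sum(xs)
    # (2): sum x^q >= (sum x)^q for 0 < q <= 1 (small slack for rounding)
    sub = all(sum(x ** q for x in xs) >= s ** q - 1e-9 for q in (0.3, 0.7, 1.0))
    # (3): sum x^q >= n^(1-q) * (sum x)^q for q > 1
    sup = all(sum(x ** q for x in xs) >= n ** (1 - q) * s ** q - 1e-9
              for q in (1.5, 2.0, 3.0))
    return sub and sup

results = [holds([random.uniform(0.0, 10.0) for _ in range(random.randint(1, 20))])
           for _ in range(200)]
```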
Denote $\bar{V}_j(t)=\sum_{i=1}^{N}(|y_{j,1}^i|^{2-\epsilon_j}+|y_{j,1}^i|^{2+\epsilon_j})$ for each $j\in I_N$. According to (34), one has that $\bar{V}_j(t)\ge\sigma_jV_j(t)$ with $\sigma_j=\min_{i\in I_N}\{\frac{2-\epsilon_j}{\theta_i^jc_j},\frac{2+\epsilon_j}{\theta_i^jd_j}\}$ for each $j\in I_N$. It can be derived from (37), (39) and (41) that
$$\dot{V}_j(t)\le-\frac{\rho_j}{4}\bar{V}_j^{\frac{2}{2+\epsilon_j}}(t)-\frac{\rho_j(2N)^{1-\frac{2(1+\epsilon_j)}{2+\epsilon_j}}}{4}\bar{V}_j^{\frac{2(1+\epsilon_j)}{2+\epsilon_j}}(t)\le-\frac{\rho_j}{4}\big(\sigma_jV_j(t)\big)^{\frac{2}{2+\epsilon_j}}-\frac{\rho_j(2N)^{1-\frac{2(1+\epsilon_j)}{2+\epsilon_j}}}{4}\big(\sigma_jV_j(t)\big)^{\frac{2(1+\epsilon_j)}{2+\epsilon_j}}=-\mu_jV_j^{1-\frac{1}{\gamma_j}}(t)-\nu_jV_j^{1+\frac{1}{\gamma_j}}(t),$$
where $\gamma_j=\frac{2+\epsilon_j}{\epsilon_j}>2$, $\mu_j=\frac{\rho_j\sigma_j^{1-\frac{1}{\gamma_j}}}{4}$ and $\nu_j=\frac{\rho_j\sigma_j^{1+\frac{1}{\gamma_j}}}{4}(2N)^{-\frac{1}{\gamma_j}}$ for each $j\in I_N$. It thus follows from Lemma 4 that $V_j(t)=0$ within a fixed time $T_j^{*}$, satisfying
$$T_j^{*}\le\frac{\pi\gamma_j}{2\sqrt{\mu_j\nu_j}}=\frac{2\pi(2+\epsilon_j)(2N)^{\frac{\epsilon_j}{2(2+\epsilon_j)}}}{\rho_j\sigma_j\epsilon_j}$$
for j I N . Thus, as t T j * , y j , 1 i = k N i i n h i k j χ j , 1 k = 0 for j I N , where H j = [ h i k j ] N × N is a nonsingular M-matrix, implies that χ j , 1 k = 0 for k I N ; that is, ϖ j , 1 i = f j ( x j , t ) as t T j * for i , j I N . So, j = 1 N ϖ j , 1 i = j = 1 N f j ( x j , t ) , i.e., F i , 1 = F 1 , as t T 0 = max j I N { T j * } for each agent i I N . Similarly, as t T j * , it can be derived that ϖ j , 2 i = t f j ( x j , t ) and ϖ j , 3 i = 2 f j ( x j , t ) for j I N . So j = 1 N ϖ j , 2 i = j = 1 N t f j ( x j , t ) and j = 1 N ϖ j , 3 i = j = 1 N 2 f j ( x j , t ) , i.e., F i , 2 = F 2 and F i , 3 = F 3 , as t T 0 for each agent i I N . Therefore, as t T 0 , the distributed optimization term ϕ i ( t ) given in (31) for each agent i I N is equivalent to the centralized optimization term ϕ ( t ) given in (16).    □
Now, summarizing Lemma 6 and Theorems 1 and 2, we can state and establish the main theorem of this paper.
Theorem 3. 
Under Assumptions 1–3, consider the nonlinear FOMAS (5) controlled by the distributed optimization controller (7), consisting of the sliding-mode manifold (8), nominal controller (15), estimator (30) and optimization term (31). Then $x_i=x_i^{*}$ is achieved within a fixed time $\sum_{m=0}^{3}T_m$, satisfying
$$\sum_{m=0}^{3}T_m\le\underbrace{\max_{j\in I_N}\frac{2\pi(2+\epsilon_j)(2N)^{\frac{\epsilon_j}{2(2+\epsilon_j)}}}{\rho_j\sigma_j\epsilon_j}}_{T_0\ (\mathrm{see}\ (32))}+\underbrace{\frac{\pi N^{\frac{\varepsilon_1}{4}}}{2\varepsilon_1\sqrt{a_1b_1}}}_{T_1\ (\mathrm{see}\ (10))}+\underbrace{\frac{2\pi(2+\varepsilon_2)(2N)^{\frac{\varepsilon_2}{2(2+\varepsilon_2)}}}{\rho\sigma\varepsilon_2}}_{T_2\ (\mathrm{see}\ (27))}+\underbrace{\frac{\pi}{2\varepsilon_3\sqrt{a_3b_3}}}_{T_3\ (\mathrm{see}\ (29))},$$
where $\rho_j=\frac{\eta_j}{2}\min\{c_j^2,d_j^2\}$ and $\sigma_j=\min_{i\in I_N}\{\frac{2-\epsilon_j}{\theta_i^jc_j},\frac{2+\epsilon_j}{\theta_i^jd_j}\}$ for $j\in I_N$, $\rho=k_0\min\{a_2^2,b_2^2\}$ and $\sigma=\min\{\frac{2-\varepsilon_2}{a_2},\frac{2+\varepsilon_2}{b_2}\}$, $\forall i\in I_N$.
Proof. 
The proof consists of four steps, i.e., Steps 1–4. By repeating the proofs of Lemma 6, Theorem 1, and Steps 1 and 2 of Theorem 2, respectively, it can be proved that, $\forall i,j\in I_N$: $\phi_i(t)=\phi(t)$ as $t\ge T_0$ in Step 1; each agent reaches the sliding-mode surface $r_i=0$ as $t\ge T_0+T_1$ in Step 2; $x_i-\delta_i=x_j-\delta_j$ as $t\ge T_0+T_1+T_2$ in Step 3; and $x_i=x_i^{*}$ as $t\ge T_0+T_1+T_2+T_3$ in Step 4. The details are hence omitted here.    □
If Assumptions 1–3 are satisfied, the implementation of the fixed-time distributed optimization of the nonlinear FOMAS (5) over a generic digraph is summarized in Algorithm 1.
Remark 7. 
To prevent the divergence that the time-varying signals $\omega_j$ would otherwise cause, a signum term $-l_j\,\mathrm{sgn}(\Delta_j^i)$ is introduced in (30), where $l_j\ge\sup_t\|\dot{\omega}_j\|$. Note that $l_j\ge\sup_t\|\dot{\omega}_j\|$ is similar to Assumption 4.6 in [28], and $\dot{\omega}_j$ is bounded if all of $\frac{d}{dt}\nabla f_j(x_j,t)$, $\frac{d}{dt}(\frac{\partial}{\partial t}\nabla f_j(x_j,t))$ and $\frac{d}{dt}\nabla^2f_j(x_j,t)$ are bounded. The boundedness of $\dot{\omega}_j$ might be restrictive, but it is satisfied by many cost functions. For example, for $f_i(x_i,t)=(a_ix_i+g_i(t))^2$, $\dot{\omega}_j$ is bounded if $\|g_i(t)\|_2$, $\|\dot{g}_i(t)\|_2$ and $\|\ddot{g}_i(t)\|_2$ are bounded [28]; examples include $g_i(t)=i(\sin(t)+\cos(t))$, $ie^{-t}\sin(t)$, $\frac{i}{1+t}$, $i\tanh(t)$, and so on.
Remark 8. 
In the algorithm (7), the sliding-mode control term $-\bar{h}_i(x_i,t)\mathrm{sgn}(r_i)-a_1\mathrm{sig}^{1-\varepsilon_1}(r_i)-b_1\mathrm{sig}^{1+\varepsilon_1}(r_i)$ is used to drive all agents to the sliding-mode surface within a fixed time. The nominal controller $u_i^{*}$ designed by (15) consists of the consensus control term $-a_2\mathrm{sig}^{1-\varepsilon_2}(e_i^x)-b_2\mathrm{sig}^{1+\varepsilon_2}(e_i^x)$ and an estimator-based optimization term $\phi_i(t)$ estimated by (30) and (31). The consensus control term is used for reaching consensus within a fixed time, and the estimator-based optimization term is used for capturing the minimizer of the optimization problem (6) within a fixed time.
Remark 9. 
Since none of the parameters, including $\varepsilon_k$, $a_k$, $b_k$, $c_j$, $d_j$ and $\epsilon_j$, $k=1,2,3$, $j=1,2,\dots,N$, depend on any global information, the designed distributed optimization controller (7) using (8), (15), (30) and (31) is fully distributed. Moreover, the designed fully distributed optimization controller is suitable for weight-unbalanced digraphs, and task requirements on the settling time can be met by tuning the parameters in the bound (44).
Algorithm 1: Fixed-time distributed optimization algorithm: A four-stage implementation
If Assumptions 1–3 are satisfied, the whole fixed-time distributed optimization procedure is summarized by the following four cascading stages.
   ▸ Stage 1: Fixed-time estimation of the centralized optimization term $\phi(t)$: update (5) using (7) with (8), (15), (30) and (31). According to Lemma 6, as $t\ge T_0=\max_{j\in I_N}\{T_j^{*}\}$, the distributed optimization term $\phi_i(t)$ given in (31) is equivalent to the centralized optimization term $\phi(t)$ given in (16) for each $i\in I_N$.
   ▸ Stage 2: Fixed-time sliding-mode control: continue updating (5) using (7) with (8), (15), (30) and (31). According to Theorem 1, as $t\ge T_0+T_1$, the dynamics of each agent are described by the single-integrator MAS (13).
   ▸ Stage 3: Fixed-time consensus of $x_i-\delta_i$, $\forall i\in I_N$: continue updating (5) using (7) with (8), (15), (30) and (31). According to the proof of Step 1 in Theorem 2, as $t\ge T_0+T_1+T_2$, $x_i-\delta_i=x_j-\delta_j$, $\forall i,j\in I_N$.
   ▸ Stage 4: Fixed-time convergence of $x_i\to x_i^{*}$, $\forall i\in I_N$: continue updating (5) using (7) with (8), (15), (30) and (31). According to the proof of Step 2 in Theorem 2, as $t\ge T_0+T_1+T_2+T_3$, $x_i=x_i^{*}$, $\forall i\in I_N$.
Remark 10. 
The previous works [11,13,15,19,28,37] on the distributed time-varying optimization problem of MASs are only applicable to a connected undirected topology, whereas our work is applicable to a strongly connected directed topology. Furthermore, both fixed-time optimal convergence and nonlinear dynamics are considered here. A detailed comparison between our work and the works in [11,13,15,19,28,37] is listed in Table 1.
If the nonlinear FOMAS (5) reduces to the single-integrator MAS (13), as a byproduct, we have the following theorem.
Theorem 4. 
Under Assumptions 1 and 3, consider the single-integrator MAS (13) controlled by the continuous distributed optimization controller (15), consisting of the estimator (30) and optimization term (31). Then $x_i=x_i^{*}$ is achieved within a fixed time $T_0+T_2+T_3$, $\forall i\in I_N$, where $T_0=\max_{j\in I_N}\{T_j^{*}\}$ with $T_j^{*}$ given by (32), and $T_2$ and $T_3$ are given by (27) and (29), respectively.
Remark 11. 
The proposed discontinuous controller (7) in Theorem 3 may cause chattering, and both the proposed discontinuous controller (7) in Theorem 3 and the continuous controller (15) in Theorem 4 may result in singularity when $F_{i,3}$ in (31) is not invertible at some point, i.e., $F_{i,3}=0$ at $t=t^{*}$ with $t^{*}\in(0,T_0)$. To avoid these unexpected phenomena, some control techniques/methods need to be developed, which are interesting topics for future research.
To avoid the singularity caused by $F_{i,3}$ in (31), a special case of Assumption 3 is studied; that is, all the Hessians $\nabla^2f_i(x_i,t)$ in Assumption 3 are equal [37], i.e., $\nabla^2f_i(x_i,t)=\nabla^2f_j(x_j,t)$, $\forall i,j\in I_N$. Then, the optimization term $\phi_i(t)$ in (15) is designed as
$$\phi_i(t)=-\frac{1}{\tilde{F}_{i,3}}\Big(a_3\,\mathrm{sig}^{1-\varepsilon_3}(F_{i,1})+b_3\,\mathrm{sig}^{1+\varepsilon_3}(F_{i,1})+F_{i,2}\Big),$$
where $\tilde{F}_{i,3}=N\nabla^2f_i(x_i,t)$, $F_{i,1}=\sum_{j=1}^{N}\varpi_{j,1}^i$, $F_{i,2}=\sum_{j=1}^{N}\varpi_{j,2}^i$, and $\varpi_{j,1}^i$ and $\varpi_{j,2}^i$ are generated by the estimator (30) with $\varpi_j^i=[\varpi_{j,1}^i,\varpi_{j,2}^i]^T$ being the estimate of $\omega_j=[\nabla f_j(x_j,t),\frac{\partial}{\partial t}\nabla f_j(x_j,t)]^T$ for each agent $i\in I_N$, $\Delta_j^i=[\Delta_{j,1}^i,\Delta_{j,2}^i]^T$, $y_j^i=[y_{j,1}^i,y_{j,2}^i]^T$, $l_j\ge\sup_t\|\dot{\omega}_j\|$, $c_j,d_j>0$, and $0<\epsilon_j<1$, $\forall i,j\in I_N$.
By using the optimization term (45) for (15), a singularity-free distributed optimization controller is thus derived and designed. The following theorems, as byproducts of Theorem 3, are stated and established directly.
Theorem 5. 
Under Assumptions 1–3, if $\nabla^2f_i(x_i,t)=\nabla^2f_j(x_j,t)$, $\forall i,j\in I_N$, consider the nonlinear FOMAS (5) controlled by the distributed optimization controller (7) consisting of the sliding-mode manifold (8), nominal controller (15), estimator (30) and optimization term (45). Then $x_i=x_i^{*}$ is achieved within a fixed time $\sum_{m=0}^{3}T_m$, satisfying (44), $\forall i\in I_N$.
Theorem 6. 
Under Assumptions 1 and 3, if $\nabla^2f_i(x_i,t)=\nabla^2f_j(x_j,t)$, $\forall i,j\in I_N$, consider the single-integrator MAS (13) controlled by the continuous distributed optimization controller (15), consisting of the estimator (30) and optimization term (45). Then $x_i=x_i^{*}$ is achieved within a fixed time $T_0+T_2+T_3$, $\forall i\in I_N$, where $T_0=\max_{j\in I_N}\{T_j^{*}\}$ with $T_j^{*}$ given by (32), and $T_2$ and $T_3$ are given by (27) and (29), respectively.

6. Simulation Study

Consider the FOMAS (5) with $N=5$, $p_i=1.1-0.1i$, $h_i=\frac{k_{i1}}{5|k_{i1}|+1}(x_i-1)^2$ and $\tau_i=0.2\tanh(k_{i2}t)$, where $k_{i1}$ and $k_{i2}$ are taken randomly from $\mathcal{N}(0,1)$ (generated by the randn function of Matlab). The digraph $\mathcal{G}$ among the five agents is shown in Figure 1 with weights. We can check that Assumption 1 is satisfied with a weight-unbalanced, strongly connected digraph $\mathcal{G}$, since only agent 5 has the same weighted in-degree and weighted out-degree, i.e., $|N_5^{in}|=|N_5^{out}|=1$. It is easy to verify that Assumption 2 is satisfied with $\bar{h}_i=0.2(x_i-1)^2+0.2$ for $i\in I_N$. In the optimization problem (6), we choose $\delta_i=i$ for each $i\in I_N$, and
$$f_1(x_1,t)=(x_1+0.3e^{1-t})^2+3,\quad f_2(x_2,t)=-0.4\sin(\pi t)x_2+t,\quad f_3(x_3,t)=2x_3^2-0.6e^{1-t}x_3,\quad f_4(x_4,t)=-0.5(x_4-5)^2+\frac{t+2}{t+1},\quad f_5(x_5,t)=-2x_5^2+8x_5+\arctan(\pi t).$$
Take $\bar{A}_j=\mathrm{diag}(5,5,5,5,5)$, $c_j=1$, $d_j=2$, $\epsilon_j=0.1$, $l_j=15$, $a_k=0.8$, $b_k=0.5$ and $\varepsilon_k=0.2$, $k=1,2,3$, $j=1,2,\dots,5$. Let $x_i(0)=z_i(0)+1$, $r_i(0)=w_i(0)=0$ and $\varpi_j^i(0)=\mathbf{0}_3$, $\forall i,j\in I_N$, where $x_1(0),\dots,x_5(0)$ are chosen as 1, 3, −5, 4, −2.
The gradient of each time-varying cost function $f_j(x_j,t)$, $j=0,1,\dots,5$, and its two partial derivatives with respect to $t$ and $x_j$ are given in Table 2, where $f_0(x_0,t)=\sum_{i=1}^{N}f_i(x_0+\delta_i,t)$ is the time-varying global cost function, with $x_0=x_i-\delta_i$, $i\in I_N$. It is easily verified from Table 2 that the time-varying local cost functions $f_2(x_2,t)$, $f_4(x_4,t)$ and $f_5(x_5,t)$ are non-convex, the Hessians of the local cost functions are not all equal, and $\nabla^2f_2(x_2,t)$ is not invertible. Therefore, the methods in the literature mentioned in Remark 3 fail to work. It is also verified from Table 2 that the time-varying global cost function $f_0(x_0,t)$ is strictly convex; thus, Assumption 3 is satisfied and the optimization problem (6) admits a unique minimizer. By some calculations, the unique minimizer of (6) is $x_i^{*}=0.4\sin(\pi t)+i-3$ for each $i\in I_N$, which shows that the optimal points (trajectories) are time-varying. Note that $x^{*}=x_i^{*}-\delta_i=0.4\sin(\pi t)-3$ is a common optimization point of (6). The trajectories of $\phi_i$ in (31) and $\phi$ in (16) are shown in Figure 2; these imply that $\phi_i=\phi$ as $t\ge 1$ s, $\forall i\in I_N$. The trajectories of $r_i$, $x_i$, $x_i^{*}$, $x_i-\delta_i$ and $x^{*}$ are provided in Figure 3, Figure 4 and Figure 5. It is seen from Figure 3 that, $\forall i\in I_N$, $r_i=0$ is achieved as $t\ge 2$ s (all agents achieve fixed-time sliding-mode control), although chattering phenomena exist. It is seen from Figure 4 and Figure 5 that, $\forall i,j\in I_N$, $x_i-\delta_i=x_j-\delta_j$ is achieved as $t\ge 4$ s (all agents reach fixed-time consensus), and $x_i-\delta_i=x_i^{*}-\delta_i=x^{*}$ is achieved as $t\ge 5$ s (all agents reach fixed-time optimization). The trajectories of $x_1^{*}=0.4\sin(\pi t)-2$ and of $x_1$ under four different initial states $x_1(0)=-2$, $x_1(0)=0$, $x_1(0)=2$ and $x_1(0)=4$, with all other parameters and initial values given above, are shown in Figure 6.
It is observed from Figure 6 that the fixed time within which x 1 = x 1 * is achieved is independent of the initial states. Therefore, Theorem 3 is verified by the simulation results.
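The reported minimizer can be cross-checked by solving the stationarity condition $\sum_{j}\nabla f_j(x_0+\delta_j,t)=0$ numerically. A sketch using the local gradient expressions of Table 2 (transcribed by hand, so treat the closed forms below as assumptions):

```python
import math

delta = [1, 2, 3, 4, 5]                       # offsets delta_i = i

def grad_sum(x0, t):
    """Sum of local gradients at x_i = x0 + delta_i (Table 2);
    analytically it collapses to x0 - 0.4*sin(pi*t) + 3."""
    e = 0.6 * math.exp(1 - t)
    return (2 * (x0 + delta[0]) + e           # grad f1
            - 0.4 * math.sin(math.pi * t)     # grad f2
            + 4 * (x0 + delta[2]) - e         # grad f3
            - (x0 + delta[3]) + 5             # grad f4
            - 4 * (x0 + delta[4]) + 8)        # grad f5

# Solve grad_sum(x0, t) = 0 by bisection (grad_sum is increasing in x0)
# and compare with the reported minimizer x* = 0.4*sin(pi*t) - 3.
errs = []
for t in (0.0, 0.25, 1.0, 2.5):
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if grad_sum(lo, t) * grad_sum(mid, t) <= 0.0:
            hi = mid
        else:
            lo = mid
    errs.append(abs(0.5 * (lo + hi) - (0.4 * math.sin(math.pi * t) - 3)))
max_err = max(errs)
```

The exponential terms of $\nabla f_1$ and $\nabla f_3$ cancel in the sum, which is why the optimal trajectory depends on $t$ only through the sinusoid.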

7. Conclusions

This study has considered the fixed-time distributed optimization problem of a nonlinear FOMAS with heterogeneous time-varying cost functions, nonlinear functions and disturbances under a generic weight-unbalanced digraph. By integrating fixed-time Lyapunov stability theory and the sliding-mode control technique, a novel estimator-based fully distributed optimization algorithm with fixed-time optimal convergence has been designed to solve the problem. A simulation example has been given to demonstrate the effectiveness of the method over a wide range of time-varying local cost functions on a weight-unbalanced digraph. Interesting research directions include extending the results of this study to FOMASs whose digraph merely contains a spanning tree, to time-varying consensus configurations, and to continuous fixed-time optimization algorithms.

Author Contributions

Methodology, K.W. and P.G.; formal analysis, Z.M.; writing—original draft preparation, K.W. and P.G.; writing—review and editing, P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Funds of China under Grant 62003142 and Grant 62203199, and the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A1515110965 and Grant 2023A1515011025.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, C.; Elia, N. Stochastic sensor scheduling via distributed convex optimization. Automatica 2015, 58, 173–182. [Google Scholar] [CrossRef]
  2. Chen, F.; Chen, X.; Xiang, L.; Ren, W. Distributed economic dispatch via a predictive scheme: Heterogeneous delays and privacy preservation. Automatica 2021, 123, 109356. [Google Scholar] [CrossRef]
  3. Mao, S.; Dong, Z.; Schultz, P.; Tang, Y.; Meng, K.; Dong, Z.Y.; Qian, F. A finite-time distributed optimization algorithm for economic dispatch in smart grids. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 2068–2079. [Google Scholar] [CrossRef]
  4. Lee, H.; Lee, S.H.; Quek, T.Q. Deep learning for distributed optimization: Applications to wireless resource management. IEEE J. Sel. Areas Commun. 2019, 37, 2251–2266. [Google Scholar] [CrossRef]
  5. Chen, F.; Jin, J.; Xiang, L.; Ren, W. A scaling-function approach for distributed constrained optimization in unbalanced multiagent networks. IEEE Trans. Autom. Control 2021, 67, 6112–6118. [Google Scholar] [CrossRef]
  6. Falsone, A.; Margellos, K.; Garatti, S.; Prandini, M. Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica 2017, 84, 149–158. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Deng, Z.; Hong, Y. Distributed optimal coordination for multiple heterogeneous Euler-Lagrangian systems. Automatica 2017, 79, 207–213. [Google Scholar] [CrossRef]
  8. Tran, N.T.; Xiao, J.W.; Wang, Y.W.; Yang, W. Distributed optimisation problem with communication delay and external disturbance. Int. J. Syst. Sci. 2017, 48, 3530–3541. [Google Scholar] [CrossRef]
  9. Ning, B.; Han, Q.L.; Zuo, Z. Distributed optimization for multiagent systems: An edge-based fixed-time consensus approach. IEEE Trans. Cybern. 2017, 49, 122–132. [Google Scholar] [CrossRef]
  10. Gong, P.; Chen, F.; Lan, W. Time-varying convex optimization for double-integrator dynamics over a directed network. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 7341–7346. [Google Scholar]
  11. Hong, H.; Baldi, S.; Yu, W.; Yu, X. Distributed time-varying optimization of second-order multiagent systems under limited interaction ranges. IEEE Trans. Cybern. 2022, 52, 13874–13886. [Google Scholar] [CrossRef]
  12. Zhao, Y.; Liu, Y.; Wen, G.; Chen, G. Distributed optimization for linear multiagent systems: Edge-and node-based adaptive designs. IEEE Trans. Autom. Control 2017, 62, 3602–3609. [Google Scholar] [CrossRef]
  13. Huang, B.; Zou, Y.; Meng, Z.; Ren, W. Distributed time-varying convex optimization for a class of nonlinear multiagent systems. IEEE Trans. Autom. Control 2019, 65, 801–808. [Google Scholar] [CrossRef]
  14. Wang, X.; Wang, G.; Li, S. Distributed finite-time optimization for disturbed second-order multiagent systems. IEEE Trans. Cybern. 2021, 51, 4634–4647. [Google Scholar] [CrossRef] [PubMed]
  15. Hu, Z.; Yang, J. Distributed finite-time optimization for second order continuous-time multiple agents systems with time-varying cost function. Neurocomputing 2018, 287, 173–184. [Google Scholar] [CrossRef]
  16. Chen, G.; Li, Z. A fixed-time convergent algorithm for distributed convex optimization in multi-agent systems. Automatica 2018, 95, 539–543. [Google Scholar] [CrossRef]
  17. Wang, X.; Wang, G.; Li, S. A distributed fixed-time optimization algorithm for multi-agent systems. Automatica 2020, 122, 109289. [Google Scholar] [CrossRef]
  18. Firouzbahrami, M.; Nobakhti, A. Cooperative fixed-time/finite-time distributed robust optimization of multi-agent systems. Automatica 2022, 142, 110358. [Google Scholar] [CrossRef]
  19. Li, Y.; He, X.; Xia, D. Distributed fixed-time optimization for multi-agent systems with time-varying objective function. Int. J. Robust Nonlinear Control 2022, 32, 6523–6538. [Google Scholar] [CrossRef]
  20. Parsegov, S.; Polyakov, A.; Shcherbakov, P. Fixed-time consensus algorithm for multi-agent systems with integrator dynamics. IFAC Proc. Vol. 2013, 46, 110–115. [Google Scholar] [CrossRef]
  21. Gharesifard, B.; Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2014, 59, 781–786. [Google Scholar] [CrossRef]
  22. Makhdoumi, A.; Ozdaglar, A. Graph balancing for distributed subgradient methods over directed graphs. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 1364–1371. [Google Scholar]
  23. Akbari, M.; Gharesifard, B.; Linder, T. Distributed online convex optimization on time-varying directed graphs. IEEE Trans. Control Netw. Syst. 2017, 4, 417–428. [Google Scholar] [CrossRef]
  24. Touri, B.; Gharesifard, B. Continuous-time distributed convex optimization on time-varying directed networks. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 724–729. [Google Scholar]
  25. Li, Z.; Ding, Z.; Sun, J.; Li, Z. Distributed adaptive convex optimization on directed graphs via continuous-time algorithms. IEEE Trans. Autom. Control 2018, 63, 1434–1441. [Google Scholar] [CrossRef]
  26. Yu, Z.; Yu, S.; Jiang, H.; Mei, X. Distributed fixed-time optimization for multi-agent systems over a directed network. Nonlinear Dyn. 2021, 103, 775–789. [Google Scholar] [CrossRef]
  27. Zhu, Y.; Yu, W.; Wen, G.; Ren, W. Continuous-time coordination algorithm for distributed convex optimization over weight-unbalanced directed networks. IEEE Trans. Circuits Syst. II Express Briefs 2019, 66, 1202–1206. [Google Scholar] [CrossRef]
  28. Rahili, S.; Ren, W. Distributed continuous-time convex optimization with time-varying cost functions. IEEE Trans. Autom. Control 2017, 62, 1590–1605. [Google Scholar] [CrossRef]
  29. Gong, X.; Cui, Y.; Shen, J.; Xiong, J.; Huang, T. Distributed optimization in prescribed-time: Theory and experiment. IEEE Trans. Netw. Sci. Eng. 2022, 9, 564–576. [Google Scholar] [CrossRef]
  30. Liu, Y.; Yang, G.H. Distributed robust adaptive optimization for nonlinear multiagent systems. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1046–1053. [Google Scholar] [CrossRef]
  31. Abdelwahed, H.; El-Shewy, E.; Alghanim, S.; Abdelrahman, M.A. On the physical fractional modulations on Langmuir plasma structures. Fractal Fract. 2022, 6, 430. [Google Scholar] [CrossRef]
  32. Sharaf, M.; El-Shewy, E.; Zahran, M. Fractional anisotropic diffusion equation in cylindrical brush model. J. Taibah Univ. Sci. 2020, 14, 1416–1420. [Google Scholar] [CrossRef]
  33. Liang, B.; Zheng, S.; Ahn, C.K.; Liu, F. Adaptive fuzzy control for fractional-order interconnected systems with unknown control directions. IEEE Trans. Fuzzy Syst. 2022, 30, 75–87. [Google Scholar] [CrossRef]
  34. Azar, A.T.; Radwan, A.G.; Vaidyanathan, S. Fractional Order Systems: Optimization, Control, Circuit Realizations and Applications; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
  35. Yang, X.; Zhao, W.; Yuan, J.; Chen, T.; Zhang, C.; Wang, L. Distributed optimization for fractional-order multi-agent systems based on adaptive backstepping dynamic surface control technology. Fractal Fract. 2022, 6, 642. [Google Scholar] [CrossRef]
  36. Gong, P.; Han, Q.L. Distributed fixed-time optimization for second-order nonlinear multiagent systems: State and output feedback designs. IEEE Trans. Autom. Control 2023. [Google Scholar] [CrossRef]
  37. Sun, S.; Zhang, Y.; Lin, P.; Ren, W.; Farrell, J.A. Distributed time-varying optimization with state-dependent gains: Algorithms and experiments. IEEE Trans. Control Syst. Technol. 2022, 30, 416–425. [Google Scholar] [CrossRef]
  38. Podlubny, I. Fractional Differential Equations. In Mathematics in Science and Engineering; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  39. Gong, P.; Han, Q.L. Practical fixed-time bipartite consensus of nonlinear incommensurate fractional-order multiagent systems in directed signed networks. SIAM J. Control Optim. 2020, 58, 3322–3341. [Google Scholar] [CrossRef]
  40. Li, Z.; Wen, G.; Duan, Z.; Ren, W. Designing fully distributed consensus protocols for linear multi-agent systems with directed graphs. IEEE Trans. Autom. Control 2015, 60, 1152–1157. [Google Scholar] [CrossRef]
  41. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1952. [Google Scholar]
  42. Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  43. Wang, Y.; Song, Y.; Lewis, F.L. Robust adaptive fault-tolerant control of multiagent systems with uncertain nonidentical dynamics and undetectable actuation failures. IEEE Trans. Ind. Electron. 2015, 62, 3978–3988. [Google Scholar]
  44. Gong, P.; Lan, W.; Han, Q.L. Robust adaptive fault-tolerant consensus control for uncertain nonlinear fractional-order multi-agent systems with directed topologies. Automatica 2020, 117, 109011. [Google Scholar] [CrossRef]
  45. Zou, W.; Qian, K.; Xiang, Z. Fixed-time consensus for a class of heterogeneous nonlinear multiagent systems. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1279–1283. [Google Scholar] [CrossRef]
Figure 1. A digraph G among five agents A1–A5.
Figure 2. Trajectories of ϕ i in (31) and ϕ in (16).
Figure 3. Trajectories of r i in (8).
Figure 4. Trajectories of x i and x i * .
Figure 5. Trajectories of x i δ i and x * .
Figure 6. Trajectories of x 1 * , and x 1 with different initial states x 1 ( 0 ) .
Table 1. Comparison of Distributed Time-Varying Optimization.
| Related Work | Optimal Convergence Rate | Topology   | Dynamics  |
|--------------|--------------------------|------------|-----------|
| [11,28,37]   | Infinite time            | Undirected | Linear    |
| [13]         | Infinite time            | Undirected | Nonlinear |
| [15]         | Finite time              | Undirected | Linear    |
| [19]         | Fixed time               | Undirected | Linear    |
| This work    | Fixed time               | Directed   | Nonlinear |
Table 2. The Gradient of Each Time-Varying Cost Function, and its Two Partial Differential Operators.
| j | ∇f_j(x_j, t)             | ∂∇f_j(x_j, t)/∂t | ∇²f_j(x_j, t) |
|---|--------------------------|------------------|---------------|
| 1 | 2x_1 + 0.6e^(1−t)        | −0.6e^(1−t)      | 2             |
| 2 | 0.4 sin(πt)              | 0.4π cos(πt)     | 0             |
| 3 | 4x_3 − 0.6e^(1−t)        | 0.6e^(1−t)       | 4             |
| 4 | x_4 + 5                  | 0                | 1             |
| 5 | 4x_5 + 8                 | 0                | 4             |
| 0 | x_0 − 0.4 sin(πt) + 3    | −0.4π cos(πt)    | 1             |
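Assuming the global cost is the sum of the five local costs whose gradients appear in Table 2 (rows j = 1–5), the time-varying minimizer x*(t) follows in closed form, since the global gradient is affine in x: the e^(1−t) terms of agents 1 and 3 cancel, leaving 11x + 0.4 sin(πt) + 13 = 0. The sketch below (the function names are illustrative, not from the paper) verifies this numerically:

```python
import math

# Gradients of the five local cost functions, taken from Table 2 (rows j = 1..5).
grads = [
    lambda x, t: 2 * x + 0.6 * math.exp(1 - t),
    lambda x, t: 0.4 * math.sin(math.pi * t),
    lambda x, t: 4 * x - 0.6 * math.exp(1 - t),
    lambda x, t: x + 5,
    lambda x, t: 4 * x + 8,
]

def global_grad(x, t):
    """Gradient of the global cost: sum of the local gradients at consensus value x."""
    return sum(g(x, t) for g in grads)

def x_star(t):
    """Closed-form minimizer, from 11x + 0.4 sin(pi t) + 13 = 0."""
    return -(13 + 0.4 * math.sin(math.pi * t)) / 11

# The closed form zeros the global gradient at every sampled time.
for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(global_grad(x_star(t), t)) < 1e-12
```

This matches Figure 5's behavior, where the shifted states converge to a common time-varying trajectory x*(t).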

Share and Cite

MDPI and ACS Style

Wang, K.; Gong, P.; Ma, Z. Fixed-Time Distributed Time-Varying Optimization for Nonlinear Fractional-Order Multiagent Systems with Unbalanced Digraphs. Fractal Fract. 2023, 7, 813. https://doi.org/10.3390/fractalfract7110813
