Article

Improved Results on Delay-Dependent and Order-Dependent Criteria of Fractional-Order Neural Networks with Time Delay Based on Sampled-Data Control

1 School of Mathematics and Computer Science, Yunnan Minzu University, Kunming 650500, China
2 School of Media and Information Engineering, Yunnan Open University, Kunming 650504, China
3 Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650500, China
4 School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
* Authors to whom correspondence should be addressed.
Fractal Fract. 2023, 7(12), 876; https://doi.org/10.3390/fractalfract7120876
Submission received: 18 October 2023 / Revised: 22 November 2023 / Accepted: 5 December 2023 / Published: 11 December 2023

Abstract:
This paper studies the asymptotic stability of fractional-order neural networks (FONNs) with time delay utilizing a sampled-data controller. Firstly, a novel class of Lyapunov–Krasovskii functionals (LKFs) is established, in which time delay and fractional-order information are fully taken into account. Secondly, by combining the fractional-order Leibniz–Newton formula, the LKFs, and other analysis techniques, some less conservative stability criteria that depend on time delay and fractional-order information are given in terms of linear matrix inequalities (LMIs). In the meantime, the sampled-data controller gain is derived under a larger sampling interval. Finally, the proposed criteria are shown to be valid and less conservative than the existing ones using three numerical examples.

1. Introduction

The concept of fractional calculus was put forward almost simultaneously with that of classical integer-order calculus more than 300 years ago. Nevertheless, the development of fractional calculus was somewhat slow, and it was long studied as a purely mathematical theory because of its weak singularity and its lack of a precise geometric interpretation and application background. Fractional calculus did not become a worldwide hot topic in the area of engineering applications until Mandelbrot published his work on fractal theory in 1982 [1]. With its development, experts and scholars in many fields have pointed out that fractional calculus is an effective mathematical instrument for depicting the heredity and memory of real materials [2,3]. It is widely used in a number of distinct fields, for instance, biology, control systems, medical care, electromagnetic waves, information science, and economic systems [4,5,6,7].
Artificial neural networks are network systems constructed by imitating the microstructure of the human brain and the results of research on intelligent behavior. In recent years, fractional calculus has been introduced into the study of neural networks, and the FONNs model has been formed. FONNs have two benefits over classical neural networks [8,9,10]. To begin with, their unlimited memory makes them better at characterizing complicated systems and neurons, and they can describe system models with greater accuracy. Secondly, the selection of system parameters can be made more flexible, since fractional-order systems have more degrees of freedom [11]. An essential characteristic is that the fractional-order derivative depends on an infinite number of terms, whereas the integer-order derivative only represents a finite series. Because of this, the integer-order derivative is a local operator, whereas the fractional-order derivative has memory of all previous states. Therefore, the dynamic analysis of FONNs has attracted great interest among scholars, and a wealth of results have emerged. For example, the synchronization problem of fractional-order complex-valued neural networks with delay was discussed by employing linear feedback control and the comparison theorem of fractional-order linear delay systems [12]. A crucial property of the Caputo fractional-order derivative of a quadratic function was used to study the issue of robust finite-time guaranteed cost control for FONNs, based on finite-time stability theory [13]. With the use of fractional calculus and the fractional-order Razumikhin theorem, the passivity of uncertain FONNs with time-varying delay was investigated [14].
Effective control methods are very important from the viewpoint of control strategy for the analysis of FONNs with complex nonlinear dynamical characteristics, such as sliding mode control [15,16,17], impulsive control [17,18,19], state feedback control [20,21], and sampled-data control [22,23,24,25]. It is worth noting that sampled-data control, compared with other control strategies, can successfully reduce control costs and significantly increase the controller's usability and utilization. It is well known that a longer sampling interval brings benefits such as fewer controller actuations, less signal transmission, and lower communication channel utilization. An important problem in the study of FONNs stability with a sampled-data controller is therefore how to obtain a longer sampling period. In [26], the fractional-order Razumikhin theorem and LMIs were used to provide input-delay-dependent and order-dependent stability conditions, and a sampled-data controller was proposed in accordance with the stability requirement. The findings of the studies mentioned above are still conservative, so much work remains to be carried out in this area, and the question of how to obtain FONNs stability conditions with low conservatism is crucial. On the other hand, the presence of time delay [27,28,29] makes the analysis and synthesis of the system more complicated and difficult and also leads to the deterioration of system performance and even instability. Hence, studying the stability of FONNs with time delay via sampled-data control has significant theoretical implications.
Inspired by the aforementioned comments, this paper focuses on the controller design problem for FONNs with time delay using a new method. The findings of this study can serve as a foundation and a source of encouragement for the development of the theory of FONNs with time delay. The innovations of this paper are as follows:
A novel class of LKFs is established, in which time delay and fractional-order information are taken into account so as to reduce the conservatism of the stability criteria.
A new method is proposed to characterize, via free-weighting matrices, the relations among the terms of the fractional-order Leibniz–Newton formula for FONNs with time delay. Because the term $\frac{\varpi}{\Gamma(\delta)}\int_{t-\varpi}^{t}(t-u)^{\delta-1}\bigl({}_{t_0}^{C}D_u^{\delta}\xi(u)\bigr)^{T}W\bigl({}_{t_0}^{C}D_u^{\delta}\xi(u)\bigr)\,du$ is very difficult to deal with directly, more functionals would otherwise need to be constructed, which can be both conservative and computationally complex. Based on the proposed method, the estimation of this term can be avoided.
Compared with the existing results, a less conservative stability criterion for FONNs is established, which achieves a longer sampling period. Moreover, this method is applied to the stability analysis of fractional-order linear time-delay systems.
Based on the stability criteria obtained, the sampled-data controller of the FONNs is designed. The results are in terms of LMIs, which make computation and application easier.
The structure of this paper is as follows: In Section 2, the definitions, assumptions, and lemmas required for the stability analysis of FONNs with time delay are provided. The asymptotic stability conditions of FONNs with time delay are put forward, and a sampled-data controller is designed, in Section 3. Three numerical examples validate the rationality of the theoretical method in Section 4. Section 5 summarizes the work of this paper.

2. Preliminaries

Definition 1
([5]). The Caputo fractional-order derivative of order $\delta\in(0,1)$ for a function $\hat{y}(v)$ is
$${}_{t_0}^{C}D_v^{\delta}\hat{y}(v)=\frac{1}{\Gamma(m-\delta)}\int_{t_0}^{v}\frac{\hat{y}^{(m)}(\gamma)}{(v-\gamma)^{\delta-m+1}}\,d\gamma,$$
where $m=[\delta]+1$, $\Gamma(\cdot)$ is the Gamma function, and $\hat{y}(v)\in C^{m}([t_0,\infty),\mathbb{R}^{n})$.
Definition 2
([5]). For an integrable function $\hat{y}(v):[t_0,\infty)\to\mathbb{R}^{n}$, the fractional-order integral of order $\delta\in\mathbb{R}^{+}$ is given below:
$${}_{t_0}I_v^{\delta}\hat{y}(v)=\frac{1}{\Gamma(\delta)}\int_{t_0}^{v}(v-\gamma)^{\delta-1}\hat{y}(\gamma)\,d\gamma.$$
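For readers who wish to experiment numerically, the following is a minimal sketch (not part of the original paper) of how Definitions 1 and 2 can be approximated on a uniform grid for $\delta\in(0,1)$: the Caputo derivative via the standard L1 scheme and the fractional integral via a piecewise-constant quadrature. The function names, the grid, and the test function are illustrative choices only.

```python
# Minimal numerical sketch of Definitions 1 and 2 (illustrative, not from the paper).
import numpy as np
from math import gamma

def caputo_derivative(y, t, delta):
    """L1 approximation of the Caputo derivative of order delta in (0,1)."""
    h = t[1] - t[0]
    d = np.zeros(len(t))
    c = h ** (-delta) / gamma(2.0 - delta)
    for k in range(1, len(t)):
        j = np.arange(k)
        # weights (k-j)^(1-delta) - (k-j-1)^(1-delta) multiply the increments of y
        w = (k - j) ** (1 - delta) - (k - j - 1) ** (1 - delta)
        d[k] = c * np.sum(w * np.diff(y[:k + 1]))
    return d

def fractional_integral(y, t, delta):
    """Left-rectangle quadrature of the order-delta Riemann-Liouville integral."""
    h = t[1] - t[0]
    out = np.zeros(len(t))
    for k in range(1, len(t)):
        j = np.arange(k)
        # exact integral of (t_k - s)^(delta-1) over each subinterval [t_j, t_{j+1}]
        w = ((k - j) ** delta - (k - j - 1) ** delta) * h ** delta / delta
        out[k] = np.sum(w * y[j]) / gamma(delta)
    return out

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 2001)
    delta = 0.7
    y = t ** 2
    # Exact Caputo derivative of t^2 is 2 t^(2-delta) / Gamma(3-delta)
    exact = 2.0 * t ** (2 - delta) / gamma(3.0 - delta)
    approx = caputo_derivative(y, t, delta)
    print("max L1 error:", np.max(np.abs(approx - exact)))
    # Composing the two operators should recover y - y(0) (cf. Lemma 1, item 2)
    recon = fractional_integral(approx, t, delta)
    print("max composition error:", np.max(np.abs(recon - (y - y[0]))))
```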
Lemma 1
([5,30]). Some properties of the fractional-order integral and derivative:
  • For any $t_0\in[0,\infty)$, $\hat{y}(v)\in C^{1}([0,\infty),\mathbb{R})$ and $\delta\in(0,1)$, ${}_{t_0}^{C}D_v^{\delta}\bigl({}_{t_0}I_v^{\delta}\hat{y}(v)\bigr)=\hat{y}(v)$.
  • For any $t_0\in[0,\infty)$, $\hat{y}(v)\in C^{1}([0,\infty),\mathbb{R})$ and $\delta\in(0,1)$, ${}_{t_0}I_v^{\delta}\bigl({}_{t_0}^{C}D_v^{\delta}\hat{y}(v)\bigr)=\hat{y}(v)-\hat{y}(t_0)$.
  • ${}_{t_0}^{C}D_v^{\delta}\bigl(\hat{y}^{T}(v)Y\hat{y}(v)\bigr)\le 2\hat{y}^{T}(v)Y\bigl({}_{t_0}^{C}D_v^{\delta}\hat{y}(v)\bigr)$ for any $\hat{y}(v)\in\mathbb{R}^{n}$, where $\delta\in(0,1)$ and $Y$ is a symmetric positive definite matrix.
Lemma 2
([31]). If the symmetric matrix $Q>0$, then for any $\hat{y}(v)\in C^{1}([t_0,v],\mathbb{R}^{n})$ the following inequality is true:
$${}_{t_0}I_v^{\delta}\bigl(\hat{y}^{T}(v)Q\hat{y}(v)\bigr)\ge\frac{\Gamma(\delta+1)}{(v-t_0)^{\delta}}\bigl({}_{t_0}I_v^{\delta}\hat{y}(v)\bigr)^{T}Q\bigl({}_{t_0}I_v^{\delta}\hat{y}(v)\bigr).$$
Lemma 3
([32]). The symmetric matrix $\Theta=\begin{bmatrix}\Theta_{11}&\Theta_{12}\\ *&\Theta_{22}\end{bmatrix}<0$ if and only if the equivalent condition (1) or (2) holds:
  • $\Theta_{11}<0$, $\Theta_{22}-\Theta_{12}^{T}\Theta_{11}^{-1}\Theta_{12}<0$;
  • $\Theta_{22}<0$, $\Theta_{11}-\Theta_{12}\Theta_{22}^{-1}\Theta_{12}^{T}<0$.
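A quick numerical illustration (not from the paper) of the Schur complement equivalence in Lemma 3, checked on a randomly generated negative definite matrix:

```python
# Numerical check of Lemma 3 (Schur complement): for a symmetric block matrix Theta,
# Theta < 0 iff Theta_11 < 0 and Theta_22 - Theta_12^T Theta_11^{-1} Theta_12 < 0.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
Theta = -(M @ M.T + 0.1 * np.eye(4))            # a negative definite test matrix
T11, T12, T22 = Theta[:2, :2], Theta[:2, 2:], Theta[2:, 2:]

def is_neg_def(X):
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

schur = T22 - T12.T @ np.linalg.inv(T11) @ T12
print(is_neg_def(Theta), is_neg_def(T11) and is_neg_def(schur))   # True True
```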
Consider the following FONNs with time delay:
$${}_{t_0}^{C}D_t^{\delta}\xi(t)=-A\xi(t)+Bf(\xi(t))+C\xi(t-\sigma)+u(t),\qquad \xi(t)=\psi(t),\quad t\in[-\sigma,0],\tag{1}$$
where $\xi(t)=(\xi_1(t),\ldots,\xi_n(t))^{T}\in\mathbb{R}^{n}$, $f(\xi(t))=(f_1(\xi_1(t)),\ldots,f_n(\xi_n(t)))^{T}\in\mathbb{R}^{n}$, and $u(t)=(u_1(t),\ldots,u_n(t))^{T}$ stand for the state, the activation function, and the control input, respectively; the fractional order $\delta\in(0,1)$; $\sigma$ is a constant time delay; $A,B,C\in\mathbb{R}^{n\times n}$ are known constant matrices; and $\psi(t)$ denotes the initial condition.
To reduce data transmission as much as possible while maintaining the control performance required for the stability of the model (1), we develop a sampled-data controller, depicted below:
$$u(t)=K\xi(t_k),\qquad t_k\le t<t_{k+1},\tag{2}$$
where $K$ is the control gain to be designed. In this paper, the control signal is generated via a zero-order hold (ZOH) at the sampling instants $0=t_0<t_1<\cdots<t_k<\cdots$, with $\lim_{k\to+\infty}t_k=+\infty$. For any integer $k\ge 0$, the variable sampling intervals are defined as $0<t_{k+1}-t_k=\varpi_k\le\varpi$, where $\varpi$ is the upper bound on the sampling periods.
Substituting (2) into (1) and using the input time-varying delay approach, we obtain
$${}_{t_0}^{C}D_t^{\delta}\xi(t)=-A\xi(t)+Bf(\xi(t))+C\xi(t-\sigma)+K\xi(t-\varpi(t)),\qquad t>0,\tag{3}$$
where $\varpi(t)=t-t_k$, $t\in[t_k,t_{k+1})$. It is simple to see that $\varpi(t)<\varpi_k\le\varpi$.
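To illustrate how a closed loop of the form (3) behaves under the ZOH sampled-data input, the following sketch simulates the system with $C=0$ (i.e., without the $\sigma$-delay term) using the explicit fractional rectangle (Adams–Bashforth) method, while the control value $K\xi(t_k)$ is held constant between sampling instants. It assumes the sign convention $-A\xi(t)$ used above, and the matrices A_hyp, B_hyp, K_hyp, together with the step sizes, are hypothetical values chosen only to make the sketch self-contained.

```python
# Hedged simulation sketch of a ZOH sampled-data fractional-order network (not the
# authors' code): explicit fractional rectangle method for the Caputo system
# D^delta xi = -A xi + B tanh(xi) + K xi(t_k), with the input held between samples.
import numpy as np
from math import gamma

delta = 0.95            # fractional order
h = 1e-3                # integration step
varpi = 0.1             # sampling period of the ZOH (hypothetical)
T = 5.0
A_hyp = np.diag([2.0, 3.0])                    # hypothetical self-feedback matrix
B_hyp = np.array([[1.0, -0.5], [0.5, 1.0]])    # hypothetical connection weights
K_hyp = -0.5 * np.eye(2)                       # hypothetical sampled-data gain

n_steps = int(T / h)
xi = np.zeros((n_steps + 1, 2))
xi[0] = [0.9, -0.8]
f_hist = np.zeros_like(xi)                     # stores right-hand side values
xi_hold = xi[0].copy()                         # ZOH value xi(t_k)

for n in range(n_steps):
    if n % int(round(varpi / h)) == 0:         # sampling instant: refresh the ZOH
        xi_hold = xi[n].copy()
    u = K_hyp @ xi_hold
    f_hist[n] = -A_hyp @ xi[n] + B_hyp @ np.tanh(xi[n]) + u
    # fractional rectangle rule: xi(t_{n+1}) = xi(0) + weighted sum of past f values
    j = np.arange(n + 1)
    w = ((n + 1 - j) ** delta - (n - j) ** delta) * h ** delta / gamma(delta + 1)
    xi[n + 1] = xi[0] + w @ f_hist[: n + 1]

print("final state:", xi[-1])                  # near the origin if the loop is stable
```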
We require the following assumption before moving on:
Assumption 1.
The activation functions $f_i(\cdot)$ $(i=1,2,\ldots,n)$ are continuous and fulfill
$$l_i^{-}<\frac{f_i(\varsigma_1)-f_i(\varsigma_2)}{\varsigma_1-\varsigma_2}<l_i^{+},\qquad \varsigma_1\ne\varsigma_2,$$
where $l_i^{-}$ and $l_i^{+}$ are constants.
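As a small sanity check (illustrative, not from the paper), the difference quotients of $\tanh$, the activation used in the numerical examples later, stay within the sector $(0,1]$, which is consistent with the choice $L=\mathrm{diag}(1,1,1)$ used there:

```python
# Empirical check that tanh satisfies Assumption 1 with l^- = 0 and l^+ = 1.
import numpy as np

rng = np.random.default_rng(0)
s1 = rng.uniform(-5.0, 5.0, 10_000)
s2 = rng.uniform(-5.0, 5.0, 10_000)
mask = np.abs(s1 - s2) > 1e-9
q = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
print("difference quotients lie in:", (q.min(), q.max()))   # strictly inside (0, 1]
```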

3. Main Results

In this section, we discuss the stability of the model (3) under the sampled-data controller (2). The following theorem presents delay-dependent and order-dependent stability criteria for the model (3) in terms of LMIs.
Theorem 1.
For the given parameters $\varpi>0$, $\sigma>0$, and matrix $K$, the model (3) is asymptotically stable if there exist symmetric matrices $P>0$, $Q>0$, $M>0$, $E>0$, a diagonal matrix $W_1>0$, symmetric matrices $W\ge 0$, $H\ge 0$, $\Xi_{ii}\ge 0$, $\Pi_{ii}\ge 0$ $(i=1,2,3)$, and any matrices $S_i$, $N_i$ $(i=1,2,3)$, $\Xi_{ij}$, and $\Pi_{ij}$ $(1\le i<j\le 3)$ such that the following LMIs hold:
$$\Upsilon_1=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}\\ *&\Pi_{22}&\Pi_{23}\\ *&*&\Pi_{33}\end{bmatrix}\ge 0,\tag{5}$$
$$\Upsilon_2=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}&N_1\\ *&\Pi_{22}&\Pi_{23}&N_2\\ *&*&\Pi_{33}&N_3\\ *&*&*&W\end{bmatrix}\ge 0,\tag{6}$$
$$\Upsilon_3=\begin{bmatrix}\Xi_{11}&\Xi_{12}&\Xi_{13}&S_1\\ *&\Xi_{22}&\Xi_{23}&S_2\\ *&*&\Xi_{33}&S_3\\ *&*&*&H\end{bmatrix}\ge 0,\tag{7}$$
Υ 4 = 1 , 1 1 , 2 1 , 3 1 , 4 1 , 5 1 , 6 2 , 2 2 , 3 2 , 4 0 2 , 6 3 , 3 0 3 , 5 3 , 6 4 , 4 0 0 5 , 5 0 6 , 6 < 0 ,
where
1 , 1 = P A + A T P + Q + ϖ δ + 1 Γ ( δ + 1 ) A T W A + ϖ δ + 1 Γ ( δ + 1 ) A T M A + σ δ + 1 Γ ( δ + 1 ) A T H A + σ δ + 1 Γ ( δ + 1 ) A T E A + ϖ N 1 + ϖ N 1 T + σ S 1 + σ S 1 T + ϖ δ + 1 Γ ( δ + 1 ) 11 + σ δ + 1 Γ ( δ + 1 ) Ξ 11 + L T W 1 L ,
1 , 2 = P K + ϖ δ + 1 Γ ( δ + 1 ) A T W K + ϖ δ + 1 Γ ( δ + 1 ) A T M K + σ δ + 1 Γ ( δ + 1 ) A T H K + σ δ + 1 Γ ( δ + 1 ) A T E K ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 , 1 , 3 = P C + ϖ δ + 1 Γ ( δ + 1 ) A T W C + ϖ δ + 1 Γ ( δ + 1 ) A T M C + σ δ + 1 Γ ( δ + 1 ) A T H C + σ δ + 1 Γ ( δ + 1 ) A T E C σ S 1 + σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 12 , 1 , 4 = ϖ N 1 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 13 , 1 , 5 = σ S 1 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 13 , 1 , 6 = P B + ϖ δ + 1 Γ ( δ + 1 ) A T W B + ϖ δ + 1 Γ ( δ + 1 ) A T M B + σ δ + 1 Γ ( δ + 1 ) A T H B + σ δ + 1 Γ ( δ + 1 ) A T E B , 2 , 2 = ϖ δ + 1 Γ ( δ + 1 ) K T W K + ϖ δ + 1 Γ ( δ + 1 ) K T M K + σ δ + 1 Γ ( δ + 1 ) K T H K + σ δ + 1 Γ ( δ + 1 ) K T E K ϖ N 2 ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 22 , 2 , 3 = ϖ δ + 1 Γ ( δ + 1 ) K T W C + ϖ δ + 1 Γ ( δ + 1 ) K T M C + σ δ + 1 Γ ( δ + 1 ) K T H C + σ δ + 1 Γ ( δ + 1 ) K T E C , 2 , 4 = ϖ N 2 ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 23 , 2 , 6 = ϖ δ + 1 Γ ( δ + 1 ) K T W B + ϖ δ + 1 Γ ( δ + 1 ) K T M B + σ δ + 1 Γ ( δ + 1 ) K T H B + σ δ + 1 Γ ( δ + 1 ) K T E B , 3 , 3 = Q + ϖ δ + 1 Γ ( δ + 1 ) C T W C + ϖ δ + 1 Γ ( δ + 1 ) C T M C + σ δ + 1 Γ ( δ + 1 ) C T H C + σ δ + 1 Γ ( δ + 1 ) C T E C σ S 2 σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 22 , 3 , 5 = σ S 2 σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 23 , 3 , 6 = ϖ δ + 1 Γ ( δ + 1 ) C T W B + ϖ δ + 1 Γ ( δ + 1 ) C T M B + σ δ + 1 Γ ( δ + 1 ) C T H B + σ δ + 1 Γ ( δ + 1 ) C T E B , 4 , 4 = Γ ( δ + 1 ) ϖ δ 1 M + ϖ N 3 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 33 , 5 , 5 = Γ ( δ + 1 ) σ δ 1 E + σ S 3 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 33 , 6 , 6 = ϖ δ + 1 Γ ( δ + 1 ) B T W B + ϖ δ + 1 Γ ( δ + 1 ) B T M B + σ δ + 1 Γ ( δ + 1 ) B T H B + σ δ + 1 Γ ( δ + 1 ) B T E B W 1 .
Proof. 
For ease of use, we denote as
$$y(u)=\frac{1}{\Gamma(1-\delta)}\int_{t_0}^{t-\varpi(t)}(u-s)^{-\delta}\dot{\xi}(s)\,ds,\qquad p(u)=\frac{1}{\Gamma(1-\delta)}\int_{t_0}^{t-\sigma}(u-s)^{-\delta}\dot{\xi}(s)\,ds.$$
Then, select the LKFs listed below:
$$V_1(\xi(t))={}_{t_0}^{C}D_t^{-(1-\delta)}\bigl(\xi^{T}(t)P\xi(t)\bigr),$$
$$V_2(\xi(t))=\int_{t-\sigma}^{t}\xi^{T}(s)Q\xi(s)\,ds,$$
$$V_3(\xi(t))=\frac{\varpi}{\Gamma(\delta)}\int_{-\varpi}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)^{T}W\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)\,ds\,d\vartheta,$$
$$V_4(\xi(t))=\frac{\sigma}{\Gamma(\delta)}\int_{-\sigma}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)^{T}H\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)\,ds\,d\vartheta,$$
$$V_5(\xi(t))=\frac{\varpi}{\Gamma(\delta)}\int_{-\varpi}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}y^{T}(s)My(s)\,ds\,d\vartheta,$$
$$V_6(\xi(t))=\frac{\sigma}{\Gamma(\delta)}\int_{-\sigma}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}p^{T}(s)Ep(s)\,ds\,d\vartheta.$$
The time derivatives of $V_i(\xi(t))$ $(i=1,2,\ldots,6)$ along the model (3) are computed as follows:
$$\dot{V}_1(\xi(t))={}_{t_0}^{C}D_t^{\delta}\bigl(\xi^{T}(t)P\xi(t)\bigr),$$
as stated by Lemma 1, one has
$${}_{t_0}^{C}D_t^{\delta}\bigl(\xi^{T}(t)P\xi(t)\bigr)\le 2\xi^{T}(t)P\,{}_{t_0}^{C}D_t^{\delta}\xi(t)=2\xi^{T}(t)P\bigl(-A\xi(t)+Bf(\xi(t))+C\xi(t-\sigma)+K\xi(t-\varpi(t))\bigr),$$
and $\dot{V}_2(\xi(t))$, $\dot{V}_3(\xi(t))$ can be computed as
$$\dot{V}_2(\xi(t))=\xi^{T}(t)Q\xi(t)-\xi^{T}(t-\sigma)Q\xi(t-\sigma),$$
V ˙ 3 ( ξ ( t ) ) = ϖ Γ ( δ ) ϖ 0 ( ϑ ) δ 1 ( t 0 C D t δ ξ ( t ) ) T W t 0 C D t δ ξ ( t ) d ϑ ϖ Γ ( δ ) ϖ 0 ( ϑ ) δ 1 ( t 0 C D t + ϑ δ ξ ( t + ϑ ) ) T W ( t 0 C D t + ϑ δ ξ ( t + ϑ ) ) d ϑ = ϖ δ + 1 Γ ( δ + 1 ) ( t 0 C D t δ ξ ( t ) ) T W ( t 0 C D t δ ξ ( t ) ) ϖ Γ ( δ ) t ϖ t ( t u ) δ 1 ( t 0 C D u δ ξ ( u ) ) T W ( t 0 C D u δ ξ ( u ) ) d u ,
according to model (3), we can obtain ϖ ( t ) < ϖ , so one has
V ˙ 3 ( ξ ( t ) ) ϖ δ + 1 Γ ( δ + 1 ) ( t 0 C D t δ ξ ( t ) ) T W ( t 0 C D t δ ξ ( t ) ) ϖ Γ ( δ ) t ϖ ( t ) t ( t u ) δ 1 ( t 0 C D u δ ξ ( u ) ) T W ( t 0 C D u δ ξ ( u ) ) d u
= ϖ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T W × ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) ϖ Γ ( δ ) t ϖ ( t ) t ( t u ) δ 1 ( t ϖ ( t ) C D u δ ξ ( u ) + y ( u ) ) T W ( t ϖ ( t ) C D u δ ξ ( u ) + y ( u ) ) d u ,
we can obtain the derivative of V 4 ( ξ ( t ) ) as
V ˙ 4 ( ξ ( t ) ) = σ Γ ( δ ) σ 0 ( ϑ ) δ 1 ( t 0 C D t δ ξ ( t ) ) T H ( t 0 C D t δ ξ ( t ) ) d ϑ σ Γ ( δ ) σ 0 ( ϑ ) δ 1 ( t 0 C D t + ϑ δ ξ ( t + ϑ ) ) T H ( t 0 C D t + ϑ δ ξ ( t + ϑ ) ) d ϑ = σ δ + 1 Γ ( δ + 1 ) ( t 0 C D t δ ξ ( t ) ) T H ( t 0 C D t δ ξ ( t ) ) σ Γ ( δ ) t σ t ( t u ) δ 1 ( t 0 C D u δ ξ ( u ) ) T H ( t 0 C D u δ ξ ( u ) ) d u = σ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T H × ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) σ Γ ( δ ) t σ t ( t u ) δ 1 ( t σ C D u δ ξ ( u ) + p ( u ) ) T H ( t σ C D u δ ξ ( u ) + p ( u ) ) d u ,
on the basis of Lemma 1, the V ˙ 3 ( ξ ( t ) ) , V ˙ 4 ( ξ ( t ) ) are equal to
V ˙ 3 ( ξ ( t ) ) = ϖ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T W × ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) ϖ Γ ( δ ) t ϖ ( t ) t ( t u ) δ 1 ( t ϖ ( t ) C D u δ ( ξ ( u ) + t ϖ ( t ) I u δ y ( u ) ) ) T W × ( t ϖ ( t ) C D u δ ( ξ ( u ) + t ϖ ( t ) I u δ y ( u ) ) ) d u ,
V ˙ 4 ( ξ ( t ) ) = σ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T H × ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) σ Γ ( δ ) t σ t ( t u ) δ 1 ( t σ C D u δ ( ξ ( u ) + t σ I u δ p ( u ) ) ) T H × ( t σ C D u δ ( ξ ( u ) + t σ I u δ p ( u ) ) ) d u ,
furthermore, one can obtain
V ˙ 5 ( ξ ( t ) ) = ϖ Γ ( δ ) ϖ 0 ( ϑ ) δ 1 y T ( t ) M y ( t ) d ϑ ϖ Γ ( δ ) ϖ 0 ( ϑ ) δ 1 y T ( t + ϑ ) M y ( t + ϑ ) d ϑ = ϖ δ + 1 Γ ( δ + 1 ) y T ( t ) M y ( t ) ϖ Γ ( δ ) t ϖ t ( t u ) δ 1 y T ( u ) M y ( u ) d u = ϖ δ + 1 Γ ( δ + 1 ) y T ( t ) M y ( t ) ϖ t ϖ I t δ ( y T ( t ) M y ( t ) )
= ϖ δ + 1 Γ ( δ + 1 ) ( 1 Γ ( 1 δ ) t 0 t ϖ ( t ) ( t s ) δ ξ ˙ ( s ) d s ) T M × ( 1 Γ ( 1 δ ) t 0 t ϖ ( t ) ( t s ) δ ξ ˙ ( s ) d s ) ϖ t ϖ I t δ ( y T ( t ) M y ( t ) ) ,
V ˙ 6 ( ξ ( t ) ) = σ Γ ( δ ) σ 0 ( ϑ ) δ 1 p T ( t ) E p ( t ) d ϑ σ Γ ( δ ) σ 0 ( ϑ ) δ 1 p T ( t + ϑ ) E p ( t + ϑ ) d ϑ = σ δ + 1 Γ ( δ + 1 ) p T ( t ) E p ( t ) σ Γ ( δ ) t σ t ( t u ) δ 1 p T ( u ) E p ( u ) d u = σ δ + 1 Γ ( δ + 1 ) p T ( t ) E p ( t ) σ t σ I t δ p T ( t ) E p ( t ) = σ δ + 1 Γ ( δ + 1 ) ( 1 Γ ( 1 δ ) t 0 t σ ( t s ) δ ξ ˙ ( s ) d s ) T E ( 1 Γ ( 1 δ ) t 0 t σ ( t s ) δ ξ ˙ ( s ) d s ) σ t σ I t δ ( p T ( t ) E p ( t ) ) ,
because t ϖ ( t ) < t , t σ < t , ϖ ( t ) < ϖ , the V ˙ 5 ( ξ ( t ) ) , V ˙ 6 ( ξ ( t ) ) can be scaled to
V ˙ 5 ( ξ ( t ) ) ϖ δ + 1 Γ ( δ + 1 ) ( 1 Γ ( 1 δ ) t 0 t ( t s ) δ ξ ˙ ( s ) d s ) T M ( 1 Γ ( 1 δ ) t 0 t ( t s ) δ ξ ˙ ( s ) d s ) ϖ t ϖ ( t ) I t δ ( y T ( t ) M y ( t ) ) = ϖ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T M
× ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) ϖ t ϖ ( t ) I t δ ( y T ( t ) M y ( t ) ) ,
V ˙ 6 ( ξ ( t ) ) σ δ + 1 Γ ( δ + 1 ) ( 1 Γ ( 1 δ ) t 0 t ( t s ) δ ξ ˙ ( s ) d s ) T E ( 1 Γ ( 1 δ ) t 0 t ( t s ) δ ξ ˙ ( s ) d s ) σ t σ I t δ ( p T ( t ) E p ( t ) ) = ϖ δ + 1 Γ ( δ + 1 ) ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) T E × ( A ξ ( t ) + B f ( ξ ( t ) ) + C ξ ( t σ ) + K ξ ( t ϖ ( t ) ) ) σ t σ I t δ ( p T ( t ) E p ( t ) ) .
In accordance with Lemma 2
ϖ t ϖ ( t ) I t δ ( y T ( t ) M y ( t ) ) ϖ Γ ( δ + 1 ) ϖ ( t ) δ ( t ϖ ( t ) I t δ y ( t ) ) T M ( t ϖ ( t ) I t δ y ( t ) ) ,
σ t σ I t δ ( p T ( t ) E p ( t ) ) Γ ( δ + 1 ) σ δ 1 ( t σ I t δ p ( t ) ) T E ( t σ I t δ p ( t ) ) .
In order to allow LMIs to be solved, based on ϖ ( t ) < ϖ , we can obtain
ϖ t ϖ ( t ) I t δ ( y T ( t ) M y ( t ) ) Γ ( δ + 1 ) ϖ δ 1 ( t ϖ ( t ) I t δ y ( t ) ) T M ( t ϖ ( t ) I t δ y ( t ) ) .
From Assumption 1, for any diagonal matrix $W_1>0$, the following can be deduced:
$$\xi^{T}(t)LW_1L\xi(t)-f^{T}(\xi(t))W_1f(\xi(t))\ge 0,$$
where $L=\mathrm{diag}(l_1,l_2,\ldots,l_n)$.
Using the fractional-order Leibniz–Newton formula, it is clear that, for any matrices N i , S i ( 1 , 2 , 3 ) , the equations below are correct:
2 [ ξ T ( t ) N 1 + ξ T ( t ϖ ( t ) ) N 2 + t ϖ ( t ) I t δ y T ( t ) N 3 ] × [ ϖ ξ ( t ) + ϖ t ϖ ( t ) I t δ y ( t ) ϖ ξ ( t ϖ ( t ) ) ϖ Γ ( δ ) t ϖ ( t ) t ( t s ) δ 1 ( t ϖ ( t ) C D u δ ( ξ ( u ) + t ϖ ( t ) I u δ y ( u ) ) d u ] = 0 ,
2 [ ξ T ( t ) S 1 + ξ T ( t σ ) S 2 + t σ I t δ p T ( t ) S 3 ] × [ σ ξ ( t ) + σ t σ I t δ p ( t ) σ ξ ( t σ ) σ Γ ( δ ) t σ t ( t s ) δ 1 ( t σ C D u δ ( ξ ( u ) + t σ I u δ p ( u ) ) d u ] = 0 .
On the contrary, for any matrices i i = i i T 0 , Ξ i i = Ξ i i T 0 ( i = 1 , 2 ) , Υ 1 0 , and i j , Ξ i j ( 1 i < j 3 ) , the equations below hold:
ξ ( t ) ξ ( t ϖ ( t ) ) t ϖ ( t ) I t δ y ( t ) T Ω 11 Ω 12 Ω 13 Ω 22 Ω 23 Ω 33 ξ ( t ) ξ ( t ϖ ( t ) ) t ϖ ( t ) I t δ y ( t ) 0 ,
ξ ( t ) ξ ( t σ ) t σ I t δ p ( t ) T Δ 11 Δ 12 Δ 13 Δ 22 Δ 23 Δ 33 ξ ( t ) ξ ( t σ ) t σ I t δ p ( t ) = 0 ,
where Ω i j = ϖ δ + 1 Γ ( δ + 1 ) i j ϖ ϖ ( t ) δ Γ ( δ + 1 ) i j , Δ i j = σ δ + 1 Γ ( δ + 1 ) ( Ξ i j Ξ i j ) , ( 1 i j 3 ) .
Combining (17)–(35), one has
$$\dot{V}(\xi(t))\le\zeta_1^{T}(t)\Upsilon_4\zeta_1(t)-\frac{\varpi}{\Gamma(\delta)}\int_{t-\varpi(t)}^{t}(t-u)^{\delta-1}\zeta_2^{T}(t,u)\Upsilon_2\zeta_2(t,u)\,du-\frac{\sigma}{\Gamma(\delta)}\int_{t-\sigma}^{t}(t-u)^{\delta-1}\zeta_3^{T}(t,u)\Upsilon_3\zeta_3(t,u)\,du,$$
where
$$\zeta_1(t)=\bigl[\xi^{T}(t),\ \xi^{T}(t-\varpi(t)),\ \xi^{T}(t-\sigma),\ {}_{t-\varpi(t)}I_t^{\delta}y^{T}(t),\ {}_{t-\sigma}I_t^{\delta}p^{T}(t),\ f^{T}(\xi(t))\bigr]^{T},$$
$$\zeta_2(t,u)=\bigl[\xi^{T}(t),\ \xi^{T}(t-\varpi(t)),\ {}_{t-\varpi(t)}I_t^{\delta}y^{T}(t),\ \bigl({}_{t-\varpi(t)}^{C}D_u^{\delta}\bigl(\xi(u)+{}_{t-\varpi(t)}I_u^{\delta}y(u)\bigr)\bigr)^{T}\bigr]^{T},$$
$$\zeta_3(t,u)=\bigl[\xi^{T}(t),\ \xi^{T}(t-\sigma),\ {}_{t-\sigma}I_t^{\delta}p^{T}(t),\ \bigl({}_{t-\sigma}^{C}D_u^{\delta}\bigl(\xi(u)+{}_{t-\sigma}I_u^{\delta}p(u)\bigr)\bigr)^{T}\bigr]^{T},$$
and $\Upsilon_1$, $\Upsilon_2$, $\Upsilon_3$, and $\Upsilon_4$ are defined in (5)–(8). If (5)–(8) hold, then $\dot{V}(\xi(t))<0$ for any $\zeta_1(t)\ne 0$. Therefore, the model (3) is asymptotically stable. □
Condition (8) contains nonlinear terms in the unknown gain matrix $K$ (such as $PK$). However, the subsequent Theorem 2 enables it to be transformed into LMIs.
Theorem 2.
For the given parameters $\varpi>0$ and $\sigma>0$, the model (3) is asymptotically stable if there exist symmetric matrices $P>0$, $Q>0$, $M>0$, $E>0$, a diagonal matrix $W_1>0$, symmetric matrices $W\ge 0$, $H\ge 0$, $\Xi_{ii}\ge 0$, $\Pi_{ii}\ge 0$ $(i=1,2,3)$, and any matrices $Y$, $S_i$, $N_i$ $(i=1,2,3)$, $\Xi_{ij}$, and $\Pi_{ij}$ $(1\le i<j\le 3)$ such that the following LMIs hold:
$$\hat{\Upsilon}_1=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}\\ *&\Pi_{22}&\Pi_{23}\\ *&*&\Pi_{33}\end{bmatrix}\ge 0,$$
$$\hat{\Upsilon}_2=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}&N_1\\ *&\Pi_{22}&\Pi_{23}&N_2\\ *&*&\Pi_{33}&N_3\\ *&*&*&W\end{bmatrix}\ge 0,$$
$$\hat{\Upsilon}_3=\begin{bmatrix}\Xi_{11}&\Xi_{12}&\Xi_{13}&S_1\\ *&\Xi_{22}&\Xi_{23}&S_2\\ *&*&\Xi_{33}&S_3\\ *&*&*&H\end{bmatrix}\ge 0,$$
Υ ^ 4 = ˜ 1 , 1 ˜ 1 , 2 ˜ 1 , 3 ˜ 1 , 4 ˜ 1 , 5 ˜ 1 , 6 ˜ 1 , 7 ˜ 1 , 8 ˜ 1 , 9 ˜ 1 , 10 ˜ 2 , 2 0 ˜ 2 , 4 0 0 ˜ 2 , 7 ˜ 2 , 8 ˜ 2 , 9 ˜ 2 , 10 ˜ 3 , 3 0 ˜ 3 , 5 0 ˜ 3 , 7 ˜ 3 , 8 ˜ 3 , 9 ˜ 3 , 10 ˜ 4 , 4 0 0 0 0 0 0 ˜ 5 , 5 0 0 0 0 0 ˜ 6 , 6 ˜ 6 , 7 ˜ 6 , 8 ˜ 6 , 9 ˜ 6 , 10 ˜ 7 , 7 0 0 0 ˜ 8 , 8 0 0 ˜ 9 , 9 0 ˜ 10 , 10 < 0 ,
where
˜ 1 , 1 = P A + A T P + Q + ϖ N 1 + ϖ N 1 T + σ S 1 + σ S 1 T + ϖ δ + 1 Γ ( δ + 1 ) 11 + σ δ + 1 Γ ( δ + 1 ) Ξ 11 + L T W 1 L , ˜ 1 , 2 = Y ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 , ˜ 1 , 3 = P C σ S 1 + σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 12 , ˜ 1 , 4 = ϖ N 1 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 13 , ˜ 15 = σ S 1 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 13 , ˜ 1 , 6 = P B , ˜ 1 , 7 = P A T , ˜ 1 , 8 = P A T , ˜ 1 , 9 = P A T , ˜ 1 , 10 = P A T , ˜ 2 , 2 = ϖ N 2 ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 22 , ˜ 2 , 4 = ϖ N 2 ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 23 , ˜ 2 , 7 = Y T , ˜ 2 , 8 = Y T , ˜ 2 , 9 = Y T , ˜ 2 , 10 = Y T , ˜ 3 , 3 = Q σ S 2 σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 22 , ˜ 3 , 5 = σ S 2 σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 23 , ˜ 3 , 7 = P C T , ˜ 3 , 8 = P C T , ˜ 3 , 9 = P C T , ˜ 3 , 10 = P C T , ˜ 4 , 4 = Γ ( δ + 1 ) ϖ δ 1 M + ϖ N 3 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 33 , ˜ 5 , 5 = Γ ( δ + 1 ) σ δ 1 E + σ S 3 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 33 , ˜ 6 , 6 = W 1 , ˜ 6 , 7 = P B T , ˜ 6 , 8 = P B T ,
˜ 6 , 9 = P B T , ˜ 6 , 10 = P B T , ˜ 7 , 7 = Γ ( δ + 1 ) ϖ δ + 1 ( W 2 P ) , ˜ 8 , 8 = Γ ( δ + 1 ) ϖ δ + 1 ( M 2 P ) , ˜ 9 , 9 = Γ ( δ + 1 ) σ δ + 1 ( H 2 P ) , ˜ 10 , 10 = Γ ( δ + 1 ) σ δ + 1 ( E 2 P ) ,
in addition, the expected gain matrix is provided by K = P 1 Y .
Proof. 
By employing Lemma 3, it is possible to rewrite the condition (8) as
Υ ˜ 4 = ˜ 1 , 1 ^ 1 , 2 ˜ 1 , 3 ˜ 1 , 4 ˜ 1 , 5 ˜ 1 , 6 ^ 1 , 7 ^ 1 , 8 ^ 1 , 9 ^ 1 , 10 ˜ 2 , 2 0 ˜ 2 , 4 0 0 ^ 2 , 7 ^ 2 , 8 ^ 2 , 9 ^ 2 , 10 ˜ 3 , 3 0 ˜ 3 , 5 0 ^ 3 , 7 ^ 3 , 8 ^ 3 , 9 ^ 3 , 10 ˜ 4 , 4 0 0 0 0 0 0 ˜ 5 , 5 0 0 0 0 0 ˜ 6 , 6 ^ 6 , 7 ^ 6 , 8 ^ 6 , 9 ^ 6 , 10 ^ 7 , 7 0 0 0 ^ 8 , 8 0 0 ^ 9 , 9 0 ^ 10 , 10 < 0 ,
where ^ 1 , 2 = P K ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 ,   ^ 1 , 7 = A T ,   ^ 2 , 7 = K T ,   ^ 3 , 7 = C T , ^ 6 , 7 = B T ,   ^ 1 , 8 = A T ,   ^ 2 , 8 = K T ,   ^ 3 , 8 = C T ,   ^ 6 , 8 = B T ,   ^ 1 , 9 = A T ,   ^ 2 , 9 = K T , ^ 3 , 9 = C T ,   ^ 6 , 9 = B T ,   ^ 1 , 10 = A T ,   ^ 2 , 10 = K T ,   ^ 3 , 10 = C T ,   ^ 6 , 10 = B T , ^ 7 , 7 = Γ ( δ + 1 ) ϖ δ + 1 W 1 , ^ 8 , 8 = Γ ( δ + 1 ) ϖ δ + 1 M 1 , ^ 9 , 9 = Γ ( δ + 1 ) σ δ + 1 H 1 , ^ 10 , 10 = Γ ( δ + 1 ) σ δ + 1 E 1 .
Multiplying both sides of the inequality (41) by a block-diagonal matrix whose last four diagonal blocks are $P$ and whose remaining diagonal blocks are identity matrices, and letting $Y=PK$, one can conclude that condition (40) holds. □
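In practice, conditions of this kind are checked with a semidefinite programming solver (the paper uses the MATLAB LMI toolbox in Section 4). As a hedged illustration of the general workflow, rather than of the specific conditions of Theorem 2, the sketch below solves a much simpler classical state-feedback stabilization LMI ($AX+XA^{T}+BY+Y^{T}B^{T}<0$, $X>0$) with CVXPY and recovers the gain through the same kind of linearizing change of variables that Theorem 2 uses with $Y=PK$. The plant matrices, tolerance, and solver choice are hypothetical.

```python
# Hedged sketch of LMI-based gain synthesis with CVXPY (not the paper's LMIs).
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # hypothetical plant matrix
B = np.array([[0.0], [1.0]])              # hypothetical input matrix

n, m = A.shape[0], B.shape[1]
X = cp.Variable((n, n), symmetric=True)   # Lyapunov-like decision matrix
Y = cp.Variable((m, n))                   # Y = K X, the linearizing change of variables
eps = 1e-3
cons = [X >> eps * np.eye(n),
        A @ X + X @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(X.value)      # recover the state-feedback gain
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```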
For comparison, a result without the delayed state $\xi(t-\sigma)$ is also provided. In this case, the model (3) can be rewritten as
$${}_{t_0}^{C}D_t^{\delta}\xi(t)=-A\xi(t)+Bf(\xi(t))+K\xi(t-\varpi(t)).\tag{42}$$
Next, we discuss the stability of the model (42) under the sampled-data controller (2). The following corollary gives the LMIs of the delay-dependent and order-dependent stability criterion for the model (42).
Corollary 1.
For the given parameter $\varpi>0$ and matrix $K$, the model (42) is asymptotically stable if there exist symmetric matrices $P>0$, $M>0$, a diagonal matrix $W_1>0$, symmetric matrices $W\ge 0$, $\Pi_{ii}\ge 0$ $(i=1,2,3)$, and any matrices $N_i$ $(i=1,2,3)$ and $\Pi_{ij}$ $(1\le i<j\le 3)$ such that the following LMIs hold:
$$\pi_1=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}\\ *&\Pi_{22}&\Pi_{23}\\ *&*&\Pi_{33}\end{bmatrix}\ge 0,\tag{43}$$
$$\pi_2=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}&N_1\\ *&\Pi_{22}&\Pi_{23}&N_2\\ *&*&\Pi_{33}&N_3\\ *&*&*&W\end{bmatrix}\ge 0,\tag{44}$$
$$\pi_3=\begin{bmatrix}\Omega_{11}&\Omega_{12}&\Omega_{13}&\Omega_{14}\\ *&\Omega_{22}&\Omega_{23}&\Omega_{24}\\ *&*&\Omega_{33}&0\\ *&*&*&\Omega_{44}\end{bmatrix}<0,\tag{45}$$
where
Ω 11 = P A + A T P + ϖ δ + 1 Γ ( δ + 1 ) A T W A + ϖ δ + 1 Γ ( δ + 1 ) A T M A + L T W 1 L + ϖ N 1 + ϖ N 1 T + ϖ δ + 1 Γ ( δ + 1 ) 11 , Ω 12 = P K + ϖ δ + 1 Γ ( δ + 1 ) A T W K + ϖ δ + 1 Γ ( δ + 1 ) A T M K ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 , Ω 13 = ϖ N 1 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 13 , Ω 14 = P B + ϖ δ + 1 Γ ( δ + 1 ) A T W B + ϖ δ + 1 Γ ( δ + 1 ) A T M B , Ω 22 = ϖ δ + 1 Γ ( δ + 1 ) K T W K + ϖ δ + 1 Γ ( δ + 1 ) K T M K ϖ N 2 ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 22 , Ω 23 = ϖ N 2 ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 23 , Ω 24 = ϖ δ + 1 Γ ( δ + 1 ) K T W B + ϖ δ + 1 Γ ( δ + 1 ) K T M B ,
Ω 33 = Γ ( δ + 1 ) ϖ δ 1 M + ϖ N 3 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 33 , Ω 44 = ϖ δ + 1 Γ ( δ + 1 ) B T W B + ϖ δ + 1 Γ ( δ + 1 ) B T M B W 1 .
Proof. 
Based on the LKFs mentioned above, we only select the following LKFs:
$$V_1(\xi(t))={}_{t_0}^{C}D_t^{-(1-\delta)}\bigl(\xi^{T}(t)P\xi(t)\bigr),$$
$$V_3(\xi(t))=\frac{\varpi}{\Gamma(\delta)}\int_{-\varpi}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)^{T}W\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)\,ds\,d\vartheta,$$
$$V_5(\xi(t))=\frac{\varpi}{\Gamma(\delta)}\int_{-\varpi}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}y^{T}(s)My(s)\,ds\,d\vartheta,$$
and combine them with (31), (32), and (34). The proof resembles that of Theorem 1. One obtains
$$\dot{V}(\xi(t))\le\hat{\zeta}_1^{T}(t)\pi_3\hat{\zeta}_1(t)-\frac{\varpi}{\Gamma(\delta)}\int_{t-\varpi(t)}^{t}(t-u)^{\delta-1}\hat{\zeta}_2^{T}(t,u)\pi_2\hat{\zeta}_2(t,u)\,du,$$
where
$$\hat{\zeta}_1(t)=\bigl[\xi^{T}(t),\ \xi^{T}(t-\varpi(t)),\ {}_{t-\varpi(t)}I_t^{\delta}y^{T}(t),\ f^{T}(\xi(t))\bigr]^{T},$$
$$\hat{\zeta}_2(t,u)=\bigl[\xi^{T}(t),\ \xi^{T}(t-\varpi(t)),\ {}_{t-\varpi(t)}I_t^{\delta}y^{T}(t),\ \bigl({}_{t-\varpi(t)}^{C}D_u^{\delta}\bigl(\xi(u)+{}_{t-\varpi(t)}I_u^{\delta}y(u)\bigr)\bigr)^{T}\bigr]^{T},$$
and $\pi_1$, $\pi_2$, and $\pi_3$ are defined in (43)–(45). If $\pi_1\ge 0$, $\pi_2\ge 0$, and $\pi_3<0$, then $\dot{V}(\xi(t))<0$ for any $\hat{\zeta}_1(t)\ne 0$. So, the model (42) is asymptotically stable. □
Similar to Theorem 1, Corollary 1 also contains the nonlinear term $PK$; we therefore transform inequality (45) into LMIs that can be solved directly with the MATLAB LMI toolbox, as stated in Corollary 2.
Corollary 2.
For the given parameter $\varpi>0$, the model (42) is asymptotically stable if there exist symmetric matrices $P>0$, $M>0$, a diagonal matrix $W_1>0$, symmetric matrices $W\ge 0$, $\Pi_{ii}\ge 0$ $(i=1,2,3)$, and any matrices $Y$, $N_i$ $(i=1,2,3)$, and $\Pi_{ij}$ $(1\le i<j\le 3)$ such that the following LMIs hold:
$$\hat{\pi}_1=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}\\ *&\Pi_{22}&\Pi_{23}\\ *&*&\Pi_{33}\end{bmatrix}\ge 0,\tag{50}$$
$$\hat{\pi}_2=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}&N_1\\ *&\Pi_{22}&\Pi_{23}&N_2\\ *&*&\Pi_{33}&N_3\\ *&*&*&W\end{bmatrix}\ge 0,\tag{51}$$
$$\hat{\pi}_3=\begin{bmatrix}\tilde{\Omega}_{11}&\tilde{\Omega}_{12}&\tilde{\Omega}_{13}&\tilde{\Omega}_{14}&\tilde{\Omega}_{15}&\tilde{\Omega}_{16}\\ *&\tilde{\Omega}_{22}&\tilde{\Omega}_{23}&0&\tilde{\Omega}_{25}&\tilde{\Omega}_{26}\\ *&*&\tilde{\Omega}_{33}&0&0&0\\ *&*&*&\tilde{\Omega}_{44}&\tilde{\Omega}_{45}&\tilde{\Omega}_{46}\\ *&*&*&*&\tilde{\Omega}_{55}&0\\ *&*&*&*&*&\tilde{\Omega}_{66}\end{bmatrix}<0,\tag{52}$$
where
Ω ˜ 11 = P A + A T P + L T W 1 L + ϖ N 1 + ϖ N 1 T + ϖ δ + 1 Γ ( δ + 1 ) 11 , Ω ˜ 12 = Y ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 , Ω ˜ 13 = ϖ N 1 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 13 , Ω ˜ 14 = P B , Ω ˜ 15 = P A T , Ω ˜ 16 = P A T , Ω ˜ 22 = ϖ N 2 ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 22 , Ω ˜ 23 = ϖ N 2 ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 23 , Ω ˜ 25 = Y T , Ω ˜ 26 = Y T , Ω ˜ 33 = Γ ( δ + 1 ) ϖ δ 1 M + ϖ N 3 + ϖ N 3 T + ϖ δ + 1 Γ ( δ + 1 ) 33 , Ω ˜ 44 = W 1 , Ω ˜ 45 = P B T , Ω ˜ 46 = P B T , Ω ˜ 55 = Γ ( δ + 1 ) ϖ δ + 1 ( W 2 P ) , Ω ˜ 66 = Γ ( δ + 1 ) ϖ δ + 1 ( M 2 P ) .
Proof. 
By utilizing Lemma 3, the inequality (45) can be transformed into
˜ 3 = Ω ˜ 11 Ω ^ 12 Ω ˜ 13 Ω ^ 14 Ω ^ 15 Ω ^ 16 Ω ˜ 22 Ω ˜ 23 0 Ω ^ 25 Ω ^ 26 Ω ˜ 33 0 0 0 Ω ˜ 44 Ω ^ 45 Ω ^ 46 Ω ^ 55 0 Ω ^ 66 < 0 ,
where Ω ^ 12 = P K ϖ N 1 + ϖ N 2 T + ϖ δ + 1 Γ ( δ + 1 ) 12 , Ω ^ 15 = A T , Ω ^ 16 = A T , Ω ^ 25 = K T , Ω ^ 26 = K T , Ω ^ 45 = B T , Ω ^ 46 = B T , Ω ^ 55 = Γ ( δ + 1 ) ϖ δ + 1 W 1 , Ω ^ 66 = Γ ( δ + 1 ) ϖ δ + 1 M 1 .
Multiplying both sides of the inequality (53) by a block-diagonal matrix whose last two diagonal blocks are $P$ and whose remaining diagonal blocks are identity matrices, and letting $Y=PK$, one can conclude that condition (52) holds. □
In order to further validate the approach, the model (3) without the activation term $f(\xi(t))$ and with $u(t)=0$ degenerates into the following fractional-order linear time-delay system:
$${}_{t_0}^{C}D_t^{\delta}\xi(t)=-A\xi(t)+C\xi(t-\sigma),\qquad \xi(t)=\psi(t),\quad t\in[-\sigma,0].\tag{54}$$
The corollary below provides a stability criterion for the model (54), which is less conservative and straightforward to verify.
Corollary 3.
For the given parameter $\sigma>0$, the model (54) is asymptotically stable if there exist symmetric matrices $P>0$, $Q>0$, $E>0$, symmetric matrices $H\ge 0$, $\Xi_{ii}\ge 0$ $(i=1,2,3)$, and any matrices $S_i$ $(i=1,2,3)$ and $\Xi_{ij}$ $(1\le i<j\le 3)$ such that the following LMIs hold:
1 = Ξ 11 Ξ 12 Ξ 13 S 1 Ξ 22 Ξ 23 S 2 Ξ 33 S 3 H 0 ,
2 = 11 12 13 22 23 33 < 0 ,
where
11 = P A + A T P + Q + σ δ + 1 Γ ( δ + 1 ) A T H A + σ δ + 1 Γ ( δ + 1 ) A T E A + σ S 1 + σ S 1 T + σ δ + 1 Γ ( δ + 1 ) Ξ 11 , 12 = P C + σ δ + 1 Γ ( δ + 1 ) A T H C + σ δ + 1 Γ ( δ + 1 ) A T E C σ S 1 + σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 12 , 13 = σ S 1 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 13 , 22 = Q + σ δ + 1 Γ ( δ + 1 ) C T H C + σ δ + 1 Γ ( δ + 1 ) B T E C σ S 2 σ S 2 T + σ δ + 1 Γ ( δ + 1 ) Ξ 22 , 23 = σ S 2 σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 23 , 33 = Γ ( δ + 1 ) σ δ 1 E + σ S 3 + σ S 3 T + σ δ + 1 Γ ( δ + 1 ) Ξ 33 .
Proof. 
Only the following LKFs are chosen, depending on the previously listed LKFs:
$$V_1(\xi(t))={}_{t_0}^{C}D_t^{-(1-\delta)}\bigl(\xi^{T}(t)P\xi(t)\bigr),$$
$$V_2(\xi(t))=\int_{t-\sigma}^{t}\xi^{T}(s)Q\xi(s)\,ds,$$
$$V_4(\xi(t))=\frac{\sigma}{\Gamma(\delta)}\int_{-\sigma}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)^{T}H\bigl({}_{t_0}^{C}D_s^{\delta}\xi(s)\bigr)\,ds\,d\vartheta,$$
$$V_6(\xi(t))=\frac{\sigma}{\Gamma(\delta)}\int_{-\sigma}^{0}(-\vartheta)^{\delta-1}\int_{t+\vartheta}^{t}p^{T}(s)Ep(s)\,ds\,d\vartheta.$$
And utilizing (33) and (35), similarly to Theorem 1, we have
V ˙ ( ξ ( t ) ) = ζ ˜ 1 T ( t ) 2 ζ ˜ 1 ( t ) σ Γ ( δ ) t σ t ( t u ) δ 1 ζ ˜ 2 T ( t , u ) 1 ζ ˜ 2 ( t , u ) d u ,
where
ζ ˜ 1 ( t ) = ξ T ( t ) , ξ T ( t σ ) , t σ I t δ p T ( t ) T , ζ ˜ 2 ( t , u ) = ξ T ( t ) , ξ T ( t σ ) , t σ I t δ p T ( t ) , t σ C D u δ ( ξ ( u ) + t σ I u δ p ( u ) ) T T ,
1 and 2 are defined in (55) and (56). If 1 0 and 2 < 0 , then V ˙ ( ξ ( t ) ) < 0 for any ζ ˜ 1 ( t ) 0 . So, the model (54) is asymptotically stable. □
Remark 1.
In Equations (20) and (21), the lower bounds of the integrals are $t-\varpi(t)$ and $t-\sigma$, while the lower bound of the fractional-order derivative is $t_0$. According to the properties of fractional calculus, the fractional-order Leibniz–Newton formula cannot be applied directly, so a transformation is carried out. Furthermore, the lower bounds of the integrals in the fractional-order Leibniz–Newton formula provided in this paper are $t-\varpi(t)$ and $t-\sigma$. The aim is to take into account more state information and its derivatives, together with the cross-impact between the systems.
Remark 2.
The terms $y(t)$ and $p(t)$ are generated through the processing of Equations (20) and (21), and dealing with them is very difficult. For example, the functionals
V 6 ( t ) = e r k t τ 1 t k e r u ( y 1 ( u + τ 1 ) + δ x ( t k ) ) T Q 8 ( y 1 ( u + τ 1 ) + δ x ( t k ) ) d u ,
V 7 ( t ) = e r k t τ 2 t k e r u ( y 2 ( u + τ 2 ) + δ x ( t k η ) ) T Q 9 ( y 2 ( u + τ 2 ) + δ x ( t k η ) ) d u , r k = k + i = 0 k ln V 6 ( t k ) + V 7 ( t k ) V 6 ( t k ) + V 7 ( t k ) ,
are constructed in reference [33] to deal with $y(t)$ and $p(t)$. However, after taking the derivative of $V_6(t)$ and $V_7(t)$, the terms $x(t_k)$ and $x(t_k-\eta)$ are added correspondingly, which depend on a non-negative nondecreasing sequence $r_k$. In this article, Equations (22) and (23) are processed so that $y(t)$ and $p(t)$ are transformed into ${}_{t-\varpi(t)}I_t^{\delta}y(t)$ and ${}_{t-\sigma}I_t^{\delta}p(t)$, and $V_5(\xi(t))$ and $V_6(\xi(t))$ are constructed accordingly. It is noteworthy that Equations (24) and (25) also produce $y(t)$ and $p(t)$; however, by careful scaling, $y(t)$ and $p(t)$ are converted into ${}_{t_0}^{C}D_t^{\delta}\xi(t)$. Finally, from (26) and (27), it is obvious that there is no need to introduce the non-negative nondecreasing sequence $r_k$ or the terms $x(t_k)$ and $x(t_k-\eta)$. This reduces the number of decision variables and the selection of external parameters, obviously reducing the computational complexity.
Remark 3.
In Theorem 1, the free matrices $N_i$ and $S_i$ $(i=1,2,3)$ are employed to describe the relationships among the terms $\xi(t-\varpi(t))$, $\xi(t-\sigma)$, and $\xi(t)+{}_{t-\varpi(t)}I_t^{\delta}y(t)-\frac{1}{\Gamma(\delta)}\int_{t-\varpi(t)}^{t}(t-u)^{\delta-1}\bigl({}_{t-\varpi(t)}^{C}D_u^{\delta}\bigl(\xi(u)+{}_{t-\varpi(t)}I_u^{\delta}y(u)\bigr)\bigr)\,du$, $\xi(t)+{}_{t-\sigma}I_t^{\delta}p(t)-\frac{1}{\Gamma(\delta)}\int_{t-\sigma}^{t}(t-u)^{\delta-1}\bigl({}_{t-\sigma}^{C}D_u^{\delta}\bigl(\xi(u)+{}_{t-\sigma}I_u^{\delta}p(u)\bigr)\bigr)\,du$. They can be determined by solving the LMIs.

4. Numerical Examples

Three examples are provided in this section to demonstrate the viability of the proposed approach. The following are the parameters:
Example 1.
Consider the model (42) with the following parameters provided in [26]:
$$A=\begin{bmatrix}5&0&0\\0&4&0\\0&0&9\end{bmatrix},\qquad B=\begin{bmatrix}2&1.2&0\\1.8&1.71&1.15\\4.75&0&1.1\end{bmatrix},$$
and activation functions $f_i(\cdot)=\tanh(\cdot)$ $(i=1,2,3)$ with $L=\mathrm{diag}(1,1,1)$. The maximum $\varpi$ can be acquired by utilizing the MATLAB LMI toolbox to solve the LMIs of Corollary 2. For different $\delta$, the maximum $\varpi$ calculated by Corollary 2 is shown in Table 1 and compared with the results of the previous literature. It is obvious that the results obtained by the method in this article represent a clear improvement. Distinctly, when $\delta$ takes $0.9$, $0.92$, $0.95$, and $0.98$, the maximum $\varpi$ is $0.410$, $0.415$, $0.423$, and $0.431$, which is an increase of $215.3\%$, $196.4\%$, $182\%$, and $153.5\%$ compared with [26], respectively. Furthermore, taking $\delta=0.98$, $\varpi=0.4315$, the corresponding controller gain can be obtained as
$$K=\begin{bmatrix}0.1163&0.0134&0.0066\\0.1627&0.0114&0.0146\\0.0484&0.0163&0.0034\end{bmatrix}.$$
For the purpose of obtaining simulation results, the initial value is taken as $\xi(t_0)=[0.9,\ 0.8,\ 0.9]^{T}$, and, with the derived gain matrix $K$, the state responses of the model (42) are exhibited in Figure 1. Figure 2 reflects the sampled-data control input $u(t)$. It is evident from Figure 1 that the FONNs can reach stability in a short time. The sampled-data controller's discrete feature is depicted in Figure 2. The obtained results verify the superiority of the approaches proposed in this paper.
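The maximum $\varpi$ reported in Table 1 is typically found by sweeping $\varpi$ and checking LMI feasibility. A hedged sketch of such a search is given below; `lmi_feasible` is a hypothetical placeholder for a routine that assembles and solves the LMIs of Corollary 2 for a given $(\varpi,\delta)$ pair (for example with the MATLAB LMI toolbox or CVXPY) and returns True when they are feasible.

```python
# Hedged sketch of a bisection search for the largest feasible sampling bound.
def max_sampling_bound(lmi_feasible, delta, lo=0.0, hi=0.5, tol=1e-3):
    """Largest varpi with lmi_feasible(varpi, delta) True, up to tolerance tol."""
    if not lmi_feasible(lo + tol, delta):
        return None                              # infeasible even for a tiny varpi
    while lmi_feasible(hi, delta) and hi < 1e3:  # expand until infeasibility is bracketed
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:                         # standard bisection
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid, delta):
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative usage (lmi_feasible is hypothetical):
# varpi_max = max_sampling_bound(lmi_feasible, delta=0.98)
```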
Example 2.
Consider the model (42) with the following parameters provided in [22]:
$$A=\begin{bmatrix}6&0&0\\0&2&0\\0&0&2\end{bmatrix},\qquad B=\begin{bmatrix}3&2&2\\1&1&0\\1&0&1\end{bmatrix},$$
taking the activation functions $f_i(\cdot)=\tanh(\cdot)$ $(i=1,2,3)$ with $L=\mathrm{diag}(1,1,1)$. For different $\delta$, applying the MATLAB LMI toolbox, the maximum $\varpi$ is obtained by solving the LMIs (50)–(52) in Corollary 2, as displayed in Table 2. As can be observed in Table 2, a less conservative delay-dependent and order-dependent stability condition can be developed by using the fractional-order Leibniz–Newton formula and creating suitable LKFs. When $\delta$ takes the values $0.9$, $0.92$, $0.95$, and $0.98$, the maximum $\varpi$ is $0.426$, $0.432$, $0.440$, and $0.448$, while the maximum $\varpi$ of reference [22] is $0.12$, $0.13$, $0.15$, and $0.16$; that is, our bounds are $255\%$, $232.3\%$, $193.3\%$, and $180\%$ larger than those of reference [22], respectively. Additionally, choose $\delta=0.98$, $\varpi=0.448$. The controller gain matrix $K$ is designed as
$$K=\begin{bmatrix}0.0021&0.0321&0.0321\\0.0360&0.3117&0.1935\\0.0360&0.1935&0.3117\end{bmatrix}.$$
Based on the above result, so as to acquire the simulation results, the initial value $\xi(t_0)=[0.6,\ 0.3,\ 0.1]^{T}$ is selected. Figure 3 shows the state response curve of the model (42). As can be seen from Figure 3, the FONNs can achieve stability in a short time. Figure 4 depicts the corresponding control input when the ZOH is implemented. As a result, the simulation results presented above attest to the viability and efficiency of the FONNs based on sampled-data control.
Example 3.
Consider the model (54) with the following parameters provided in [33]:
$$A=\begin{bmatrix}2&0\\0&0.9\end{bmatrix},\qquad B=\begin{bmatrix}1&0\\1&1\end{bmatrix},$$
By solving the LMIs of Corollary 3, the maximum delay of the model (54) can be obtained. Table 3 shows the results for different $\delta$ and the corresponding maximum delay $\sigma$. From Table 3, we can see that, although the authors of reference [20] give delay-dependent and order-dependent stability conditions for fractional-order time-delay systems, their criterion does not actually depend on the delay or the order and is very conservative (reference [34] gives the corresponding proof). Through numerical comparison, it can be found that our stability upper bound is larger than those in the previous literature. It can be seen that, with the innovative construction of the LKFs and the appropriate introduction of the fractional-order Leibniz–Newton formula, we obtain a less conservative order-dependent and delay-dependent stability criterion.
According to the solution results, given the initial condition $\psi(t)=[0.6,\ 0.3]^{T}$, the state trajectory of the model can be simulated, as shown in Figure 5. Therefore, from the comparison table and the simulated state trajectories, it can be seen that our results are significantly superior to the existing ones.
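For completeness, a hedged simulation sketch (not the authors' code) of a fractional-order linear time-delay system in the form of the reconstructed model (54) is given below, again using the explicit fractional rectangle method; the constant history supplies $\xi(t-\sigma)$ whenever $t-\sigma\le 0$. The matrices A_hyp and C_hyp, the order, and the delay are hypothetical illustrative values.

```python
# Hedged sketch: simulating D^delta xi = -A xi(t) + C xi(t - sigma) with constant history.
import numpy as np
from math import gamma

delta, sigma, h, T = 0.9, 0.5, 1e-3, 10.0
A_hyp = np.array([[2.0, 0.0], [0.0, 0.9]])     # hypothetical system matrix
C_hyp = np.array([[-1.0, 0.0], [-1.0, -1.0]])  # hypothetical delayed-state matrix
psi = np.array([0.6, -0.3])                    # constant initial history

n_steps = int(T / h)
d = int(round(sigma / h))                      # delay expressed in steps
xi = np.tile(psi, (n_steps + 1, 1))
f_hist = np.zeros_like(xi)

for n in range(n_steps):
    xi_delay = psi if n < d else xi[n - d]     # xi(t_n - sigma), from history if needed
    f_hist[n] = -A_hyp @ xi[n] + C_hyp @ xi_delay
    j = np.arange(n + 1)
    w = ((n + 1 - j) ** delta - (n - j) ** delta) * h ** delta / gamma(delta + 1)
    xi[n + 1] = xi[0] + w @ f_hist[: n + 1]    # fractional rectangle update

print("state at t = T:", xi[-1])               # decays toward the origin if stable
```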

5. Conclusions

In this paper, the stability of FONNs with time delay has been studied in light of sampled-data control. On the basis of the newly constructed LKFs and the newly proposed fractional-order Leibniz–Newton formula, sufficient order-dependent and delay-dependent stability conditions have been established. Eventually, the validity of the theoretical results was verified by three numerical simulations. In addition, some of the issues discussed in [36,37,38] (fractional-order chaotic or hyperchaotic systems, synchronous communication of fractional-order chaotic systems, and event-triggered impulsive chaotic synchronization of fractional-order systems) are also interesting and will be considered further in future work.

Author Contributions

Conceptualization, J.D., L.X., H.Z., and W.R.; methodology, J.D., L.X., and H.Z.; software, J.D. and L.X.; validation, J.D. and L.X.; formal analysis, J.D. and L.X.; writing—original draft preparation, J.D. and L.X.; writing—review and editing, J.D. and L.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant number 12061088, the Basic Research Youth Fund Project of Yunnan Science and Technology Department under Grant number 202201AU070046, the Scientific Research Fund Project of Yunnan Provincial Department of Education under Grant number 2022J0447.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mandelbrot, B.B. The Fractal Geometry of Nature; WH Freeman: New York, NY, USA, 1982. [Google Scholar]
  2. Kilbas, A.A.; Marichev, O.I.; Samko, S.G. Fractional Integrals and Derivatives (Theory and Applications); Gordon and Breach: Yverdon, Switzerland, 1983. [Google Scholar]
  3. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000. [Google Scholar]
  4. Oppenheim, A.V.; Willsky, A.S.; Nawab, S.H.; Ding, J.J. Signals and Systems; Prentice Hall: Upper Saddle River, NJ, USA, 1997. [Google Scholar]
  5. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  6. Delavari, H.; Mohadeszadeh, M. Robust finite-time synchronization of non-identical fractional-order hyperchaotic systems and its application in secure communication. IEEE/CAA J. Autom. Sin. 2016, 6, 228–235. [Google Scholar] [CrossRef]
  7. Yousefpour, A.; Jahanshahi, H.; Munoz-Pacheco, J.M.; Bekiros, S.; Wei, Z.C. A fractional-order hyper-chaotic economic system with transient chaos. Chaos Solitons Fractals 2020, 130, 109400. [Google Scholar] [CrossRef]
  8. Zeng, D.Q.; Wu, K.T.; Zhang, R.M.; Zhong, S.; Shi, K.B. Improved results on sampled-data synchronization of Markovian coupled neural networks with mode delays. Neurocomputing 2018, 275, 2845–2854. [Google Scholar] [CrossRef]
  9. Zhang, G.L.; Zhang, J.Y.; Li, W.; Ge, C.; Liu, Y.J. Robust synchronization of uncertain delayed neural networks with packet dropout using sampled-data control. Appl. Intell. 2021, 51, 9054–9065. [Google Scholar] [CrossRef]
  10. Wang, H.; Ni, Y.J.; Wang, J.W.; Tian, J.P.; Ge, C. Sampled-data control for synchronization of Markovian jumping neural networks with packet dropout. Appl. Intell. 2022, 53, 8898–8909. [Google Scholar] [CrossRef]
  11. Picozzi, S.; West, B.J. Fractional Langevin model of memory in financial markets. Phys. Rev. E 2002, 66, 046118. [Google Scholar] [CrossRef] [PubMed]
  12. Zhang, W.W.; Cao, J.D.; Chen, D.Y.; Alsaadi, F.E. Synchronization in fractional-order complex-valued delayed neural networks. Entropy 2018, 20, 54. [Google Scholar] [CrossRef]
  13. Thuan, M.V.; Binh, T.N.; Huong, D.C. Finite-time guaranteed cost control of Caputo fractional-order neural networks. Asian J. Control 2020, 22, 696–705. [Google Scholar] [CrossRef]
  14. Xu, S.; Liu, H.; Han, Z.M. The passivity of uncertain fractional-order neural networks with time-varying delays. Fractal Fract. 2022, 6, 375. [Google Scholar] [CrossRef]
  15. Wang, C.X.; Zhou, X.D.; Shi, X.Z.; Jin, Y.T. Delay-dependent and order-dependent LMI-based sliding mode H control for variable fractional order uncertain differential systems with time-varying delay and external disturbance. J. Frankl. Inst. 2022, 359, 7893–7912. [Google Scholar] [CrossRef]
  16. Chen, Y.; Wang, B.; Chen, Y.; Wang, Y. Sliding Mode Control for a Class of Nonlinear Fractional Order Systems with a Fractional Fixed-Time Reaching Law. Fractal Fract. 2022, 6, 678. [Google Scholar] [CrossRef]
  17. Jia, T.; Chen, X.; He, L.; Zhao, F.; Qiu, J. Finite-Time Synchronization of Uncertain Fractional-Order Delayed Memristive Neural Networks via Adaptive Sliding Mode Control and Its Application. Fractal Fract. 2022, 6, 502. [Google Scholar] [CrossRef]
  18. Stamova, I.; Henderson, J. Practical stability analysis of fractional-order impulsive control systems. Isa Trans. 2016, 64, 77–85. [Google Scholar] [CrossRef]
  19. Guo, L.; Ali Shah, K.; Bai, S.; Zada, A. On the Analysis of a Neutral Fractional Differential System with Impulses and Delays. Fractal Fract. 2022, 6, 673. [Google Scholar] [CrossRef]
  20. Chen, L.P.; Wu, R.C.; Cheng, Y.; Chen, Y.Q. Delay-dependent and order-dependent stability and stabilization of fractional-order linear systems with time-varying delay. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1064–1068. [Google Scholar] [CrossRef]
  21. Ma, Z.; Sun, K. Nonlinear Filter-Based Adaptive Output-Feedback Control for Uncertain Fractional-Order Nonlinear Systems with Unknown External Disturbance. Fractal Fract. 2023, 7, 694. [Google Scholar] [CrossRef]
  22. Zhang, Q.; Wang, H.; Wang, L. Order-dependent sampling control for state estimation of uncertain fractional-order neural networks system. Optim. Control Appl. Methods, 2023; under review. [Google Scholar]
  23. Cao, K.; Gu, J.; Mao, J.; Liu, C. Sampled-Data Stabilization of Fractional Linear System under Arbitrary Sampling Periods. Fractal Fract. 2022, 6, 416. [Google Scholar] [CrossRef]
  24. Cao, K.C.; Qian, C.J.; Gu, J.P. Sampled-data control of a class of uncertain nonlinear systems based on direct method. Syst. Control Lett. 2021, 155, 105000. [Google Scholar] [CrossRef]
  25. Li, S.; Ahn, C.K.; Guo, J.; Xiang, Z.R. Neural network-based sampled-data control for switched uncertain nonlinear systems. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 5437–5445. [Google Scholar] [CrossRef]
  26. Zhang, Q.; Ge, C.; Zhang, R.N.; Yang, L. Order-dependent sampling control of uncertain fractional-order neural networks system. Authorea, 2022; Preprints. [Google Scholar]
  27. Agarwal, R.P.; Hristova, S.; O'Regan, D. Lyapunov Functions and Stability Properties of Fractional Cohen–Grossberg Neural Networks Models with Delays. Fractal Fract. 2023, 7, 732. [Google Scholar] [CrossRef]
  28. Chen, L.; Gong, M.; Zhao, Y.; Liu, X. Finite-Time Synchronization for Stochastic Fractional-Order Memristive BAM Neural Networks with Multiple Delays. Fractal Fract. 2023, 7, 678. [Google Scholar] [CrossRef]
  29. Zhao, K. Stability of a Nonlinear Langevin System of ML-Type Fractional Derivative Affected by Time-Varying Delays and Differential Feedback Control. Fractal Fract. 2022, 6, 725. [Google Scholar] [CrossRef]
  30. Duarte-Mermoud, M.A.; Aguila-Camacho, N.; Gallegos, J.A.; Castro-Linares, R. Using general quadratic Lyapunov functions to prove Lyapunov uniform stability for fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 650–659. [Google Scholar] [CrossRef]
  31. Jia, J.; Huang, X.; Li, Y.X.; Cao, J.D.; Alsaedi, A. Global stabilization of fractional-order memristor-based neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 997–1009. [Google Scholar] [CrossRef] [PubMed]
  32. Xiong, J.L.; Lam, J. Stabilization of networked control systems with a logic ZOH. IEEE Trans. Autom. Control 2009, 54, 358–363. [Google Scholar] [CrossRef]
  33. Hu, T.T.; He, Z.; Zhang, X.J.; Zhong, S.M.; Yao, X.Q. New fractional-order integral inequalities: Application to fractional-order systems with time-varying delay. J. Frankl. Inst. 2021, 358, 3847–3867. [Google Scholar] [CrossRef]
  34. Jin, X.C.; Lu, J.G. Order-dependent and delay-dependent conditions for stability and stabilization of fractional-order time-varying delay systems using small gain theorem. Asian J. Control 2023, 25, 1365–1379. [Google Scholar] [CrossRef]
  35. Jin, X.C.; Lu, J.G. Order-dependent LMI-based stability and stabilization conditions for fractional-order time-delay systems using small gain theorem. Int. J. Robust Nonlinear Control 2022, 32, 6484–6506. [Google Scholar] [CrossRef]
  36. Sene, N.; Ndiaye, A. On Class of Fractional-Order Chaotic or Hyperchaotic Systems in the Context of the Caputo Fractional-Order Derivative. J. Math. 2020, 2020, 8815377. [Google Scholar] [CrossRef]
  37. Li, Q.P.; Liu, S.Y.; Chen, Y.G. Combination event-triggered adaptive networked synchronization communication for nonlinear uncertain fractional-order chaotic systems. Appl. Math. Comput. 2018, 333, 521–535. [Google Scholar] [CrossRef]
  38. Yu, N.X.; Zhu, W. Event-triggered impulsive chaotic synchronization of fractional-order differential systems. Appl. Math. Comput. 2021, 388, 125554. [Google Scholar] [CrossRef]
Figure 1. State responses for Example 1.
Figure 2. Sampled-data control input for Example 1.
Figure 3. State responses for Example 2.
Figure 4. Sampled-data control input for Example 2.
Figure 5. State responses for Example 3.
Table 1. The maximum ϖ allowed for different δ of Example 1.

δ            0.9     0.92    0.95    0.98
[26]         0.13    0.14    0.15    0.17
Corollary 2  0.410   0.415   0.423   0.431

Table 2. The maximum ϖ allowed for different δ of Example 2.

δ            0.9     0.92    0.95    0.98
[22]         0.12    0.13    0.15    0.16
Corollary 2  0.426   0.432   0.440   0.448

Table 3. The maximum delay σ allowed for different δ.

δ                             0.8     0.85    0.9     0.95    0.98
[33] [τ̇(t) = 0, u(t) = 0]     N/A     N/A     N/A     N/A     N/A
[20] [Theorem 3.1]            –       –       –       –       –
[34] [μ = 0, u(t) = 0]        0.384   0.414   0.443   0.471   0.488
[35]                          0.840   0.882   0.925   0.962   0.984
Corollary 3                   2.501   2.413   2.341   2.283   2.253