Review

Extended Stability and Control Strategies for Impulsive and Fractional Neural Networks: A Review of the Recent Results

Department of Mathematics, University of Texas at San Antonio, San Antonio, TX 78249, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Fractal Fract. 2023, 7(4), 289; https://doi.org/10.3390/fractalfract7040289
Submission received: 1 March 2023 / Revised: 16 March 2023 / Accepted: 25 March 2023 / Published: 27 March 2023
(This article belongs to the Special Issue Feature Papers for the 'General Mathematics, Analysis' Section)

Abstract: In recent years, cellular neural networks (CNNs) have become a popular apparatus for simulations in neuroscience, biology, medicine, computer sciences and engineering. In order to create more adequate models, researchers have considered memory effects, reaction–diffusion structures, impulsive perturbations, uncertain terms and fractional-order dynamics. The design, cellular aspects, functioning and behavioral aspects of such CNN models depend on efficient stability and control strategies. In many practical cases, the classical stability approaches are useless. Recently, in a series of papers, we have proposed several extended stability and control concepts that are more appropriate from the applied point of view. This paper is an overview of our main results and focuses on extended stability and control notions, including practical stability, stability with respect to sets and manifolds and Lipschitz stability. We outline the recent progress in stability and control methods and provide diverse mechanisms that can be used by researchers in the field. The proposed stability techniques are presented through several types of impulsive and fractional-order CNN models. Examples are elaborated to demonstrate the feasibility of the different techniques.

1. Introduction

In the last 80 years, there has been tremendous activity and development in the formulation of artificial neural network (ANN) models as a framework in the study of human brain function, mental or behavioral phenomena and brain structural plasticity. Researchers have proposed numerous ANN models to imitate human brain structures and study crucial aspects of information processing that meet new requirements and challenges. For example, the authors in [1] used ANNs to model brain responses and investigate human brain function. In [2], an ANN approach was applied to study the achievement of stable dynamics in neural circuits. The paper [3] is devoted to continual lifelong learning with neural networks; it discusses well-established and emerging research motivated by lifelong learning factors such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation and multisensory integration. The authors in [4] investigated molecular transport in the human brain by means of physics-informed neural networks. All results cited above are very recent contributions in the area of modeling the brain's neural networks and studying their dynamics via ANNs. The book [5] provides a broad collection of articles that offer a comprehensive introduction to the world of the brain and different modeling methods, including neural networks.
Recently, ANNs have become advantageous tools applied in pattern recognition, decision making, classification, optimization and linear and nonlinear programming, and they have attracted the attention of researchers in biology, medicine, computer sciences, engineering sciences and business sciences. Hence, the research on artificial intelligence approaches is becoming increasingly important in numerous emerging areas of science, medicine and engineering [6,7].
One of the most popular and appropriate neural network approaches is cellular neural networks (CNNs) in the form of differential equations. In fact, in modeling applications, mathematical models are essential. The explicit structure imposed by these models helps practitioners to overcome a major difficulty in the estimation of network connectivity from experiments: the fact that only some representative subsets of neurons can be measured simultaneously. The mathematical models lead to predictions of the effects of connections from all (even unmeasured) neurons.
CNNs are an important class of ANNs whose design was inspired by the design and functioning of the human brain and its components. The main feature of such a neural network model is that it is composed of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through their nearest neighbors. Introduced in 1988 by Chua and Yang [8,9], this class of information-processing systems processes signals in real time and has been widely used to study numerous phenomena in the learning and modeling of nonlinear and complex relationships.
In their attempts to create more realistic neural network models, researchers have also considered memory effects. Taking into account delay effects is essential to study how current patterns of neural activity impact future patterns of activity, which is a key point in the study of brain plasticity [3]. Delayed CNNs (DCNNs) have had remarkable success in modeling neuron and brain maturation processes. The properties of their units have been found to mimic the properties of real neurons in the functions necessary for working memory and response inhibition [10]. In fact, time delay plays an important role in many applied dynamical systems, including the dynamical processes of neuronal maturation in the infant and adult brain, processes in which time delays critically affect the stable or unstable outcomes of the cell dynamics and neural circuit efficacy. Therefore, there has been enormous interest in the area of DCNNs among many researchers in the neurosciences, chemical technologies, population dynamics, biotechnologies, molecular sciences and robotics [11,12,13,14]. Delay effects are also important in numerous other applications of CNNs [15,16,17].
Another fruitful line of research has been the consideration of reaction–diffusion terms in CNNs. Indeed, such terms can appropriately represent the evolution of the states in both space and time. Moreover, diffusion phenomena arise naturally in various fields when working with neural network models. This is why various reaction–diffusion CNNs have been proposed as modeling approaches in the neurosciences: to study the molecular transport in the human brain from magnetic resonance images [5], to study the human brain's development [18], to simulate the formation processes of dendritic spines, which show high plasticity and are related to learning [19], to understand the early developmental changes at the whole-brain and regional levels [20], to model the relationship between structural and functional brain connectivity networks [21], to study synaptic plasticity [22] and much more [23,24,25].
The Cohen–Grossberg neural networks (CGNNs) introduced in 1983 [26] have also attracted considerable research interest, since they are advantageous in global pattern formation and partial memory storage [27,28,29,30]. This class of neural network models is a generalization of various CNNs, including Hopfield-type neural network models [31], and can develop complex dynamical behavior [32,33].
The Bidirectional Associative Memory (BAM) type of neural network is another group of CNNs whose design is inspired by the associative phenomena existing in the human brain [34,35,36]. It extends the single-layer auto-associative correlation to two-layer hetero-associative circuits [37], which is essential in numerous applied problems [38,39,40]. Because of this extension, it has been intensively studied.
The fractional-order modeling approach has attracted numerous investigators because of its universality and the large scope of its application areas [41,42,43,44]. The fact that memory is integrated into fractional derivatives is the fundamental advantage of using fractional models [45]. Such memory is known as 'intrinsic memory'. It has been proven that fractional-order networks can provide a flexible framework in the study of deep brain stimulation processes [46]. Neural network models of fractional order are superior for understanding the rich dynamics of neuronal activities [47], which justifies research on the fractional-order formulation of ANN models, including results concerning biological neurons [48]. Hence, a comprehensive field of research is focused on the study of ANNs with fractional-order dynamics and their applications [33,49,50,51,52,53].
Furthermore, the tool of impulsive neural networks has been extensively used in the accurate description of neuronal processes with short-term perturbations during their evolution [54,55,56,57,58,59]. The design and properties of such networks are based on the theory of impulsive differential equations [60,61,62,63]. Such systems are also adapted to impulsive control problems [64,65]. In fact, impulsive control arises naturally in a wide variety of applications [66,67,68,69,70,71,72]. The main advantage of impulsive control is that it is applied only at some discrete instants, which can drastically reduce the amount of transmitted information and, hence, the control cost. Neuroscientists are also aware of impulse control disorder phenomena [73]. Impulse control mechanisms are considered in models derived from brain activity [74]. Creating impulsive control architectures for neural network models opens up the possibility of successfully pursuing long-term goals despite short-term attacks and shocks.
The stability problem is one of the most important in the understanding of the internal mechanisms that stabilize and modulate neural activity [2,3]. Indeed, the study of stability also has inspired a broad range of neural network approaches. Stability and control methods provide diverse strategies to investigate the qualitative behavior of the neuronal states [75,76,77,78].
However, in many practical situations, classical stability concepts are useless. This has motivated researchers to further expand the stability notions and introduce new stability concepts such as practical stability, stability of sets, stability with respect to manifolds and Lipschitz stability. See, for example, [79] and the references therein. Such extended stability notions allow us to determine which mechanisms can support stability strategies that are acceptable from the practical point of view in situations where the classical strategies do not provide mathematically ideal stable behavior. As such, they have powerful practical applications in emerging areas such as biology, medicine, optimal control, mechanics, biotechnologies, economics, electronics, etc. [62].
This article reviews the authors' results and recent progress in the application of the extended stability concepts to different classes of impulsive and fractional neural network models. This survey also provides a reference for further research on stability and control strategies for such models.
The remaining part of this paper is organized as follows. In Section 2, materials and methods are presented. Various classes of impulsive neural network models, including fractional-order cases, are given. Extended stability notions are justified. Section 3 is devoted to a review of the results in extending stability and control strategies for impulsive and fractional DCNNs. The significance of the extended criteria is discussed and demonstrated via examples. The effect of uncertain terms is also considered. Section 4 discusses open problems and future research directions on these topics. Finally, some conclusions are included in Section 5.
The following notations will be used: $\mathbb{R}_+ = [0, \infty)$; $\mathbb{R}$ denotes the set of all real numbers, and $\mathbb{R}^n$ is the $n$-dimensional real space with the Euclidean norm $\|x\|$ of an $x \in \mathbb{R}^n$.

2. Materials and Methods

The McCulloch–Pitts neuron, introduced by Warren McCulloch and Walter Pitts in 1943 [80], is considered the first mathematical model of a neural network. Since then, the attention to neural network models and their applications has greatly increased [6,7,81,82,83].

2.1. CNNs

CNNs form an important class of NNs that model human cognition using local real-time signal information processing [8,9]. Such NNs can be represented as
$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) + I_i, \quad i = 1, 2, \ldots, n, \tag{1}$$
where $n \geq 2$ represents the number of neurons; $t \in \mathbb{R}_+$; $x_i(t)$ represents the state of the $i$th cell (neuron) at time $t$; $\dot{x}_i(t)$ is the first-order derivative of $x_i$ with respect to $t$, which represents the rate of change in the 'activation' $x_i$ of neuron $i$ with respect to time; $f_j$ is the activation function of the $j$th neuron; the weight coefficients $w_{ij}$ are continuous functions; in general, $c_i$ is a constant that represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; $I_i$ is an external input.
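As a purely illustrative sketch (the parameter values below are invented for demonstration and are not taken from any cited model), the CNN dynamics above can be integrated with a simple forward-Euler scheme:

```python
import numpy as np

def simulate_cnn(c, W, I, x0, f=np.tanh, dt=1e-3, steps=20000):
    """Forward-Euler integration of x_i' = -c_i x_i + sum_j w_ij f_j(x_j) + I_i."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-c * x + W @ f(x) + I)
    return x

# Illustrative two-neuron network: the small weights make the system
# contractive, so the trajectory settles at an equilibrium.
c = np.array([1.0, 1.0])
W = np.array([[0.2, -0.1],
              [0.1,  0.3]])
I = np.array([0.5, -0.2])
x_star = simulate_cnn(c, W, I, x0=[0.0, 0.0])
```

At an equilibrium the right-hand side vanishes, which gives an easy sanity check on the integrator.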

2.2. Hopfield NNs

The classical Hopfield-type neural networks originating from the model introduced in [31] can be considered as a specific case of CNNs for a particular choice of the constants c i . Such NNs are given by the following system:
$$\dot{x}_i(t) = -\frac{1}{C_i R_i}\, x_i(t) + \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) + I_i, \quad t > 0, \tag{2}$$
with $i = 1, 2, \ldots, n$, where $n \geq 2$ denotes the number of nodes in the network; $x(t) = (x_1(t), \ldots, x_n(t))^T$ is the state vector at time $t$; the constants $C_i > 0$, $R_i > 0$ denote, respectively, the capacitance and the resistance of the node $i$, and the rest of the parameters are as in (1).
For the description of the next CNN models, in order to avoid repetition, we will use unified notations of the models’ parameters.

2.3. CNNs with Delays

In numerous neural network applications, the state vectors of the designed models depend on stored information. In biological CNNs, delay describes the maturation process of neuronal cells. Hence, the structure of the current network cells depends on their maturity and on the natural evolution rate of the preceding generations. In fact, cell-intrinsic transcription factors are required to generate and promote the survival of newborn neurons [84]. Moreover, delay effects may be accompanied by oscillation, divergence or instability, which may be damaging to the neuronal system. In order to study how time delays affect the dynamics of a CNN model, DCNNs of various types have been proposed. One of the most investigated types is that of DCNNs with time-varying transmission delays given by [85]:
$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) + \sum_{j=1}^{n} h_{ij}(t) g_j(x_j(t-\tau_j(t))) + I_i, \quad i = 1, 2, \ldots, n, \tag{3}$$
where $t \in \mathbb{R}_+$; $x_i(t)$ denotes the state of the $i$th cell at time $t$; $h_{ij}$ denotes the strength of the $j$th unit on the $i$th unit at time $t - \tau_j(t)$; $g_j$ are activation functions that determine the output of the $j$th node at time $t - \tau_j(t)$, and $\tau_j(t)$ are transmission delays, $0 \leq \tau_j(t) \leq \tau$.
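A hedged numerical sketch of how the delayed term in the model above can be handled: the state history over the maximal delay is kept in a buffer and the delayed value is read from its oldest entry. All parameter values are invented for illustration, and a constant delay $\tau_j(t) \equiv \tau$ is used for simplicity.

```python
import numpy as np

def simulate_dcnn(c, W, H, I, tau, x0, f=np.tanh, g=np.tanh, dt=1e-2, steps=5000):
    """Euler scheme for the DCNN with a constant delay tau (a special case of (3))."""
    d = int(round(tau / dt))
    hist = [np.array(x0, dtype=float)] * (d + 1)   # constant initial function
    for _ in range(steps):
        x, x_del = hist[-1], hist[0]                # current and delayed states
        x_new = x + dt * (-c * x + W @ f(x) + H @ g(x_del) + I)
        hist = hist[1:] + [x_new]
    return hist[-1]

c = np.array([1.0, 1.0])
W = np.array([[0.1, 0.0], [0.0, 0.1]])
H = np.array([[0.1, 0.05], [0.05, 0.1]])
I = np.array([0.3, -0.3])
x_star = simulate_dcnn(c, W, H, I, tau=0.5, x0=[0.0, 0.0])
```

With these small weights the system is contractive, so the computed state approaches an equilibrium where the full right-hand side (with the delayed argument equal to the current one) vanishes.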
In fact, due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, DCNNs have a spatial extent. This is why it is common to consider time-varying delays [86,87,88]. The particular case of constant delays $\tau_j$ has also been studied [89]. Recently, there have been many findings on DCNNs in which the effects of constant delays, time-varying delays, distributed delays and bounded and unbounded delays have been investigated. Since distributed and unbounded delays are more realistic, there is an extended interest in such DCNNs, which can be given as [90]
$$\dot{x}(t) = -x(t) + a f\Big(x(t) - b \int_0^{\infty} K(s)\, x(t-s)\, ds - c\Big), \quad t > 0, \tag{4}$$
where $x : \mathbb{R}_+ \to \mathbb{R}$; $K : \mathbb{R}_+ \to \mathbb{R}_+$ is the delay kernel function; $a$, $b$, $c$ are constants, and $f : \mathbb{R} \to \mathbb{R}$ is the activation function.

2.4. CNNs with Reaction–Diffusion Terms

In numerous applications of CNNs and DCNNs, the design and effective performance of the neural network model rely not only on the progression of the states in time but also on their spatial location (area) [91]. In such applications, formulating models of the reaction–diffusion type and evaluating the effects of the reaction–diffusion parameters on the neural network dynamics is crucial.
A DCNN model with reaction–diffusion terms is given by [92,93]
$$\frac{\partial u_i(t,x)}{\partial t} = \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{iq} \frac{\partial u_i(t,x)}{\partial x_q}\Big) - c_i u_i(t,x) + \sum_{j=1}^{m} w_{ij} f_j(u_j(t,x)) + \sum_{j=1}^{m} h_{ij} g_j(u_j(t-\tau_j(t),x)) + I_i, \quad i = 1, 2, \ldots, m, \ (t,x) \in (0,\infty) \times \Omega, \tag{5}$$
where $u_i(t,x)$ represents the state of the $i$th neuron (cell) at time $t \in (0,\infty)$ and space $x \in \Omega$; $\Omega \subset \mathbb{R}^n$ is a bounded open set containing the origin with smooth boundary $\partial\Omega$; $\frac{\partial u_i(t,x)}{\partial t}$ is the partial derivative of $u_i(t,x)$ with respect to time $t$, which represents the rate of change in cell density with respect to time; the continuous functions $D_{iq} = D_{iq}(t,x) \geq 0$ correspond to the transmission diffusion coefficients along the $i$th neuron, $q = 1, 2, \ldots, n$.
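To illustrate how the diffusion operator in such models is typically discretized, here is a minimal one-dimensional explicit finite-difference sketch for a single state $u(t,x)$ with zero Dirichlet boundary conditions. The coefficients and initial profile are made up for demonstration and are not taken from the cited works.

```python
import numpy as np

def simulate_rd(D=0.1, c=1.0, w=0.3, I=0.2, nx=51, dt=1e-3, steps=2000):
    """Explicit scheme for u_t = D u_xx - c u + w f(u) + I on (0, 1)."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)                     # illustrative initial profile
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (D * lap - c * u + w * np.tanh(u) + I)
        u[0] = u[-1] = 0.0                    # zero Dirichlet boundary
    return u

u_final = simulate_rd()
```

The explicit scheme is stable here because $D\,\Delta t/\Delta x^2 = 0.25 \leq 1/2$; a symmetric initial profile with symmetric boundary conditions stays symmetric, which is a convenient correctness check.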

2.5. Cohen–Grossberg DCNNs

The effects of time delays are also considered for the specific class of Cohen–Grossberg CNNs represented by [26]
$$\dot{x}_i(t) = -a_i(x_i(t))\Big[b_i(t, x_i(t)) - \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) - \sum_{j=1}^{n} h_{ij}(t) g_j(x_j(t-\tau_j(t))) - I_i(t)\Big], \tag{6}$$
where $t \geq 0$; $a_i$ denote the amplification functions, and $b_i$ correspond to appropriately behaved functions, $i = 1, 2, \ldots, n$.
Some generalizations of the model (6) considering mixed time-varying delays and distributed delays are investigated in [29,30,94,95,96].
It is readily seen that some very applicable NN models, such as CNNs and Hopfield neural networks, can be examined as particular cases of NNs of the Cohen–Grossberg type [31].

2.6. DCNNs with Reaction–Diffusion Terms of Cohen–Grossberg Type

The hybrid class of DCNNs with reaction–diffusion terms of Cohen–Grossberg type is also a major topic of interest because of the great opportunities for their applications in science, medicine and engineering. Such NN models can be given as
$$\frac{\partial u_i(t,x)}{\partial t} = \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{iq} \frac{\partial u_i(t,x)}{\partial x_q}\Big) - a_i(u_i(t,x))\Big[b_i(u_i(t,x)) - I_i(t,x) - \sum_{j=1}^{m} w_{ij}(t) f_j(u_j(t,x)) - \sum_{j=1}^{m} h_{ij}(t) g_j(u_j(t-\tau_j(t),x))\Big], \tag{7}$$
where $i = 1, 2, \ldots, m$, $t \geq 0$.
For more detailed results on DCNNs with reaction–diffusion terms of the Cohen–Grossberg type and their possible applications, we refer to [97,98,99].

2.7. Bidirectional Associative Memory (BAM) Neural Networks

Numerous classes of BAM neural network models are also studied in the existing literature. BAM NNs are generalizations of single-layer CNNs that can store bipolar vector pairs. Such NNs are composed of neurons arranged in two layers, the X-layer and Y-layer. A two-way associative search for stored bipolar vector pairs is performed by applying an iterative approach to the forward and backward information flows between the two layers [34,35,36,37,38,40,100].
As a generalization of the most applied BAM neural network models, we will present the Cohen–Grossberg-type BAM neural networks [101,102,103] given by
$$\begin{aligned} \dot{x}_i(t) &= -a_i(x_i(t))\Big[b_i(x_i(t)) - \sum_{j=1}^{m} w_{ji} f_j(y_j(t)) - \sum_{j=1}^{m} h_{ji} g_j(y_j(t-\tau_j(t))) - I_i\Big],\\ \dot{y}_j(t) &= -\hat{a}_j(y_j(t))\Big[\hat{b}_j(y_j(t)) - \sum_{i=1}^{n} \hat{w}_{ij} \hat{f}_i(x_i(t)) - \sum_{i=1}^{n} \hat{h}_{ij} \hat{g}_i(x_i(t-\hat{\tau}_i(t))) - J_j\Big], \end{aligned} \tag{8}$$
where $t \geq 0$; $x_i(t)$ and $y_j(t)$ correspond to the states of the $i$th neuron in the X-layer and the $j$th neuron in the Y-layer, respectively, at time $t$; $f_j$, $\hat{f}_i$, $g_j$, $\hat{g}_i$ are activation functions; $\tau_j(t)$, $\hat{\tau}_i(t)$ are interneuronal transmission delays, $0 < \tau_j(t) < \tau$, $0 < \hat{\tau}_i(t) < \hat{\tau}$; $a_i$, $\hat{a}_j$ denote amplification functions; $b_i$, $\hat{b}_j$ denote well-behaved functions; $w_{ji}$, $\hat{w}_{ij}$, $h_{ji}$, $\hat{h}_{ij}$ are the connection weights, and $I_i$, $J_j$ are external inputs, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$.

2.8. Impulsive DCNNs

The impulsive control approach in neural network modeling addresses the case wherein short-term perturbations at some moments of time affect the dynamical behavior of the neuronal models. This approach is closely related to the brain plasticity–stability dilemma, which is also essential in other biological contexts [3,66,72,73,74]. By means of impulsive control neural networks, it is possible to analyze how impulses can be used to preserve the stability properties of the model or to design efficient impulsive controllers. Abrupt changes are very often caused by changes in the environment or external stimuli, or they may be inherent to the system due to the cell-intrinsic potential for structural change [75]. Indeed, adding a mass of new cells to an already trained network can degrade the previously trained memory. In such a case, using external perturbations, the network can be returned to the trained state. Hence, with the development of impulsive control theory [64,65], increasing attention has been paid to the study of impulsive CNN models.
Let $t_k$, $k = 1, 2, \ldots$, be the impulsive points, satisfying $0 < t_1 < t_2 < \cdots$ and $\lim_{k \to \infty} t_k = \infty$.
An impulsive control DCNN model can be represented as [62]
$$\begin{aligned} \dot{x}_i(t) &= -c_i x_i(t) + \sum_{j=1}^{n} w_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} h_{ij} f_j(x_j(t-\tau_j(t))) + I_i, \quad t \neq t_k,\ t \geq 0,\\ \Delta x_i(t_k) &= x_i(t_k^+) - x_i(t_k) = P_{ik}(x_i(t_k)), \quad k = 1, 2, \ldots, \end{aligned} \tag{9}$$
where $i = 1, 2, \ldots, n$; $t_k$ ($k = 1, 2, \ldots$) are the instants of impulsive perturbation at which the density $x_i(t)$ of a neuronal cell jumps from the value $x_i(t_k) = x_i(t_k^-)$ to the value $x_i(t_k^+)$, and $P_{ik}$ are functions that characterize the magnitude of the impulsive control effects on the states $x_i(t)$ at the moments $t_k$. A graph of the trajectory of a state $x_i(t)$ is shown in Figure 1.
The control model (9) generalizes many existing type (3) DCNN models to the impulsive case. It can be applied to impulsively control the behavior of the neurons in type (3) neuronal networks using appropriate impulsive functions $P_{ik}$. Moreover, by adding impulsive controllers to the nodes in model (3), we can synchronize the trajectories of all nodes.
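A hedged sketch (hypothetical parameters, constant delay) of how the jump condition in (9) enters a simulation: between impulse instants the network is integrated as usual, and at each $t_k$ the state is reset by $x(t_k^+) = x(t_k) + P_k(x(t_k))$. Here the illustrative impulsive controller $P_k(x) = -x/2$ halves the state at each impulse.

```python
import numpy as np

def simulate_impulsive(c, W, H, I, tau, x0, t_imp, P, f=np.tanh, dt=1e-2, T=30.0):
    """Euler scheme for model (9) with constant delay tau and impulse instants t_imp."""
    d = int(round(tau / dt))
    hist = [np.array(x0, dtype=float)] * (d + 1)
    pending = sorted(t_imp)
    t = 0.0
    while t < T:
        x, x_del = hist[-1], hist[0]
        x_new = x + dt * (-c * x + W @ f(x) + H @ f(x_del) + I)
        t += dt
        while pending and t >= pending[0]:    # jump x(t_k+) = x(t_k) + P(x(t_k))
            pending.pop(0)
            x_new = x_new + P(x_new)
        hist = hist[1:] + [x_new]
    return hist[-1]

c = np.array([1.0, 1.0])
W = np.array([[0.1, 0.0], [0.0, 0.1]])
H = np.array([[0.1, 0.05], [0.05, 0.1]])
I = np.array([0.3, -0.3])
x_star = simulate_impulsive(c, W, H, I, tau=0.5, x0=[1.0, -1.0],
                            t_imp=[2.0, 4.0, 6.0], P=lambda x: -0.5 * x)
```

Since the impulses stop after $t = 6$ and the chosen weights make the continuous dynamics contractive, the trajectory still settles at the same equilibrium as the impulse-free network.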
As generalizations of the above models, we will consider impulsive reaction–diffusion DCNNs of Cohen–Grossberg type given as [104]
$$\begin{aligned} \frac{\partial u_i(t,x)}{\partial t} &= \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{iq} \frac{\partial u_i(t,x)}{\partial x_q}\Big) - a_i(u_i(t,x))\Big[b_i(u_i(t,x)) - I_i(t,x) - \sum_{j=1}^{m} w_{ij}(t) f_j(u_j(t,x)) - \sum_{j=1}^{m} h_{ij}(t) g_j(u_j(t-\tau_j(t),x))\Big], \quad t \neq t_k,\\ u_i(t_k^+, x) &= u_i(t_k, x) + P_{ik}(u_i(t_k, x)), \end{aligned} \tag{10}$$
in which the points $t_k$, $k = 1, 2, \ldots$, again represent the impulsive instants at which short-term perturbations move the node $u_i(t,x)$ from the level $u_i(t_k, x) = u_i(t_k^-, x)$ toward the level $u_i(t_k^+, x)$, and the functions $P_{ik}(u_i(t,x))$ determine the controlled outputs $u_i(t_k^+, x)$, which measure the effects of the impulsive controllers on the states $u_i(t,x)$ at the moments $t_k$.
The impulsive model (10) admits the use of a suitable impulsive control strategy to a class of reaction–diffusion delayed CNNs that appear naturally in a broad range of applications. For example, several therapeutic impulsive control strategies have been recommended for some recently developed epidemic models of great interest for the contemporary world [105,106,107].
In the absence of reaction–diffusion terms, the model (10) is reduced to an impulsive Cohen–Grossberg-type model [108].
For the DCNN model (10), a more general type of impulsive control, using impulsive jumps that are not performed at fixed instants, can also be studied [109]. In this case, the impulsive moments $t_{l_k}$ occur when the integral surface of the solution $u(t,x)$ meets hypersurfaces defined as
$$\theta_k = \big\{(t,u) \in [0,\infty) \times \mathbb{R}^m : t = \sigma_k(u(t,x))\big\},$$
where $\sigma_k$ are continuous functions and, in general, $l_k \neq k$ [62,109]. Moreover, as a result of the features of short-term/impulsive perturbations at non-fixed moments of time, distinct neuronal states corresponding to distinct initial data may have distinct impulsive moments. This makes the analysis of models with non-fixed impulsive perturbations more complicated due to the possibility of the loss of the autonomy property, bifurcation, "merging" of solutions, meeting one and the same hypersurface several or infinitely many times, etc. [110]. A suitable choice of the impulsive forces for such models is essential.
The Cohen–Grossberg-type model (10) has also been generalized to the BAM case with and without reaction–diffusion terms [111,112,113]. A BAM impulsive control Cohen–Grossberg-type model with $n$ neurons ($i = 1, 2, \ldots, n$) in the X-layer and $m$ neurons ($j = 1, 2, \ldots, m$) in the Y-layer, impulsive perturbations at non-fixed moments and time-varying delays is given as
$$\begin{aligned} \dot{x}_i(t) &= -a_i(x_i(t))\Big[b_i(x_i(t)) - \sum_{j=1}^{m} w_{ji} f_j(y_j(t)) - \sum_{j=1}^{m} h_{ji} g_j(y_j(t-\tau_j(t))) - I_i\Big], && t \neq \sigma_k(x(t), y(t)),\\ \dot{y}_j(t) &= -\hat{a}_j(y_j(t))\Big[\hat{b}_j(y_j(t)) - \sum_{i=1}^{n} \hat{w}_{ij} \hat{f}_i(x_i(t)) - \sum_{i=1}^{n} \hat{h}_{ij} \hat{g}_i(x_i(t-\hat{\tau}_i(t))) - J_j\Big], && t \neq \sigma_k(x(t), y(t)),\\ x_i(t^+) &= x_i(t) + P_{ik}(x_i(t)), && t = \sigma_k(x(t), y(t)),\\ y_j(t^+) &= y_j(t) + Q_{jk}(y_j(t)), && t = \sigma_k(x(t), y(t)), \end{aligned} \tag{11}$$
for $t > 0$, where $\sigma_k : \mathbb{R}^{n+m} \to \mathbb{R}_+$, $k = 1, 2, \ldots$.

2.9. Fractional-Order Impulsive CNNs

The contemporary approaches in network theory have led to the introduction of more complex models. The fractional-order scheme, which is more relevant for an accurate description of CNNs, has received increased attention in recent years [41,44,47,48,50,52,53,63,72]. A variety of investigations have studied the long-term dependence of the current activities in neuronal states [45,46]. To better model long-term memory processes, numerous existing neural network models have been generalized to the fractional-order case.
Moreover, the influence of the fractional-order derivatives on the stability and synchronization performance of the neurons is recognized [52].
Let us consider a fractional-order delayed impulsive CNN of the type [63]
$$\begin{aligned} {}^{C}D_t^{\alpha} x_i(t) &= -c_i x_i(t) + \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) + \sum_{j=1}^{n} h_{ij}(t) g_j(x_j(t-\tau_j(t))) + I_i, \quad t \neq t_k,\\ \Delta x_i(t_k) &= x_i(t_k^+) - x_i(t_k) = P_{ik}(x_i(t_k)), \quad k = 1, 2, \ldots, \end{aligned} \tag{12}$$
$i = 1, 2, \ldots, n$, where
$${}^{C}D_t^{\alpha} l(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{l'(\sigma)}{(t-\sigma)^{\alpha}}\, d\sigma$$
is the Caputo fractional derivative of order $\alpha$, $0 < \alpha < 1$, with lower limit 0 for a continuously differentiable function $l$, and $\Gamma(z) = \int_0^{\infty} e^{-t} t^{z-1}\, dt$ is the standard Gamma function [114].
Indeed, fractional-order derivatives of Caputo type are the most applied in modeling applications since they have the advantage of handling initial conditions that are defined in the same form as in the integer-order case, which matches how initial data are observed in most natural phenomena [115,116].
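The Caputo derivative above can be approximated numerically by the standard L1 scheme, which weights first-order differences of $l$ on a uniform grid. This is a common textbook discretization sketched here from scratch rather than taken from the cited papers; for linear $l$ the scheme reproduces the exact value ${}^{C}D^{\alpha} t = t^{1-\alpha}/\Gamma(2-\alpha)$.

```python
import math

def caputo_l1(l, alpha, t, n=1000):
    """L1 approximation of the Caputo derivative of order alpha in (0,1) at time t."""
    h = t / n
    acc = 0.0
    for k in range(n):
        # weight of the first-order difference l(t_{k+1}) - l(t_k)
        w = (n - k) ** (1.0 - alpha) - (n - k - 1) ** (1.0 - alpha)
        acc += w * (l((k + 1) * h) - l(k * h))
    return acc / (math.gamma(2.0 - alpha) * h ** alpha)

# For l(t) = t the Caputo derivative is t^(1-alpha) / Gamma(2-alpha):
val = caputo_l1(lambda s: s, alpha=0.5, t=1.0)  # ≈ 1.1284 (= 1/Gamma(1.5))
```

Because the weights telescope, the sum equals $n^{1-\alpha}$ times the uniform difference for a linear function, so the check below is exact up to floating-point rounding.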
Another direction for generalizations of the model (12) is considering finite and infinite delays [117]. In this case, the corresponding impulsive control model will be represented as
$$\begin{aligned} {}^{C}D^{\alpha} x_i(t) &= -c_i x_i(t) + \sum_{j=1}^{n} w_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} h_{ij} g_j(x_j(t-\tau_j(t))) + \sum_{j=1}^{n} w_{ij} \int_{-\infty}^{t} m_j(t,s)\, g_j(x_j(s))\, ds + I_i, \quad t \neq t_k,\\ \Delta x_i(t_k) &= P_{ik}(x_i(t_k)), \quad i = 1, 2, \ldots, n, \ k = 1, 2, \ldots, \end{aligned} \tag{13}$$
where $0 \leq \tau_j(t) \leq \tau$, $j = 1, 2, \ldots, n$, are the finite transmission delays, and the delay kernels $m_j(t,s) = m_j(t-s)$ ($j = 1, 2, \ldots, n$) are of convolution type.
To reflect the fact that, in numerous neural network models, the activations depend on time and on space, the model (12) can be generalized to a fractional-order model with reaction–diffusion terms given as [118,119,120]
$$\begin{aligned} \frac{\partial^{\alpha} u_i(t,x)}{\partial t^{\alpha}} &= \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{iq} \frac{\partial u_i(t,x)}{\partial x_q}\Big) - c_i u_i(t,x) + \sum_{j=1}^{m} w_{ij}(t) f_j(u_j(t,x)) + \sum_{j=1}^{m} h_{ij}(t) g_j(u_j(t-\tau_j(t),x)), \quad t \neq t_k,\\ u_i(t_k^+, x) &= u_i(t_k, x) + P_{ik}(u_i(t_k, x)), \quad k = 1, 2, \ldots, \end{aligned} \tag{14}$$
where [41,114]
$$\frac{\partial^{\alpha} u_i(t,x)}{\partial t^{\alpha}} = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\partial u_i(s,x)}{\partial s} \frac{ds}{(t-s)^{\alpha}}, \quad t > 0, \ i = 1, 2, \ldots, m.$$
Impulsive control fractional-order models of Cohen–Grossberg type [121,122] and impulsive BAM neural network models of fractional order with and without a reaction–diffusion setting [123,124] can also be applied as extended fractional CNN models of natural processes influenced or controlled by short-lived forces at some points in time. Indeed, the great attention to and the huge number of existing publications on neural network models of the impulsive type with fractional dynamics are an indication of their considerable significance.
Impulsive control fractional-order models of Cohen–Grossberg type are described by [121]
$$\begin{aligned} {}^{C}D^{\alpha} x_i(t) &= -a_i(x_i(t))\Big[b_i(t, x_i(t)) - \sum_{j=1}^{n} w_{ij}(t) f_j(x_j(t)) - I_i(t)\Big], \quad t \neq t_k,\\ x_i(t_k^+) &= x_i(t_k) + P_{ik}(x_i(t_k)), \quad k = 1, 2, \ldots, \end{aligned} \tag{15}$$
and fractional BAM reaction–diffusion neural network models are given as [123]
$$\begin{aligned} \frac{\partial^{\alpha} u_i(t,x)}{\partial t^{\alpha}} &= \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{iq} \frac{\partial u_i(t,x)}{\partial x_q}\Big) - c_i u_i(t,x) + \sum_{j=1}^{l} w_{ij}(t) f_j(v_j(t,x)) + \sum_{j=1}^{l} h_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s)\, f_j(v_j(s,x))\, ds, \quad t \neq t_k,\\ \frac{\partial^{\alpha} v_j(t,x)}{\partial t^{\alpha}} &= \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\Big(D_{jq}^{*} \frac{\partial v_j(t,x)}{\partial x_q}\Big) - c_j^{*} v_j(t,x) + \sum_{i=1}^{m} \hat{w}_{ji}(t) g_i(u_i(t,x)) + \sum_{i=1}^{m} \hat{h}_{ji}(t) \int_{-\infty}^{t} N_{ji}(t-s)\, g_i(u_i(s,x))\, ds, \quad t \neq t_k,\\ \Delta u_i(t_k, x) &= u_i(t_k^+, x) - u_i(t_k, x) = P_{ik}(u_i(t_k, x)),\\ \Delta v_j(t_k, x) &= v_j(t_k^+, x) - v_j(t_k, x) = Q_{jk}(v_j(t_k, x)), \end{aligned} \tag{16}$$
where $(u, v) \in \mathbb{R}^m \times \mathbb{R}^l$ is the state vector; $K_{ij}$, $N_{ji}$ are the delay kernels; $P_{ik}$, $Q_{jk}$ determine the magnitude of the states' changes at the impulsive moments $t_k$; the values $u_i(t_k, x) = u_i(t_k^-, x)$ and $u_i(t_k^+, x)$ correspond, respectively, to the states of the $i$th neuron of the first layer before and after an impulsive jump at the moment $t_k$, and the values $v_j(t_k, x) = v_j(t_k^-, x)$ and $v_j(t_k^+, x)$ are, respectively, the states of the $j$th neuron of the second layer before and after an impulsive perturbation at the moment $t_k$.
Fractional-order BAM neural network models can be efficiently used in many applied problems where the associative study of pairs of states, designed in two layers through iterating information back and forth between the layers, is fundamental. This class of CNNs is also a compelling tool for modeling in neuroscience. For example, gene regulatory networks (GRNs), which model the regulation of gene expression at the molecular level, are one particular class of BAM neural networks [72,125,126,127]. A fractional-order delayed impulsive GRN model studied in [128] has been given as
$$\begin{aligned} {}^{C}D^{\alpha} m_i(t) &= -a_i m_i(t) + \sum_{j=1}^{n} w_{ij}(t) f_j(p_j(t-\hat{\tau}_j(t))) + B_i(t), \quad t \neq t_k,\\ {}^{C}D^{\alpha} p_i(t) &= -c_i p_i(t) + d_i(t)\, m_i(t-\tau_i(t)), \quad t \neq t_k,\\ m_i(t_k^+) &= m_i(t_k) + P_{ik}(m_i(t_k)),\\ p_i(t_k^+) &= p_i(t_k) + Q_{ik}(p_i(t_k)), \end{aligned} \tag{17}$$
with $i = 1, 2, \ldots, n$, $t \geq 0$, where the $i$th mRNA state at time $t$ is represented by $m_i(t)$; the $i$th protein's concentration at time $t$ is represented by $p_i(t)$; the degradation rates of the $i$th mRNA and the $i$th protein molecule are denoted by the constants $a_i, c_i \in \mathbb{R}$, respectively; $d_i$ denote the translation rates; the regulatory functions $f_j$, $j = 1, 2, \ldots, n$, are of the particular Hill form
$$
f_j(x) = \frac{(x/\beta_j)^{H_j}}{1 + (x/\beta_j)^{H_j}},
$$
in which the β_j are positive constants and the Hill coefficients H_j are real constants; the function B_i(t) = Σ_{j∈I_i} b_ij(t) denotes the basal rate of the repressors of gene i over the set I_i of all its repressors j; the weight coefficient w_ij(t) equals b_ij(t) if j is an activator of gene i, equals −b_ij(t) if j is a repressor of gene i, and w_ij(t) = 0 when there is no connection between node j and gene i; the distinct functions τ_i(t) and τ̂_j(t) denote time-varying bounded delays for mRNA i and protein concentration j, respectively, i, j = 1, 2, …, n; the ith mRNA and protein concentrations at time t_k are given by the values m_i(t_k) = m_i(t_k^−) and p_i(t_k) = p_i(t_k^−), respectively; m_i(t_k^+) and p_i(t_k^+) are the corresponding ith mRNA and protein concentrations after an impulsive short-term effect at t_k; and the impulsive functions P_ik and Q_ik determine the amounts of the abrupt deviations from m_i(t) and p_i(t), respectively, at the point t_k for k = 1, 2, … and i = 1, 2, …, n.
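As a small illustration, the Hill regulatory nonlinearity above can be evaluated directly; the numeric values of β_j and H_j in the sketch below are arbitrary examples, not parameters from [128].

```python
def hill(x, beta, H):
    """Hill-type regulatory function f_j(x) = (x/beta)^H / (1 + (x/beta)^H)."""
    r = (x / beta) ** H
    return r / (1.0 + r)

# At x = beta the function always equals 1/2, and for H > 0 it saturates
# toward 1 for large x; these properties hold for any positive beta.
half = hill(2.0, 2.0, 4)       # 0.5
near_one = hill(20.0, 2.0, 4)  # close to 1
print(half, near_one)
```

The sigmoidal shape (flat near zero, steep around x = β_j, saturating at 1) is what makes the Hill form a standard choice for modeling gene activation.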

2.10. Extended Stability Concepts

Stability is among the most essential questions in the investigation of neural networks. Hence, research on stability strategies that regulate the neuronal stability properties has attracted tremendous attention. Moreover, stability characteristics are closely related to synchronization and control issues [116,120,126].
For CNNs, the most applied stability concept in the existing literature is that of global asymptotic stability [7,29,59,79,85,89,92,98,127]. Despite the great possibilities of application, this concept is not comprehensive. In numerous particular problems, even if a CNN is globally asymptotically stable, it is actually impractical in implementations because of some inappropriate features. Moreover, there are CNNs that are not asymptotically stable in the classical case; however, their behavior is acceptable. For such cases, some extended stability concepts, which we will review below, are more appropriate.

2.10.1. Practical Stability

The specific conception of practical stability is distinct from the classical asymptotic stability and, as a result of its benefits, it has been studied for various applied systems [62,79,129,130,131,132,133], including NNs [57,134,135,136]. This extended stability concept is useful in many applied models when the states’ trajectories are contained within specific constraints during a fixed time interval. In such cases, the global asymptotic notion is not applicable. The practical stability strategy is also very efficient when a neuronal state is unstable in the classical sense and yet state trajectories may oscillate sufficiently near the desired state such that its behavior is admissible, which does not imply stability or the convergence of trajectories. In addition, there are many applied systems that are stable or asymptotically stable in the classical sense, but are in fact meaningless in practice due to a small or inappropriate stability or attraction domain.
Let x(t) = x(t; 0, ϕ) be a solution of the impulsive DCNN model (9) corresponding to an initial function ϕ that is bounded and piecewise continuous on [−τ, 0], with jump discontinuities at which the one-sided limits exist and with continuity from the left. The norm of a function ϕ: [−τ, 0] → R^n corresponding to the norm ||·|| is denoted by ||ϕ||_τ = sup_{s∈[−τ,0]} ||ϕ(s)||.
Let ( λ , A ) with 0 < λ < A be given.
Definition 1 
([79,130]). The impulsive DCNN (9) is called
(a) practically stable with respect to (λ, A), if ||ϕ||_τ < λ implies ||x(t; 0, ϕ)|| < A, t ≥ 0;
(b) practically asymptotically stable with respect to (λ, A), if (a) holds and lim_{t→∞} ||x(t; 0, ϕ)|| = 0.
Definition 1 can also be adapted to other types of impulsive DCNNs. It restates the circumstance noted above that practical stability is fairly autonomous from the basic notion of asymptotic stability: neither notion implies the other, and the two are not mutually exclusive [62,79,129,130,131,132,133]. In fact, practical stability can be achieved within a prescribed time, which is why it is often more appropriate for neural network models from an applied point of view [134,135,136]. This is because practical stability examines the trajectories of the investigated model when both the bound on the initial conditions and the region where the trajectories must remain, as the independent variable evolves over a fixed interval, are set in advance.
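The mechanics of Definition 1(a) can be exercised numerically on a toy system. The scalar impulsive model and the pair (λ, A) below are hypothetical, chosen only to illustrate the definition; this is a sketch, not a result from the cited works.

```python
# Toy impulsive system: x' = -x between impulses, x(t_k^+) = 0.8 x(t_k)
# at t_k = k. Definition 1(a) with (lambda, A) = (1, 2) is checked on a
# grid of initial values: ||phi|| < lambda must imply |x(t)| < A, t >= 0.
def trajectory_sup(x0, t_end=10.0, dt=1e-3):
    x, t, k = x0, 0.0, 1
    sup_abs = abs(x0)
    while t < t_end:
        x += dt * (-x)        # Euler step of x' = -x
        t += dt
        if t >= k:            # impulsive jump at t_k = k
            x *= 0.8
            k += 1
        sup_abs = max(sup_abs, abs(x))
    return sup_abs

lam, A = 1.0, 2.0
practically_stable = all(trajectory_sup(x0) < A
                         for x0 in (-0.99, -0.5, 0.1, 0.99))
print(practically_stable)  # True
```

Note that the check is performed over a fixed horizon and a prescribed pair of bounds, exactly the data that the practical stability notion fixes in advance.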

2.10.2. Stability of Sets

One of the most important aspects of the stability theory of differential equations is the so-called stability of sets. It extends the notion of stability of single trajectories to the study of the stability properties of a whole region of solutions. Such a concept is very appropriate in population dynamics and biology, including the stability of cell populations across different brain regions. This extended stability notion is also of ample interest for networks that may approach more than one state of interest.
The introduction of the stability of sets notion is related to the following question: How far can the initial data be allowed to vary without disturbing the stability properties established in the immediate proximity of the specific states? To address this question, the study of stable sets has been proposed [137]. Demonstrated applications of the stability of sets include delay systems [137], the biological control of invasive species [138], Rosenblatt processes [139], planar homeomorphisms [140], maneuver systems [141] and Kolmogorov-type systems that generalize various models studied in population dynamics [142].
Because of the considerable opportunities for the application of the extended stability of sets approach, it has been developed for Cohen–Grossberg impulsive DCNNs with reaction-diffusion terms of the type (10) [104].
Let M be a set,

$$
M \subset [-\tau, \infty) \times \Omega \times \mathbb{R}^m,
\tag{18}
$$

and introduce the set M(t, x) of all u ∈ R^m such that (t, x, u) ∈ M for (t, x) ∈ R_+ × Ω, and the set M_0(t, x) of all z ∈ R^m for which (t, x, z) ∈ M for (t, x) ∈ [−τ, 0] × Ω.
Denote the distance between a point u ∈ R^m and M(t, x) by

$$
d\big(u, M(t,x)\big) = \inf_{v \in M(t,x)} ||u - v||_2,
$$

where

$$
||u(t,x)||_2 = \left(\int_{\Omega} \sum_{i=1}^{m} u_i^2(t,x)\,dx\right)^{1/2}
$$

is the norm defined for u(t, x) = (u_1(t,x), u_2(t,x), …, u_m(t,x))^T ∈ R^m, and define an ε-neighborhood of M(t, x) as

$$
M(t,x)(\varepsilon) = \big\{ u \in \mathbb{R}^m : d\big(u, M(t,x)\big) < \varepsilon \big\} \quad (\varepsilon > 0).
$$
Set
$$
d_0\big(\varphi, M_0(t,x)\big) = \sup_{\xi \in [-\tau, 0]} d\big(\varphi(\xi,x), M_0(\xi,x)\big), \quad \varphi \in PC,
$$

where PC denotes the set of all functions φ = (φ_1, φ_2, …, φ_m)^T from [−τ, 0] × Ω to R^m that are piecewise continuous and for which φ_i(ξ^+, x) and φ_i(ξ^−, x) exist and satisfy φ_i(ξ^−, x) = φ_i(ξ, x), i = 1, 2, …, m, for a finite number of points ξ ∈ [−τ, 0], x ∈ Ω.
We also assume that M(t, x) ≠ ∅ for any t ≥ 0, x ∈ Ω, and that M_0(t, x) ≠ ∅ for any t ∈ [−τ, 0) and x ∈ Ω.
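For intuition, the distance d(u, M(t, x)) is easy to compute when M(t, x) is a coordinate box, which is the special case of a set between two constant solutions used later in Section 3.1. The sketch below uses the finite-dimensional Euclidean norm in place of the spatial L₂ norm, and the box bounds are arbitrary illustrative values.

```python
import math

def dist_to_box(u, u_low, u_high):
    """Euclidean distance from u to the box [u_low, u_high] in R^m.

    For a box, the infimum in d(u, M) is attained at the coordinatewise
    projection of u onto the box.
    """
    s = 0.0
    for ui, lo, hi in zip(u, u_low, u_high):
        p = min(max(ui, lo), hi)  # projection of the i-th coordinate
        s += (ui - p) ** 2
    return math.sqrt(s)

lo, hi = [0.0, 0.0], [1.0, 2.0]
print(dist_to_box([0.5, 1.0], lo, hi))  # 0.0 (u lies inside the box)
print(dist_to_box([2.0, 3.0], lo, hi))  # sqrt(2), attained at corner (1, 2)
```

The ε-neighborhood M(t, x)(ε) is then simply the sublevel set where this distance is below ε.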
The following stability of sets definition in regard to the reaction–diffusion impulsive DCNN of Cohen–Grossberg type (10) is presented in [104].
Definition 2. 
The set M is called uniformly globally exponentially stable with respect to the impulsive control reaction–diffusion DCNN (10) if there exist real constants k > 0 and υ > 0 such that

$$
d\big(u(t,x;\varphi_0), M(t,x)\big) \le k\, d_0\big(\varphi_0, M_0(t,x)\big)\, e^{-\upsilon t}, \quad \varphi_0 \in PC, \; t \ge 0, \; x \in \Omega.
$$
Other stability of sets notions for the system (10) are also introduced in [104]. These extended stability concepts can be applied to numerous types of impulsive DCNNs, such as BAM DCNNs and fractional-order DCNNs. As is seen from Definition 2, the stability of sets concept is more general than the stability of a single neuronal trajectory and includes, as special cases, the stability of steady states, stability of periodic trajectories and stability of integral manifolds or other manifolds that can represent regions of neuronal states.

2.10.3. Stability with Respect to Manifolds

Stability with respect to manifolds notions are special cases of stability of sets concepts for particular sets. While stability of sets definitions consider sets of a sufficiently general type contained in some domain, stability of manifolds notions consider sets defined by specific conditions, which is common in most biological DCNNs. Such stability notions are also known as conditional stability [79]. One of the most applied stability with respect to manifolds concepts treats manifolds that are defined by a function describing specific constraints on the determination of the manifold [113,119]. The introduction of this notion is motivated by the evidence that the stability behavior of neurons depends on limitations or restrictions that can be represented by a function. Another stability with respect to manifolds notion is related to integral manifolds defined by the system trajectories [58,109].
For quantifying the stability of manifolds determined by constraints, a particular function h is defined in the extended phase space of the CNN with values in R^l, l ≤ n. For the stability of integral manifolds, the set (18) is defined as a set M in the extended phase space [−τ, ∞) × Ω × R^m of (10) such that (ξ, x, φ(ξ, x)) ∈ M for (ξ, x) ∈ [−τ, 0] × Ω implies (t, x, u(t, x)) ∈ M for (t, x) ∈ R_+ × Ω for any solution u(t, x) = u(t; x; φ).
For example, in order to study the stability with respect to a manifold of the impulsive fractional-order model with reaction–diffusion terms (14), a function h = h(t, u), h: [−τ, ∞) × R^m → R^l, l ≤ m, is defined in [119] together with the following manifolds related to it:

$$
\begin{aligned}
M_t^{(m-l)} &= \{ u \in \mathbb{R}^m : h(t,u) = 0, \; t \in \mathbb{R}_+ \}, \\
M_{t,\tau}^{(m-l)} &= \{ u \in \mathbb{R}^m : h(t,u) = 0, \; t \in [-\tau, 0] \}, \\
M_t^{(m-l)}(\varepsilon) &= \{ u \in \mathbb{R}^m : ||h(t,u)|| < \varepsilon, \; t \in \mathbb{R}_+ \}, \quad \varepsilon > 0.
\end{aligned}
$$
The following definition is introduced in [119] for the stability of the trajectories of the CNN (14) with respect to the manifold M_t^{(m−l)} determined by the function h.
Definition 3. 
The fractional-order DCNN model (14) is globally Mittag–Leffler stable with respect to the function h if, for φ_0 ∈ PC, there exists a constant η > 0 such that

$$
u(t,x) \in M_t^{(m-l)}\Big(\bar{M}(\varphi_0)\, E_{\alpha}(-\eta t^{\alpha})\Big), \quad t \ge 0,
$$

where E_α, 0 < α < 1, is the corresponding Mittag–Leffler function, defined as

$$
E_{\alpha}(z) = \sum_{\kappa=0}^{\infty} \frac{z^{\kappa}}{\Gamma(\alpha\kappa + 1)},
$$

and M̄(0) = 0, M̄(φ) is Lipschitz continuous with respect to φ ∈ PC, and M̄(φ) ≥ 0.
Note that the Mittag–Leffler stability concepts are extensions of the exponential stability notions to the fractional-order case [41,44,63,72,116,117,123,124].
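The Mittag–Leffler series can be evaluated directly by truncation; the sketch below is only illustrative (a truncated sum is adequate for moderate |z|, and dedicated algorithms should be used for serious numerical work).

```python
import math

def mittag_leffler(z, alpha, terms=150):
    """Truncated series E_alpha(z) = sum_{k=0}^{terms-1} z^k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks: E_1(z) = exp(z), and t -> E_alpha(-eta * t^alpha) decays,
# which is the kind of envelope used in Mittag-Leffler stability bounds.
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12)          # True
decay = [mittag_leffler(-0.5 * t ** 0.8, 0.8) for t in (0.0, 1.0, 2.0)]
print(decay[0] > decay[1] > decay[2] > 0)                      # True
```

For α = 1 the Mittag–Leffler envelope reduces to the exponential e^{−ηt}, which is the sense in which Mittag–Leffler stability extends exponential stability to fractional orders.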
The numerous application possibilities of the stability with respect to manifolds concepts foster their intensive development [123,143,144,145,146,147]. These extended stability notions have been recently applied in the study of the brain’s stability behavior [148,149,150]. Indeed, fractional-order systems are intensively used in the study of biological phenomena [151]. For fractional-order DCNN models, the concept is combined with the Mittag–Leffler stability definitions, which are as important as exponential stability definitions for the integer-order cases.

2.10.4. Practical Stability with Respect to Manifolds

The extended concept of practical stability with respect to manifolds has also received considerable attention recently and has been applied to different classes of impulsive control DCNNs; see, for example, [108,152,153] and the references therein. Such a stability concept is more appropriate for applied neuronal models, since it combines the generality provided by the manifold stability concept with the flexibility of practical stability in cases where some classical behavioral requirements are too restrictive.
The combined practical stability notion with respect to an ĥ-manifold

$$
M_t^{(n+m-l)} = \{ z \in \mathbb{R}^{n+m} : \hat{h}(t,z) = 0, \; t \in [0, \infty) \},
\tag{19}
$$

defined by a function ĥ = ĥ(t, z), ĥ: [−ν, ∞) × R^{n+m} → R^l, where ν = max{τ, τ̂} and z = (x, y) ∈ R^{n+m}, is successfully applied in [108] to the BAM impulsive Cohen–Grossberg model (11) using the following hybrid definition.
Definition 4. 
The impulsive control neural network model (11) is globally practically exponentially stable with respect to the function ĥ if, given A > 0 and ψ_0 = (φ_0, ϕ_0)^T ∈ PC, there exist positive constants γ, μ such that

$$
\big(x(t;0,\varphi_0),\, y(t;0,\phi_0)\big)^T \in M_t^{(n+m-l)}\Big(A + \gamma\, ||\hat{h}(0,\psi_0)||_{\nu}\, e^{-\mu t}\Big)
$$

for t ≥ 0, where M_t^{(n+m−l)}(ε) is the manifold of all z ∈ R^{n+m} such that ||ĥ(t, z)|| < ε.
If, in Definition 4, A = 0 , then it is reduced to the global exponential stability with respect to the h ^ -manifold case.

2.10.5. Lipschitz Stability

The extended Lipschitz stability concept introduced in [154] is also appropriate for different applied models [155,156,157,158]. For linear systems, the notion of Lipschitz stability is equivalent to that of uniform stability, which is not the case for nonlinear systems [154,159]. The Lipschitz stability concept is adopted for impulsive control models in [62,79,159] and for fractional-order models in [160]. The problem of Lipschitz stability seems more relevant in the neural network approach, because the Lipschitz nonlinearity and Lipschitz continuity are typical for most of the neural network models [161].
The notion is applied to impulsive reaction–diffusion fractional Cohen–Grossberg-type neural network models for the first time in [162], where the investigated model is a generalization of the model (14) and is defined as
$$
\begin{aligned}
\frac{\partial^{\alpha} u_i(t,x)}{\partial t^{\alpha}} ={} & \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\left(D_{iq}\frac{\partial u_i(t,x)}{\partial x_q}\right) - a_i\big(u_i(t,x)\big)\Big[b_i\big(u_i(t,x)\big) - \sum_{j=1}^{m} w_{ij}(t) f_j\big(u_j(t,x)\big) \\
& - \sum_{j=1}^{m} h_{ij}(t) g_j\big(u_j(t-\tau_j(t),x)\big)\Big], \quad t \neq t_k, \\
u_i(t_k^{+},x) ={} & u_i(t_k,x) + P_{ik}\big(u_i(t_k,x)\big), \quad i = 1, 2, \dots, m,
\end{aligned}
\tag{20}
$$

where m ≥ 2.
For the above impulsive control DCNN model, the Lipschitz stability definition is given as follows.
Definition 5. 
The fractional impulsive reaction–diffusion CNN model (20) of Cohen–Grossberg type is globally uniformly Lipschitz stable if there exists a constant M > 0 such that, for any φ_0 ∈ PC, we have

$$
||u(t,x;0,\varphi_0)||_2 \le M\, ||\varphi_0||_{\tau}, \quad t \ge 0, \; x \in \Omega,
$$

where ||φ_0||_τ is the norm of the initial function φ_0 ∈ PC corresponding to the norm ||u(t,x;0,φ_0)||_2.
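The content of Definition 5 can be illustrated on a much simpler toy system than (20). The scalar impulsive model below is our own hypothetical example: it is unstable between impulses, yet each impulse over-compensates the growth, so a uniform Lipschitz constant M exists even though no exponential decay bound with M = 1 would hold.

```python
import math

# Toy system: x' = x between impulses (growing flow), x(k^+) = 0.2 x(k)
# at t_k = k. Over one period the net factor is 0.2 * e < 1, so solutions
# decay overall, while sup_t |x(t)| / |x0| = e: Definition 5 holds with M = e.
def ratio_sup(periods=20):
    r, sup_r = 1.0, 1.0          # r = |x(t_k^+)| / |x0| after k periods
    for _ in range(periods):
        sup_r = max(sup_r, r * math.e)  # peak just before the next impulse
        r *= math.e * 0.2               # growth by e, then contraction by 0.2
    return sup_r

M = ratio_sup()
print(M)                    # e = 2.718..., independent of x0 (linear system)
print(0.2 * math.e < 1.0)   # True: overall decay despite the unstable flow
```

The bound M ||φ_0||_τ is exactly what Lipschitz stability asks for: trajectories may transiently grow, but by a factor that is uniform in the initial data.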

2.10.6. Lyapunov Approach

To establish the extended stability result, we adapt the Lyapunov approach and use appropriate Lyapunov functions [62,63,79,87]. In the case of DCNNs and impulsive CNNs, some modifications are proposed. More precisely, in the case of DCNNs, the Razumikhin technique is applied, which requires us to estimate the derivatives of the Lyapunov candidate functions only on specific sets of trajectories [62,137], and in the impulsive control case, piecewise continuous Lyapunov functions are considered [62]. For fractional-order systems, the fractional Lyapunov method is applied [63].

3. Results

Using a Lyapunov-based analysis, numerous extended stability criteria have been proposed for different types of impulsive and fractional-order DCNNs. Most of the authors’ earlier results are collected in [62,63,79]. After the publication of these books, many new impulsive neural network models and corresponding stability strategies were developed by the authors [57,58,72,104,108,109,113,119,123,128,162]. The primary aim of this review article is to present most of these recent results. The proposed results are in the form of bounds on the system’s parameters, including synaptic weights and impulsive control functions. The practical meaning of the introduced criteria is that if the system’s parameters and impulsive controls are driven to these bounds, this will reflect the extended stability behavior of the applied models.

3.1. Stability of Sets

To present some recent results related to the stability of sets concept, we consider the impulsive control reaction–diffusion DCNN of Cohen–Grossberg type (10). Since the model (10) is quite general and includes several impulsive DCNNs as special cases, the established results are correspondingly comprehensive.
To guarantee the existence and uniqueness of solutions, the activation functions are considered to be bounded, the impulsive functions are continuous, and the neural network’s parameters are assumed to satisfy the following assumptions [104].
A1. There exist constants a̲_i and ā_i such that, for the continuous amplification functions a_i, i = 1, 2, …, m, we have

$$
0 < \underline{a}_i \le a_i(\iota) \le \bar{a}_i
$$

for ι ∈ R.
A2. There exist constants B_i > 0 such that, for the continuous functions b_i, i = 1, 2, …, m, we have

$$
\frac{b_i(\iota_1) - b_i(\iota_2)}{\iota_1 - \iota_2} \ge B_i > 0
$$

for ι_1, ι_2 ∈ R, ι_1 ≠ ι_2.
A3. There exist Lipschitz constants L_j > 0, M_j > 0, j = 1, 2, …, m, such that

$$
|f_j(\iota_1) - f_j(\iota_2)| \le L_j |\iota_1 - \iota_2|, \quad |g_j(\iota_1) - g_j(\iota_2)| \le M_j |\iota_1 - \iota_2|
$$

for all ι_1, ι_2 ∈ R, ι_1 ≠ ι_2.
A4. The transmission diffusion functions D_iq are nonnegative and there exist constants d_iq ≥ 0 such that

$$
D_{iq}(t,x) \ge d_{iq}, \quad i = 1, 2, \dots, m, \; q = 1, 2, \dots, n, \; t > 0, \; x \in \Omega.
$$
Let x ∈ Ω, where Ω is an n-dimensional domain determined by the inequalities |x_q| < l_q, l_q > 0, q = 1, 2, …, n. Set

$$
\tilde{d}_i = \sum_{q=1}^{n} \frac{d_{iq}}{l_q^2}, \quad w_{ij}^{+} = \sup_{t \in \mathbb{R}_+} |w_{ij}(t)|, \quad h_{ij}^{+} = \sup_{t \in \mathbb{R}_+} |h_{ij}(t)|, \quad i, j = 1, 2, \dots, m.
$$
To define a set M of trajectories related to the model (10), we consider two constant solutions of the impulsive control model (10), denoted u̲^* ∈ R_+^m and ū^* ∈ R_+^m. Then, the following criteria for stability with respect to M = [−τ, ∞) × Ω × D, where D ⊂ R_+^m consists of all u = (u_1, u_2, …, u_m)^T with u̲_i^* ≤ u_i ≤ ū_i^*, i = 1, 2, …, m, are established in [104].
Under the assumptions A1–A4, if the system parameters satisfy

$$
\lambda_1 = \min_{1 \le i \le m} \bigg\{ 2\big(\tilde{d}_i + \underline{a}_i B_i\big) - \bar{a}_i \sum_{j=1}^{m} \Big( L_j w_{ij}^{+} + M_j h_{ij}^{+} + L_i w_{ji}^{+} \Big) \bigg\} > \max_{1 \le i \le m} \bigg\{ M_i \sum_{j=1}^{m} \bar{a}_j h_{ji}^{+} \bigg\} = \lambda_2 > 0
\tag{21}
$$

and the impulsive control functions P_ik are such that there exist constants γ_ik, 0 < γ_ik < 2, with

$$
P_{ik}\big(u_i(t_k,x)\big) = -\gamma_{ik}\, u_i(t_k,x), \quad i = 1, 2, \dots, m, \; k = 1, 2, \dots,
\tag{22}
$$

then the set M is uniformly globally exponentially stable with respect to the reaction–diffusion impulsive Cohen–Grossberg DCNN (10).
The set M considered in the above result consists of all trajectories lying between two constant solutions and thus generalizes the single-state stability concept. A set M of a more general nature can also be considered. The proposed result and the extended stability concept are useful in cases wherein attractors other than single steady states must be considered.
The impulsive reaction–diffusion Cohen–Grossberg DCNN (10) is the closed-loop system relevant to the model (7), and can also be represented as a control system

$$
\begin{aligned}
\frac{\partial u_i(t,x)}{\partial t} ={} & \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\left(D_{iq}\frac{\partial u_i(t,x)}{\partial x_q}\right) - a_i\big(u_i(t,x)\big)\Big[b_i\big(u_i(t,x)\big) - I_i(t,x) - \sum_{j=1}^{m} w_{ij}(t) f_j\big(u_j(t,x)\big) \\
& - \sum_{j=1}^{m} h_{ij}(t) g_j\big(u_j(t-\tau_j(t),x)\big)\Big] + v_i(t,x),
\end{aligned}
$$

where i = 1, 2, …, m, t ≥ 0, and

$$
v_i(t,x) = \sum_{k=1}^{\infty} P_{ik}\big(u_i(t_k,x)\big)\,\delta(t - t_k), \quad i = 1, 2, \dots, m,
\tag{24}
$$

represents the control contribution, where δ(t) is the Dirac delta function (Figure 2). The addition of the controller v(t, x) = (v_1(t,x), …, v_m(t,x))^T leads to sudden changes in the neuronal states of (7) at the time moments t_k, due to which the states u_i(t, x) of the neuronal units momentarily shift from u_i(t_k, x) to u_i(t_k^+, x), with P_ik being the impulsive functions. Thus, the above result establishes a generic design method for the impulsive control strategy (24) applied to the impulse-free DCNN model (7). The constants γ_ik define the control sizes of the synchronizing impulses. Therefore, the proposed criteria can be used to design impulsive control techniques with which the trajectories of the impulsive neural network (10) (including those from the set M) can be uniformly globally exponentially synchronized onto those of system (7).
Example 1 
([104]). For n = m = 2 and Ω = { x ∈ R²: |x_q| < 1, q = 1, 2 }, an impulsive CNN model of the type (10) is considered in [104] with the model's parameters defined as a_i(u_i) = 1, b_1(u_i) = u_i, b_2(u_i) = 3u_i, i = 1, 2, f_i(u_i) = g_i(u_i) = (|u_i + 1| − |u_i − 1|)/2, I_1 = I_2 = 0, τ_1(t) = τ_2(t) = e^t/(1 + e^t), 0 ≤ τ_i(t) ≤ 1,

$$
(w_{ij})_{2\times 2}(t) = \begin{pmatrix} 0.6 - 0.4\sin(t) & 0.1 - 0.4\cos(t) \\ 0.2 - 0.4\cos(t) & 0.2 - 0.3\sin(t) \end{pmatrix}, \quad (h_{ij})_{2\times 2}(t) = \begin{pmatrix} 0.3\sin(t) & 0.4\cos(t) \\ 0.4\cos(t) & 0.6\sin(t) \end{pmatrix},
$$

$$
(D_{iq})_{2\times 2} = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{pmatrix} = \begin{pmatrix} 1 + 2\sin t & 0 \\ 0 & \cos t \end{pmatrix}, \quad \text{with} \quad (d_{iq})_{2\times 2} = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix},
$$

under impulsive controls defined by

$$
u(t_k^{+},x) - u(t_k,x) = -\begin{pmatrix} 2/5 & 0 \\ 0 & 1/7 \end{pmatrix} u(t_k,x), \quad k = 1, 2, \dots.
$$

For this particular choice of the neuronal parameters, we have L_1 = L_2 = M_1 = M_2 = 1, a̲_i = ā_i = 1, i = 1, 2, and

$$
4.2 = \min_{1 \le i \le 2} \bigg\{ 2\big(\tilde{d}_i + \underline{a}_i B_i\big) - \bar{a}_i \sum_{j=1}^{2} \Big( L_j w_{ij}^{+} + M_j h_{ij}^{+} + L_i w_{ji}^{+} \Big) \bigg\} > \max_{1 \le i \le 2} \bigg\{ M_i \sum_{j=1}^{2} \bar{a}_j h_{ji}^{+} \bigg\} = 1.
$$

Since assumptions A1–A4 and conditions (21) and (22) are all satisfied, if u^* = (u_1^*, u_2^*)^T is the unique steady state of the model, then the set M = [−τ, ∞) × Ω × { u ∈ R²: u_i = u_i^*, i = 1, 2 } is uniformly globally exponentially stable with respect to the considered impulsive DCNN of type (10).
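The inequality in Example 1 can be reproduced numerically. The sketch below reads the sup-norms w⁺ and h⁺ off the time-varying matrices and uses the grouping of λ₁ and λ₂ as stated in the criterion above.

```python
# Sup-norms of w_ij(t), h_ij(t) from Example 1; L_j = M_j = 1, a = 1,
# B = (1, 3) (from b_1, b_2), and d~_i = sum_q d_iq / l_q^2 = (3, 1).
w_sup = [[1.0, 0.5], [0.6, 0.5]]
h_sup = [[0.3, 0.4], [0.4, 0.6]]
L = M = [1.0, 1.0]
a_low = a_up = [1.0, 1.0]
B = [1.0, 3.0]
d_tilde = [3.0, 1.0]
m = 2

lam1 = min(
    2 * (d_tilde[i] + a_low[i] * B[i])
    - a_up[i] * sum(L[j] * w_sup[i][j] + M[j] * h_sup[i][j]
                    + L[i] * w_sup[j][i] for j in range(m))
    for i in range(m)
)
lam2 = max(M[i] * sum(a_up[j] * h_sup[j][i] for j in range(m))
           for i in range(m))
print(round(lam1, 10), round(lam2, 10))  # 4.2 1.0 -> condition (21) holds
```

The minimum in λ₁ is attained for i = 1 (the other index gives 4.9), confirming the margin 4.2 > 1 quoted in the example.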

3.2. Stability with Respect to Manifolds

To demonstrate a stability with respect to h-manifolds result, we consider the DCNN of fractional order (14) with reaction–diffusion terms. In [119], an h-manifold stability result has been established as follows.
If, in addition to (21) and (22), the function h(t, u) satisfies

$$
||h(t,u)|| \le \left(\int_{\Omega} \sum_{i=1}^{m} u_i^2(t,x)\,dx\right)^{1/2} \le \Lambda(H)\, ||h(t,u)||, \quad (t,u) \in \mathbb{R}_+ \times \mathbb{R}^m,
$$

where Λ(H) ≥ 1 exists for any H > 0, then the fractional-order DCNN model (14) is globally Mittag–Leffler stable with respect to the function h.
When investigating the stability of a neural network, it is important to characterize the effects of uncertain terms. In fact, a real-world neural network system always involves uncertainties, which may give rise to the instability of the network trajectories [119,163,164,165].
To study how some uncertain parameters can affect the stability behavior of the model (14), we consider the next uncertain reaction–diffusion impulsive delayed neural network related to the system (14):
$$
\begin{aligned}
\frac{\partial^{\alpha} u_i(t,x)}{\partial t^{\alpha}} ={} & \sum_{q=1}^{n} \frac{\partial}{\partial x_q}\left(D_{iq}\frac{\partial u_i(t,x)}{\partial x_q}\right) - (c_i + \tilde{c}_i)\, u_i(t,x) \\
& + \sum_{j=1}^{m} \big(w_{ij}(t) + \tilde{w}_{ij}(t)\big)\Big[f_j\big(u_j(t,x)\big) + \tilde{f}_j\big(u_j(t,x)\big)\Big] \\
& + \sum_{j=1}^{m} \big(h_{ij}(t) + \tilde{h}_{ij}(t)\big)\Big[g_j\big(u_j(t-\tau_j(t),x)\big) + \tilde{g}_j\big(u_j(t-\tau_j(t),x)\big)\Big], \quad t \neq t_k, \\
u_i(t_k^{+},x) ={} & u_i(t_k,x) + P_{ik}\big(u_i(t_k,x)\big) + \tilde{P}_{ik}\big(u_i(t_k,x)\big), \quad k = 1, 2, \dots.
\end{aligned}
\tag{26}
$$
In (26), the constants c̃_i ∈ R_+ and the real-valued functions w̃_ij, h̃_ij, f̃_j, g̃_j, P̃_ik, i, j = 1, 2, …, m, k = 1, 2, …, denote the uncertainties. We set w̃_ij^+ = sup_{t∈R_+} |w̃_ij(t)| and h̃_ij^+ = sup_{t∈R_+} |h̃_ij(t)|, i, j = 1, 2, …, m. To conduct a robust stability analysis of the model (26) with respect to the function h, we assume that there exist constants L̃_i > 0, M̃_i > 0, H̃_i¹ > 0, H̃_i² > 0 such that

$$
|\tilde{f}_i(\chi_1) - \tilde{f}_i(\chi_2)| \le \tilde{L}_i |\chi_1 - \chi_2|, \quad |\tilde{g}_i(\chi_1) - \tilde{g}_i(\chi_2)| \le \tilde{M}_i |\chi_1 - \chi_2|, \quad |\tilde{f}_i(\chi)| \le \tilde{H}_i^1, \quad |\tilde{g}_i(\chi)| \le \tilde{H}_i^2,
$$

with f̃_i(0) = 0 and g̃_i(0) = 0, for all χ_1, χ_2 ∈ R, χ_1 ≠ χ_2, i = 1, 2, …, m, and that the uncertain functions P̃_ik are such that P̃_ik(u_i) = −γ̃_ik u_i with γ_ik − 2 ≤ γ̃_ik ≤ γ_ik, i = 1, 2, …, m, k = 1, 2, ….
Definition 6. 
The neural network model (14) is called globally robustly Mittag–Leffler stable with respect to the function h, if, for any φ 0 PC , any w ˜ j i , h ˜ i j , f ˜ j , g ˜ j , P ˜ i k i , j = 1 , , m and any c ˜ i R + , i = 1 , 2 , , m , the model (26) is globally Mittag–Leffler stable with respect to the function h.
We denote λ = λ_1 − λ_2 and μ = μ_1 − μ_2, where λ_1, λ_2 are from (21) and

$$
\mu_1 = \min_{1 \le i \le m} \bigg\{ 2\tilde{c}_i - \sum_{j=1}^{m} \Big( L_j \tilde{w}_{ij}^{+} + \tilde{L}_j \big( w_{ij}^{+} + \tilde{w}_{ij}^{+} \big) + M_j \tilde{h}_{ij}^{+} + \tilde{M}_j \big( h_{ij}^{+} + \tilde{h}_{ij}^{+} \big) + L_i \tilde{w}_{ji}^{+} + \tilde{L}_i \big( w_{ji}^{+} + \tilde{w}_{ji}^{+} \big) \Big) \bigg\} > 0,
$$

$$
\mu_2 = \max_{1 \le i \le m} \sum_{j=1}^{m} \Big( \tilde{M}_i \big( h_{ji}^{+} + \tilde{h}_{ji}^{+} \big) + M_i \tilde{h}_{ji}^{+} \Big) > 0.
$$

It was found in [119] that, if λ ≥ μ, then the model (14) is globally robustly Mittag–Leffler stable with respect to the function h.
The above result can be useful as a tool to measure the effects of dynamic changes in an uncertain environment on the stability behavior of neural networks. It can be also applied to predict instabilities in neural network systems with multiple sources of structural uncertain perturbations.
If the set Ω is defined by real constants a_q, b_q, q = 1, 2, …, n, as Ω = Π_{q=1}^{n} [a_q, b_q] with 0 = (0, 0, …, 0)^T ∈ Ω, then [109], for the stability of an integral manifold M of the type (18) with respect to the model (10) with variable impulsive perturbations, the neural network's parameters are required to satisfy

$$
\min_{1 \le i \le m} \bigg\{ 2\Big(\frac{4 n \underline{D}}{B^2} + \underline{a}_i B_i\Big) - \bar{a}_i \sum_{j=1}^{m} \Big( L_j w_{ij}^{+} + M_j h_{ij}^{+} + L_i w_{ji}^{+} \Big) \bigg\} > \max_{1 \le i \le m} \bigg\{ M_i \sum_{j=1}^{m} \bar{a}_j h_{ji}^{+} \bigg\} > 0,
\tag{27}
$$

where B = max{b_q − a_q} and D̲ = min{d_iq} for i = 1, 2, …, m, q = 1, 2, …, n, and the continuous functions σ_k(u) satisfy 0 < σ_1(u_i) < σ_2(u_i) < ⋯ and

$$
\sigma_k(u_i) \to \infty \ \text{ as } \ k \to \infty
$$

uniformly in u_i ∈ R for all i = 1, 2, …, m.
Example 2 
([109]). Consider again the impulsive reaction–diffusion DCNN model (10) for n = m = 2, Ω = [0, 1] × [0, 2], with activation functions and delays as in Example 1, a_i(u_i) = 1, b_1(u_i) = 2u_i, b_2(u_i) = u_i, i = 1, 2, variable impulsive perturbations of the type σ_k(u_i) = |u_i| + k, and an impulsive control defined by

$$
u(t^{+},x) = \begin{pmatrix} 1 - \dfrac{1}{2k} & 0 \\ 0 & 1 - \dfrac{1}{3k} \end{pmatrix} u(t,x), \quad t = \sigma_k\big(u(t,x)\big), \quad k = 1, 2, \dots,
$$

where the rest of the parameters are

$$
(w_{ij})(t) = \begin{pmatrix} 0.5 - 0.2\cos(t) & 0.5 - 0.1\sin(t) \\ 0.6 - 0.4\sin(t) & 0.3 - 0.2\cos(t) \end{pmatrix}, \quad (h_{ij})(t) = \begin{pmatrix} 0.1 + 0.3\cos(t) & 0.2 - 0.3\sin(t) \\ 0.2 - 0.1\sin(t) & 0.5 - 0.1\cos(t) \end{pmatrix},
$$

$$
(D_{iq})_{2\times 2} = \begin{pmatrix} 3 + \sin t & 0 \\ 0 & 2 + \cos t \end{pmatrix}.
$$

Since the inequality (27) holds for L_1 = L_2 = M_1 = M_2 = 1, B = 2, D̲ = 1 and a̲_i = ā_i = 1, i = 1, 2, and the impulsive controllers satisfy (22), the manifold

$$
M = [-1, \infty) \times \Omega \times \{ u \in \mathbb{R}_+^2 : u = u^C \},
$$

determined by a constant solution u^C = (u_1^C, u_2^C)^T of the considered model, is globally exponentially stable. This guarantees the global exponential synchronization of the master system (without impulses) and the impulsive response system with respect to the manifold M. Hence, the demonstrated extended stability criteria can be used as impulsive synchronization criteria, which are useful for various applied phenomena where some classical stability concepts cannot be applied. The illustrated criteria thus clarify the role of the impulses as suitable stability strategies for the dynamics of a neural network system.
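As with Example 1, the inequality (27) for Example 2 can be checked numerically, assuming the grouping of terms stated in the criterion above; the sup-norms are read off the time-varying matrices, and d₁₁ = 2, d₂₂ = 1 are the lower bounds of the diffusion coefficients, so D̲ = 1.

```python
# Sup-norms from Example 2; n = 2, B = max(b_q - a_q) = 2, D_low = 1,
# L = M = 1, a = 1, and B_i = (2, 1) from b_1(u) = 2u, b_2(u) = u.
w_sup = [[0.7, 0.6], [1.0, 0.5]]
h_sup = [[0.4, 0.5], [0.3, 0.6]]
Bi = [2.0, 1.0]
n = m = 2
B_len, D_low = 2.0, 1.0

lhs = min(
    2 * (4 * n * D_low / B_len ** 2 + Bi[i])
    - sum(w_sup[i][j] + h_sup[i][j] + w_sup[j][i] for j in range(m))
    for i in range(m)
)
rhs = max(sum(h_sup[j][i] for j in range(m)) for i in range(m))
print(round(lhs, 10), round(rhs, 10))  # 2.5 1.1 -> inequality (27) holds
```

Here the binding index is i = 2, where the self-inhibition B₂ = 1 is weakest, yet the margin 2.5 > 1.1 remains comfortable.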

3.3. Practical Stability with Respect to Manifolds

The conception of practical stability with respect to a manifold determined by a specific function has been applied in [108] to the BAM model (11), as well as to the particular single-layer case. For the latter, the paper [108] studies the global practical exponential stability of the impulsive control DCNN model of Cohen–Grossberg type
$$
\begin{aligned}
\dot{x}_i(t) ={} & -a_i\big(x_i(t)\big)\Big[b_i\big(x_i(t)\big) - I_i - \sum_{j=1}^{n} w_{ij}(t) f_j\big(x_j(t)\big) - \sum_{j=1}^{n} h_{ij}(t) g_j\big(x_j(t-\tau_j(t))\big)\Big], \quad t \neq \sigma_k\big(x(t)\big), \\
x_i(t^{+}) - x_i(t) ={} & P_{ik}\big(x_i(t)\big), \quad t = \sigma_k\big(x(t)\big),
\end{aligned}
\tag{30}
$$

where t > 0, i = 1, 2, …, n, σ_k: R^n → R, k = 1, 2, …, with respect to a function h: [−τ, ∞) × R^n → R, which defines a manifold of the type (19).
The global practical exponential stability criteria for the model (30) with respect to the function h consist of the following assumptions:
  • P_ik(x_i(t)) = −γ_ik x_i(t), |1 − γ_ik| ≤ a̲/ā, t = σ_k(x(t));
  • a̲ min_{1≤i≤n} { B_i − Σ_{j=1}^{n} |w_ji| L_i } > ā max_{1≤i≤n} { Σ_{j=1}^{n} |h_ji| M_i } > 0,
where a̲ = min_{1≤i≤n} a̲_i, ā = max_{1≤i≤n} ā_i, together with the existence of a positive constant Q such that Σ_{i=1}^{n} |I_i| < A/Q for a given A > 0.
Example 3. 
As an example, let us consider the model (30) with n = 2, where I_1 = I_2 = 0.03, a_1(x_1) = 3 + 0.2 sin(x_1), a_2(x_2) = 4 − 0.1 cos(x_2), b_i(x_i) = 3x_i, i = 1, 2, activation functions as in Example 1, 0 ≤ τ_i(t) ≤ 1, w_11 = 1, w_12 = 0.5, w_21 = 0.6, w_22 = 0.5, h_11 = 0.2, h_12 = 0.1, h_21 = 0.15, h_22 = 0.1, σ_k(x) = |x| + 2k, and γ_ik = 1 − 1/(3k), i = 1, 2, k = 1, 2, ….
For the above choice of the model's parameters and 0 < A < 0.025, the model (30) is practically globally exponentially stable with respect to the function h = |x_1| + |x_2| [108].
To give a better interpretation of the practical stability with respect to manifolds notion, the graph of the function h = |x_1| + |x_2| is shown in Figure 3. Instead of a single trajectory, the function h determines a manifold of trajectories whose stability behavior is of interest. Hence, this extended notion is applicable when we have to study the stability properties of a region of trajectories under constraints defined by the function h. The practical stability of the neural network model (30) with respect to the manifold defined by h means that any neuronal state that starts close to the h-manifold oscillates around it and remains within a bound defined by the constant A, so that its behavior is admissible from the practical point of view.
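An exploratory Euler sketch of Example 3 illustrates this "small tube around the manifold" behavior. Two simplifications below are ours, not from [108]: the bounded delay is frozen at τ = 0.5, and the state-dependent impulse times σ_k(x) = |x| + 2k are approximated by t_k = 2k, which is reasonable here because |x(t)| stays small.

```python
import math

def act(u):  # activation function from Example 1
    return 0.5 * (abs(u + 1.0) - abs(u - 1.0))

w = [[1.0, 0.5], [0.6, 0.5]]
h = [[0.2, 0.1], [0.15, 0.1]]
I = [0.03, 0.03]
dt, tau, T = 0.001, 0.5, 40.0
lag = int(tau / dt)
hist = [[0.02, 0.02]]            # constant initial function on [-tau, 0]
k, h_vals = 1, []
for step in range(int(T / dt)):
    x = hist[-1]
    xd = hist[max(0, len(hist) - 1 - lag)]   # delayed state x(t - tau)
    a = [3.0 + 0.2 * math.sin(x[0]), 4.0 - 0.1 * math.cos(x[1])]
    new = [x[i] - dt * a[i] * (3.0 * x[i] - I[i]
           - sum(w[i][j] * act(x[j]) for j in range(2))
           - sum(h[i][j] * act(xd[j]) for j in range(2)))
           for i in range(2)]
    if (step + 1) * dt >= 2 * k:             # approximate impulse moment t_k
        gamma = 1.0 - 1.0 / (3 * k)          # gamma_ik = 1 - 1/(3k)
        new = [(1.0 - gamma) * v for v in new]
        k += 1
    hist.append(new)
    h_vals.append(abs(new[0]) + abs(new[1]))  # the manifold function h

# The trajectory stays in a small tube around the h-manifold:
print(max(h_vals) < 0.2, min(h_vals) >= 0.0)  # True True
```

The trajectory hovers near a small residual level set by the inputs I_i, rather than converging to zero, which is exactly the practically stable (but not asymptotically stable toward the manifold) behavior the definition permits.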
To further demonstrate the feasibility of the extended practical stability with respect to manifolds impulsive control strategy, we apply it to the Cohen–Grossberg BAM delay CNN model (11), considering the function ĥ = ĥ(t, z), ĥ: [−ν, ∞) × R^{n+m} → R,

$$
\hat{h} = x_1^2 + x_2^2 + \dots + x_n^2 + y_1^2 + y_2^2 + \dots + y_m^2,
$$

which defines an ĥ-manifold of the type (19).
It is proven in [108] that, for a given A > 0, if there exists a positive constant Q̂ such that Σ_{i=1}^{n} |I_i| + Σ_{j=1}^{m} |J_j| < A/Q̂, the system's parameters satisfy

$$
\hat{p}_1 = \underline{A} \bigg[ \min_{1 \le i \le n} \Big( B_i - \sum_{j=1}^{m} |\hat{w}_{ij}|\, \hat{L}_i \Big) + \min_{1 \le j \le m} \Big( \hat{B}_j - \sum_{i=1}^{n} |w_{ji}|\, L_j \Big) \bigg] > \hat{p}_2 = \bar{A} \bigg[ \max_{1 \le j \le m} \sum_{i=1}^{n} |h_{ji}|\, M_j + \max_{1 \le i \le n} \sum_{j=1}^{m} |\hat{h}_{ij}|\, \hat{M}_i \bigg],
\tag{31}
$$

where

$$
\underline{A} = \min\Big( \min_{1 \le i \le n} \underline{a}_i, \; \min_{1 \le j \le m} \underline{\hat{a}}_j \Big), \quad \bar{A} = \max\Big( \max_{1 \le i \le n} \bar{a}_i, \; \max_{1 \le j \le m} \bar{\hat{a}}_j \Big), \quad \hat{p} = \hat{p}_1 - \hat{p}_2 > 0,
$$

L_j, L̂_i, M_j, M̂_i are the Lipschitz constants of the activation functions, and the impulsive functions P_ik, Q_jk are such that

$$
P_{ik}\big(x_i(t)\big) = -\gamma_{ik}\, x_i(t), \quad Q_{jk}\big(y_j(t)\big) = -\mu_{jk}\, y_j(t), \quad \max\big\{ |1 - \gamma_{ik}|, \; |1 - \mu_{jk}| \big\} \le \underline{A}/\bar{A},
\tag{32}
$$

i = 1, 2, …, n, j = 1, 2, …, m, k = 1, 2, …, then system (11) is practically globally exponentially stable with respect to the function ĥ.
For A = 0 , the global exponential stability of the impulsive BAM DCNN (11) has been studied in [113].
Example 4. 
If we consider the Cohen–Grossberg-type delayed BAM neural network (11) for n = m = 2 and t > 0, where

$$
x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \quad y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix},
$$

a_i(ι_i) = â_j(ι_j) = 1, b_1(ι_i) = 2ι_i, b_2(ι_i) = 3ι_i, b̂_1(ι_j) = b̂_2(ι_j) = 2ι_j, i, j = 1, 2, I_1 = I_2 = J_1 = J_2 = 1, 0 ≤ τ_j(t) ≤ 1, 0 ≤ τ̂_i(t) ≤ 1,

$$
f_j(\iota_j) = g_j(\iota_j) = \hat{f}_i(\iota_i) = \hat{g}_i(\iota_i) = \frac{|\iota_j + 1| - |\iota_j - 1|}{2}, \quad i, j = 1, 2,
$$

$$
(w_{ij})_{2\times 2} = \begin{pmatrix} 1 & 0.5 \\ 0.6 & 0.5 \end{pmatrix}, \quad (h_{ij})_{2\times 2} = \begin{pmatrix} 0.3 & 0.4 \\ 0.4 & 0.2 \end{pmatrix},
$$

$$
(\hat{w}_{ij})_{2\times 2} = \begin{pmatrix} 0.7 & 0.6 \\ 0.9 & 0.8 \end{pmatrix}, \quad (\hat{h}_{ij})_{2\times 2} = \begin{pmatrix} 0.2 & 0.1 \\ 0.1 & 0.2 \end{pmatrix},
$$

under impulsive conditions in the form

$$
\begin{aligned}
x(t^{+}) - x(t) &= \begin{pmatrix} -1 + \dfrac{1}{2k} & 0 \\ 0 & \dfrac{1}{2k} \end{pmatrix} x(t), \quad t = \sigma_k\big(x(t), y(t)\big), \quad k = 1, 2, \dots, \\
y(t^{+}) - y(t) &= \begin{pmatrix} -1 + \dfrac{1}{3k} & 0 \\ 0 & -1 + \dfrac{1}{3k} \end{pmatrix} y(t), \quad t = \sigma_k\big(x(t), y(t)\big), \quad k = 1, 2, \dots,
\end{aligned}
$$

for σ_k(x, y) = |x| + |y| + k, k = 1, 2, …, then (31) holds for

$$
\underline{a}_i = \bar{a}_i = 1, \quad \underline{\hat{a}}_i = \bar{\hat{a}}_i = 1, \quad B_1 = 2, \; B_2 = 3, \quad \hat{B}_1 = \hat{B}_2 = 2, \quad L_1 = L_2 = M_1 = M_2 = 1, \quad \hat{L}_1 = \hat{L}_2 = \hat{M}_1 = \hat{M}_2 = 1.
$$
However, the condition (32) is not satisfied because
$$\gamma_{2k} = -\frac{1}{2k} < 0, \quad k = 1, 2, \ldots.$$
Hence, drawing a conclusion about the practical stability behavior of the neuronal states of (11) is impossible using the results presented here. Indeed, for this particular choice of the model's parameters and impulsive conditions, and for the manifold-defining function $\hat h = x_1^2 + x_2^2 + y_1^2 + y_2^2$, the trajectory of $x_2(t)$ is unstable with respect to $\hat h$, as illustrated in Figure 4.
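The failure of condition (32) in this example can also be seen directly: the gain of the second x-neuron is negative for every k, so the deviation |1 − γ_2k| always exceeds the admissible bound A̲/Ā = 1 (here all amplification bounds equal 1). A few lines of Python (our illustration, not code from the cited papers) make this explicit:

```python
# Example 4 gains: gamma_2k is negative for every k, so
# |1 - gamma_2k| = 1 + 1/(2k) always exceeds the bound A_lo/A_hi = 1,
# and the impulsive-gain condition can never hold.
bound = 1.0                      # A_lo / A_hi, since a_lo = a_hi = 1
violations = []
for k in range(1, 6):
    gamma_2k = -1 / (2 * k)      # second diagonal impulsive gain
    violations.append(abs(1 - gamma_2k) > bound)
print(violations)                # [True, True, True, True, True]
```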
Examples 3 and 4 again illustrate the strength of the proposed criteria. These examples also demonstrate how the extended stability dynamics of the states of the considered Cohen–Grossberg BAM delayed CNNs may be controlled via suitable impulsive controllers. In addition, the opportunity for the extension of the stability concepts to fractional-order models is also demonstrated.

3.4. Lipschitz Stability

The extended notion of Lipschitz stability is gradually gaining popularity and receiving increasing attention, especially in the fractional-order modeling approach [157,158,160]. Very recently, it has also been applied to some impulsive control fractional neural network models [162].
For the model (20), some global uniform Lipschitz stability criteria are proposed in [162] using appropriate Lyapunov-type functions. It is proven that the fractional reaction–diffusion neural network model (20) of Cohen–Grossberg type is globally uniformly Lipschitz stable if the impulsive control is designed as in (22) and (24), and at least one of the following criteria is satisfied for the models’ parameters.
  • There exists a function $\beta(t)$, continuous on each interval $(t_{k-1}, t_k]$, $k = 1, 2, \ldots$, such that
    $$\lambda_1 \le \lambda_2 \bar\beta(t);$$
  • For $\bar D_i = \sum_{q=1}^n \frac{4 n\, d_{iq}}{B^2}$,
    $$\min_{1 \le i \le m}\Big(\frac{\bar D_i}{\underline{a}} + B_i - L_i \sum_{j=1}^m w_{ji}^+\Big) > \frac{\overline{a}}{\underline{a}}\, \max_{1 \le i \le m}\Big(M_i \sum_{j=1}^m h_{ji}^+\Big)$$
    and
    $$|1 - \gamma_{ik}| \le \frac{\underline{a}}{\overline{a}}, \quad i = 1, 2, \ldots, m, \quad k = 1, 2, \ldots.$$
As is seen, for $\beta(t) = 0$, $t \in \mathbb{R}_+$, the inequality (35) reduces to (27). Hence, in this case, Lipschitz stability can be considered a generalization of the uniform stability of an integral manifold relevant to the model (20). For more detailed results, see [162].
Moreover, for α = 1 , the established result may be applied to the impulsive control neural network model (10).
Example 5. 
Consider the model (20) for $m = 3$, $n = 2$, $\Omega = \{(x_1, x_2)^T : 0 \le x_1, x_2 \le 2\} \subset \mathbb{R}^2$, $a_i(\iota) = 1.5 + 0.5\sin(\iota)$, $i = 1, 2, 3$, $b_1(\iota) = 2.9\iota$, $b_2(\iota) = 3.1\iota$, $b_3(\iota) = 2.5\iota$, $f_i(\iota_i) = g_i(\iota_i) = 0.5(|\iota_i + 1| - |\iota_i - 1|)$, $0 \le \tau_i(t) \le 2$,
$$(w_{ij})(t) = \begin{pmatrix} 0.5 - 0.2\cos(t) & 0.4 + 0.1\sin(t) & 0.1 + 0.3\cos(t) \\ 0.3 + 0.1\sin(t) & 0.2 - 0.1\cos(t) & 0.2 + 0.4\sin(t) \\ 0.3 - 0.2\sin(t) & 0.1 - 0.2\sin(t) & 0.3 + 0.4\cos(t) \end{pmatrix},$$
$$(h_{ij})(t) = \begin{pmatrix} 0.4 & 1 & 0.3 \\ 0.2 & 0.3 & 0 \\ 0.3 & 0.5 & 1 \end{pmatrix}, \quad D_{iq} = d_{iq} = \begin{pmatrix} 0.8 & 0.85 \\ 1.125 & 0.9 \\ 0.9 & 1.05 \end{pmatrix}.$$
If the impulsive control is regulated by the constants γ i k
$$\gamma_{ik} = 1 + \frac{\cos(1 + k)}{2i}, \quad i = 1, 2, 3, \quad k = 1, 2, \ldots,$$
then (35) and (36) hold for $B = 2$, $L_i = M_i = 1$, $\underline{a}_i = 1$, $\overline{a}_i = 2$, $i = 1, 2, 3$, $\underline{B}_1 = 2.9$, $\underline{B}_2 = 3.1$, $\underline{B}_3 = 2.5$, $\bar D_1 = 3.3$, $\bar D_2 = 4.05$, $\bar D_3 = 3.9$, and, hence, the fractional impulsive reaction–diffusion neural network (20) of the Cohen–Grossberg type is globally uniformly Lipschitz stable.
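The constants quoted in this example can be reproduced numerically. The following sketch is our own illustration (not code from [162]); the sup-norms w_ji^+ and h_ji^+ of the time-varying weights are entered by hand, and the script recomputes the diffusion bounds D̄_i and checks both the gain inequality and the impulsive-gain bound:

```python
import numpy as np

B, n = 2.0, 2                      # diameter of Omega, space dimension
a_lo, a_hi = 1.0, 2.0              # bounds of a_i(s) = 1.5 + 0.5*sin(s)
B_lo = np.array([2.9, 3.1, 2.5])   # lower bounds of b_i
L = M = np.ones(3)                 # Lipschitz constants of f_i, g_i
d = np.array([[0.8, 0.85], [1.125, 0.9], [0.9, 1.05]])

# D_bar_i = sum_q 4*n*d_iq / B**2
D_bar = (4 * n * d / B**2).sum(axis=1)   # -> [3.3, 4.05, 3.9]

# sup-norms of the connection weights, computed by hand from the example
w_plus = np.array([[0.7, 0.5, 0.4],
                   [0.4, 0.3, 0.6],
                   [0.5, 0.3, 0.7]])
h_plus = np.array([[0.4, 1.0, 0.3],
                   [0.2, 0.3, 0.0],
                   [0.3, 0.5, 1.0]])

# gain inequality: min_i(D_bar_i/a_lo + B_i - L_i*sum_j w_ji^+)
#                  > (a_hi/a_lo) * max_i(M_i*sum_j h_ji^+)
lhs = (D_bar / a_lo + B_lo - L * w_plus.sum(axis=0)).min()
rhs = a_hi / a_lo * (M * h_plus.sum(axis=0)).max()
print(lhs, rhs)                    # 4.6 > 3.6

# impulsive gains: |1 - gamma_ik| = |cos(1+k)|/(2i) <= 1/2 = a_lo/a_hi
ks = np.arange(1, 100)
dev = np.abs(np.cos(1 + ks))[None, :] / (2 * np.array([1, 2, 3]))[:, None]
print(bool(dev.max() <= a_lo / a_hi))    # True
```

Both checks pass with a comfortable margin (4.6 versus 3.6 for the gain inequality), which is consistent with the Lipschitz stability conclusion of the example.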

4. Discussion

Different classes of CNNs are intensively applied in numerous areas of science, engineering and medicine to model and study relevant phenomena. One of the directions in the recent development of CNN modeling is connected to the use of impulsive modeling approaches. As such, numerous impulsive control CNN models have been introduced [54,55,56,57,58,59,61,67,68,69,70,71,72,107,108,112,113,116,117,118,119,120,121,122,123,124,128]. The use of the impulsive control approach in neuroscience is motivated by efforts to more adequately model processes in the presence of short-term disturbances caused by external interference, the natural environment or inherent in neuronal activity.
CNNs under impulsive disturbances are represented by two-component systems consisting of a continuous component and a discrete component; in effect, they are hybrid models under different hypotheses on the dynamical characteristics of the two components. Thus, the impulsive control modeling paradigm bridges the gap between continuous and discrete CNN models and offers opportunities for refinements of applied neuronal systems.
Stability of the neural network states is a key property necessary for the efficient application of a CNN model. In the presence of impulsive effects, it is crucial to analyze how impulsive perturbations affect the stability behavior of the neural network model, and to determine how impulses can be used to manage the stability properties of a neural network system. External and internal short-term (impulsive) perturbations can also be used to design impulsive control strategies for the stability and synchronization of CNN models.
Parallel to the development of the impulsive control modeling approach, the variety of CNNs to which it is applied has been also expanded. Recently, the classes of CNN models to which this approach has been applied include DCNNs, DCNNs with reaction–diffusion terms, DCNNs of Cohen–Grossberg type, BAM DCNNs, fractional-order DCNNs and some others. See, for example, [57,58,72,104,108,109,113,116,117,118,119,120,123,162] and the references therein.
Although a number of classical global asymptotic stability [29,59,79,85,89,92,98,127] results exist for some major classes of CNNs, this stability notion is not applicable in the cases where several steady states exist, or an entire region with neuronal states is considered, or the classical stability criteria are not met, but the stability performance of a CNN model is acceptable from an applied point of view. In such cases, some extended stability notions, such as practical stability, stability with respect to a set or Lipschitz stability, are of particular interest. In their recent research, the authors contributed to the development of these stability notions and enhanced the rigorous understanding of extended impulsive stability and control strategies for DCNN models, including models with fractional dynamics.
This paper has overviewed the research area of the application of extended stability criteria to some main classes of impulsive and fractional DCNNs. The main advantages of using extended stability strategies are mainly demonstrated via examples in terms of their implementation and usage in neural network models.
A chart of the existing results on extended stability strategies applied to impulsive and fractional DCNNs is provided in Table 1.
It is seen from Table 1 that some stability concepts have still not been applied to important classes of neural network models. Hence, the framework of extended stability strategies for impulsive control and fractional-order neural network models is far from complete and needs further development. The presented chart also highlights future research directions on this important topic.
The abbreviations used in Table 1 are as follows:
  • PS = Practical stability;
  • SS = Stability of sets;
  • SRhM = Stability with respect to h-manifolds;
  • SRIM = Stability with respect to integral manifolds;
  • PSRhM = Practical stability with respect to h-manifolds;
  • PSRIM = Practical stability with respect to integral manifolds;
  • LS = Lipschitz stability;
  • DCNNs = Delayed cellular neural networks;
  • RDDCNNs = Reaction–diffusion delayed cellular neural networks;
  • CGDCNNs = Cohen–Grossberg delayed cellular neural networks;
  • RDCGDCNNs = Reaction–diffusion Cohen–Grossberg delayed cellular neural networks;
  • BAMDCNNs = BAM delayed cellular neural networks;
  • FDCNNs = Fractional delayed cellular neural networks;
  • FRDDCNNs = Fractional reaction–diffusion delayed cellular neural networks;
  • FCGDCNNs = Fractional Cohen–Grossberg delayed cellular neural networks;
  • FRDCGDCNNs = Fractional reaction–diffusion Cohen–Grossberg delayed cellular neural networks;
  • FBAMDCNNs = Fractional BAM delayed cellular neural networks.
In conclusion, the presented impulsive control and fractional DCNN models provide excellent perspectives and tools: they offer a framework enabling control bioscientists to design neural network models that are resilient in the face of impulsive shocks, fractional dynamics and uncertainty. In addition, the developed extended stability and control criteria for such DCNNs can be used by neuroscientists to determine how neurons respond to impulsive stimulation, and how such stimulation can be used to stabilize a model. Such results are also important in the verification of DCNNs designed to deliver an optimal solution regardless of the initial data.

5. Conclusions

Neural network modeling is a vital research area with broad applications. This paper is an overview of the authors' main results on impulsive control neural network modeling applied to different classes of DCNNs, including fractional-order models. It also presents the recent progress in extended stability and control strategies applied to such neural network models. The proposed studies provide a better theoretical understanding of various types of impulsive control neural network systems, such as impulsive CNNs, impulsive DCNNs, impulsive BAM DCNNs, impulsive Hopfield neural networks, impulsive Cohen–Grossberg DCNNs, impulsive reaction–diffusion DCNNs and their fractional-order generalizations. The presented extended stability strategies can be used to understand the extent to which DCNNs change under the influence of internal or external factors (for instance, external forces). The developed analysis may lead to a deeper understanding of the modeling approach by identifying the mechanisms responsible for stable model behavior.

Author Contributions

Conceptualization, G.S. and I.S.; methodology, G.S. and I.S.; formal analysis, G.S. and I.S.; investigation, G.S. and I.S.; writing—original draft preparation, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Güçlü, U.; van Gerven, M. Probing human brain function with artificial neural networks. In Computational Models of Brain and Behavior; Moustafa, A.A., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2017; pp. 413–423. [Google Scholar]
  2. Kozachkov, L.; Lundqvist, M.; Slotine, J.J.; Miller, E.K. Achieving stable dynamics in neural circuits. PLoS Comput. Biol. 2020, 16, e1007659. [Google Scholar]
  3. Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef] [PubMed]
  4. Zapf, B.; Haubner, J.; Kuchta, M.; Ringstad, G.; Eide, P.K.; Marda, K.A. Investigating molecular transport in the human brain from MRI with physics-informed neural networks. Sci. Rep. 2022, 12, 15475. [Google Scholar] [CrossRef] [PubMed]
  5. Moustafa, A.A. Computational Models of Brain and Behavior, 1st ed.; John Wiley & Sons: Hoboken, NJ, USA, 2017; ISBN 978-1119159063. [Google Scholar]
  6. Arbib, M.A. Brains, Machines, and Mathematics, 2nd ed.; Springer: New York, NY, USA, 1987; ISBN 978-0387965390. [Google Scholar]
  7. Haykin, S. Neural Networks: A Comprehensive Foundation, 1st ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1998; ISBN 9780132733502. [Google Scholar]
  8. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. CAS 1988, 35, 1257–1272. [Google Scholar] [CrossRef]
  9. Chua, L.O.; Yang, L. Cellular neural networks: Applications. IEEE Trans. Circuits Syst. CAS 1988, 35, 1273–1290. [Google Scholar] [CrossRef]
  10. Liu, Y.H.; Zhu, J.; Constantinidis, C.; Zhou, X. Emergence of prefrontal neuron maturation properties by training recurrent neural networks in cognitive tasks. iScience 2021, 24, 103178. [Google Scholar] [CrossRef]
  11. Al-Darabsah, I.; Chen, L.; Nicola, W.; Campbell, S.A. The impact of small time delays on the onset of oscillations and synchrony in brain networks. Front. Syst. Neurosci. 2021, 15, 688517. [Google Scholar] [CrossRef]
  12. Popovych, O.V.; Tass, P.A. Adaptive delivery of continuous and delayed feedback deep brain stimulation—A computational study. Sci. Rep. 2019, 9, 10585. [Google Scholar] [CrossRef] [Green Version]
  13. Yu, H.; Wang, J.; Liu, C.; Deng, B.; Wei, X. Delay-induced synchronization transitions in modular scale-free neuronal networks with hybrid electrical and chemical synapses. Physics A 2014, 405, 25–34. [Google Scholar] [CrossRef]
  14. Ziaeemehr, A.; Zarei, M.; Valizadeh, A.; Mirasso, C.R. Frequency-dependent organization of the brain’s functional network through delayed-interactions. Neural Netw. 2020, 132, 155–165. [Google Scholar] [CrossRef]
  15. Lara, T.; Ecomoviciz, P.; Wu, J. Delayed cellular neural networks: Model, applications, implementations, and dynamics. Differ. Equ. Dyn. Syst. 2002, 3, 71–97. [Google Scholar]
  16. Sun, X.; Li, G. Synchronization transitions induced by partial time delay in a excitatory-inhibitory coupled neuronal network. Nonlinear Dyn. 2017, 89, 2509–2520. [Google Scholar] [CrossRef]
  17. Xu, Y. Weighted pseudo-almost periodic delayed cellular neural networks. Neural Comput. Appl. 2018, 30, 2453–2458. [Google Scholar] [CrossRef]
  18. Lefèvre, J.; Mangin, J.-F. A reaction-diffusion model of human brain development. PLoS Comput. Biol. 2010, 6, e1000749. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Jia, Y.; Zhao, Q.; Yin, H.; Guo, S.; Sun, M.; Yang, Z.; Zhao, X. Reaction-diffusion model-based research on formation mechanism of neuron dendritic spine patterns. Front. Neurorobot. 2021, 15, 563682. [Google Scholar] [CrossRef]
  20. Ouyang, M.; Dubois, J.; Yu, Q.; Mukherjee, P.; Huang, H. Delineation of early brain development from fetuses to infants with diffusion MRI and beyond. NeuroImage 2019, 185, 836–850. [Google Scholar] [CrossRef]
  21. Abdelnour, F.; Voss, H.U.; Raj, A. Network diffusion accurately models the relationship between structural and functional brain connectivity networks. NeuroImage 2014, 90, 335–347. [Google Scholar] [CrossRef] [Green Version]
  22. Marinov, T.; Santamaria, F. Modeling the effects of anomalous diffusion on synaptic plasticity. BMC Neurosci. 2013, 14, P343. [Google Scholar] [CrossRef] [Green Version]
  23. Alshammari, S.; Al-Sawalha, M.M.; Humaidi, J.R. Fractional view study of the brusselator reaction–diffusion model occurring in chemical reactions. Fractal Fract. 2023, 7, 108. [Google Scholar] [CrossRef]
  24. Landge, A.; Jordan, B.; Diego, X.; Mueller, P. Pattern formation mechanisms of self-organizing reaction-diffusion systems. Dev. Biol. 2020, 460, 2–11. [Google Scholar] [CrossRef]
  25. Li, A.; Chen, R.; Farimani, A.B.; Zhang, Y.J. Reaction diffusion system prediction based on convolutional neural network. Sci. Rep. 2020, 10, 3894. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Cohen, M.A.; Grossberg, S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 1983, 13, 815–826. [Google Scholar] [CrossRef]
  27. Aouiti, C.; Assali, E.A. Nonlinear Lipschitz measure and adaptive control for stability and synchronization in delayed inertial Cohen–Grossberg-type neural networks. Int. J. Adapt. Control 2019, 33, 1457–1477. [Google Scholar] [CrossRef]
  28. Lu, W.; Chen, T. Dynamical behaviors of Cohen–Grossberg neural networks with discontinuous activation functions. Neural Netw. 2005, 18, 231–242. [Google Scholar] [CrossRef]
  29. Ozcan, N. Stability analysis of Cohen—Grossberg neural networks of neutral-type: Multiple delays case. Neural Netw. 2019, 113, 20–27. [Google Scholar] [CrossRef] [PubMed]
  30. Peng, D.; Li, J.; Xu, W.; Li, X. Finite-time synchronization of coupled Cohen-Grossberg neural networks with mixed time delays. J. Frankl. Inst. 2020, 357, 11349–11367. [Google Scholar] [CrossRef]
  31. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [Green Version]
  32. Dong, T.; Gong, X.; Huang, T. Zero-Hopf bifurcation of a memristive synaptic Hopfield neural network with time delay. Neural Netw. 2022, 149, 146–156. [Google Scholar] [CrossRef] [PubMed]
  33. Ma, T.; Mou, J.; Li, B.; Banerjee, S.; Yan, H.Z. Study on complex dynamical behavior of the fractional-order Hopfield neural network system and its implementation. Fractal Fract. 2022, 6, 637. [Google Scholar] [CrossRef]
  34. Kosko, B. Adaptive bi-directional associative memories. Appl. Opt. 1987, 26, 4947–4960. [Google Scholar] [CrossRef] [Green Version]
  35. Kosko, B. Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 1988, 18, 49–60. [Google Scholar] [CrossRef] [Green Version]
  36. Kosko, B. Neural Networks and Fuzzy Systems: A Dynamical System Approach to Machine Intelligence; Prentice-Hall: Englewood Cliffs, NJ, USA, 1992; ISBN 0136114350, 9780136114352. [Google Scholar]
  37. Wang, H.; Song, Q.; Duan, C. LMI criteria on exponential stability of BAM neural networks with both time-varying delays and general activation functions. Math. Comput. Simul. 2010, 81, 837–850. [Google Scholar] [CrossRef]
  38. Chau, F.T.; Cheung, B.; Tam, K.Y.; Li, L.K. Application of a bi-directional associative memory (BAM) network in computer assisted learning in chemistry. Comput. Chem. 1994, 18, 359–362. [Google Scholar] [CrossRef] [PubMed]
  39. Palm, G. Neural associative memories and sparse coding. Neural Netw. 2013, 37, 165–171. [Google Scholar] [CrossRef]
  40. Tryon, W.W. A bidirectional associative memory explanation of posttraumatic stress disorder. Clin. Psychol. Rev. 1999, 19, 789–818. [Google Scholar] [CrossRef] [PubMed]
  41. Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods, 1st ed.; World Scientific: Singapore, 2012; ISBN 978-981-4355-20-9. [Google Scholar]
  42. Magin, R. Fractional Calculus in Bioengineering, 1st ed.; Begell House: Redding, CA, USA, 2006; ISBN 978-1567002157. [Google Scholar]
  43. Petráš, I. Fractional-Order Nonlinear Systems, 1st ed.; Springer: Heidelberg, Germany; Dordrecht, The Netherlands; London, UK; New York, NY, USA, 2011; ISBN 978-3-642-18101-6. [Google Scholar]
  44. Sandev, T.; Tomovski, Z. Fractional Equations and Models, Theory and Applications, 1st ed.; Springer: Cham, Switzerland, 2019; ISBN 978-3-030-29616-2. [Google Scholar]
  45. Teka, W.; Marinov, T.M.; Santamaria, F. Neuronal spike timing adaptation described with a fractional leaky integrate-and-fire model. PLoS Comput. Biol. 2014, 10, e1003526. [Google Scholar] [CrossRef]
  46. Coronel-Escamilla, A.; Gomez-Aguilar, J.F.; Stamova, I.; Santamaria, F. Fractional order controllers increase the robustness of closed-loop deep brain stimulation systems. Chaos Solitons Fract. 2020, 17, 110149. [Google Scholar] [CrossRef]
  47. Mondal, A.; Sharma, S.K.; Upadhyay, R.K.; Mondal, A. Firing activities of a fractional-order FitzHugh-Rinzel bursting neuron model and its coupled dynamics. Sci. Rep. 2019, 9, 15721. [Google Scholar] [CrossRef] [Green Version]
  48. Lundstrom, B.N.; Higgs, M.H.; Spain, W.J.; Fairhall, A.L. Fractional differentiation by neocortical pyramidal neurons. Nat. Neurosci. 2008, 11, 1335–1342. [Google Scholar] [CrossRef]
  49. Ganji, R.M.; Jafari, H.; Moshokoa, S.P.; Nkimo, N.S. A mathematical model and numerical solution for brain tumor derived using fractional operator. Results Phys. 2021, 28, 104671. [Google Scholar] [CrossRef]
  50. Jun, D.; Guang-Jun, Z.; Yong, X.; Hong, Y.; Jue, W. Dynamic behavior analysis of fractional-order Hindmarsh–Rose neuronal model. Cogn. Neurodyn. 2014, 8, 167–175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Li, P.; Lu, Y.; Xu, C.; Ren, J. Bifurcation phenomenon and control technique in fractional BAM neural network models concerning delays. Fractal Fract. 2023, 7, 7. [Google Scholar] [CrossRef]
  52. Ramakrishnan, B.; Parastesh, F.; Jafari, S.; Rajagopal, K.; Stamov, G.; Stamova, I. Synchronization in a multiplex network of nonidentical fractional-order neurons. Fractal Fract. 2022, 6, 169. [Google Scholar] [CrossRef]
  53. Weinberg, S.H.; Santamaria, F. History dependent neuronal activity modeled with fractional order dynamics. In Computational Models of Brain and Behavior; Moustafa, A.A., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2017; pp. 531–548. [Google Scholar]
  54. Guan, Z.H.; Lam, J.; Chen, G. On impulsive autoassociative neural networks. Neural Netw. 2000, 13, 63–69. [Google Scholar] [CrossRef] [PubMed]
  55. Hu, B.; Guan, Z.-H.; Chen, G.; Lewis, F.L. Multistability of delayed hybrid impulsive neural networks with application to associative memories. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1537–1551. [Google Scholar]
  56. Liu, C.; Liu, X.; Yang, H.; Zhang, G.; Cao, Q.; Huang, J. New stability results for impulsive neural networks with time delays. Neural Comput. Appl. 2019, 31, 6575–6586. [Google Scholar] [CrossRef]
  57. Stamov, G.; Stamova, I.; Martynyuk, A.; Stamov, T. Design and practical stability of a new class of impulsive fractional-like neural networks. Entropy 2020, 22, 337. [Google Scholar] [CrossRef] [Green Version]
  58. Stamov, G.; Stamova, I.; Spirova, C. Impulsive reaction-diffusion delayed models in biology: Integral manifolds approach. Entropy 2021, 23, 1631. [Google Scholar] [PubMed]
  59. Xu, B.; Liu, Z.; Teo, K.L. Global exponential stability of impulsive high-order Hopfield type neural networks with delays. Comput. Math. Appl. 2009, 57, 1959–1967. [Google Scholar] [CrossRef] [Green Version]
  60. Benchohra, M.; Henderson, J.; Ntouyas, S. Impulsive Differential Equations and Inclusions, 1st ed.; Hindawi Publishing Corporation: New York, NY, USA, 2006; ISBN 9789775945501. [Google Scholar]
  61. Li, X.; Song, S. Impulsive Systems with Delays: Stability and Control, 1st ed.; Science Press & Springer: Singapore, 2022; ISBN 978-981-16-4687-4. [Google Scholar]
  62. Stamova, I.M.; Stamov, G.T. Applied Impulsive Mathematical Models, 1st ed.; Springer: Cham, Switzerland, 2016; ISBN 978-3-319-28060-8/978-3-319-28061-5. [Google Scholar]
  63. Stamova, I.M.; Stamov, G.T. Functional and Impulsive Differential Equations of Fractional Order: Qualitative Analysis and Applications, 1st ed.; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2017; ISBN 9781498764834. [Google Scholar]
  64. Yang, T. Impulsive Control Theory, 1st ed.; Springer: Berlin, Germany, 2001; ISBN 978-3-540-47710-5. [Google Scholar]
  65. Yang, X.; Peng, D.; Lv, X.; Li, X. Recent progress in impulsive control systems. Math. Comput. Simulation 2019, 155, 244–268. [Google Scholar] [CrossRef]
  66. Cacace, F.; Cusimano, V.; Palumbo, P. Optimal impulsive control with application to antiangiogenic tumor therapy. IEEE Trans. Control Syst. Technol. 2020, 28, 106–117. [Google Scholar] [CrossRef]
  67. Cao, J.; Stamov, T.; Sotirov, S.; Sotirova, E.; Stamova, I. Impulsive control via variable impulsive perturbations on a generalized robust stability for Cohen—Grossberg neural networks with mixed delays. IEEE Access 2020, 8, 222890–222899. [Google Scholar] [CrossRef]
  68. Li, M.; Li, X.; Han, X.; Qiu, J. Leader-following synchronization of coupled time-delay neural networks via delayed impulsive control. Neurocomputing 2019, 357, 101–107. [Google Scholar] [CrossRef]
  69. Lv, X.; Li, X.; Cao, J.; Perc, M. Dynamical and static multisynchronization of coupled multistable neural networks via impulsive control. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 6062–6072. [Google Scholar] [CrossRef]
  70. Li, X.; Rakkiyappan, R. Impulsive controller design for exponential synchronization of chaotic neural networks with mixed delays. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 1515–1523. [Google Scholar] [CrossRef]
  71. Xu, Z.; Li, X.; Duan, P. Synchronization of complex networks with time-varying delay of unknown bound via delayed impulsive control. Neural Netw. 2020, 125, 224–232. [Google Scholar] [CrossRef]
  72. Stamov, T.; Stamova, I. Design of impulsive controllers and impulsive control strategy for the Mittag—Leffler stability behavior of fractional gene regulatory networks. Neurocomputing 2021, 424, 54–62. [Google Scholar] [CrossRef]
  73. Gatto, E.M.; Aldinio, V. Impulse control disorders in Parkinson’s disease. A brief and comprehensive review. Front. Neurol. 2019, 10, 351. [Google Scholar] [CrossRef] [Green Version]
  74. Dlala, M.; Saud Almutairi, A. Rapid exponential stabilization of nonlinear wave equation derived from brain activity via event-triggered impulsive control. Mathematics 2021, 9, 516. [Google Scholar] [CrossRef]
  75. Hammad, H.A.; De la Sen, M. Stability and controllability study for mixed integral fractional delay dynamic systems endowed with impulsive effects on time scales. Fractal Fract. 2023, 7, 92. [Google Scholar] [CrossRef]
  76. Ditzler, G.; Roveri, M.; Alippi, C.; Polikar, R. Learning in nonstationary environments: A survey. IEEE Comput. Intell. Mag. 2015, 10, 12–25. [Google Scholar] [CrossRef]
  77. Popa, C.-A. Neutral-type and mixed delays in fractional-order neural networks: Asymptotic stability analysis. Fractal Fract. 2023, 7, 36. [Google Scholar] [CrossRef]
  78. Yang, Z.; Zhang, J.; Hu, J.; Mei, J. New results on finite-time stability for fractional-order neural networks with proportional delay. Neurocomputing 2021, 442, 327–336. [Google Scholar] [CrossRef]
  79. Stamova, I. Stability Analysis of Impulsive Functional Differential Equations, 1st ed.; De Gruyter: Berlin, Germany, 2009; ISBN 9783110221817. [Google Scholar]
  80. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar]
  81. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Umar, A.M.; Linus, O.U.; Arshad, H.; Kazaure, A.A.; Gana, U.; Kiru, M.U. Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access 2019, 7, 158820–158846. [Google Scholar] [CrossRef]
  82. Kasai, H.; Ziv, N.E.; Okazaki, H.; Yagishita, S.; Toyoizumi, T. Spine dynamics in the brain, mental disorders and artificial neural networks. Nat. Rev. Neurosci. 2021, 22, 407–422. [Google Scholar] [CrossRef]
  83. Shehab, M.; Abualigan, L.; Omari, M.; Shambour, M.K.Y.; Alshinwan, M.; Abuaddous, H.Y.; Khasawneh, A.M. Artificial neural networks for engineering applications: A review. In Artificial Neural Networks for Renewable Energy Systems and Real-World Applications; Elsheikh, A., Elaziz, M.E.A., Eds.; Academic Press: London, UK, 2022; pp. 189–206. [Google Scholar]
  84. Gao, Z.; Ure, K.; Ables, J.L.; Lagace, D.C.; Nave, K.A.; Goebbels, S.; Eisch, A.J.; Hsieh, J. Neurod1 is essential for the survival and maturation of adult-born neurons. Nat. Neurosci. 2009, 12, 1090–1092. [Google Scholar] [CrossRef] [Green Version]
  85. Huang, H.; Cao, J. On global asymptotic stability of recurrent neural networks with time-varying delays. Appl. Math. Comput. 2003, 142, 143–154. [Google Scholar] [CrossRef]
  86. Ensari, T.; Arik, S. Global stability of a class of neural networks with time-varying delay. IEEE Trans. Circuits Syst. I 2005, 52, 126–130. [Google Scholar] [CrossRef]
  87. Lee, T.H.; Trinh, H.M.; Park, J.H. Stability analysis of neural networks with time-varying delay by constructing novel Lyapunov functionals. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4238–4247. [Google Scholar] [CrossRef]
  88. Manivannan, R.; Samidurai, R.; Cao, J.; Alsaedi, A.; Alsaadi, F.E. Stability analysis of interval time-varying delayed neural networks including neutral time-delay and leakage delay. Chaos Soliton Fract. 2018, 114, 433–445. [Google Scholar] [CrossRef]
  89. Cao, J.; Li, J. The stability of neural networks with interneuronal transmission delay. Appl. Math. Mech. 1998, 19, 457–462. [Google Scholar]
  90. Zhang, F.Y.; Li, W.T.; Huo, H.F. Global stability of a class of delayed cellular neural networks with dynamical thresholds. Int. J. Appl. Math. 2003, 13, 359–368. [Google Scholar]
  91. Li, R.; Cao, J.; Alsaedi, A.; Ahmad, B. Passivity analysis of delayed reaction-diffusion Cohen–Grossberg neural networks via Hardy–Poincare inequality. J. Franklin Inst. 2017, 354, 3021–3038. [Google Scholar] [CrossRef]
  92. Lu, J.G. Global exponential stability and periodicity of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Chaos Solitons Fract. 2008, 35, 116–125. [Google Scholar] [CrossRef]
  93. Qiu, J. Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms. Neurocomputing 2007, 70, 1102–1108. [Google Scholar] [CrossRef]
  94. Gan, Q. Adaptive synchronization of Cohen—Grossberg neural networks with unknown parameters and mixed time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 3040–3049. [Google Scholar] [CrossRef]
  95. Song, Q.; Cao, J. Stability analysis of Cohen—Grossberg neural network with both time-varying and continuously distributed delays. Comput. Appl. Math. 2006, 197, 188–203. [Google Scholar] [CrossRef]
  96. Yuan, K.; Cao, J.; Li, H. Robust stability of switched Cohen—Grossberg neural networks with mixed time-varying delays. IEEE Trans. Syst. Man Cybern. 2006, 36, 1356–1363. [Google Scholar] [CrossRef]
  97. Chen, W.; Huang, Y.; Ren, S. Passivity and robust passivity of delayed Cohen—Grossberg neural networks with and without reaction-diffusion terms. Circuits Syst. Signal Process. 2018, 37, 2772–2804. [Google Scholar] [CrossRef]
  98. Wang, Z.; Zhang, H. Global asymptotic stability of reaction-diffusion Cohen—Grossberg neural networks with continuously distributed delays. IEEE Trans. Neural Netw. 2010, 21, 39–49. [Google Scholar] [CrossRef] [PubMed]
  99. Zhao, H.; Wang, K. Dynamical behaviors of Cohen—Grossberg neural networks with delays and reaction–diffusion terms. Neurocomputing 2006, 70, 536–543. [Google Scholar] [CrossRef]
  100. Song, Q.K.; Cao, J.D. Global exponential stability of bidirectional associative memory neural networks with distributed delays. J. Comput. Appl. Math. 2007, 202, 266–279. [Google Scholar] [CrossRef] [Green Version]
  101. Ali, M.S.; Saravanan, S.; Rani, M.E.; Elakkia, S.; Cao, J.; Alsaedi, A.; Hayat, T. Asymptotic stability of Cohen–Grossberg BAM neutral type neural networks with distributed time varying delays. Neural Process. Lett. 2017, 46, 991–1007. [Google Scholar] [CrossRef]
  102. Du, Y.; Zhong, S.; Zhou, N.; Shi, K.; Cheng, J. Exponential stability for stochastic Cohen—Grossberg BAM neural networks with discrete and distributed time-varying delays. Neurocomputing 2014, 127, 144–151. [Google Scholar] [CrossRef]
  103. Wang, J.; Tian, L.; Zhen, Z. Global Lagrange stability for Takagi-Sugeno fuzzy Cohen—Grossberg BAM neural networks with time-varying delays. Int. J. Control Autom. 2018, 16, 1603–1614. [Google Scholar] [CrossRef]
  104. Stamov, G.; Tomasiello, S.; Stamova, I.; Spirova, C. Stability of sets criteria for impulsive Cohen–Grossberg delayed neural networks with reaction-diffusion terms. Mathematics 2020, 8, 27. [Google Scholar] [CrossRef] [Green Version]
  105. Chatterjee, A.N.; Al Basir, F.; Takeuchi, Y. Effect of DAA therapy in hepatitis C treatment–an impulsive control approach. Math. Biosci. Eng. 2021, 18, 1450–1464. [Google Scholar] [CrossRef]
  106. Rao, R. Impulsive control and global stabilization of reaction-diffusion epidemic model. Math. Methods Appl. Sci. 2021. [Google Scholar] [CrossRef]
  107. Rao, X.B.; Zhao, X.P.; Chu, Y.D.; Zhang, J.G.; Gao, J.S. The analysis of mode-locking topology in an SIR epidemic dynamics model with impulsive vaccination control: Infinite cascade of Stern-Brocot sum trees. Chaos Solitons Fractals 2020, 139, 110031. [Google Scholar] [CrossRef]
  108. Stamov, G.; Gospodinova, E.; Stamova, I. Practical exponential stability with respect to h-manifolds of discontinuous delayed Cohen–Grossberg neural networks with variable impulsive perturbations. Math. Model. Control 2021, 1, 26–34. [Google Scholar] [CrossRef]
  109. Stamov, G.; Stamova, I.; Venkov, G.; Stamov, T.; Spirova, C. Global stability of integral manifolds for reaction-diffusion Cohen-Grossberg-type delayed neural networks with variable impulsive perturbations. Mathematics 2020, 8, 1082. [Google Scholar] [CrossRef]
  110. Benchohra, M.; Henderson, J.; Ntouyas, S.K.; Ouahab, A. Impulsive functional differential equations with variable times. Comput. Math. Appl. 2004, 47, 1659–1665. [Google Scholar] [CrossRef] [Green Version]
111. Li, X. Exponential stability of Cohen–Grossberg-type BAM neural networks with time-varying delays via impulsive control. Neurocomputing 2009, 73, 525–530. [Google Scholar] [CrossRef]
  112. Maharajan, C.; Raja, R.; Cao, J.; Rajchakit, G.; Alsaedi, A. Impulsive Cohen–Grossberg BAM neural networks with mixed time-delays: An exponential stability analysis issue. Neurocomputing 2018, 275, 2588–2602. [Google Scholar] [CrossRef]
  113. Stamov, G.; Stamova, I.; Simeonov, S.; Torlakov, I. On the stability with respect to h-manifolds for Cohen–Grossberg-type bidirectional associative memory neural networks with variable impulsive perturbations and time-varying delays. Mathematics 2020, 8, 335. [Google Scholar] [CrossRef] [Green Version]
  114. Podlubny, I. Fractional Differential Equations, 1st ed.; Academic Press: San Diego, CA, USA, 1999; ISBN 558840-2. [Google Scholar]
  115. Delavari, H.; Baleanu, D.; Sadati, J. Stability analysis of Caputo fractional-order nonlinear systems revisited. Nonlinear Dyn. 2012, 67, 2433–2439. [Google Scholar] [CrossRef]
116. Stamova, I.M. Global Mittag–Leffler stability and synchronization of impulsive fractional-order neural networks with time-varying delays. Nonlinear Dyn. 2014, 77, 1251–1260. [Google Scholar] [CrossRef]
117. Stamova, I.; Stamov, G. Impulsive control strategy for the Mittag–Leffler synchronization of fractional-order neural networks with mixed bounded and unbounded delays. AIMS Math. 2021, 6, 2287–2303. [Google Scholar] [CrossRef]
  118. Cao, J.; Stamov, G.; Stamova, I.; Simeonov, S. Almost periodicity in reaction-diffusion impulsive fractional neural networks. IEEE Trans. Cybern. 2021, 51, 151–161. [Google Scholar] [CrossRef]
  119. Stamov, G.; Stamov, T.; Stamova, I. On the stability with respect to manifolds of reaction-diffusion impulsive control fractional-order neural networks with time-varying delays. AIP Conf. Proc. 2021, 2333, 060004. [Google Scholar]
120. Stamova, I.; Stamov, G. Mittag–Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers. Neural Netw. 2017, 96, 22–32. [Google Scholar] [CrossRef] [PubMed]
  121. Stamova, I.; Sotirov, S.; Sotirova, E.; Stamov, G. Impulsive fractional Cohen-Grossberg neural networks: Almost periodicity analysis. Fractal Fract. 2021, 5, 78. [Google Scholar] [CrossRef]
  122. Zhang, L.; Yang, Y.; Xu, X. Synchronization analysis for fractional order memristive Cohen-Grossberg neural networks with state feedback and impulsive control. Physics A 2018, 506, 644–660. [Google Scholar] [CrossRef]
  123. Stamov, G.T.; Stamova, I.M.; Spirova, C. Reaction-diffusion impulsive fractional-order bidirectional neural networks with distributed delays: Mittag-Leffler stability along manifolds. AIP Conf. Proc. 2019, 2172, 050002. [Google Scholar]
  124. Stamova, I.; Stamov, G.; Simeonov, S.; Ivanov, A. Mittag-Leffler stability of impulsive fractional-order bi-directional associative memory neural networks with time-varying delays. Trans. Inst. Meas. Control. 2018, 40, 3068–3077. [Google Scholar] [CrossRef]
125. Ren, F.; Cao, F.; Cao, J. Mittag–Leffler stability and generalized Mittag–Leffler stability of fractional-order gene regulatory networks. Neurocomputing 2015, 160, 185–190. [Google Scholar] [CrossRef]
  126. Qiao, Y.; Yan, H.; Duan, L.; Miao, J. Finite-time synchronization of fractional-order gene regulatory networks with time delay. Neural Netw. 2020, 126, 1–10. [Google Scholar] [CrossRef]
  127. Wu, Z.; Wang, Z.; Zhou, T. Global stability analysis of fractional-order gene regulatory networks with time delay. Int. J. Biomath. 2019, 12, 1950067. [Google Scholar] [CrossRef]
  128. Stamova, I.; Stamov, G. Lyapunov approach for almost periodicity in impulsive gene regulatory networks of fractional order with time-varying delays. Fractal Fract. 2021, 5, 268. [Google Scholar] [CrossRef]
  129. Ballinger, G.; Liu, X. Practical stability of impulsive delay differential equations and applications to control problems. In Optimization Methods and Applications, Applied Optimization; Yang, X., Teo, K.L., Caccetta, L., Eds.; Kluwer: Dordrecht, The Netherlands, 2001; Volume 52, pp. 3–21. [Google Scholar]
  130. Lakshmikantham, V.; Leela, S.; Martynyuk, A.A. Practical Stability of Nonlinear Systems; World Scientific: Teaneck, NJ, USA, 1990; ISBN 981-02-0351-9/981-02-0356-X. [Google Scholar]
  131. Tian, Y.; Sun, Y. Practical stability and stabilisation of switched delay systems with non-vanishing perturbations. IET Control Theory Appl. 2019, 13, 1329–1335. [Google Scholar] [CrossRef]
  132. Stamova, I.; Henderson, J. Practical stability analysis of fractional-order impulsive control systems. ISA Trans. 2016, 64, 77–85. [Google Scholar] [CrossRef] [PubMed]
  133. Yao, Q.; Lin, P.; Wang, L.; Wang, Y. Practical exponential stability of impulsive stochastic reaction-diffusion systems with delays. IEEE Trans. Cybern. 2022, 52, 2687–2697. [Google Scholar] [CrossRef] [PubMed]
  134. Chen, F.C.; Chang, C.H. Practical stability issues in CMAC neural network control systems. IEEE Trans. Control Syst. Technol. 1996, 4, 86–91. [Google Scholar] [CrossRef] [Green Version]
  135. Jiao, T.; Zong, G.; Ahn, C.K. Noise-to-state practical stability and stabilization of random neural networks. Nonlinear Dyn. 2020, 100, 2469–2481. [Google Scholar] [CrossRef]
  136. Stamov, T. Neural networks in engineering design: Robust practical stability analysis. Cybern. Inf. Technol. 2021, 21, 3–14. [Google Scholar] [CrossRef]
  137. Hale, J.K.; Verduyn Lunel, S.M. Introduction to Functional Differential Equations, 1st ed.; Springer: New York, NY, USA, 1993; ISBN 978-0-387-94076-2/978-1-4612-8741-4/978-1-4612-4342-7. [Google Scholar]
  138. Parshad, R.D.; Kouachi, S.; Gutierrez, J.B. Global existence and asymptotic behavior of a model for biological control of invasive species via supermale introduction. Commun. Math. Sci. 2013, 11, 971–992. [Google Scholar]
  139. Li, Z.; Yan, L.; Zhou, X. Global attracting sets and stability of neutral stochastic functional differential equations driven by Rosenblatt process. Front. Math. China 2018, 13, 87–105. [Google Scholar] [CrossRef]
140. Ruiz del Portal, F.R. Stable sets of planar homeomorphisms with translation pseudo-arcs. Discret. Contin. Dynam. Syst. 2019, 12, 2379–2390. [Google Scholar] [CrossRef] [Green Version]
  141. Skjetne, R.; Fossen, T.I.; Kokotovic, P.V. Adaptive output maneuvering, with experiments, for a model ship in a marine control laboratory. Automatica 2005, 41, 289–298. [Google Scholar] [CrossRef]
  142. Stamova, I.M.; Stamov, G.T. On the stability of sets for delayed Kolmogorov-type systems. Proc. Amer. Math. Soc. 2014, 142, 591–601. [Google Scholar] [CrossRef]
  143. Bohner, M.; Stamova, I.; Stamov, G. Impulsive control functional differential systems of fractional order: Stability with respect to manifolds. Eur. Phys. J. Spec. Top. 2017, 226, 3591–3607. [Google Scholar] [CrossRef]
  144. Smale, S. Stable manifolds for differential equations and diffeomorphisms. Ann. Sc. Norm. Sup. Pisa 1963, 3, 97–116. [Google Scholar]
  145. Burby, J.W.; Hirvijoki, E. Normal stability of slow manifolds in nearly periodic Hamiltonian systems. J. Math. Phys. 2021, 62, 093506. [Google Scholar] [CrossRef]
  146. Moura, A.; Feudel, U.; Gouillart, E. Mixing and chaos in open flows. Adv. Appl. Mech. 2012, 45, 1–50. [Google Scholar]
  147. Stamov, G.; Stamova, I. On stable integral manifolds for impulsive Kolmogorov systems of fractional order. Mod. Phys. Lett. B 2017, 31, 1750168. [Google Scholar] [CrossRef]
  148. Gallego, J.A.; Perich, M.G.; Chowdhury, R.H.; Solla, S.A.; Miller, L.E. Long-term stability of cortical population dynamics underlying consistent behavior. Nat. Neurosci. 2020, 23, 260–270. [Google Scholar] [CrossRef] [PubMed]
  149. Gallego, J.A.; Perich, M.G.; Miller, L.E.; Solla, S.A. Neural manifolds for the control of movement. Neuron 2017, 94, 978–984. [Google Scholar] [CrossRef] [Green Version]
  150. Sadtler, P.T.; Quick, K.M.; Golub, M.D.; Chase, S.M.; Ryu, S.I.; Tyler-Kabara, E.C.; Yu, B.M.; Batista, A.P. Neural constraints on learning. Nature 2014, 512, 423–426. [Google Scholar] [CrossRef] [Green Version]
  151. Ionescu, C.; Lopes, A.; Copot, D.; Machado, J.A.T.; Bates, J.H.T. The role of fractional calculus in modeling biological phenomena: A review. Commun. Nonlinear Sci. Numer. Simul. 2017, 51, 141–159. [Google Scholar] [CrossRef]
  152. Martynyuk, A.A.; Stamov, G.T.; Stamova, I.M. Impulsive fractional-like differential equations: Practical stability and boundedness with respect to h-manifolds. Fractal Fract. 2019, 3, 50. [Google Scholar]
  153. Stamov, T. Discrete bidirectional associative memory neural networks of the Cohen–Grossberg type for engineering design symmetry related problems: Practical stability of sets analysis. Symmetry 2022, 14, 216. [Google Scholar] [CrossRef]
  154. Dannan, F.M.; Elaydi, S. Lipschitz stability of nonlinear systems of differential equations. J. Math. Anal. Appl. 1986, 113, 562–577. [Google Scholar] [CrossRef] [Green Version]
  155. Harrach, B.; Meftahi, H. Global uniqueness and Lipschitz stability for the inverse Robin transmission problem. SIAM J. Appl. Math. 2019, 79, 525–550. [Google Scholar] [CrossRef] [Green Version]
  156. Imanuvilov, O.; Yamamoto, M. Global Lipschitz stability in an inverse hyperbolic problem by interior observations. Inverse Probl. 2001, 17, 717–728. [Google Scholar] [CrossRef]
  157. Kawamoto, A.; Machida, M. Global Lipschitz stability for a fractional inverse transport problem by Carleman estimates. Appl. Anal. 2021, 100, 752–771. [Google Scholar] [CrossRef] [Green Version]
158. Rüland, A.; Sincich, E. Lipschitz stability for finite dimensional fractional Calderón problem with finite Cauchy data. Inverse Probl. Imaging 2019, 13, 1023–1144. [Google Scholar] [CrossRef] [Green Version]
  159. Kulev, G.K.; Bainov, D.D. Lipschitz stability of impulsive systems of differential equations. Int. J. Theor. Phys. 1991, 30, 737–756. [Google Scholar] [CrossRef]
  160. Stamova, I.; Stamov, G. Lipschitz stability criteria for functional differential systems of fractional order. J. Math. Phys. 2013, 54, 043502. [Google Scholar] [CrossRef]
  161. Gouk, H.; Frank, E.; Pfahringer, B.; Cree, M.J. Regularisation of neural networks by enforcing Lipschitz continuity. Mach. Learn. 2021, 110, 393–416. [Google Scholar] [CrossRef]
  162. Stamova, I.; Stamov, T.; Stamov, G. Lipschitz stability analysis of fractional-order impulsive delayed reaction-diffusion neural network models. Chaos Solitons Fract. 2022, 162, 112474. [Google Scholar] [CrossRef]
163. Martynyuk, A.A.; Martynyuk-Chernienko, Y.A. Uncertain Dynamical Systems: Stability and Motion Control, 1st ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011; ISBN 9781439876855. [Google Scholar]
  164. Stamov, G.; Stamova, I. Uncertain impulsive differential systems of fractional order: Almost periodic solutions. Int. J. Sys. Sci. 2018, 49, 631–638. [Google Scholar] [CrossRef]
  165. Zecevic, A.I.; Siljak, D.D. Control of Complex Systems: Structural Constraints and Uncertainty; Springer: New York, NY, USA, 2010; ISBN 978-1-4419-1216-9. [Google Scholar]
Figure 1. The trajectory of a mature state x i ( t ) of the impulsive control model (9).
Figure 2. The Dirac impulsive function.
Figure 3. The graph of h = | x 1 | + | x 2 | .
Figure 4. The unstable trajectory of the state variable x 2 ( t ) of the CNN in Example 4.
Table 1. Recent results on extended stability concepts applied to impulsive control DCNNs.
NNs | PS | SS | SRhM | SRIM | PSRhM | PSRIM | LS
DCNNs
RDDCNNs×
CGDCNNs××××
RDCGDCNNs××××
BAMDCNNs×××
FDCNNs××
FRDDCNNs×××
FCGDCNNs××××××
FRDCGDCNNs××××××
FBAMRDDCNNs××××××

Share and Cite

MDPI and ACS Style

Stamov, G.; Stamova, I. Extended Stability and Control Strategies for Impulsive and Fractional Neural Networks: A Review of the Recent Results. Fractal Fract. 2023, 7, 289. https://doi.org/10.3390/fractalfract7040289