Article

Boundary Controlling Synchronization and Passivity Analysis for Multi-Variable Discrete Stochastic Inertial Neural Networks

1 Department of Mathematics, Puyang Petrochemical Vocational and Technical College, Puyang 457001, China
2 Department of Computer Science and Mathematics, Anyang University, Anyang 455131, China
3 Department of Mathematics, Yunnan University, Kunming 650091, China
4 Department of Mathematics, Yuxi Normal University, Yuxi 653100, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(9), 820; https://doi.org/10.3390/axioms12090820
Submission received: 8 July 2023 / Revised: 9 August 2023 / Accepted: 15 August 2023 / Published: 26 August 2023
(This article belongs to the Special Issue Differential Equations in Applied Mathematics)

Abstract

The current paper considers discrete stochastic inertial neural networks (SINNs) with reaction diffusions. Firstly, we give the difference form of SINNs with reaction diffusions. Secondly, stochastic synchronization and passivity-based control frameworks for discrete time and space SINNs are newly formulated. Thirdly, by designing a boundary controller and constructing a Lyapunov-Krasovskii functional, we establish decision theorems for stochastic synchronization and passivity-based control of the aforementioned discrete SINNs. Finally, a numerical example is provided to illustrate the main results.

1. Introduction

Neural networks (NNs) can be considered complicated nonlinear models coupled through numerous internal nodes, and they offer an effective approach to solving many difficult tasks in the field of engineering. Due to their huge potential in real-world applications, they have become a significant research topic over the last few decades and have garnered increasing interest in many areas of technology (please refer to refs. [1,2,3,4,5,6,7]). On the other hand, it is necessary to address practical problems by studying the dynamic properties of nonlinear neural networks not only in the over-damped case but also under weakly damped conditions [8]. Hence, inertial neural networks (INNs), which can be viewed as second-order differential systems, have been extensively studied. Additionally, numerous publications have addressed synchronization problems, including finite-time synchronization [9], nonfragile H∞ synchronization [10], event-triggered impulsive synchronization [11], fuzzy synchronization [12], Mittag-Leffler synchronization [13], and others.
Passivity, as a specific form of dissipativity, constitutes a fundamental characteristic of physical problems. A system is considered passive when dissipative elements are present in the modeled system, and the accumulated energies remain lower than the external input over a certain time span. Consequently, passivity ensures internal stability of the systems. Due to its widespread applicability in mechanical and electrical systems, the concept of passivity has garnered increasing attention, leading to extensive studies on the passivity of nonlinear systems. In the literature [14], Zhou et al. discussed passivity-based boundary control for stochastic delay reaction-diffusion systems with boundary input-output. Padmaja and Balasubramaniam [15] analyzed passivity-based stability in fractional-order delayed gene regulatory networks. By leveraging Lyapunov-Krasovskii functionals, novel linear matrix inequality conditions were developed to guarantee certain levels of passivity performance in the networks. For further details on this topic, please consult the references [16,17,18].
In engineering applications, NNs are widely implemented by means of integrated circuits, and spatial diffusion inevitably occurs when electrons move through an inhomogeneous electromagnetic field. Therefore, it is important to consider NNs that incorporate the impact of spatial diffusions. In recent years, greater attention has been devoted to NNs with spatial diffusions; please refer to the papers [19,20,21,22,23,24]. Stochastic neural networks have also received substantial attention, since real-world networks are inevitably affected by random disturbances, and their behavior is typically heavily time- and space-dependent. As a result, reaction diffusion must be taken into account; relevant research topics are discussed in references [14,19,20,22,25,26]. Although space-time discrete models have been reported [27,28,29], the problems of synchronization and passivity-based control for discrete-time SINNs involving diffusions have not yet been explored.
It is well known that discrete systems (DSs) can be utilized to simulate a wide range of phenomena, including biological dynamics and artificial NNs, among others. In many scenarios, it has been demonstrated that DSs outperform continuous systems. As a result, the theory of DSs holds significant importance; please refer to references [30,31,32,33,34,35,36,37,38]. Reports [35,36,37,38] have explored various types of discrete INNs; however, they have not considered the effects of other variables, such as spatial variables. Addressing this gap, the present paper investigates the issues of stochastic synchronization and passivity-based control for time and space discrete SINNs by designing a novel boundary controller.
Our main contributions include the following:
(1)
Establishment of a discrete space and time SINNs model, which complements the continuous cases in the literature [22,23,24] and the discrete-time cases in the literature [35,36,37,38].
(2)
Unlike prior works in the literature [22,23,24], a controller is formulated at the boundary to achieve synchronization and passivity-based control of discrete space and time SINNs.
In what follows, Section 2 establishes the discrete space and time SINNs based on prior works in the literature [27,29]. Section 3 discusses synchronization and passivity-based control of the discrete SINNs. In Section 4, in order to illustrate our main results, a numerical illustration is provided. Finally, the conclusions and perspectives are described in Section 5.

2. Problem Formulation

2.1. SINNs in Discrete Form

Now, our primary focus is the following time and space discrete SINNs:
$$\Delta^2 z_{i,k+1}[\iota]=\big(e^{-Dh}+e^{-Ih}-2I\big)\Delta z_{i,k}[\iota]+\frac{(I-e^{-Dh})(I-e^{-Ih})}{D}\Big[M\bar{\Delta}^2 z_{i,k}[\iota-1]-Cz_{i,k}[\iota]+Af(z_{i,k}[\iota])+\alpha\sum_{j=1}^{N}b_{ij}\Gamma\frac{z_{j,k+1}[\iota]-e^{-Ih}z_{j,k}[\iota]}{I-e^{-Ih}}+\Xi g(z_{i,k}[\iota])\,w_{i,k}+\Lambda\gamma_{i,k}[\iota]+J\Big],\qquad(1)$$
where $(\iota,k)\in(0,l)_{\mathbb{Z}}\times\mathbb{Z}_0$ and $l\in\mathbb{Z}_+$ (here, $\mathbb{Z}$ is the set of integers, $\mathbb{Z}_0:=\{0,1,2,\ldots\}$ and $\mathbb{Z}_+:=\mathbb{Z}_0\setminus\{0\}$); $z_i=(z_{i1},\ldots,z_{in})^T\in\mathbb{R}^n$ is the state of node $i$, $i=1,2,\ldots,N$; $\Delta^2 z_{i,k+1}[\cdot]=z_{i,k+2}[\cdot]-2z_{i,k+1}[\cdot]+z_{i,k}[\cdot]$ and $\Delta z_{i,k}[\cdot]=z_{i,k+1}[\cdot]-z_{i,k}[\cdot]$ for $k\in\mathbb{Z}_0$; the spatial second difference is
$$\bar{\Delta}^2 z_{i,\cdot}[\cdot]:=\frac{z_{i,\cdot}[\cdot+2]-2z_{i,\cdot}[\cdot+1]+z_{i,\cdot}[\cdot]}{\ell^{2}},$$
and $\ell$ and $h$ (both less than 1) denote the space and time step lengths, respectively; $C=\mathrm{diag}\{c_1,c_2,\ldots,c_n\}$ and $D=\mathrm{diag}\{d_1,d_2,\ldots,d_n\}$ are constant positive definite matrices, $\mathcal{D}=D-I$, where $I$ denotes the $n$-order identity matrix; $M\in\mathbb{R}^{n\times n}$ with $|M|\neq0$; $A$, $\Xi$ and $\Lambda$ are the connection weight $n$-order matrices; $\alpha>0$ is the coupling strength, $\Gamma\in\mathbb{R}^{n\times n}$ is the inner coupling matrix, and $B=(b_{ij})_{N\times N}$ is the outer coupling configuration matrix satisfying $b_{ij}>0$ $(i\neq j)$ and $b_{ii}=-\sum_{j=1,j\neq i}^{N}b_{ij}$; $f(\cdot)$ and $g(\cdot)$ are $n$-dimensional activation functions; $\gamma_i=(\gamma_{i1},\ldots,\gamma_{in})^T\in\mathbb{R}^n$ is the external input of node $i$, and $J\in\mathbb{R}^n$ is a constant external input; $w_{1,k},\ldots,w_{N,k}$ are scalar, mutually independent random variables on the complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ that are $\mathcal{F}_k:=\sigma\{(w_{1,q},\ldots,w_{N,q}):q=0,1,\ldots,k\}$-adapted, independent of $\mathcal{F}_{k-1}$, and satisfy
$$\mathbb{E}w_{j,k}=0,\quad \mathbb{E}w_{j,k}^{2}=1,\quad \mathbb{E}(w_{i,k}w_{j,k})=0\ (i\neq j),\quad \mathbb{E}(w_{j,k}w_{j,k'})=0\ (k\neq k')$$
for $k,k'\in\mathbb{Z}_0$, $i,j=1,2,\ldots,N$. Hereby, $\mathbb{E}$ represents the expectation operator with respect to the probability space $(\Omega,\mathcal{F},\mathbb{P})$. The INNs Equation (1) possesses the following controlled boundary conditions:
$$\bar{\Delta}z_{i,k}[\iota]\big|_{\iota=0}=0,\qquad \bar{\Delta}z_{i,k}[\iota]\big|_{\iota=l-1}=\rho_{i,k},\qquad(2)$$
where $\bar{\Delta}z_{i,k}[\cdot]:=\frac{1}{\ell}\big(z_{i,k}[\cdot+1]-z_{i,k}[\cdot]\big)$ is the spatial first difference and $\rho_{i,k}$ denotes the control input, $k\in\mathbb{Z}_0$, $i=1,2,\ldots,N$. Further, the initial condition of the INNs Equation (1) is given by
$$z_{i,0}[\iota]=\varphi_{i,0}[\iota],\qquad \Delta z_{i,0}[\iota]=\tilde{\varphi}_{i,0}[\iota],\qquad \iota\in[0,l]_{\mathbb{Z}},\qquad(3)$$
where $\varphi_{i,0}[\cdot]$ and $\tilde{\varphi}_{i,0}[\cdot]$ are $\mathcal{F}_0$-adapted and $\mathcal{F}_1$-adapted, respectively, $i=1,2,\ldots,N$.
Let $z_{i,k}[\iota]=z_i(\iota\ell,kh)$ for $(\iota,k)\in[0,l]_{\mathbb{Z}}\times\mathbb{Z}_0$. Then the discrete space and time INNs Equation (1) provides a full discretization scheme for the following stochastic INNs with reaction diffusions:
$$\frac{\partial^{2}z_i(x,t)}{\partial t^{2}}=-D\frac{\partial z_i(x,t)}{\partial t}+M\frac{\partial^{2}z_i(x,t)}{\partial x^{2}}-Cz_i(x,t)+Af(z_i(x,t))+\alpha\sum_{j=1}^{N}b_{ij}\Gamma\Big(\frac{\partial z_j(x,t)}{\partial t}+z_j(x,t)\Big)+\Xi g(z_i(x,t))\frac{dB_i(t)}{dt}+\Lambda\gamma_i(x,t)+J,\qquad(4)$$
where $(x,t)\in(0,L)\times[0,+\infty)$ with $L=l\ell$, and $B_i$ is a one-dimensional Brownian motion on some complete probability space, $i=1,2,\ldots,N$.
Recently, the continuous-time INNs Equation (4) with reaction diffusions have been studied by a few authors (see refs. [21,22,23,24]), and the corresponding discrete networks have been discussed in reports [27,29]. The difference approach used to obtain INNs Equation (1) is similar to those in refs. [27,29].
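To make the discretization concrete, the following minimal Python sketch iterates a single uncoupled node of a space-time discrete inertial network of the same flavor as Equation (1): exponential Euler in time and the central second difference in space. All parameter values, the activation function, and the handling of the noise and boundary terms are illustrative assumptions; the node coupling and the boundary controller of the full model (1) are omitted.

```python
import numpy as np

# Minimal simulation sketch of ONE node of a space-time discrete inertial
# network in the spirit of Equation (1): exponential Euler in time, central
# second difference in space. All values below are assumed placeholders.
rng = np.random.default_rng(0)
n, l, h, ell = 2, 25, 0.01, 0.2                   # state dim, space points, time/space steps
D = np.diag([100.0, 50.0])                        # damping matrix (assumed)
C = np.diag([90.0, 45.0])                         # decay matrix (assumed)
M = 0.1 * np.array([[1.0, 1.0], [0.0, 2.0]])      # diffusion matrix (assumed)
A = 0.1 * np.array([[2.0, 1.0], [0.0, 2.0]])      # connection weights (assumed)
Xi = 0.01 * np.eye(n)                             # noise intensity matrix (assumed)
J = np.array([10.0, 12.0])                        # constant external input (assumed)
f = lambda z: 0.1 * np.sin(z)                     # activation, with g = f (assumed)

ED = np.diag(np.exp(-np.diag(D) * h))             # e^{-D h}
EI = np.exp(-h) * np.eye(n)                       # e^{-I h}
coef = np.linalg.inv(D) @ (np.eye(n) - ED) @ (np.eye(n) - EI)

z_prev = np.zeros((l + 1, n))                     # spatial profile at time step k-1
z_curr = np.full((l + 1, n), 0.1)                 # spatial profile at time step k (assumed initial data)

for k in range(500):
    lap = np.zeros_like(z_curr)
    lap[1:l] = (z_curr[2:] - 2 * z_curr[1:l] + z_curr[:l - 1]) / ell**2   # spatial 2nd difference
    w = rng.standard_normal()                                             # w_{i,k}: E w = 0, E w^2 = 1
    drift = lap @ M.T - z_curr @ C.T + f(z_curr) @ A.T + w * (f(z_curr) @ Xi.T) + J
    z_next = (2 * z_curr - z_prev
              + (z_curr - z_prev) @ (ED + EI - 2 * np.eye(n)).T
              + drift @ coef.T)
    z_next[0] = z_next[1]        # discrete Neumann condition at iota = 0
    z_next[l] = z_next[l - 1]    # uncontrolled right boundary (rho_k = 0 in this sketch)
    z_prev, z_curr = z_curr, z_next

print("final profile (first component):", np.round(z_curr[:, 0], 3))
```

The update mirrors the structure of Equation (1): the homogeneous part uses the exponential factors $e^{-Dh}$ and $e^{-Ih}$, while the bracketed drift is scaled by $(I-e^{-Dh})(I-e^{-Ih})D^{-1}$.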
Hereon, INNs Equation (1) can be regarded as the slave networks, and the isolated node $w\in\mathbb{R}^n$ satisfies the master networks below:
$$\begin{aligned}\Delta^{2}w_{k+1}[\iota]&=\big(e^{-Dh}+e^{-Ih}-2I\big)\Delta w_k[\iota]+\frac{(I-e^{-Dh})(I-e^{-Ih})}{D}\times\Big[M\bar{\Delta}^{2}w_k[\iota-1]-Cw_k[\iota]+Af(w_k[\iota])+\Xi g(w_k[\iota])\,w_{i,k}+J\Big],\\ \bar{\Delta}w_k[\iota]\big|_{\iota=0}&=\bar{\Delta}w_k[\iota]\big|_{\iota=l-1}=0,\qquad(\iota,k)\in(0,l)_{\mathbb{Z}}\times\mathbb{Z}_0.\end{aligned}\qquad(5)$$
The initial condition of INNs Equation (5) is described as
$$w_0[\iota]=\phi_0[\iota],\qquad \Delta w_0[\iota]=\tilde{\phi}_0[\iota],\qquad \iota\in[0,l]_{\mathbb{Z}},\qquad(6)$$
where $\phi_0[\cdot]$ and $\tilde{\phi}_0[\cdot]$ are $\mathcal{F}_0$-adapted and $\mathcal{F}_1$-adapted, respectively.
Let $u_i=z_i-w$; then the error networks of INNs Equations (1) and (5) are described by
$$\begin{aligned}\Delta^{2}u_{i,k+1}[\iota]&=\big(e^{-Dh}+e^{-Ih}-2I\big)\Delta u_{i,k}[\iota]+\frac{(I-e^{-Dh})(I-e^{-Ih})}{D}\Big[M\bar{\Delta}^{2}u_{i,k}[\iota-1]-Cu_{i,k}[\iota]+A\tilde{f}(u_{i,k}[\iota])\\ &\quad+\alpha\sum_{j=1}^{N}b_{ij}\Gamma\frac{u_{j,k+1}[\iota]-e^{-Ih}u_{j,k}[\iota]}{I-e^{-Ih}}+\Xi\tilde{g}(u_{i,k}[\iota])\,w_{i,k}+\Lambda\gamma_{i,k}[\iota]\Big],\\ \bar{\Delta}u_{i,k}[\iota]\big|_{\iota=0}&=0,\qquad \bar{\Delta}u_{i,k}[\iota]\big|_{\iota=l-1}=\rho_{i,k},\qquad(\iota,k)\in(0,l)_{\mathbb{Z}}\times\mathbb{Z}_0,\end{aligned}\qquad(7)$$
where $\tilde{f}(u_i):=f(z_i)-f(w)$ and $\tilde{g}(u_i):=g(z_i)-g(w)$, $i=1,2,\ldots,N$. With the help of Equations (3) and (6), the initial condition for the INNs in Equation (7) can be derived, as depicted by
$$u_{i,0}[\iota]=\varphi_{i,0}[\iota]-\phi_0[\iota],\qquad \Delta u_{i,0}[\iota]=\tilde{\varphi}_{i,0}[\iota]-\tilde{\phi}_0[\iota],\qquad \iota\in[0,l]_{\mathbb{Z}},\ i=1,2,\ldots,N.\qquad(8)$$
To study INNs Equation (1) effectively, let
$$u_{i,k+1}[\iota]=e^{-Ih}u_{i,k}[\iota]+\varepsilon(I-e^{-Ih})v_{i,k}[\iota],\qquad(\iota,k)\in(0,l)_{\mathbb{Z}}\times\mathbb{Z}_0,\qquad(9)$$
where $\varepsilon>0$ is a controlling parameter, which can be adjusted freely, $i=1,2,\ldots,N$. Then, the first equation in INNs Equation (7) is changed into
$$v_{i,k+1}[\iota]=e^{-Dh}v_{i,k}[\iota]+\frac{I-e^{-Dh}}{D}\Big[M_\varepsilon\bar{\Delta}^{2}u_{i,k}[\iota-1]+C_\varepsilon u_{i,k}[\iota]+A_\varepsilon\tilde{f}(u_{i,k}[\iota])+\alpha\sum_{j=1}^{N}b_{ij}\Gamma v_{j,k}[\iota]+\Xi_\varepsilon\tilde{g}(u_{i,k}[\iota])\,w_{i,k}+\Lambda_\varepsilon\gamma_{i,k}[\iota]\Big],\qquad(10)$$
where $(\iota,k)\in(0,l)_{\mathbb{Z}}\times\mathbb{Z}_0$, $C_\varepsilon=\varepsilon^{-1}(D-C-I)$, $M_\varepsilon=\varepsilon^{-1}M$, $A_\varepsilon=\varepsilon^{-1}A$, $\Xi_\varepsilon=\varepsilon^{-1}\Xi$, $\Lambda_\varepsilon=\varepsilon^{-1}\Lambda$, $i=1,2,\ldots,N$.
The vector forms of INNs Equations (9) and (10) are written as
$$\begin{aligned}e_{u,k+1}[\iota]&=(e^{-Ih})_\otimes e_{u,k}[\iota]+\varepsilon(I-e^{-Ih})_\otimes e_{v,k}[\iota],\\ e_{v,k+1}[\iota]&=(e^{-Dh})_\otimes e_{v,k}[\iota]+\Big(\tfrac{(I-e^{-Dh})M_\varepsilon}{D}\Big)_{\otimes}\bar{\Delta}^{2}e_{u,k}[\iota-1]+\Big(\tfrac{(I-e^{-Dh})C_\varepsilon}{D}\Big)_{\otimes}e_{u,k}[\iota]+\Big(\tfrac{(I-e^{-Dh})A_\varepsilon}{D}\Big)_{\otimes}F(e_{u,k}[\iota])\\ &\quad+\alpha\Big(\tfrac{(I-e^{-Dh})\Gamma}{D}\Big)_{B}e_{v,k}[\iota]+\Big(\tfrac{(I-e^{-Dh})\Lambda_\varepsilon}{D}\Big)_{\otimes}\gamma_k[\iota]+\Big(\tfrac{(I-e^{-Dh})\Xi_\varepsilon}{D}\Big)_{\otimes}w_k\,G(e_{u,k}[\iota]),\\ \bar{\Delta}e_{u,k}[\iota]\big|_{\iota=0}&=0,\qquad \bar{\Delta}e_{u,k}[\iota]\big|_{\iota=l-1}=\rho_k,\end{aligned}\qquad(11)$$
where
$$e_u=(u_1,\ldots,u_N)^T,\qquad e_v=(v_1,\ldots,v_N)^T,$$
$$F(e_u):=\big(\tilde{f}(u_1),\ldots,\tilde{f}(u_N)\big)^T,\qquad G(e_u):=\big(\tilde{g}(u_1),\ldots,\tilde{g}(u_N)\big)^T,$$
$$w=\mathrm{diag}(w_1,\ldots,w_N),\qquad \gamma=(\gamma_1,\ldots,\gamma_N)^T,\qquad \rho=(\rho_1,\ldots,\rho_N)^T,$$
and $I_N$ denotes the $N$-order identity matrix. Hereby, $(A)_\otimes:=I_N\otimes A$ and $(A)_B:=B\otimes A$. In accordance with Equations (8) and (9), the initial condition of INNs Equation (11) is expressed by
$$e_{u,0}[\iota]=\psi_0[\iota],\qquad e_{v,0}[\iota]=\varepsilon^{-1}(I-e^{-Ih})_\otimes^{-1}\tilde{\psi}_0[\iota]+\varepsilon^{-1}\psi_0[\iota],\qquad(12)$$
where $\iota\in[0,l]_{\mathbb{Z}}$, $i=1,2,\ldots,N$, $\psi_0[\cdot]=\big(\varphi_{1,0}[\cdot]-\phi_0[\cdot],\ldots,\varphi_{N,0}[\cdot]-\phi_0[\cdot]\big)^T$ and $\tilde{\psi}_0[\cdot]=\big(\tilde{\varphi}_{1,0}[\cdot]-\tilde{\phi}_0[\cdot],\ldots,\tilde{\varphi}_{N,0}[\cdot]-\tilde{\phi}_0[\cdot]\big)^T$. Throughout this article, we suppose that
$$\sum_{\iota=1}^{l-1}\mathbb{E}\|\varphi_{i,0}[\iota]\|^{2}<\infty,\qquad \sum_{\iota=1}^{l-1}\mathbb{E}\|\phi_0[\iota]\|^{2}<\infty,\qquad \sum_{\iota=1}^{l-1}\mathbb{E}\|\tilde{\varphi}_{i,0}[\iota]\|^{2}<\infty,\qquad \sum_{\iota=1}^{l-1}\mathbb{E}\|\tilde{\phi}_0[\iota]\|^{2}<\infty\qquad(13)$$
for $i=1,2,\ldots,N$. Based on Equations (12) and (13), we have
$$\sum_{\iota=1}^{l-1}\mathbb{E}\|e_{u,0}[\iota]\|^{2}<\infty,\qquad \sum_{\iota=1}^{l-1}\mathbb{E}\|e_{v,0}[\iota]\|^{2}<\infty.\qquad(14)$$
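As a quick aside, the Kronecker-product notation $(A)_\otimes=I_N\otimes A$ and $(A)_B=B\otimes A$ used above can be checked numerically. The following small Python snippet (with arbitrary placeholder matrices, not taken from the paper) verifies that $(A)_\otimes$ acts node-by-node on the stacked vector, while $(A)_B$ mixes the nodes through $B$.

```python
import numpy as np

# Illustration of the Kronecker notation: (A)_otimes = I_N (x) A acts
# block-diagonally on the stacked vector e_u = (u_1, ..., u_N)^T, while
# (A)_B = B (x) A mixes the nodes through the coupling matrix B.
N, n = 3, 2
A = np.array([[1.0, 0.5], [0.0, 2.0]])                     # placeholder n-order matrix
B = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.0, 0.5],
              [0.2, 0.3, -0.5]])                           # placeholder coupling matrix
u = np.arange(N * n, dtype=float)                          # stacked state (u_1, ..., u_N)

A_otimes = np.kron(np.eye(N), A)                           # (A)_otimes := I_N (x) A
A_B = np.kron(B, A)                                        # (A)_B := B (x) A

# (A)_otimes applies A to each node separately:
assert np.allclose((A_otimes @ u).reshape(N, n), u.reshape(N, n) @ A.T)
# (A)_B first mixes the nodes via B, then applies A:
assert np.allclose((A_B @ u).reshape(N, n), B @ u.reshape(N, n) @ A.T)
print("Kronecker identities verified")
```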
In what follows, we will design a boundary controller to achieve synchronization and passivity-based control between the master INNs Equation (5) and the slave INNs Equation (1), as demonstrated in Section 3.
Hereon, we need the following assumption for activation functions.
(F) 
$L_f$ and $L_g$ are $n$-order matrices ensuring
$$\big[f(x)-f(y)\big]^T\big[f(x)-f(y)\big]\le(x-y)^TL_f(x-y),$$
$$\big[g(x)-g(y)\big]^T\big[g(x)-g(y)\big]\le(x-y)^TL_g(x-y),\qquad \forall\,x,y\in\mathbb{R}^n.$$

2.2. Some Important Inequalities

Lemma 1 
([39]). Let $X,Y\in\mathbb{R}^m$. Then $X^TY+Y^TX\le\alpha X^TX+\frac{1}{\alpha}Y^TY$ for any $\alpha>0$.
Lemma 2 
([40]). If $X:[0,l]_{\mathbb{Z}}\to\mathbb{R}^m$ and $P\in\mathbb{R}^{m\times m}$, one has
$$\sum_{\iota=1}^{l-1}X_\iota^TP\Delta^{2}X_{\iota-1}=X_\iota^TP\Delta X_{\iota-1}\Big|_{1}^{l}-\sum_{\iota=1}^{l-1}\Delta X_\iota^TP\Delta X_\iota.$$
Lemma 3 
([41,42]). If $X:[0,l]_{\mathbb{Z}}\to\mathbb{R}^m$, $P\in\mathbb{R}^{m\times m}$, $P\ge0$, and $X_0=0$, one has
$$\nu_l\sum_{\iota=0}^{l}X_\iota^TPX_\iota\le\sum_{\iota=0}^{l-1}\Delta X_\iota^TP\Delta X_\iota\le\mu_l\sum_{\iota=0}^{l}X_\iota^TPX_\iota,$$
where $\mu_l=4\cos^{2}\frac{\pi}{2l+1}$ and $\nu_l=4\sin^{2}\frac{\pi}{2(2l+1)}$.
Lemma 4 
([41,43]). If $X:[1,l]_{\mathbb{Z}}\to\mathbb{R}^m$, $P\in\mathbb{R}^{m\times m}$, $P\ge0$, one has
$$\kappa_l\sum_{\iota=1}^{l}X_\iota^TPX_\iota\le\sum_{\iota=1}^{l-1}\Delta X_\iota^TP\Delta X_\iota+(X_1+X_l)^TP(X_1+X_l),$$
$$\sum_{\iota=1}^{l-1}\Delta X_\iota^TP\Delta X_\iota+\big(X_1+(-1)^{l}X_l\big)^TP\big(X_1+(-1)^{l}X_l\big)\le(4-\kappa_l)\sum_{\iota=1}^{l}X_\iota^TPX_\iota.$$
Using Lemma 3 and noting that $\bar{\Delta}e_{u,k}[0]=0$ by the boundary conditions in Equation (11), we get
$$\sum_{\iota=0}^{l-2}\bar{\Delta}^{2}e_{u,k}[\iota]^TP\,\bar{\Delta}^{2}e_{u,k}[\iota]\le\frac{\mu_{l-1}}{\ell^{2}}\sum_{\iota=0}^{l-1}\bar{\Delta}e_{u,k}[\iota]^TP\,\bar{\Delta}e_{u,k}[\iota],\qquad k\in\mathbb{Z}_0,$$
where $P$ is defined as in Lemma 3.
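As an illustrative numerical check (not part of the original analysis), the following Python snippet samples random sequences with $X_0=0$ and verifies that the two-sided bound of Lemma 3 holds for a random positive definite $P$.

```python
import numpy as np

# Sanity check of the discrete Wirtinger-type inequality of Lemma 3 for
# sequences with X_0 = 0:
#   nu_l * sum X^T P X  <=  sum (Delta X)^T P (Delta X)  <=  mu_l * sum X^T P X.
rng = np.random.default_rng(1)
l, m = 25, 2
mu = 4 * np.cos(np.pi / (2 * l + 1)) ** 2
nu = 4 * np.sin(np.pi / (2 * (2 * l + 1))) ** 2

R = rng.standard_normal((m, m))
P = R @ R.T + np.eye(m)                       # random positive definite P

for _ in range(1000):
    X = rng.standard_normal((l + 1, m))
    X[0] = 0.0                                # Lemma 3 requires X_0 = 0
    dX = np.diff(X, axis=0)                   # Delta X_i = X_{i+1} - X_i, i = 0..l-1
    lhs = np.einsum('im,mn,in->', X, P, X)    # sum_i X_i^T P X_i
    mid = np.einsum('im,mn,in->', dX, P, dX)  # sum_i (Delta X_i)^T P (Delta X_i)
    assert nu * lhs <= mid + 1e-9 and mid <= mu * lhs + 1e-9
print("Lemma 3 bounds hold on all random samples; nu_l =", nu, ", mu_l =", mu)
```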

3. Stochastic Synchronization and Passivity-Based Control

The slave INNs Equation (1) is said to be stochastically synchronized with the master INNs Equation (5) if the error vector networks Equation (11) are globally asymptotically stable in mean square, i.e.,
$$\lim_{k\to\infty}\sum_{\iota=1}^{l-1}\mathbb{E}\|e_{u,k}[\iota]\|^{2}=0=\lim_{k\to\infty}\sum_{\iota=1}^{l-1}\mathbb{E}\|e_{v,k}[\iota]\|^{2}.$$

3.1. Stochastic Synchronization

Define
$$\rho_k=-\sum_{\iota=1}^{l-1}\Theta e_{u,k}[\iota],\qquad k\in\mathbb{Z}_0,$$
where $\Theta\in\mathbb{R}^{n\times n}$. Set $\bar{D}:=\frac{I-e^{-Dh}}{D}$.
Theorem 1. 
Assume that (F) is valid, $\varepsilon>0$ is given in advance, and $\bar{D}$ and $M_\varepsilon$ are nonsingular. The slave INNs Equation (1) stochastically synchronizes with the master INNs Equation (5); in other words, model Equation (11) is globally asymptotically stable in mean square, if there exist positive constants $\lambda_f,\lambda_g$ and $n$-order matrices $P>0$, $Q>0$, $H>0$, $K>0$ such that
$$\mathcal{O}:=\begin{bmatrix}\mathcal{O}_{11}&\mathcal{O}_{12}&\mathcal{O}_{13}&\mathcal{O}_{14}&\mathcal{O}_{15}&\mathcal{O}_{16}\\ *&\mathcal{O}_{22}&\mathcal{O}_{23}&\mathcal{O}_{24}&\mathcal{O}_{25}&\mathcal{O}_{26}\\ *&*&\mathcal{O}_{33}&\mathcal{O}_{34}&\mathcal{O}_{35}&\mathcal{O}_{36}\\ *&*&*&\mathcal{O}_{44}&\mathcal{O}_{45}&\mathcal{O}_{46}\\ *&*&*&*&\mathcal{O}_{55}&\mathcal{O}_{56}\\ *&*&*&*&*&\mathcal{O}_{66}\end{bmatrix}<0,$$
where
$$\begin{aligned}\mathcal{O}_{11}&=-\tfrac{1}{\ell}\,\mathrm{sym}\big((C_\varepsilon K)_\otimes\big)+\big(e^{-Ih}Pe^{-Ih}-P\big)_\otimes+\big(C_\varepsilon\bar{D}Q\bar{D}C_\varepsilon\big)_\otimes+\lambda_f(L_f)_\otimes+\lambda_g(L_g)_\otimes,\\ \mathcal{O}_{12}&=\varepsilon\big(e^{-Ih}P(I-e^{-Ih})\big)_\otimes+\big(e^{-Dh}Q\bar{D}C_\varepsilon\big)_\otimes^{T}+\alpha\big(C_\varepsilon\bar{D}Q\bar{D}\Gamma\big)_B,\qquad \mathcal{O}_{13}=-\tfrac{1}{\ell}(C_\varepsilon K)_\otimes,\\ \mathcal{O}_{15}&=\big(C_\varepsilon\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes,\qquad \mathcal{O}_{25}=\big(e^{-Dh}Q\bar{D}A_\varepsilon\big)_\otimes+\alpha\big(A_\varepsilon^{T}\bar{D}Q\bar{D}\Gamma\big)_B^{T},\\ \mathcal{O}_{22}&=-(Q)_\otimes+\varepsilon^{2}\big((I-e^{-Ih})P(I-e^{-Ih})\big)_\otimes+\alpha\,\mathrm{sym}\big((e^{-Dh}Q\bar{D}\Gamma)_B\big)+2\big(e^{-Dh}Qe^{-Dh}\big)_\otimes+2\alpha^{2}\big(\Gamma^{T}\bar{D}Q\bar{D}\Gamma\big)_{B^{T}B},\\ \mathcal{O}_{33}&=-(H)_\otimes,\qquad \mathcal{O}_{44}=-\mathrm{sym}\big((C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon)_\otimes\big)+\tfrac{4\mu_{l-1}}{\ell^{2}}\big(M_\varepsilon^{T}\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes+\tfrac{2l}{\kappa_l}(H)_\otimes,\\ \mathcal{O}_{55}&=-\lambda_fI+2\big(A_\varepsilon^{T}\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes,\qquad \mathcal{O}_{66}=-\lambda_gI+\big(\Xi_\varepsilon^{T}\bar{D}Q\bar{D}\Xi_\varepsilon\big)_\otimes,\\ \mathcal{O}_{14}&=\mathcal{O}_{16}=\mathcal{O}_{23}=\mathcal{O}_{24}=\mathcal{O}_{26}=\mathcal{O}_{34}=\mathcal{O}_{35}=\mathcal{O}_{36}=\mathcal{O}_{45}=\mathcal{O}_{46}=\mathcal{O}_{56}=0.\end{aligned}$$
Here $\mathrm{sym}(A)=A+A^{T}$. The controller gain is
$$\Theta=\big(\bar{D}Q\bar{D}M_\varepsilon\big)^{-1}K.$$
Proof. 
Let us define a Lyapunov-Krasovskii functional
$$V_k=V_{1,k}+V_{2,k},$$
where
$$V_{1,k}=\sum_{\iota=1}^{l-1}e_{u,k}[\iota]^T(I_N\otimes P)e_{u,k}[\iota],\qquad V_{2,k}=\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T(I_N\otimes Q)e_{v,k}[\iota],\qquad k\in\mathbb{Z}_0.$$
In line with the first equation of the error networks Equation (11), we can derive
$$\begin{aligned}\mathbb{E}\Delta V_{1,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}e_{u,k+1}[\iota]^T(I_N\otimes P)e_{u,k+1}[\iota]-\mathbb{E}V_{1,k}\\&=\mathbb{E}\sum_{\iota=1}^{l-1}e_{u,k}[\iota]^T\big(e^{-Ih}Pe^{-Ih}-P\big)_\otimes e_{u,k}[\iota]+\varepsilon\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(e^{-Ih}P(I-e^{-Ih})\big)_\otimes e_{v,k}[\iota]\Big)\\&\quad+\varepsilon^{2}\,\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big((I-e^{-Ih})P(I-e^{-Ih})\big)_\otimes e_{v,k}[\iota],\qquad k\in\mathbb{Z}_0.\end{aligned}\qquad(16)$$
According to the second equation of networks Equation (11), we get
$$\mathbb{E}V_{2,k+1}=\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k+1}[\iota]^T(I_N\otimes Q)e_{v,k+1}[\iota]=\sum_{i=1}^{16}U_{i,k},\qquad k\in\mathbb{Z}_0,\qquad(17)$$
where
$$\begin{aligned}U_{1,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(e^{-Dh}Qe^{-Dh}\big)_\otimes e_{v,k}[\iota],\\ U_{2,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(e^{-Dh}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1]\Big),\\ U_{3,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(e^{-Dh}Q\bar{D}C_\varepsilon\big)_\otimes e_{u,k}[\iota]\Big),\\ U_{4,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(e^{-Dh}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota])\Big),\\ U_{5,k}&=\alpha\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(e^{-Dh}Q\bar{D}\Gamma\big)_Be_{v,k}[\iota]\Big),\\ U_{6,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1],\\ U_{7,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}C_\varepsilon\big)_\otimes e_{u,k}[\iota]\Big),\\ U_{8,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota])\Big),\\ U_{9,k}&=\alpha\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}\Gamma\big)_Be_{v,k}[\iota]\Big),\\ U_{10,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}C_\varepsilon\big)_\otimes e_{u,k}[\iota],\\ U_{11,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota])\Big),\\ U_{12,k}&=\alpha\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}\Gamma\big)_Be_{v,k}[\iota]\Big),\\ U_{13,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}F^T(e_{u,k}[\iota])\big(A_\varepsilon^T\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota]),\\ U_{14,k}&=\alpha\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(F^T(e_{u,k}[\iota])\big(A_\varepsilon^T\bar{D}Q\bar{D}\Gamma\big)_Be_{v,k}[\iota]\Big),\\ U_{15,k}&=\alpha^{2}\,\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(\Gamma^T\bar{D}Q\bar{D}\Gamma\big)_{B^TB}e_{v,k}[\iota],\\ U_{16,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}G^T(e_{u,k}[\iota])\big(\Xi_\varepsilon^T\bar{D}Q\bar{D}\Xi_\varepsilon\big)_\otimes w_k^{2}\,G(e_{u,k}[\iota]).\end{aligned}$$
According to Lemmas 1–3 and boundary conditions in Equation (11), we calculate
$$U_{2,k}\le\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(e^{-Dh}Qe^{-Dh}\big)_\otimes e_{v,k}[\iota]+\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1]\le\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(e^{-Dh}Qe^{-Dh}\big)_\otimes e_{v,k}[\iota]+\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota],\qquad(18)$$
$$U_{6,k}=\mathbb{E}\sum_{\iota=0}^{l-2}\bar{\Delta}^{2}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota]\le\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota],\qquad(19)$$
$$\begin{aligned}U_{7,k}&=\frac{1}{\ell}\,\mathbb{E}\,\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota-1]\Big)\Big|_{1}^{l}-\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\bar{\Delta}e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota]\Big)\\&=\frac{1}{\ell}\,\mathbb{E}\,\mathrm{sym}\Big(e_{u,k}[l]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\rho_k\Big)-\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\,\mathrm{sym}\big((C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon)_\otimes\big)\bar{\Delta}e_{u,k}[\iota],\end{aligned}\qquad(20)$$
$$U_{8,k}\le\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1]+\mathbb{E}\sum_{\iota=1}^{l-1}F^T(e_{u,k}[\iota])\big(A_\varepsilon^T\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota])\le\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota]+\mathbb{E}\sum_{\iota=1}^{l-1}F^T(e_{u,k}[\iota])\big(A_\varepsilon^T\bar{D}Q\bar{D}A_\varepsilon\big)_\otimes F(e_{u,k}[\iota]),\qquad(21)$$
$$U_{9,k}\le\alpha^{2}\,\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(\Gamma^T\bar{D}Q\bar{D}\Gamma\big)_{B^TB}e_{v,k}[\iota]+\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1]\le\alpha^{2}\,\mathbb{E}\sum_{\iota=1}^{l-1}e_{v,k}[\iota]^T\big(\Gamma^T\bar{D}Q\bar{D}\Gamma\big)_{B^TB}e_{v,k}[\iota]+\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota],\qquad(22)$$
$$U_{16,k}=\mathbb{E}\sum_{\iota=1}^{l-1}G^T(e_{u,k}[\iota])\big(\Xi_\varepsilon^T\bar{D}Q\bar{D}\Xi_\varepsilon\big)_\otimes G(e_{u,k}[\iota]),\qquad k\in\mathbb{Z}_0.\qquad(23)$$
With the help of (F), we have
$$\sum_{\iota=1}^{l-1}F^T(e_{u,k}[\iota])F(e_{u,k}[\iota])\le\sum_{\iota=1}^{l-1}e_{u,k}[\iota]^T(L_f)_\otimes e_{u,k}[\iota],\qquad \sum_{\iota=1}^{l-1}G^T(e_{u,k}[\iota])G(e_{u,k}[\iota])\le\sum_{\iota=1}^{l-1}e_{u,k}[\iota]^T(L_g)_\otimes e_{u,k}[\iota],\qquad(24)$$
and, by using $\hat{e}_{u,\cdot}[\cdot]:=e_{u,\cdot}[l]-e_{u,\cdot}[\cdot]$ and Lemma 4, we get
$$\begin{aligned}\sum_{\iota=1}^{l}\hat{e}_{u,k}[\iota]^T(H)_\otimes\hat{e}_{u,k}[\iota]&\le\frac{2}{\kappa_l}\sum_{\iota=1}^{l-1}\Delta\hat{e}_{u,k}[\iota]^T(H)_\otimes\Delta\hat{e}_{u,k}[\iota]+\frac{1}{\kappa_l}\big(e_{u,k}[l]-e_{u,k}[1]\big)^T(H)_\otimes\big(e_{u,k}[l]-e_{u,k}[1]\big)\\&\le\frac{2}{\kappa_l}\sum_{\iota=1}^{l-1}\Delta\hat{e}_{u,k}[\iota]^T(H)_\otimes\Delta\hat{e}_{u,k}[\iota]+\frac{2}{\kappa_l}\Big(\sum_{\iota=1}^{l-1}\Delta\hat{e}_{u,k}[\iota]\Big)^T(H)_\otimes\Big(\sum_{\iota=1}^{l-1}\Delta\hat{e}_{u,k}[\iota]\Big)\\&\le\frac{2l}{\kappa_l}\sum_{\iota=1}^{l-1}\Delta\hat{e}_{u,k}[\iota]^T(H)_\otimes\Delta\hat{e}_{u,k}[\iota],\qquad k\in\mathbb{Z}_0.\end{aligned}\qquad(25)$$
Considering Equation (20), we have
$$\begin{aligned}\varrho_k&:=\frac{1}{\ell}\,\mathrm{sym}\Big(e_{u,k}[l]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\rho_k\Big)=-\frac{1}{\ell}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[l]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\Theta\big)_\otimes e_{u,k}[\iota]\Big)\\&=-\frac{1}{\ell}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\hat{e}_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\Theta\big)_\otimes e_{u,k}[\iota]\Big)-\frac{1}{\ell}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon\Theta\big)_\otimes e_{u,k}[\iota]\Big)\end{aligned}\qquad(26)$$
for all $k\in\mathbb{Z}_0$.
Taking into account Equations (16)–(26), we obtain
$$\mathbb{E}\Delta V_k=\mathbb{E}\Delta V_{1,k}+\mathbb{E}\Delta V_{2,k}\le\mathbb{E}\sum_{\iota=1}^{l-1}\xi_k[\iota]^T\,\mathcal{O}\,\xi_k[\iota],\qquad k\in\mathbb{Z}_0,\qquad(27)$$
where $\xi_k[\iota]:=\Big(e_{u,k}[\iota]^T,\,e_{v,k}[\iota]^T,\,\hat{e}_{u,k}[\iota]^T,\,\bar{\Delta}e_{u,k}[\iota]^T,\,F^T(e_{u,k}[\iota]),\,G^T(e_{u,k}[\iota])\Big)^T$ for $k\in\mathbb{Z}_0$, $\iota=1,2,\ldots,l-1$.
Based on Equation (27), we get
$$\mathbb{E}\Delta V_k\le\lambda_{\max}(\mathcal{O})\sum_{\iota=1}^{l-1}\Big(\mathbb{E}\|e_{u,k}[\iota]\|^{2}+\mathbb{E}\|e_{v,k}[\iota]\|^{2}\Big),\qquad k\in\mathbb{Z}_0.\qquad(28)$$
With the help of Equation (13), we get
$$\mathbb{E}V_0\le\max\big\{\lambda_{\max}(P),\lambda_{\max}(Q)\big\}\,\mathbb{E}\sum_{\iota=1}^{l-1}\Big(\|e_{u,0}[\iota]\|^{2}+\|e_{v,0}[\iota]\|^{2}\Big)<\infty.\qquad(29)$$
Noting that $\lambda_{\max}(\mathcal{O})<0$ owing to the assumption $\mathcal{O}<0$ in Theorem 1, we can use Equations (28) and (29) to arrive at
$$-\lambda_{\max}(\mathcal{O})\sum_{k'=1}^{k-1}\sum_{\iota=1}^{l-1}\Big(\mathbb{E}\|e_{u,k'}[\iota]\|^{2}+\mathbb{E}\|e_{v,k'}[\iota]\|^{2}\Big)\le-\big(\mathbb{E}V_k-\mathbb{E}V_0\big)\le\mathbb{E}V_0,$$
which is equivalent to
$$\sum_{k'=1}^{k-1}\sum_{\iota=1}^{l-1}\Big(\mathbb{E}\|e_{u,k'}[\iota]\|^{2}+\mathbb{E}\|e_{v,k'}[\iota]\|^{2}\Big)\le\frac{\mathbb{E}V_0}{-\lambda_{\max}(\mathcal{O})}<\infty\ \ \text{for all }k,\qquad\text{and hence}\qquad\sum_{k=1}^{\infty}\sum_{\iota=1}^{l-1}\Big(\mathbb{E}\|e_{u,k}[\iota]\|^{2}+\mathbb{E}\|e_{v,k}[\iota]\|^{2}\Big)<\infty.$$
Then,
$$\lim_{k\to\infty}\sum_{\iota=1}^{l-1}\mathbb{E}\|e_{u,k}[\iota]\|^{2}=0=\lim_{k\to\infty}\sum_{\iota=1}^{l-1}\mathbb{E}\|e_{v,k}[\iota]\|^{2},$$
which implies that model Equation (11) achieves global mean-squared asymptotic stability. This completes the proof.    □
From Lemma 4, the following inequality is valid:
$$\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota]\le\frac{(4-\kappa_l)\mu_{l-1}}{\ell^{4}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\hat{e}_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\hat{e}_{u,k}[\iota],$$
where $k\in\mathbb{Z}_0$. Further,
$$\mathbb{E}\Delta V_k\le\mathbb{E}\sum_{\iota=1}^{l-1}\xi_k[\iota]^T\,\tilde{\mathcal{O}}\,\xi_k[\iota],\qquad k\in\mathbb{Z}_0,$$
where $\tilde{\mathcal{O}}=(\tilde{\mathcal{O}}_{ij})_{1\le i,j\le6}$ is defined as $\mathcal{O}$ in Theorem 1, except that
$$\tilde{\mathcal{O}}_{33}=-(H)_\otimes+\frac{4(1-\beta)(4-\kappa_l)\mu_{l-1}}{\ell^{4}}\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes,\qquad \tilde{\mathcal{O}}_{44}=-\mathrm{sym}\big((C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon)_\otimes\big)+\frac{4\mu_{l-1}\beta}{\ell^{2}}\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes+\frac{2l}{\kappa_l}(H)_\otimes.\qquad(31)$$
So, we have the following:
Corollary 1. 
Assume that (F) is valid and that $\varepsilon>0$ and $\beta\in[0,1]$ are pre-given. Additionally, assume that $\bar{D}$ and $M_\varepsilon$ are nonsingular, and define $\Theta$ as indicated in Theorem 1. Under these conditions, the slave INNs Equation (1) stochastically synchronizes with the master INNs Equation (5), meaning that model Equation (11) is globally asymptotically stable in mean square. This holds true if there exist positive constants $\lambda_f$, $\lambda_g$ and positive definite $n$-order matrices $P$, $Q$, $H$, and $K$ such that the matrix $\tilde{\mathcal{O}}$ defined in Equation (31) is negative definite.
Remark 1. 
Reports [22,24] addressed synchronization problems for inertial neural networks with reaction-diffusion terms. However, the networks in [22,24] were subject to Dirichlet boundary conditions, and the controller was embedded in the model of the networks. In this article, the controller does not appear in the model of the networks; instead, it is designed at the boundary.
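As a small numerical illustration of the gain formula in Theorem 1, the following Python sketch evaluates $\Theta=(\bar{D}Q\bar{D}M_\varepsilon)^{-1}K$, assuming $\bar{D}=(I-e^{-Dh})D^{-1}$ as defined above. The matrices $Q$ and $K$ used here are placeholders rather than actual LMI solutions.

```python
import numpy as np

# Sketch of step (3) of the design: Theta = (Dbar Q Dbar M_eps)^{-1} K,
# with Dbar = (I - e^{-D h}) D^{-1} and M_eps = M / eps.  Q and K below are
# placeholders standing in for matrices returned by an LMI solver.
h, eps = 0.01, 0.1
D = np.diag([100.0, 50.0])
M = 0.1 * np.array([[1.0, 1.0], [0.0, 2.0]])
Q = np.array([[2.5, 0.01], [0.01, 1.5]])        # placeholder LMI solution Q > 0
K = np.array([[0.02, 0.005], [0.005, 0.004]])   # placeholder LMI solution K > 0

Dbar = (np.eye(2) - np.diag(np.exp(-np.diag(D) * h))) @ np.linalg.inv(D)
M_eps = M / eps
Theta = np.linalg.solve(Dbar @ Q @ Dbar @ M_eps, K)   # (Dbar Q Dbar M_eps)^{-1} K
print(np.round(Theta, 4))
```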

3.2. Passivity-Based Control

Consider the error vector networks Equation (11) with the supply rate
$$\varpi(Y,\gamma):=\sum_{\iota=1}^{l-1}Y_\cdot[\iota]^T\gamma_\cdot[\iota]\quad\text{for some output }Y\in\mathbb{R}^{Nn}.$$
This system is said to be stochastically passive if there exists a nonnegative mapping $\theta$ that satisfies
$$\mathbb{E}\sum_{k=s_1}^{s_2-1}\sum_{\iota=1}^{l-1}Y_k[\iota]^T\gamma_k[\iota]\ge\theta(s_2)-\theta(s_1),\qquad s_1<s_2,\ \ s_1,s_2\in\mathbb{Z}_0.$$
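For intuition, the following hypothetical Python helper (not from the paper) checks a sample-path version of this passivity inequality on recorded trajectories: it accumulates the supply rate over every window $[s_1,s_2)$ and compares it with the increment of a given storage sequence $\theta$. The expectation is approximated by whatever samples are passed in.

```python
import numpy as np

def check_passivity(Y, gamma, theta, tol=1e-9):
    """Sample-path passivity check.

    Y, gamma: arrays of shape (K, l-1, Nn) holding output and input trajectories.
    theta:    storage sequence of length K+1 (nonnegative by assumption).
    Returns True if the accumulated supply rate dominates every storage increment.
    """
    supply = np.einsum('kid,kid->k', Y, gamma)          # per-step supply rate
    cum = np.concatenate(([0.0], np.cumsum(supply)))    # cum[s] = sum_{k < s} supply_k
    K = Y.shape[0]
    for s1 in range(K):
        for s2 in range(s1 + 1, K + 1):
            if cum[s2] - cum[s1] < theta[s2] - theta[s1] - tol:
                return False
    return True

# Purely illustrative example with synthetic data:
rng = np.random.default_rng(2)
Y = rng.standard_normal((20, 24, 4))
gamma = Y.copy()                                        # supply rate is nonnegative here
theta = np.zeros(21)                                    # trivial storage function
print(check_passivity(Y, gamma, theta))                 # True
```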
Theorem 2. 
Let Hypothesis (F) be satisfied, $\varepsilon>0$ be given, and $\bar{D}$, $M_\varepsilon$ be nonsingular. Additionally, let the controller gain $\Theta$ be as provided in Theorem 1. The error networks Equation (11) are stochastically passive if there exist positive constants $\lambda_f,\lambda_g$ and $n$-order positive definite matrices $P$, $Q$, $H$, $K$, $\Pi_1$, $\Pi_2$, $\Pi_3$ such that
$$\mathcal{O}':=\begin{bmatrix}\mathcal{O}'_{11}&\mathcal{O}'_{12}&\mathcal{O}'_{13}&\mathcal{O}'_{14}&\mathcal{O}'_{15}&\mathcal{O}'_{16}&\mathcal{O}'_{17}\\ *&\mathcal{O}'_{22}&\mathcal{O}'_{23}&\mathcal{O}'_{24}&\mathcal{O}'_{25}&\mathcal{O}'_{26}&\mathcal{O}'_{27}\\ *&*&\mathcal{O}'_{33}&\mathcal{O}'_{34}&\mathcal{O}'_{35}&\mathcal{O}'_{36}&\mathcal{O}'_{37}\\ *&*&*&\mathcal{O}'_{44}&\mathcal{O}'_{45}&\mathcal{O}'_{46}&\mathcal{O}'_{47}\\ *&*&*&*&\mathcal{O}'_{55}&\mathcal{O}'_{56}&\mathcal{O}'_{57}\\ *&*&*&*&*&\mathcal{O}'_{66}&\mathcal{O}'_{67}\\ *&*&*&*&*&*&\mathcal{O}'_{77}\end{bmatrix}<0,$$
where
$$\begin{aligned}\mathcal{O}'_{44}&=\mathcal{O}_{44}+\frac{\mu_{l-1}}{\ell^{2}}\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes,\qquad \mathcal{O}'_{17}=-(\Pi_1)_\otimes+\big(C_\varepsilon\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes,\\ \mathcal{O}'_{27}&=-(\Pi_2)_\otimes+\big(e^{-Dh}Q\bar{D}\Lambda_\varepsilon\big)_\otimes+\alpha\big(\Gamma^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_{B^T},\qquad \mathcal{O}'_{57}=\big(A_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes,\\ \mathcal{O}'_{77}&=-2(\Pi_3)_\otimes+2\big(\Lambda_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes,\qquad \mathcal{O}'_{37}=\mathcal{O}'_{47}=\mathcal{O}'_{67}=0,\end{aligned}$$
and the other unmentioned block matrices $\mathcal{O}'_{ij}$ in $\mathcal{O}'$ are equal to the corresponding $\mathcal{O}_{ij}$ in $\mathcal{O}$ for $i,j=1,2,\ldots,6$. Hereby, $\Pi_1,\Pi_2,\Pi_3$ are the weighting matrices of the output $Y$ in Equation (32) below.
Proof. 
Define the Lyapunov-Krasovskii functional $V$ for the error vector networks Equation (11), following the approach described in Section 3.1. Additionally, introduce an output vector $Y\in\mathbb{R}^{Nn}$ for the error vector networks Equation (11) using the expression
$$Y=(I_N\otimes\Pi_1)e_u+(I_N\otimes\Pi_2)e_v+(I_N\otimes\Pi_3)\gamma.\qquad(32)$$
Similar to the argument in Equation (17), we get
$$\mathbb{E}V_{2,k+1}=\sum_{i=1}^{16}U_{i,k}+\sum_{i=17}^{22}U_{i,k},\qquad k\in\mathbb{Z}_0,\qquad(33)$$
where
$$\begin{aligned}U_{17,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(e^{-Dh}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota]\Big),\\ U_{18,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota]\Big),\\ U_{19,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{u,k}[\iota]^T\big(C_\varepsilon\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota]\Big),\\ U_{20,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(F^T(e_{u,k}[\iota])\big(A_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota]\Big),\\ U_{21,k}&=\alpha\,\mathbb{E}\sum_{\iota=1}^{l-1}\mathrm{sym}\Big(e_{v,k}[\iota]^T\big(\Gamma^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_{B^T}\gamma_k[\iota]\Big),\\ U_{22,k}&=\mathbb{E}\sum_{\iota=1}^{l-1}\gamma_k[\iota]^T\big(\Lambda_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota],\qquad k\in\mathbb{Z}_0.\end{aligned}$$
Meanwhile, similar to the estimates in inequalities Equations (18)–(23), we obtain from Equation (33) the following:
$$U_{18,k}\le\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}^{2}e_{u,k}[\iota-1]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}^{2}e_{u,k}[\iota-1]+\mathbb{E}\sum_{\iota=1}^{l-1}\gamma_k[\iota]^T\big(\Lambda_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota]\le\frac{\mu_{l-1}}{\ell^{2}}\,\mathbb{E}\sum_{\iota=1}^{l-1}\bar{\Delta}e_{u,k}[\iota]^T\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes\bar{\Delta}e_{u,k}[\iota]+\mathbb{E}\sum_{\iota=1}^{l-1}\gamma_k[\iota]^T\big(\Lambda_\varepsilon^T\bar{D}Q\bar{D}\Lambda_\varepsilon\big)_\otimes\gamma_k[\iota],\qquad(34)$$
where $k\in\mathbb{Z}_0$.
By employing Equations (16)–(26), (33), and (34), we can compute
$$\mathbb{E}\Delta V_k-2\,\mathbb{E}\sum_{\iota=1}^{l-1}Y_k[\iota]^T\gamma_k[\iota]\le\mathbb{E}\sum_{\iota=1}^{l-1}\eta_k[\iota]^T\,\mathcal{O}'\,\eta_k[\iota],\qquad k\in\mathbb{Z}_0,\qquad(35)$$
where $\eta_k[\iota]:=\Big(e_{u,k}[\iota]^T,\,e_{v,k}[\iota]^T,\,\hat{e}_{u,k}[\iota]^T,\,\bar{\Delta}e_{u,k}[\iota]^T,\,F^T(e_{u,k}[\iota]),\,G^T(e_{u,k}[\iota]),\,\gamma_k[\iota]^T\Big)^T$ for $k\in\mathbb{Z}_0$, $\iota=1,2,\ldots,l-1$.
In accordance with Equation (35), we get
$$2\,\mathbb{E}\sum_{\iota=1}^{l-1}Y_k[\iota]^T\gamma_k[\iota]\ge\mathbb{E}\Delta V_k,$$
which is equivalent to
$$2\,\mathbb{E}\sum_{k=s_1}^{s_2-1}\sum_{\iota=1}^{l-1}Y_k[\iota]^T\gamma_k[\iota]\ge\mathbb{E}V_{s_2}-\mathbb{E}V_{s_1},\qquad s_1<s_2,\ \ s_1,s_2\in\mathbb{Z}_0.$$
Accordingly, INNs Equation (11) is stochastically passive. This completes the proof.    □
So, we have the following:
Corollary 2. 
Assume that (F) is satisfied, $\varepsilon>0$ and $\beta\in[0,1]$ are pre-given, $\bar{D}$ and $M_\varepsilon$ are nonsingular, and the controller gain $\Theta$ is provided as in Theorem 1. The error networks Equation (11) are stochastically passive if there exist positive constants $\lambda_f,\lambda_g$ and $n$-order positive definite matrices $P$, $Q$, $H$, $K$, $\Pi_1$, $\Pi_2$, and $\Pi_3$ such that $\tilde{\mathcal{O}}'<0$. Here, $\tilde{\mathcal{O}}'=(\tilde{\mathcal{O}}'_{ij})_{1\le i,j\le7}$ is defined as $\mathcal{O}'$ in Theorem 2 except for the following modifications:
$$\tilde{\mathcal{O}}'_{33}=-(H)_\otimes+\frac{5(1-\beta)(4-\kappa_l)\mu_{l-1}}{\ell^{4}}\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes,\qquad \tilde{\mathcal{O}}'_{44}=-\mathrm{sym}\big((C_\varepsilon\bar{D}Q\bar{D}M_\varepsilon)_\otimes\big)+\frac{5\mu_{l-1}\beta}{\ell^{2}}\big(M_\varepsilon^T\bar{D}Q\bar{D}M_\varepsilon\big)_\otimes+\frac{2l}{\kappa_l}(H)_\otimes.$$
According to Theorems 1 and 2, a realizable algorithm for stochastic synchronization or passivity of INNs Equations (1) and (5) is designed as Algorithm 1, and its flow chart is depicted in Figure 1.
Algorithm 1 Stochastic synchronization or passivity of INNs Equations (1) and (5)
(1) 
Initialize the values of the coefficient matrices in INNs Equations (1) and (5)
(2) 
Solve the LMIs in Theorem 1 or Theorem 2. If they are infeasible, modify the values of the coefficient matrices in INNs Equation (1); otherwise, go to the next step.
(3) 
Retrieve the values of the matrices $P$, $Q$, $K$, etc., and calculate the controller gain $\Theta=\big(\bar{D}Q\bar{D}M_\varepsilon\big)^{-1}K$.
(4) 
Write an iterative program based on INNs Equations (1) and (5) and plot the response trajectories.
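To give a flavor of step (2), the Python sketch below runs an LMI feasibility check with CVXPY on a small toy LMI. The toy block matrix is a stand-in for illustration only, not the actual block matrix $\mathcal{O}$ of Theorem 1, and all matrices and thresholds are assumed placeholders.

```python
import numpy as np
import cvxpy as cp

# Schematic of step (2) of Algorithm 1 on a TOY LMI (not the block matrix O of
# Theorem 1): search for P > 0, Q > 0, K > 0 making a model-dependent block
# matrix negative definite; if infeasible, the coefficient matrices would be
# adjusted and the step repeated.
n = 2
A0 = np.array([[-0.9, 0.2], [0.0, -0.8]])              # placeholder stable system matrix
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
K = cp.Variable((n, n), symmetric=True)
O = cp.bmat([[A0.T @ P @ A0 - P + Q, K],
             [K, -Q]])
constraints = [P >> np.eye(n),
               Q >> 1e-4 * np.eye(n),
               K >> 1e-4 * np.eye(n),
               O << -1e-4 * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                                     # 'optimal' means the toy LMI is feasible
if prob.status == cp.OPTIMAL:
    # Step (3) would then form Theta = (Dbar Q Dbar M_eps)^{-1} K from the
    # solved Q and K, as in Theorem 1 (see the sketch after Remark 1).
    print(np.round(K.value, 4))
```

In practice, the toy matrix above would be replaced by the full block matrix of Theorem 1 (or Theorem 2), assembled from the model data and the decision variables.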
Remark 2. 
Papers [44,45] investigated the passivity of inertial neural networks without reaction-diffusion terms. This paper considers the effects of the reaction diffusions, which complements the works in the literature [44,45].

4. Numerical Example

In view of INNs Equation (1), we take $\alpha=0.1$, $J=(10,12)^T$,
$$D=50\begin{bmatrix}2&0\\0&1\end{bmatrix},\qquad C=45\begin{bmatrix}2&0\\0&1\end{bmatrix},\qquad M=0.1\begin{bmatrix}1&1\\0&2\end{bmatrix},\qquad A=0.1\begin{bmatrix}2&1\\0&2\end{bmatrix},$$
$$B=0.1\begin{bmatrix}-2&2\\1&-1\end{bmatrix},\qquad \Gamma=0.01\begin{bmatrix}2&0\\0&3\end{bmatrix},\qquad \Xi=0.01\begin{bmatrix}2&0\\0&1\end{bmatrix}.$$
Taking $\varepsilon=0.1$, $h=0.01$, $\ell=0.2$, $l=25$, and $f(x)=(f_1(x),f_2(x))^T=0.1(\sin x_1,|x_2|)^T=(g_1(x),g_2(x))^T=g(x)$ for any $x=(x_1,x_2)^T\in\mathbb{R}^{2}$, from Theorem 1 we can determine that $\lambda_f=32693$, $\lambda_g=32686$,
$$P=\begin{bmatrix}1.2059&0.0142\\0.0142&1.6238\end{bmatrix}\times10^{5},\qquad Q=\begin{bmatrix}2.5836&0.0082\\0.0082&1.5422\end{bmatrix}\times10^{4},$$
$$H=\begin{bmatrix}6.9393&3.4242\\3.4242&5.5926\end{bmatrix},\qquad K=\begin{bmatrix}0.0197&0.0049\\0.0049&0.0042\end{bmatrix}.$$
In addition,
$$\Theta=\begin{bmatrix}0.0164&0.0026\\0.0026&0.0022\end{bmatrix}.$$
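As a quick sanity check of the solver output, the short Python snippet below verifies that the reported matrices are positive definite, as Theorem 1 requires. The entries are transcribed as printed above; a positive power-of-ten scaling of $P$ and $Q$ does not affect this check.

```python
import numpy as np

# Positive-definiteness check of the reported LMI variables (entries as printed
# in the example; any positive scaling of P and Q leaves the check unchanged).
P = np.array([[1.2059, 0.0142], [0.0142, 1.6238]])
Q = np.array([[2.5836, 0.0082], [0.0082, 1.5422]])
H = np.array([[6.9393, 3.4242], [3.4242, 5.5926]])
K = np.array([[0.0197, 0.0049], [0.0049, 0.0042]])
for name, X in [("P", P), ("Q", Q), ("H", H), ("K", K)]:
    print(name, "min eigenvalue:", np.linalg.eigvalsh(X).min())   # all should be > 0
```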
By Theorem 1, INNs Equations (1) and (5) realize stochastic synchronization, see Figure 2, Figure 3, Figure 4 and Figure 5.
Furthermore, take
$$\Lambda=0.1\begin{bmatrix}1&0\\0&3\end{bmatrix},\qquad \gamma_{1,k}[\iota]=\big(10+\sin(\iota+k),\,8+\cos(\iota+k)\big)^T,\qquad \gamma_{2,k}[\iota]=\big(10+\sin(2\iota+k),\,8+\cos(2\iota+k)\big)^T,$$
for $k\in\mathbb{Z}_0$, $\iota=1,2,\ldots,l$. The output vector $Y\in\mathbb{R}^{4}$ for the network is defined as in Equation (32) with the following matrices:
$$\Pi_1=\begin{bmatrix}183.9618&0.2256\\0.2256&173.1908\end{bmatrix},\qquad \Pi_2=\begin{bmatrix}376.9862&0.1127\\0.1127&167.3857\end{bmatrix},\qquad \Pi_3=\begin{bmatrix}1617.7&0.1\\0.1&1721\end{bmatrix}.$$
By Theorem 2, we have $\lambda_f=1825.8$, $\lambda_g=1825.7$,
$$P=\begin{bmatrix}8041.2&71.1\\71.1&8871.3\end{bmatrix},\qquad Q=\begin{bmatrix}1374.2&8.6\\8.6&635.3\end{bmatrix},$$
$$H=\begin{bmatrix}0.4183&0.1974\\0.1974&0.2709\end{bmatrix},\qquad K=\begin{bmatrix}0.0011&0.0005\\0.0005&0.0011\end{bmatrix}.$$
Now, the controller gain of the boundary controller is given by
$$\Theta=\begin{bmatrix}0.0147&0.0058\\0.0059&0.0142\end{bmatrix}.$$
According to Theorem 2, INNs Equations (1) and (5) achieve stochastic passivity, as in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.
Remark 3. 
In the previous work [38], the authors discussed passivity of non-autonomous discrete-time inertial neural networks, overlooking discrete spatial diffusions. By contrast, the present paper takes them into account, as can be seen in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.

5. Conclusions and Future Works

For the first time, this work investigates discrete SINNs under the influence of spatial diffusions.
Firstly, we present the time and space difference model of SINNs with reaction diffusions using the time and space difference approaches, respectively.
Secondly, with the aid of a controller designed at the boundary, we address the issues of both stochastic synchronization and passivity-based control, employing the Lyapunov-Krasovskii function method.
As anticipated, we provide decision theorems for the aforementioned research topics concerning discrete SINNs. It is important to note that the method employed in this article predominantly considers homogeneous networks described by INNs Equations (1) and (5), making the study of heterogeneous networks challenging (see ref. [46]).
Moving forward, several aspects merit consideration in future work:
  • Fractional dynamics has become a research hotspot in recent years and could be incorporated into the SINNs of this article.
  • This paper only considers 1-dimensional space variables, which could be extended to higher dimensions.
  • Exploration of alternative control techniques, such as impulsive controls and adaptive controls, holds promise for further investigation.

Author Contributions

Conceptualization, Y.Y. and T.Z.; Methodology, Y.Y. and T.Z.; Investigation, Y.Y., T.Z. and Z.L.; Writing-original draft, Y.Y., T.Z. and Z.L.; Writing-review and editing, Y.Y., T.Z. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Scientific Research Projects of Colleges and Universities of Henan Province under Grant No. 21A110002.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ganesan, B.; Mani, P.; Shanmugam, L.; Annamalai, M. Synchronization of stochastic neural networks using looped-Lyapunov functional and its application to secure communication. IEEE Trans. Neural Netw. Learn. Syst. 2022; in press. [Google Scholar] [CrossRef]
  2. Alsaedi, A.; Cao, J.D.; Ahmad, B.; Alshehri, A.; Tan, X. Synchronization of master-slave memristive neural networks via fuzzy output-based adaptive strategy. Chaos Soliton Fractal 2022, 158, 112095. [Google Scholar] [CrossRef]
  3. Liu, F.; Meng, W.; Lu, R.Q. Anti-synchronization of discrete-time fuzzy memristive neural networks via impulse sampled-data communication. IEEE Trans. Cybern. 2022; in press. [Google Scholar] [CrossRef]
  4. Zhou, C.; Wang, C.; Sun, Y.; Yao, W.; Lin, H. Cluster output synchronization for memristive neural networks. Inf. Sci. 2022, 589, 459–477. [Google Scholar] [CrossRef]
  5. Li, H.Y.; Fang, J.A.; Li, X.F.; Rutkowski, L.; Huang, T.W. Event-triggered synchronization of multiple discrete-time Markovian jump memristor-based neural networks with mixed mode-dependent delays. IEEE Trans. Circuits Syst. I-Regul. Pap. 2022, 69, 2095–2107. [Google Scholar] [CrossRef]
  6. Boonsatit, N.; Rajendran, S.; Lim, C.P.; Jirawattanapanit, A.; Mohandas, P. New adaptive finite-time cluster synchronization of neutral-type complex-valued coupled neural networks with mixed time delays. Fractal Fract. 2022, 6, 6090515. [Google Scholar] [CrossRef]
  7. Zaferani, E.J.; Teshnehlab, M.; Khodadadian, A.; Heitzinger, C.; Vali, M.; Noii, N.; Wick, T. Hyper-parameter optimization of stacked asymmetric auto-encoders for automatic personality traits perception. Sensors 2022, 22, 6206. [Google Scholar] [CrossRef]
  8. Alimi, A.M.; Aouiti, C.; Assali, E.A. Finite-time and fixed-time synchronization of a class of inertial neural networks with multiproportional delays and its application to secure communication. Neurocomputing 2019, 332, 29–43. [Google Scholar] [CrossRef]
  9. Song, X.N.; Man, J.; Park, J.H.; Song, S. Finite-time synchronization of reaction-diffusion inertial memristive neural networks via gain-scheduled pinning control. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 5045–5056. [Google Scholar] [CrossRef] [PubMed]
  10. Shen, H.; Huang, Z.; Wu, Z.; Cao, J.D.; Park, J.H. Nonfragile H∞ synchronization of BAM inertial neural networks subject to persistent dwell-time switching regularity. IEEE Trans. Cybern. 2022, 52, 6591–6602. [Google Scholar] [CrossRef]
  11. Shanmugasundaram, S.; Udhayakumar, K.; Gunasekaran, D.; Rakkiyappan, R. Event-triggered impulsive control design for synchronization of inertial neural networks with time delays. Neurocomputing 2022, 483, 322–332. [Google Scholar] [CrossRef]
  12. Liu, J.; Shu, L.; Chen, Q.; Zhong, S. Fixed-time synchronization criteria of fuzzy inertial neural networks via Lyapunov functions with indefinite derivatives and its application to image encryption. Fuzzy Sets Syst. 2022; in press. [Google Scholar] [CrossRef]
  13. Peng, Q.; Jian, J. Synchronization analysis of fractional-order inertial-type neural networks with time delays. Math. Comput. Simul. 2023, 205, 62–77. [Google Scholar] [CrossRef]
  14. Zhou, W.J.; Long, M.; Liu, X.Z.; Wu, K.N. Passivity-based boundary control for stochastic delay reaction-diffusion systems. Int. J. Syst. Sci. 2022; in press. [Google Scholar] [CrossRef]
  15. Padmaja, N.; Balasubramaniam, P. Mixed H∞/passivity based stability analysis of fractional-order gene regulatory networks with variable delays. Math. Comput. Simul. 2022, 192, 167–181. [Google Scholar] [CrossRef]
  16. Shafiya, M.; Nagamani, G. New finite-time passivity criteria for delayed fractional-order neural networks based on Lyapunov function approach. Chaos Solitons Fractals 2022, 158, 112005. [Google Scholar] [CrossRef]
  17. Wang, J.; Jiang, H.; Hu, C.; Ma, T. Exponential passivity of discrete-time switched neural networks with transmission delays via an event-triggered sliding mode control. Neural Netw. 2021, 143, 271–282. [Google Scholar] [CrossRef]
  18. Huang, Y.; Wu, F. Finite-time passivity and synchronization of coupled complex-valued memristive neural networks. Inf. Sci. 2021, 580, 775–800. [Google Scholar] [CrossRef]
  19. Han, X.X.; Wu, K.N.; Niu, Y. Asynchronous boundary control of Markov jump neural networks with diffusion terms. IEEE Trans. Cybern. 2022; in press. [Google Scholar] [CrossRef]
  20. Liu, X.Z.; Wu, K.N.; Ding, X.; Zhang, W. Boundary stabilization of stochastic delayed Cohen-Grossberg neural networks with diffusion terms. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 3227–3237. [Google Scholar] [CrossRef]
  21. Wang, L.M.; He, H.B.; Zeng, Z.G. Intermittent stabilization of fuzzy competitive neural networks with reaction diffusions. IEEE Trans. Fuzzy Syst. 2021, 29, 2361–2372. [Google Scholar] [CrossRef]
  22. Song, X.N.; Man, J.T.; Ahn, C.K.; Song, S. Finite-time dissipative synchronization for markovian jump generalized inertial neural networks with reaction-diffusion terms. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3650–3661. [Google Scholar] [CrossRef]
  23. Sun, L.; Su, L.; Wang, J. Non-fragile dissipative state estimation for semi-Markov jump inertial neural networks with reaction-diffusion. Appl. Math. Comput. 2021, 411, 126404. [Google Scholar] [CrossRef]
  24. Song, X.N.; Man, J.T.; Song, S.; Wang, Z. An improved result on synchronization control for memristive neural networks with inertial terms and reaction-diffusion items. ISA Trans. 2020, 99, 74–83. [Google Scholar] [CrossRef] [PubMed]
  25. Chandrasekar, A.; Radhika, T.; Zhu, Q.X. Further results on input-to-state stability of stochastic Cohen-Grossberg BAM neural networks with probabilistic time-varying delays. Neural Process. Lett. 2022, 54, 613–635. [Google Scholar] [CrossRef]
  26. Sriraman, R.; Cao, Y.; Samidurai, R. Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays. Math. Comput. Simul. 2020, 171, 103–118. [Google Scholar] [CrossRef]
  27. Zhang, T.W.; Qu, H.Z.; Liu, Y.T.; Zhou, J.W. Weighted pseudo θ-almost periodic sequence solution and guaranteed cost control for discrete-time and discrete-space stochastic inertial neural networks. Chaos Solitons Fractals 2023, 173, 113658. [Google Scholar] [CrossRef]
  28. Zhang, T.W.; Li, Z.H. Switching clusters’ synchronization for discrete space-time complex dynamical networks via boundary feedback controls. Pattern Recognit. 2023, 143, 109763. [Google Scholar] [CrossRef]
  29. Zhang, T.W.; Liu, Y.T.; Qu, H.Z. Global mean-square exponential stability and random periodicity of discrete-time stochastic inertial neural networks with discrete spatial diffusions and Dirichlet boundary condition. Comput. Math. Appl. 2023, 141, 116–128. [Google Scholar] [CrossRef]
  30. Zhang, T.W.; Xiong, L.L. Periodic motion for impulsive fractional functional differential equations with piecewise Caputo derivative. Appl. Math. Lett. 2020, 101, 106072. [Google Scholar] [CrossRef]
  31. Adhira, B.; Nagamani, G.; Dafik, D. Non-fragile extended dissipative synchronization control of delayed uncertain discrete-time neural networks. Commun. Nonlinear Sci. Numer. Simul. 2023, 116, 106820. [Google Scholar] [CrossRef]
  32. Zhang, T.W.; Li, Y.K. Global exponential stability of discrete-time almost automorphic Caputo-Fabrizio BAM fuzzy neural networks via exponential Euler technique. Knowl.-Based Syst. 2022, 246, 108675. [Google Scholar] [CrossRef]
  33. Huang, Z.K.; Mohamad, S.; Gao, F. Multi-almost periodicity in semi-discretizations of a general class of neural networks. Math. Comput. Simul. 2014, 101, 43–60. [Google Scholar] [CrossRef]
  34. Zhang, T.W.; Han, S.F.; Zhou, J.W. Dynamic behaviours for semi-discrete stochastic Cohen-Grossberg neural networks with time delays. J. Frankl. Inst. 2020, 357, 13006–13040. [Google Scholar] [CrossRef]
  35. Xiao, Q.; Huang, T.W.; Zeng, Z.G. On exponential stability of delayed discrete-time complex-valued inertial neural networks. IEEE Trans. Cybern. 2022, 52, 3483–3494. [Google Scholar] [CrossRef] [PubMed]
  36. Xiao, Q.; Huang, T.W. Quasisynchronization of discrete-time inertial neural networks with parameter mismatches and delays. IEEE Trans. Cybern. 2021, 51, 2290–2295. [Google Scholar] [CrossRef] [PubMed]
  37. Chen, X.; Lin, D.; Lan, W. Global dissipativity of delayed discrete-time inertial neural networks. Neurocomputing 2020, 390, 131–138. [Google Scholar] [CrossRef]
  38. Chen, X.; Lin, D. Passivity analysis of non-autonomous discrete-time inertial neural networks with time-varying delays. Neural Process. Lett. 2020, 51, 2929–2944. [Google Scholar] [CrossRef]
  39. Zhou, W.N.; Yang, J.; Zhou, L.W.; Tong, D.B. Stability and Synchronization Control of Stochastic Neural Networks; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  40. Agarwal, R.P. Difference Equations and Inequalities; Marcel Dekker: New York, NY, USA, 2000. [Google Scholar]
  41. Seuret, A.; Fridman, E. Wirtinger-like Lyapunov-Krasovskii functionals for discrete-time delay systems. IMA J. Math. Control. Inf. 2018, 35, 861–876. [Google Scholar] [CrossRef]
  42. Milovanović, G.V.; Milovanović, I.Ž. On discrete inequalities of Wirtinger’s type. J. Math. Anal. Appl. 1982, 88, 378–387. [Google Scholar] [CrossRef]
  43. Mollaiyan, K. Generalization of Discrete-Time Wirtinger Inequalities and a Preliminary Study of Their Application to SNR Analysis of Sinusoids Buried in Noise. Master’s Thesis, Concordia University, Montreal, QC, Canada, 2008. [Google Scholar]
  44. Zhong, X.; Ren, J.; Gao, Y. Passivity-based bipartite synchronization of coupled delayed inertial neural networks via non-reduced order method. Neural Process. Lett. 2022, 54, 4869–4892. [Google Scholar] [CrossRef]
  45. Fang, T.; Jiao, S.; Fu, D.; Su, L. Passivity-based synchronization for Markov switched neural networks with time delays and the inertial term. Appl. Math. Comput. 2021, 394, 125786. [Google Scholar] [CrossRef]
  46. Chen, W.; Ren, G.; Yu, Y.; Yuan, X. Quasi-synchronization of heterogeneous stochastic coupled reaction-diffusion neural networks with mixed time-varying delays via boundary control. J. Frankl. Inst. 2023, 360, 10080–10099. [Google Scholar] [CrossRef]
Figure 1. Flow chart of Algorithm 1.
Figure 2. Stochastic synchronization of INNs Equations (1) and (5).
Figure 3. Stochastic synchronization of INNs Equations (1) and (5).
Figure 4. Stochastic synchronization of INNs Equations (1) and (5).
Figure 5. Stochastic synchronization of INNs Equations (1) and (5).
Figure 6. Trajectory of state variable $w_1$ of INNs Equation (5).
Figure 7. Trajectory of state variable $w_2$ of INNs Equation (5).
Figure 8. Trajectory of state variable $z_{11}$ of INNs Equation (1).
Figure 9. Trajectory of state variable $z_{12}$ of INNs Equation (1).
Figure 10. Trajectory of state variable $z_{21}$ of INNs Equation (1).
Figure 11. Trajectory of state variable $z_{22}$ of INNs Equation (1).