Article

Frobenius Norm-Based Global Stability Analysis of Delayed Bidirectional Associative Memory Neural Networks

by N. Mohamed Thoiyab 1, Saravanan Shanmugam 2,3,*, Rajarathinam Vadivel 4 and Nallappan Gunasekaran 5

1 Department of Mathematics, Jamal Mohamed College, Affiliated to Bharathidasan University, Tiruchirappalli 620020, Tamil Nadu, India
2 Center for Computational Biology, Easwari Engineering College, Chennai 600089, Tamil Nadu, India
3 Center for Research, SRM Institute of Science and Technology-Ramapuram, Chennai 600089, Tamil Nadu, India
4 Department of Mathematics, Faculty of Science and Technology, Phuket Rajabhat University, Phuket 83000, Thailand
5 Eastern Michigan Joint College of Engineering, Beibu Gulf University, Qinzhou 535011, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(2), 183; https://doi.org/10.3390/sym17020183
Submission received: 31 December 2024 / Revised: 18 January 2025 / Accepted: 21 January 2025 / Published: 24 January 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Nonlinear Systems)

Abstract

The present research investigates the global asymptotic stability of bidirectional associative memory (BAM) neural networks using distinct sufficient conditions. The primary objective of this study is to establish new generalized criteria for the global asymptotic robust stability of time-delayed BAM neural networks at the equilibrium point, utilizing the Frobenius norm and the positive symmetrical approach. The new sufficient conditions are derived with the help of the Lyapunov–Krasovskii functional and the Frobenius norm, which are important in deep learning for a variety of reasons. The derived conditions do not depend on the delay parameters of the BAM neural network. Finally, numerical examples are provided to demonstrate the effectiveness of the proposed conclusions with respect to the network parameters.

1. Introduction

Neural networks (NNs) are used in engineering to solve problems such as signal processing, pattern recognition, and combinatorial optimization. Several NN models exist, including bidirectional associative memory (BAM) NNs, Cohen–Grossberg NNs, cellular NNs, recurrent NNs, and Hopfield NNs [1,2,3]. A common challenge in the development and hardware implementation of NNs is the imprecision of the network parameters; for example, the parameters of an NN circuit are bound to vary. Even when the associated ranges and limitations of these parameters can be analyzed, estimation errors arise during the NN design process when measuring critical data such as neuron firing rates, synaptic connection strengths, and signal transmission latencies. As a result, an effective NN model must possess specified robustness features. The equilibria and stability dynamics of an NN are essential to the design of a neural system that aims to solve a specified task. In this context, it should be highlighted that different dynamical NNs displaying the desired dynamics may be needed to handle different challenges. To solve an optimization problem, for instance, we should create an NN model that has a single, globally stable equilibrium point for each external constant input. The value of this equilibrium point must not depend on the initial conditions of the neuronal states, as discussed by many researchers in [4,5,6,7]. In addition, certain resilience characteristics must be present in an effective NN model. The study of the global stability of synaptic connection matrices in NNs leads to the theory of interval matrices. Furthermore, the uniqueness of the equilibrium point is the most important ingredient in the analysis; dynamical NN models also rely largely on the stability analysis of multiple equilibrium points. Various stability notions have been explored, such as global asymptotic stability, complete stability, and exponential stability of dynamic models that include time delays, which numerous researchers have examined in [8,9,10,11,12,13,14]. Previous research has produced different outcomes regarding the stability analysis of time-delayed NNs, utilizing stability and instability assessments through Lyapunov and non-smooth methods. As a result, an important concern is the examination of global asymptotic robust stability (GARS) and control strategies for numerous time-delayed NNs; recently, several researchers have begun to investigate this topic [15,16,17,18,19,20]. As is well known, the dynamical properties of the equilibrium point of an NN can be influenced by delay parameters. Therefore, studying the stability of the equilibrium point in delayed NNs is of great significance (see [21,22,23,24,25,26,27]).
BAM is a significant NN architecture that was first presented by B. Kosko [28,29]. The BAM model consists of two layers of neurons, where neurons within the same layer do not interconnect. A BAM NN can range from a single-layer auto-associative model to a dual-layer pattern-matching heteroassociative chain that stores both forward and backward pattern pairs. Numerous researchers have thoroughly examined the dynamic properties of BAM NNs and used them to address various real-time challenges, such as automatic control, optimization, signal processing, and pattern recognition. In addition, studies have addressed the time delays present in BAM models. BAM models can be classified into three categories. In earlier research, only two of these were considered: models with time delays and models without. Numerous authors have investigated the stability results of these two types in the literature [30,31,32]. The hybrid variant of the BAM NN represents the third, newer area of study; in this form, both delayed and instantaneous signals are present. A unique solution is required for every possible initial condition in hybrid BAM NNs; mathematically, this means that the delayed network must possess a unique equilibrium point that is globally asymptotically robustly stable. Numerous authors have examined the global asymptotic robust stability of the hybrid BAM model in [33,34,35,36]. Verifying such conditions, however, can lead to substantial growth in the amount of computation, so there is much room left to investigate the global robust stability of delayed BAM NNs.
The main contributions of this research are as follows:
  • A comprehensive new sufficient condition is derived for the GARS of BAM NNs with delays using the Frobenius norm.
  • The Frobenius norm offers a simpler approach compared with other upper-bound norms. It is also a flexible tool in deep learning that supports NN stability and generalization by providing information about the size of weight matrices.
  • Additionally, various constraints on interconnection matrix norms, Lyapunov–Krasovskii functions, and specific activation functions are utilized to derive results that confirm the stability of hybrid BAM NNs.
  • Finally, numerical examples demonstrate the efficiency of the proposed findings for network parameters.
Notations: The notations that will be utilized in this paper are as follows. Let $E=(e_{ij})_{n\times m}$ be a matrix. The 2-norm of $E$ is $\|E\|_2=\sqrt{\lambda_{\max}(E^TE)}$, the square root of the maximum eigenvalue of $E^TE$. The absolute value of a real matrix $E=(e_{ij})_{n\times m}$ is the matrix of the absolute values of its entries, denoted by $|E|=(|e_{ij}|)_{n\times m}$. A matrix $A=(a_{ij})_{n\times n}$ is called positive definite (semidefinite) if it is symmetric and $u^TAu>0\ (\ge 0)$ holds for all real vectors $u=(u_1,u_2,\dots,u_n)^T\in\mathbb{R}^n$. Also, $\sum_{i,j=1}^{n,m}=\sum_{i=1}^{n}\sum_{j=1}^{m}$ and $\sum_{j,i=1}^{m,n}=\sum_{j=1}^{m}\sum_{i=1}^{n}$.

2. Preliminaries

Take into account the following system of hybrid BAM NNs with delayed connections [28,37]:
$$
\begin{aligned}
\dot{y}_j(t) &= -\check{b}_j\,y_j(t) + \sum_{i=1}^{n}\check{g}_{ij}\,\phi_{1i}(w_i(t)) + \sum_{i=1}^{n}\check{g}^{\tau}_{ij}\,\phi_{1i}(w_i(t-\check{\sigma}_{ij})) + K_j, \quad \forall j,\\
\dot{w}_i(t) &= -\check{a}_i\,w_i(t) + \sum_{j=1}^{m}\check{f}_{ji}\,\phi_{2j}(y_j(t)) + \sum_{j=1}^{m}\check{f}^{\tau}_{ji}\,\phi_{2j}(y_j(t-\check{\tau}_{ji})) + J_i, \quad \forall i,
\end{aligned}\tag{1}
$$
where $w_i(t)$ and $y_j(t)$ denote the states of the $i$th and $j$th neurons at time $t$; $n$ and $m$ are the numbers of neurons in the two layers of the proposed hybrid BAM model (1); $\phi_{1i}$ and $\phi_{2j}$ indicate the activation functions of the neurons; $\check{f}_{ji}$, $\check{f}^{\tau}_{ji}$, $\check{g}_{ij}$, and $\check{g}^{\tau}_{ij}$ are the connection weight matrices; $\check{a}_i$ and $\check{b}_j$ stand for the neuron charging time constants; and $J_i$ and $K_j$ are the external inputs.
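Model (1) is a pair of coupled delay differential equations and can be simulated directly once the parameters are fixed. The following Python sketch integrates (1) with a forward-Euler scheme and a constant-history initial condition; all numerical values (the weights, the single common delay replacing the per-pair delays $\check{\sigma}_{ij}$, $\check{\tau}_{ji}$, and the tanh activation) are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

# Minimal forward-Euler simulation of the hybrid BAM model (1).
# All parameter values below are illustrative assumptions.
n = m = 3
dt, T = 1e-3, 10.0
steps = int(T / dt)

a = 16.0 * np.ones(n)             # charging rates  a_i
b = 16.0 * np.ones(m)             # charging rates  b_j
G = 0.2 * np.ones((n, m))         # instantaneous weights g_ij
Gt = 0.1 * np.ones((n, m))        # delayed weights       g^tau_ij
F = 0.2 * np.ones((m, n))         # instantaneous weights f_ji
Ft = 0.1 * np.ones((m, n))        # delayed weights       f^tau_ji
J, K = np.zeros(n), np.zeros(m)   # external inputs
delay = 0.5                       # one common delay (simplification)
d = int(delay / dt)

phi = np.tanh                     # bounded, slope-restricted activation

# constant-history initial condition on [-delay, 0]
w_hist = np.tile(np.random.uniform(-1, 1, n), (steps + d, 1))
y_hist = np.tile(np.random.uniform(-1, 1, m), (steps + d, 1))

for k in range(d, d + steps - 1):
    w, y = w_hist[k], y_hist[k]
    w_del, y_del = w_hist[k - d], y_hist[k - d]
    dy = -b * y + G.T @ phi(w) + Gt.T @ phi(w_del) + K
    dw = -a * w + F.T @ phi(y) + Ft.T @ phi(y_del) + J
    y_hist[k + 1] = y + dt * dy
    w_hist[k + 1] = w + dt * dw

print("final states:", w_hist[-1], y_hist[-1])  # approach the equilibrium
```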
For the stability analysis of the NN model (1), the following assumptions are made:
Assumption 1. 
Assume that there exist constants $\check{l}_i>0$ and $\check{h}_j>0$ such that the following conditions are satisfied:
$$
0 \le \frac{\phi_{1i}(\bar{x})-\phi_{1i}(\bar{y})}{\bar{x}-\bar{y}} \le \check{l}_i, \qquad
0 \le \frac{\phi_{2j}(\hat{x})-\phi_{2j}(\hat{y})}{\hat{x}-\hat{y}} \le \check{h}_j, \qquad \hat{x}\ne\hat{y},\ \bar{x}\ne\bar{y},
$$
for all $\hat{x},\hat{y},\bar{x},\bar{y}\in\mathbb{R}$.
Assumption 2. 
Assume there are positive constants $\check{M}_i$ and $\check{N}_j$ such that $|\phi_{1i}(w_1)|\le\check{M}_i$ and $|\phi_{2j}(w_2)|\le\check{N}_j$ for all $w_1,w_2\in\mathbb{R}$, where $i=1,2,\dots,n$, $j=1,2,\dots,m$. Under this assumption, the activation functions are of bounded type.
The matrices $\check{b}_j$, $\check{f}_{ji}$, $\check{f}^{\tau}_{ji}$, $\check{g}_{ij}$, $\check{g}^{\tau}_{ij}$, $\check{a}_i$, $\check{\tau}_{ji}$, and $\check{\sigma}_{ij}$ are assumed to be uncertain. The usual approach to dealing with such a delayed system is to intervalize the synaptic connection matrices as follows, for $i=1,2,\dots,n$, $j=1,2,\dots,m$:
$$
\begin{aligned}
B_I &= \{B=\mathrm{diag}(\check{b}_j): 0\le\underline{B}\le B\le\overline{B},\ \text{i.e.},\ 0<\underline{\check{b}}_j\le\check{b}_j\le\overline{\check{b}}_j\},\quad B\in B_I,\\
G_I &= \{G=(\check{g}_{ij}): \underline{G}\le G\le\overline{G},\ \text{i.e.},\ \underline{\check{g}}_{ij}\le\check{g}_{ij}\le\overline{\check{g}}_{ij}\},\quad G\in G_I,\\
G_I^{\tau} &= \{G^{\tau}=(\check{g}^{\tau}_{ij}): \underline{G}^{\tau}\le G^{\tau}\le\overline{G}^{\tau},\ \text{i.e.},\ \underline{\check{g}}^{\tau}_{ij}\le\check{g}^{\tau}_{ij}\le\overline{\check{g}}^{\tau}_{ij}\},\quad G^{\tau}\in G_I^{\tau},\\
\sigma_I &= \{\sigma=(\check{\sigma}_{ij}): \underline{\sigma}\le\sigma\le\overline{\sigma},\ \text{i.e.},\ \underline{\check{\sigma}}_{ij}\le\check{\sigma}_{ij}\le\overline{\check{\sigma}}_{ij}\},\quad \sigma\in\sigma_I,\\
A_I &= \{A=\mathrm{diag}(\check{a}_i): 0\le\underline{A}\le A\le\overline{A},\ \text{i.e.},\ 0<\underline{\check{a}}_i\le\check{a}_i\le\overline{\check{a}}_i\},\quad A\in A_I,\\
F_I &= \{F=(\check{f}_{ji}): \underline{F}\le F\le\overline{F},\ \text{i.e.},\ \underline{\check{f}}_{ji}\le\check{f}_{ji}\le\overline{\check{f}}_{ji}\},\quad F\in F_I,\\
F_I^{\tau} &= \{F^{\tau}=(\check{f}^{\tau}_{ji}): \underline{F}^{\tau}\le F^{\tau}\le\overline{F}^{\tau},\ \text{i.e.},\ \underline{\check{f}}^{\tau}_{ji}\le\check{f}^{\tau}_{ji}\le\overline{\check{f}}^{\tau}_{ji}\},\quad F^{\tau}\in F_I^{\tau},\\
\tau_I &= \{\tau=(\check{\tau}_{ji}): \underline{\tau}\le\tau\le\overline{\tau},\ \text{i.e.},\ \underline{\check{\tau}}_{ji}\le\check{\tau}_{ji}\le\overline{\check{\tau}}_{ji}\},\quad \tau\in\tau_I.
\end{aligned}\tag{2}
$$
Next, we shift the equilibrium point of system (1) to the origin to simplify the presentation of our results. To achieve this, we employ the following transformation:
$$
\check{x}_j(\cdot)=y_j(\cdot)-y_j^{*}, \qquad \check{u}_i(\cdot)=w_i(\cdot)-w_i^{*}, \qquad \text{for every } j=1,2,\dots,m,\ i=1,2,\dots,n.
$$
Through the use of the transformation mentioned above, we change the system (1) into the form shown below:
$$
\begin{aligned}
\frac{d\check{x}_j(t)}{dt} &= -\check{b}_j\,\check{x}_j(t) + \sum_{i=1}^{n}\check{g}_{ij}\,\chi_{1i}(\check{u}_i(t)) + \sum_{i=1}^{n}\check{g}^{\tau}_{ij}\,\chi_{1i}(\check{u}_i(t-\check{\sigma}_{ij})), \quad \forall j,\\
\frac{d\check{u}_i(t)}{dt} &= -\check{a}_i\,\check{u}_i(t) + \sum_{j=1}^{m}\check{f}_{ji}\,\chi_{2j}(\check{x}_j(t)) + \sum_{j=1}^{m}\check{f}^{\tau}_{ji}\,\chi_{2j}(\check{x}_j(t-\check{\tau}_{ji})), \quad \forall i,
\end{aligned}\tag{3}
$$
where
$$
\begin{aligned}
\chi_{1i}(\check{u}_i(\cdot)) &= \phi_{1i}(\check{u}_i(\cdot)+w_i^{*})-\phi_{1i}(w_i^{*}), \quad \chi_{1i}(0)=0,\\
\chi_{2j}(\check{x}_j(\cdot)) &= \phi_{2j}(\check{x}_j(\cdot)+y_j^{*})-\phi_{2j}(y_j^{*}), \quad \chi_{2j}(0)=0, \qquad \text{for every } i,j.
\end{aligned}
$$
Now, it is simple to confirm that the functions $\chi_{1i}$ and $\chi_{2j}$ meet the same requirements as $\phi_{1i}$ and $\phi_{2j}$; that is, $\chi_{1i}$ and $\chi_{2j}$ satisfy both Assumptions 1 and 2.
Definition 1 
([38]). The system (3) satisfying (2) is GARS if the origin, which is the unique equilibrium point of the BAM system (3), is globally asymptotically stable for all $B\in B_I$, $G\in G_I$, $G^{\tau}\in G_I^{\tau}$, $A\in A_I$, $F\in F_I$, $F^{\tau}\in F_I^{\tau}$. Global asymptotic stability of system (3) means that every solution of (3) converges to the unique equilibrium point at the origin irrespective of the initial conditions.
The following lemmas and facts are pivotal in establishing the prerequisites for a thorough examination of the global stability of (1).
Lemma 1 
([39]). The following inequalities are valid for each matrix $E$ in the interval $[\underline{E},\overline{E}]$:
$$
\|E\|_2 \le T_q(E), \qquad q=2,3,4,
$$
where
$$
T_2(E)=\|E^{*}\|_2+\|E_{*}\|_2, \qquad
T_3(E)=\sqrt{\|E^{*}\|_2^{2}+\|E_{*}\|_2^{2}+2\,\big\|{E^{*}}^{T}|E_{*}|\big\|_2}, \qquad
T_4(E)=\|\hat{E}\|_2,\ \hat{E}=(\hat{e}_{ji}).
$$
Here, $\hat{e}_{ji}=\max(|\underline{e}_{ji}|,|\overline{e}_{ji}|)$, $E^{*}=\frac{1}{2}(\overline{E}+\underline{E})$, and $E_{*}=\frac{1}{2}(\overline{E}-\underline{E})$.
Remark 1. 
The results described in Lemma 1 are consistently applicable to any synaptic connection strength matrices defined in (1).
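For concreteness, the three upper bounds of Lemma 1 are straightforward to evaluate numerically. The Python sketch below computes $T_2$, $T_3$, and $T_4$ for an interval matrix; the helper name and the symmetric test interval are our own illustrative assumptions.

```python
import numpy as np

# Sketch: the interval upper bounds of Lemma 1 for ||E||_2,
# using illustrative interval data (not from the paper).
def interval_bounds(E_lo, E_hi):
    E_star = 0.5 * (E_hi + E_lo)            # E^*  (interval midpoint)
    E_sub = 0.5 * (E_hi - E_lo)             # E_*  (interval radius)
    s2 = lambda M: np.linalg.norm(M, 2)     # spectral norm
    T2 = s2(E_star) + s2(E_sub)
    T3 = np.sqrt(s2(E_star)**2 + s2(E_sub)**2
                 + 2 * s2(E_star.T @ np.abs(E_sub)))
    T4 = s2(np.maximum(np.abs(E_lo), np.abs(E_hi)))  # E_hat, entrywise max
    return T2, T3, T4

E_lo = -0.125 * np.ones((3, 3))
E_hi = 0.125 * np.ones((3, 3))
print(interval_bounds(E_lo, E_hi))  # all 0.375 for this symmetric interval
```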
Lemma 2 
([3]). The following inequality holds for any two vectors $u=(u_1,u_2,\dots,u_n)^T\in\mathbb{R}^n$ and $y=(y_1,y_2,\dots,y_n)^T\in\mathbb{R}^n$:
$$
2u^{T}y = 2y^{T}u \le \beta\,u^{T}u + \frac{1}{\beta}\,y^{T}y, \qquad \beta>0.
$$
Lemma 3 
([11]). If $E\in E_I$, then
$$
\|E\|_2 \le \|E\|_F,
$$
where $\|E\|_F$ is the Frobenius norm of the matrix $E$, defined by $\|E\|_F=\big(\sum_{i=1}^{n}\sum_{j=1}^{m}e_{ij}^{2}\big)^{1/2}=\big(\mathrm{trace}(E^{T}E)\big)^{1/2}$, and $\mathrm{trace}(E^{T}E)$ is the sum of the diagonal elements of $E^{T}E$.
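Lemma 3 is the inequality that drives the main results, and it is easy to verify numerically: the spectral norm (the largest singular value) never exceeds the Frobenius norm. A minimal sketch, with a randomly generated matrix as an illustrative assumption:

```python
import numpy as np

# Sketch of Lemma 3: ||E||_2 <= ||E||_F for any real matrix.
rng = np.random.default_rng(0)
E = rng.standard_normal((4, 3))

spec = np.linalg.norm(E, 2)       # largest singular value
frob = np.linalg.norm(E, 'fro')   # sqrt(trace(E^T E))
assert spec <= frob + 1e-12
print(f"||E||_2 = {spec:.4f} <= ||E||_F = {frob:.4f}")
```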
Assumption 3. 
Consider a matrix $E$ satisfying (2). There exists a positive constant $T(E)$ such that the following condition holds:
$$
\|E\|_2 \le T(E),
$$
where E is any matrix as defined in (2).

3. Main Results

In this section, we establish certain generalized sufficient conditions for the GARS of the system represented by (1). By Assumption 2, the system described by (1) that fulfills (2) possesses an equilibrium point. Hence, it suffices to demonstrate the uniqueness and GARS of the equilibrium point of the BAM model (1). Note that if the origin of system (3) is the unique equilibrium point and is GARS, then the equilibrium point of system (1) is also unique and GARS.
Theorem 1. 
Assume the activation functions $\chi_{1i}$ and $\chi_{2j}$ fulfill the conditions in Assumptions 1 and 2, and there are positive constants $\gamma$ and $\delta$ for which the conditions below are satisfied:
$$
\Psi_{Fi}=\eta-\beta\,\|G\|_F^{2}-\zeta\,(g^{\tau*})>0,\ \forall i, \qquad
\Omega_{Fj}=\xi-\mu\,\|F\|_F^{2}-\tau\,(f^{\tau*})>0,\ \forall j,
$$
where $\eta=2m\underline{\check{a}}_i-\nu$, $\nu=m(\gamma+\delta)$, $\beta=\frac{1}{\delta}n\check{l}_i^{2}$, $\zeta=\frac{1}{\gamma}n^{2}\check{l}_i^{2}$, $(f^{\tau*})=\sum_{i=1}^{n}(\check{f}^{\tau*}_{ji})^{2}$, $\mu=\frac{1}{\delta}m\check{h}_j^{2}$, $f^{\tau*}_{ji}=\max(|\underline{\check{f}}^{\tau}_{ji}|,|\overline{\check{f}}^{\tau}_{ji}|)$, $\xi=2n\underline{\check{b}}_j-\theta$, $\theta=n(\gamma+\delta)$, $\tau=\frac{1}{\gamma}m^{2}\check{h}_j^{2}$, $(g^{\tau*})=\sum_{j=1}^{m}(\check{g}^{\tau*}_{ij})^{2}$, and $g^{\tau*}_{ij}=\max(|\underline{\check{g}}^{\tau}_{ij}|,|\overline{\check{g}}^{\tau}_{ij}|)$, for $i=1,2,\dots,n$, $j=1,2,\dots,m$. Then, the system defined by (3) with network parameters that satisfy (2) is GARS at the origin.
Proof. 
This theorem is proved in two steps. In Step 1, we show that the origin is the only equilibrium point of model (3). In Step 2, we show that the origin of model (3) is GARS.
Step 1:
Assume, to the contrary, that model (3) has an equilibrium point with $(\check{u}_1^{*},\dots,\check{u}_n^{*})^{T}=\check{u}^{*}\ne 0$ and $(\check{x}_1^{*},\dots,\check{x}_m^{*})^{T}=\check{x}^{*}\ne 0$. The equilibrium points of (3) satisfy the following equations:
$$
-\check{a}_i\,\check{u}_i^{*} + \sum_{j=1}^{m}\check{f}_{ji}\,\chi_{2j}(\check{x}_j^{*}) + \sum_{j=1}^{m}\check{f}^{\tau}_{ji}\,\chi_{2j}(\check{x}_j^{*}) = 0, \quad \forall i,\tag{4}
$$
$$
-\check{b}_j\,\check{x}_j^{*} + \sum_{i=1}^{n}\check{g}_{ij}\,\chi_{1i}(\check{u}_i^{*}) + \sum_{i=1}^{n}\check{g}^{\tau}_{ij}\,\chi_{1i}(\check{u}_i^{*}) = 0, \quad \forall j.\tag{5}
$$
We multiply (4) by $2m\check{u}_i^{*}$ and (5) by $2n\check{x}_j^{*}$, sum over $i$ and $j$, respectively, and add the resulting equations:
$$
\begin{aligned}
0={}&-\sum_{i=1}^{n}2m\check{a}_i\check{u}_i^{*2}+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}_{ji}\chi_{2j}(\check{x}_j^{*})+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}^{\tau}_{ji}\chi_{2j}(\check{x}_j^{*})\\
&-\sum_{j=1}^{m}2n\check{b}_j\check{x}_j^{*2}+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}_{ij}\chi_{1i}(\check{u}_i^{*})+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}^{\tau}_{ij}\chi_{1i}(\check{u}_i^{*}),
\end{aligned}
$$
which, after adding and subtracting the delayed quadratic terms, becomes
$$
\begin{aligned}
0={}&-\sum_{i=1}^{n}2m\check{a}_i\check{u}_i^{*2}+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}_{ji}\chi_{2j}(\check{x}_j^{*})+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}^{\tau}_{ji}\chi_{2j}(\check{x}_j^{*})-\sum_{j=1}^{m}2n\check{b}_j\check{x}_j^{*2}\\
&+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}_{ij}\chi_{1i}(\check{u}_i^{*})+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}^{\tau}_{ij}\chi_{1i}(\check{u}_i^{*})\\
&+\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\chi_{2j}^{2}(\check{x}_j^{*})-\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\chi_{2j}^{2}(\check{x}_j^{*})\\
&+\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\chi_{1i}^{2}(\check{u}_i^{*})-\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\chi_{1i}^{2}(\check{u}_i^{*}).
\end{aligned}
$$
Applying Assumptions 1 and 2 for the activation functions, we have
$$
\begin{aligned}
0\le{}&-\sum_{i=1}^{n}2m\check{a}_i\check{u}_i^{*2}+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}_{ji}\chi_{2j}(\check{x}_j^{*})+\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}^{\tau}_{ji}\chi_{2j}(\check{x}_j^{*})-\sum_{j=1}^{m}2n\check{b}_j\check{x}_j^{*2}\\
&+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}_{ij}\chi_{1i}(\check{u}_i^{*})+\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}^{\tau}_{ij}\chi_{1i}(\check{u}_i^{*})\\
&+\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\check{h}_j^{2}\check{x}_j^{*2}-\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\chi_{2j}^{2}(\check{x}_j^{*})\\
&+\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\check{l}_i^{2}\check{u}_i^{*2}-\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\chi_{1i}^{2}(\check{u}_i^{*}).
\end{aligned}\tag{6}
$$
Consider the following inequalities, obtained using Lemma 2:
$$
\begin{aligned}
\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}_{ji}\chi_{2j}(\check{x}_j^{*})&=2m\,\check{u}^{*T}FS(\check{x}^{*})\le m\delta\,\check{u}^{*T}\check{u}^{*}+\frac{m}{\delta}S^{T}(\check{x}^{*})F^{T}FS(\check{x}^{*})\\
&\le m\delta\,\check{u}^{*T}\check{u}^{*}+\frac{m}{\delta}\|F\|_2^{2}\,\|S(\check{x}^{*})\|_2^{2}\le m\delta\sum_{i=1}^{n}\check{u}_i^{*2}+\frac{m}{\delta}\|F\|_2^{2}\sum_{j=1}^{m}\check{h}_j^{2}\check{x}_j^{*2},
\end{aligned}\tag{7}
$$
$$
\begin{aligned}
\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}_{ij}\chi_{1i}(\check{u}_i^{*})&=2n\,\check{x}^{*T}GS(\check{u}^{*})\le n\delta\,\check{x}^{*T}\check{x}^{*}+\frac{n}{\delta}S^{T}(\check{u}^{*})G^{T}GS(\check{u}^{*})\\
&\le n\delta\,\check{x}^{*T}\check{x}^{*}+\frac{n}{\delta}\|G\|_2^{2}\,\|S(\check{u}^{*})\|_2^{2}\le n\delta\sum_{j=1}^{m}\check{x}_j^{*2}+\frac{n}{\delta}\|G\|_2^{2}\sum_{i=1}^{n}\check{l}_i^{2}\check{u}_i^{*2},
\end{aligned}\tag{8}
$$
$$
\sum_{i,j=1}^{n,m}2m\check{u}_i^{*}\check{f}^{\tau}_{ji}\chi_{2j}(\check{x}_j^{*})\le\sum_{i,j=1}^{n,m}\gamma\check{u}_i^{*2}+\sum_{i,j=1}^{n,m}\frac{1}{\gamma}m^{2}(\check{f}^{\tau}_{ji})^{2}\chi_{2j}^{2}(\check{x}_j^{*})=m\gamma\sum_{i=1}^{n}\check{u}_i^{*2}+\sum_{i,j=1}^{n,m}\frac{1}{\gamma}m^{2}(\check{f}^{\tau}_{ji})^{2}\chi_{2j}^{2}(\check{x}_j^{*}),\tag{9}
$$
$$
\sum_{j,i=1}^{m,n}2n\check{x}_j^{*}\check{g}^{\tau}_{ij}\chi_{1i}(\check{u}_i^{*})\le\sum_{j,i=1}^{m,n}\gamma\check{x}_j^{*2}+\sum_{j,i=1}^{m,n}\frac{1}{\gamma}n^{2}(\check{g}^{\tau}_{ij})^{2}\chi_{1i}^{2}(\check{u}_i^{*})=n\gamma\sum_{j=1}^{m}\check{x}_j^{*2}+\sum_{j,i=1}^{m,n}\frac{1}{\gamma}n^{2}(\check{g}^{\tau}_{ij})^{2}\chi_{1i}^{2}(\check{u}_i^{*}),\tag{10}
$$
where $S(\check{x}^{*})=(\chi_{21}(\check{x}_1^{*}),\dots,\chi_{2m}(\check{x}_m^{*}))^{T}$ and $S(\check{u}^{*})=(\chi_{11}(\check{u}_1^{*}),\dots,\chi_{1n}(\check{u}_n^{*}))^{T}$.
By applying the results of (7)–(10) in (6), we have
$$
\begin{aligned}
0\le{}&-\sum_{i=1}^{n}2m\check{a}_i\check{u}_i^{*2}+m\delta\sum_{i=1}^{n}\check{u}_i^{*2}+\frac{m}{\delta}\|F\|_2^{2}\sum_{j=1}^{m}\check{h}_j^{2}\check{x}_j^{*2}-\sum_{j=1}^{m}2n\check{b}_j\check{x}_j^{*2}+n\delta\sum_{j=1}^{m}\check{x}_j^{*2}\\
&+\frac{n}{\delta}\|G\|_2^{2}\sum_{i=1}^{n}\check{l}_i^{2}\check{u}_i^{*2}+m\gamma\sum_{i=1}^{n}\check{u}_i^{*2}+n\gamma\sum_{j=1}^{m}\check{x}_j^{*2}+\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\check{h}_j^{2}\check{x}_j^{*2}+\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\check{l}_i^{2}\check{u}_i^{*2}.
\end{aligned}
$$
Since $\|G\|_2^{2}\le\|G\|_F^{2}$, $\|F\|_2^{2}\le\|F\|_F^{2}$, $(\check{f}^{\tau}_{ji})^{2}\le(f^{\tau*}_{ji})^{2}$, and $(\check{g}^{\tau}_{ij})^{2}\le(g^{\tau*}_{ij})^{2}$, it follows that
$$
0\le\sum_{i=1}^{n}\Big\{-2m\underline{\check{a}}_i+m(\gamma+\delta)+\frac{1}{\delta}n\check{l}_i^{2}\|G\|_F^{2}+\frac{1}{\gamma}n^{2}\check{l}_i^{2}\sum_{j=1}^{m}(g^{\tau*}_{ij})^{2}\Big\}\check{u}_i^{*2}+\sum_{j=1}^{m}\Big\{-2n\underline{\check{b}}_j+n(\gamma+\delta)+\frac{1}{\gamma}m^{2}\check{h}_j^{2}\sum_{i=1}^{n}(f^{\tau*}_{ji})^{2}+\frac{1}{\delta}m\check{h}_j^{2}\|F\|_F^{2}\Big\}\check{x}_j^{*2},
$$
that is,
$$
0\le\sum_{i=1}^{n}\big(-2m\underline{\check{a}}_i+\nu+\beta\|G\|_F^{2}+\zeta(g^{\tau*})\big)\check{u}_i^{*2}+\sum_{j=1}^{m}\big(-2n\underline{\check{b}}_j+\theta+\mu\|F\|_F^{2}+\tau(f^{\tau*})\big)\check{x}_j^{*2},
$$
$$
0\le-\sum_{i=1}^{n}\big(\eta-\beta\|G\|_F^{2}-\zeta(g^{\tau*})\big)\check{u}_i^{*2}-\sum_{j=1}^{m}\big(\xi-\mu\|F\|_F^{2}-\tau(f^{\tau*})\big)\check{x}_j^{*2},
$$
$$
0\le-\sum_{i=1}^{n}\Psi_{Fi}\,\check{u}_i^{*2}-\sum_{j=1}^{m}\Omega_{Fj}\,\check{x}_j^{*2}.\tag{11}
$$
Since $\Psi_{Fi}>0$ and $\Omega_{Fj}>0$ for every $i,j$, and $\check{x}^{*}\ne 0\ne\check{u}^{*}$, we have $-\sum_{i=1}^{n}\Psi_{Fi}\check{u}_i^{*2}-\sum_{j=1}^{m}\Omega_{Fj}\check{x}_j^{*2}<0$, which contradicts (11). We therefore deduce that the only equilibrium point is $\check{x}^{*}=0=\check{u}^{*}$; that is, the unique equilibrium point of system (3) is the origin.
Step 2:
Let us examine the Lyapunov–Krasovskii functional given below:
$$
V(\check{x}(t),\check{u}(t))=\sum_{i=1}^{n}m\check{u}_i^{2}(t)+\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\int_{t-\check{\tau}_{ji}}^{t}\chi_{2j}^{2}(\check{x}_j(\eta))\,d\eta+\sum_{j=1}^{m}n\check{x}_j^{2}(t)+\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\int_{t-\check{\sigma}_{ij}}^{t}\chi_{1i}^{2}(\check{u}_i(\xi))\,d\xi.
$$
Computing $\dot{V}(\check{u}(t),\check{x}(t))$ along the trajectories of system (3) and using Lemma 2 yield the following result:
$$
\begin{aligned}
\dot{V}(\check{x}(t),\check{u}(t))\le{}&m\delta\sum_{i=1}^{n}\check{u}_i^{2}(t)-\sum_{i=1}^{n}2m\check{a}_i\check{u}_i^{2}(t)+n\delta\sum_{j=1}^{m}\check{x}_j^{2}(t)+\frac{m}{\delta}\|F\|_2^{2}\sum_{j=1}^{m}\check{h}_j^{2}\check{x}_j^{2}(t)\\
&+\frac{n}{\delta}\|G\|_2^{2}\sum_{i=1}^{n}\check{l}_i^{2}\check{u}_i^{2}(t)-\sum_{j=1}^{m}2n\check{b}_j\check{x}_j^{2}(t)+m\gamma\sum_{i=1}^{n}\check{u}_i^{2}(t)+n\gamma\sum_{j=1}^{m}\check{x}_j^{2}(t)\\
&+\frac{1}{\gamma}\sum_{j,i=1}^{m,n}n^{2}(\check{g}^{\tau}_{ij})^{2}\check{l}_i^{2}\check{u}_i^{2}(t)+\frac{1}{\gamma}\sum_{i,j=1}^{n,m}m^{2}(\check{f}^{\tau}_{ji})^{2}\check{h}_j^{2}\check{x}_j^{2}(t).
\end{aligned}
$$
Since $\|G\|_2^{2}\le\|G\|_F^{2}$, $\|F\|_2^{2}\le\|F\|_F^{2}$, $(\check{f}^{\tau}_{ji})^{2}\le(f^{\tau*}_{ji})^{2}$, and $(\check{g}^{\tau}_{ij})^{2}\le(g^{\tau*}_{ij})^{2}$, we obtain
$$
\dot{V}(\check{x}(t),\check{u}(t))\le\sum_{i=1}^{n}\Big\{-2m\underline{\check{a}}_i+m(\gamma+\delta)+\frac{1}{\delta}n\check{l}_i^{2}\|G\|_F^{2}+\frac{1}{\gamma}n^{2}\check{l}_i^{2}\sum_{j=1}^{m}(g^{\tau*}_{ij})^{2}\Big\}\check{u}_i^{2}(t)+\sum_{j=1}^{m}\Big\{-2n\underline{\check{b}}_j+n(\gamma+\delta)+\frac{1}{\gamma}m^{2}\check{h}_j^{2}\sum_{i=1}^{n}(f^{\tau*}_{ji})^{2}+\frac{1}{\delta}m\check{h}_j^{2}\|F\|_F^{2}\Big\}\check{x}_j^{2}(t),
$$
so that
$$
\dot{V}(\check{x}(t),\check{u}(t))\le\sum_{i=1}^{n}\big(-2m\underline{\check{a}}_i+\nu+\beta\|G\|_F^{2}+\zeta(g^{\tau*})\big)\check{u}_i^{2}(t)+\sum_{j=1}^{m}\big(-2n\underline{\check{b}}_j+\theta+\mu\|F\|_F^{2}+\tau(f^{\tau*})\big)\check{x}_j^{2}(t)=-\sum_{i=1}^{n}\Psi_{Fi}\,\check{u}_i^{2}(t)-\sum_{j=1}^{m}\Omega_{Fj}\,\check{x}_j^{2}(t).
$$
Given that $\Psi_{Fi}>0$ and $\Omega_{Fj}>0$ for every $i$ and $j$, we have $\dot{V}(\check{x}(t),\check{u}(t))<0$ for all non-zero $\check{u}(t)$ and $\check{x}(t)$. Therefore, according to Lyapunov stability theory, the origin of the system (3) satisfying (2) is GARS; consequently, the system (1) that fulfills (2) is GARS. □
Now, let us find the different results from Theorem 1 for the different upper bounds of the synaptic connection weight matrices which are stated as follows:
Theorem 2. 
Assume the activation functions $\chi_{1i}$ and $\chi_{2j}$ fulfill the conditions of Assumptions 1 and 2, and there are positive constants $\gamma$ and $\delta$ for which the conditions below are satisfied:
$$
\Psi_{2i}=\eta-\beta\big(\|G^{*}\|_2+\|G_{*}\|_2\big)^{2}-\zeta(g^{\tau*})>0,\ \forall i,\qquad
\Omega_{2j}=\xi-\mu\big(\|F^{*}\|_2+\|F_{*}\|_2\big)^{2}-\tau(f^{\tau*})>0,\ \forall j,
$$
where $\eta$, $\nu$, $\beta$, $\zeta$, $\mu$, $\xi$, $\theta$, $\tau$, $(f^{\tau*})$, and $(g^{\tau*})$ are as defined in Theorem 1. Then, the system defined by (3) with network parameters that satisfy (2) is GARS at the origin.
Theorem 3. 
Assume the activation functions $\chi_{1i}$ and $\chi_{2j}$ fulfill the conditions of Assumptions 1 and 2, and there are positive constants $\gamma$ and $\delta$ for which the conditions below are satisfied:
$$
\Psi_{3i}=\eta-\beta\big(\|G^{*}\|_2^{2}+\|G_{*}\|_2^{2}+2\|{G^{*}}^{T}|G_{*}|\|_2\big)-\zeta(g^{\tau*})>0,\ \forall i,\qquad
\Omega_{3j}=\xi-\mu\big(\|F^{*}\|_2^{2}+\|F_{*}\|_2^{2}+2\|{F^{*}}^{T}|F_{*}|\|_2\big)-\tau(f^{\tau*})>0,\ \forall j,
$$
where $\eta$, $\nu$, $\beta$, $\zeta$, $\mu$, $\xi$, $\theta$, $\tau$, $(f^{\tau*})$, and $(g^{\tau*})$ are as defined in Theorem 1. Then, the system defined by (3) with network parameters that satisfy (2) is GARS at the origin.
Theorem 4. 
Assume the activation functions $\chi_{1i}$ and $\chi_{2j}$ fulfill Assumptions 1 and 2, and there are positive constants $\gamma$ and $\delta$ for which the conditions below are satisfied:
$$
\Psi_{4i}=\eta-\beta\,\|\hat{G}\|_2^{2}-\zeta(g^{\tau*})>0,\ \forall i,\qquad
\Omega_{4j}=\xi-\mu\,\|\hat{F}\|_2^{2}-\tau(f^{\tau*})>0,\ \forall j,
$$
where $\eta$, $\nu$, $\beta$, $\zeta$, $\mu$, $\xi$, $\theta$, $\tau$, $(f^{\tau*})$, and $(g^{\tau*})$ are as defined in Theorem 1. Then, the system defined by (3) with network parameters that satisfy (2) is GARS at the origin.
Remark 2. 
In the available literature, delayed BAM NNs have been analyzed using different techniques to derive global stability conditions. In [40], global stability of nonlinear differential systems with infinite delay was studied with applications to BAM NNs. The authors of [41] derived sufficient conditions for the synchronization stability of delayed BAM NNs. Recently, in [6], global asymptotic stability conditions for BAM NNs were obtained via a delay-based approach. Different from the existing literature [6,40,41], in this paper the problem of delayed BAM NNs is addressed by utilizing the Lyapunov–Krasovskii method, and new sufficient conditions for GARS in BAM NNs are established using the Frobenius norm. The effectiveness of the proposed results is compared with the existing norms in the numerical section that follows.
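The conditions of Theorem 1 are also easy to check mechanically. The following Python sketch evaluates $\Psi_{Fi}$ and $\Omega_{Fj}$ from interval data; the function name, the vectorized treatment of the indices $i$ and $j$, and the sample call (which uses the data of Example 1 below with $d_1=d_2=5$) are our own illustrative assumptions.

```python
import numpy as np

# Sketch of a checker for the GARS conditions of Theorem 1.
# a_lo, b_lo : lower bounds of the charging rates (lengths n, m)
# G, F       : instantaneous weight matrices, shapes (n, m) and (m, n)
# Gt_star    : (g_ij^{tau*}) = max(|g_lo|, |g_hi|), shape (n, m)
# Ft_star    : (f_ji^{tau*}) = max(|f_lo|, |f_hi|), shape (m, n)
def gars_frobenius(a_lo, b_lo, G, F, Gt_star, Ft_star, l, h, gamma, delta):
    n, m = len(a_lo), len(b_lo)
    G_F2 = np.linalg.norm(G, 'fro') ** 2
    F_F2 = np.linalg.norm(F, 'fro') ** 2
    # Psi_Fi = 2m a_i - m(gamma+delta) - (n/delta) l_i^2 ||G||_F^2
    #          - (n^2/gamma) l_i^2 * sum_j (g_ij^{tau*})^2
    Psi = (2 * m * a_lo - m * (gamma + delta)
           - (n / delta) * l**2 * G_F2
           - (n**2 / gamma) * l**2 * (Gt_star**2).sum(axis=1))
    # Omega_Fj, symmetrically in the x-layer
    Omega = (2 * n * b_lo - n * (gamma + delta)
             - (m / delta) * h**2 * F_F2
             - (m**2 / gamma) * h**2 * (Ft_star**2).sum(axis=1))
    return Psi, Omega, bool((Psi > 0).all() and (Omega > 0).all())

# Data of Example 1 below, with d1 = d2 = 5 (inside the certified range):
ones3 = np.ones((3, 3))
Psi, Omega, ok = gars_frobenius(
    a_lo=16 * np.ones(3), b_lo=16 * np.ones(3),
    G=ones3 / 30, F=ones3 / 30,
    Gt_star=(5 / 8) * ones3, Ft_star=(5 / 8) * ones3,
    l=np.ones(3), h=np.ones(3), gamma=1 / 8, delta=1 / 8)
print(Psi, Omega, ok)   # Psi = Omega = 10.635 > 0, ok = True
```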

4. Numerical Example

In this section, we contrast the outcomes of Theorems 1–4 through the following examples.
Example 1. 
Take into account the network parameters for the specified NN model (1) that adhere to (2).
$$
l_1=l_2=l_3=1,\qquad h_1=h_2=h_3=1,\qquad \gamma=\frac{1}{8},\qquad \delta=\frac{1}{8},
$$
$$
\underline{A}=A=\overline{A}=\begin{pmatrix}16&0&0\\0&16&0\\0&0&16\end{pmatrix}=\underline{B}=B=\overline{B},
$$
$$
\underline{G}=\underline{F}=-\frac{1}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix},\qquad
\overline{G}=\overline{F}=\frac{1}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix},
$$
$$
G^{*}=F^{*}=\begin{pmatrix}0&0&0\\0&0&0\\0&0&0\end{pmatrix},\qquad
G_{*}=F_{*}=\frac{1}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix}=\hat{G}=\hat{F},\qquad
G=F=\frac{1}{30}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix},
$$
$$
\underline{G}^{\tau}=-\frac{d_1}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix},\qquad
\overline{G}^{\tau}=\frac{d_1}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix}=G^{\tau*},
$$
$$
\underline{F}^{\tau}=-\frac{d_2}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix},\qquad
\overline{F}^{\tau}=\frac{d_2}{8}\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix}=F^{\tau*},
$$
where $d_1>0$, $d_2>0$.
We now identify the distinct norms in Lemmas 1 and 3 in the following manner.
$$
\|G\|_F^{2}=\|F\|_F^{2}=0.0100, \qquad T_2^{2}(G)=T_2^{2}(F)=T_3^{2}(G)=T_3^{2}(F)=0.1406=T_4^{2}(G)=T_4^{2}(F).
$$
We exhibit the results of Theorem 1 for the Frobenius upper bound; we obtain
$$
\Psi_{Fi}=95.25-0.24-3.375\,d_1^{2}=95.0100-3.375\,d_1^{2}.
$$
Requiring $\Psi_{Fi}>0$ for $i=1,2,3$ gives $d_1^{2}<28.1511$.
$$
\Omega_{Fj}=95.25-0.24-3.375\,d_2^{2}=95.0100-3.375\,d_2^{2}.
$$
Requiring $\Omega_{Fj}>0$ for $j=1,2,3$ gives $d_2^{2}<28.1511$. Similarly, exhibiting the results of Theorems 2–4 for the upper bounds $T_k^{2}$, $k=2,3,4$, we obtain
$$
\Psi_{ki}=95.25-3.3744-3.375\,d_1^{2}=91.8756-3.375\,d_1^{2}.
$$
Requiring $\Psi_{ki}>0$ for $k=2,3,4$ and $i=1,2,3$ gives $d_1^{2}<27.2224$.
$$
\Omega_{kj}=95.25-3.3744-3.375\,d_2^{2}=91.8756-3.375\,d_2^{2}.
$$
Requiring $\Omega_{kj}>0$ for $k=2,3,4$ and $j=1,2,3$ gives $d_2^{2}<27.2224$.
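As a quick sanity check, the thresholds above follow from simple arithmetic; the snippet below, with the constants taken directly from the text, reproduces both bounds.

```python
# Reproducing the Example 1 thresholds from the quantities in the text.
# Psi_Fi = 95.25 - 0.24 - 3.375 d1^2   (Frobenius norm bound, Theorem 1)
print((95.25 - 0.24) / 3.375)     # 28.1511...  ->  d1^2 < 28.1511
# Psi_ki = 95.25 - 3.3744 - 3.375 d1^2  (T_k bounds, Theorems 2-4)
print((95.25 - 3.3744) / 3.375)   # 27.2224     ->  d1^2 < 27.2224
```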
The simulation results for Example 1 are shown in Figure 1, Figure 2 and Figure 3: utilizing randomized initial conditions, they depict the state trajectories of $\check{x}_i\ (i=1,2,3)$ and $\check{u}_i\ (i=1,2,3)$.
Remark 3. 
For $\Psi_{Fi}$ and $\Omega_{Fj}$, $i,j=1,2,3$, the conditions on $d_1^{2}$ and $d_2^{2}$ remain valid in the range $27.2224<d_q^{2}<28.1511$, $q=1,2$, whereas $\Psi_{ki}$ and $\Omega_{kj}$, $k=2,3,4$, $i,j=1,2,3$, fail in that range. This is because the Frobenius norm attains the minimum value among the considered upper bounds for the given network parameters. Hence, the new results in Theorem 1 are less conservative for the proposed BAM NN model.
Example 2. 
Take into account the network parameters for the specified BAM NN model (1) that adhere to (2).
$$
l_1=l_2=l_3=l_4=1,\qquad h_1=h_2=h_3=h_4=1,\qquad \gamma=\frac{1}{5},\qquad \delta=\frac{1}{5},
$$
$$
\underline{A}=A=\overline{A}=\begin{pmatrix}10&0&0&0\\0&10&0&0\\0&0&10&0\\0&0&0&10\end{pmatrix}=\underline{B}=B=\overline{B},
$$
$$
\underline{G}=\underline{F}=-\frac{1}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix},\qquad
\overline{G}=\overline{F}=\frac{1}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix},
$$
$$
G^{*}=F^{*}=\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix},\qquad
G_{*}=F_{*}=\frac{1}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}=\hat{G}=\hat{F},\qquad
G=F=\frac{1}{25}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix},
$$
$$
\underline{G}^{\tau}=-\frac{e_1}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix},\qquad
\overline{G}^{\tau}=\frac{e_1}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}=G^{\tau*},
$$
$$
\underline{F}^{\tau}=-\frac{e_2}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix},\qquad
\overline{F}^{\tau}=\frac{e_2}{5}\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}=F^{\tau*},
$$
where $e_1>0$, $e_2>0$. We now identify the distinct norms in Lemmas 1 and 3 in the following manner.
$$
\|G\|_F^{2}=\|F\|_F^{2}=0.0256, \qquad T_2^{2}(G)=T_2^{2}(F)=T_3^{2}(G)=T_3^{2}(F)=0.6400=T_4^{2}(G)=T_4^{2}(F).
$$
We exhibit the results of Theorem 1 for the upper bound using the Frobenius norm, and we obtain
$$
\Psi_{Fi}=78.4-0.5120-12.8\,e_1^{2}=77.888-12.8\,e_1^{2}.
$$
Requiring $\Psi_{Fi}>0$ for $i=1,2,3,4$ gives $e_1^{2}<6.0850$.
$$
\Omega_{Fj}=78.4-0.5120-12.8\,e_2^{2}=77.888-12.8\,e_2^{2}.
$$
Requiring $\Omega_{Fj}>0$ for $j=1,2,3,4$ gives $e_2^{2}<6.0850$. Similarly, exhibiting the results of Theorems 2–4 for the upper bounds $T_k^{2}$, $k=2,3,4$, we obtain
$$
\Psi_{ki}=78.4-12.8-12.8\,e_1^{2}=65.6-12.8\,e_1^{2}.
$$
Requiring $\Psi_{ki}>0$ for $k=2,3,4$ and $i=1,2,3,4$ gives $e_1^{2}<5.1250$.
$$
\Omega_{kj}=78.4-12.8-12.8\,e_2^{2}=65.6-12.8\,e_2^{2}.
$$
Requiring $\Omega_{kj}>0$ for $k=2,3,4$ and $j=1,2,3,4$ gives $e_2^{2}<5.1250$.
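The Example 2 thresholds, and the gap highlighted in Remark 4 below, can be checked the same way; the probe value $e_q^{2}=5.5$ is our own illustrative choice.

```python
# Example 2 thresholds from the text.
print((78.4 - 0.5120) / 12.8)    # 6.0850 -> e_q^2 < 6.0850 (Theorem 1)
print((78.4 - 12.8) / 12.8)      # 5.1250 -> e_q^2 < 5.1250 (Theorems 2-4)

e_sq = 5.5                       # a value inside (5.1250, 6.0850)
print(77.888 - 12.8 * e_sq)      # Psi_F =  7.488 > 0 : Theorem 1 certifies GARS
print(65.6 - 12.8 * e_sq)        # Psi_k = -4.800 < 0 : Theorems 2-4 fail
```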
The simulation outcomes of Example 2 are depicted in Figure 4, Figure 5 and Figure 6. Figure 4 shows the state responses of the proposed BAM NNs in Example 2, while Figure 5 and Figure 6 depict the state trajectories of $\check{x}_i\ (i=1,2,3,4)$ and $\check{u}_i\ (i=1,2,3,4)$, respectively.
Remark 4. 
For $\Psi_{Fi}$ and $\Omega_{Fj}$, $i,j=1,2,3,4$, the conditions on $e_1^{2}$ and $e_2^{2}$ remain valid in the range $5.1250<e_q^{2}<6.0850$, $q=1,2$, whereas $\Psi_{ki}$ and $\Omega_{kj}$, $k=2,3,4$, $i,j=1,2,3,4$, fail in that range. This is because the Frobenius norm attains the minimum value among the considered upper bounds for the given network parameters. Hence, the new results in Theorem 1 are less conservative for the proposed BAM NN model.

5. Conclusions

This article presented new findings on the GARS of time-delayed BAM NNs with uncertain parameters. Using the Frobenius norm, sufficient conditions for the GARS of BAM NNs were established. In activities such as model quantization, where it is crucial to lower the precision of weights for use on devices with limited resources, the Frobenius norm serves as a measure for assessing the effect on model performance. The appeal of the Frobenius norm lies in its capacity to reflect the total "magnitude" of a matrix while remaining computationally efficient. NNs are models of neural circuits, with each neuron representing a simple analog processor; parallel communication lines in value-passing analog processor networks provide the connectivity found in real neural circuits through synapses, and the networks studied here belong to the class of BAM NNs. The results are less conservative than some previously published findings, and numerical examples were provided to illustrate how the proposed sufficient criteria differ from and improve upon previous results. In future work, this approach could be extended to complex-valued NNs and quaternion-valued BAM NNs, with practical applications such as four-tank systems and coupled circuit models.

Author Contributions

Conceptualization, N.M.T.; Software, R.V.; Validation, R.V.; Formal analysis, N.G.; Writing—original draft, N.M.T.; Writing—review & editing, S.S.; Supervision, N.G. All authors have read and agreed to the published version of the manuscript.

Funding

Saravanan Shanmugam would like to thank Easwari Engineering College, India, for their financial support, vide number SRM/EEC/RI/006.

Data Availability Statement

Data are contained within the article.

Acknowledgments

Saravanan Shanmugam would like to thank Easwari Engineering College, India, for their financial support, vide number SRM/EEC/RI/006.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, C.; Huang, Z.; Pedapati, T.; Chen, P.Y.; Sun, Y.; Gao, J. Network properties determine neural network performance. Nat. Commun. 2024, 15, 5718.
  2. Borylo, P.; Biernacka, E.; Domzal, J.; Kadziolka, B.; Kantor, M.; Rusek, K.; Skala, M.; Wajda, K.; Wojcik, R.; Zabek, W. Neural Networks in Selected Aspects of Communications and Networking. IEEE Access 2024, 12, 132856–132890.
  3. Gunasekaran, N.; Thoiyab, N.M.; Zhu, Q.; Cao, J.; Muruganantham, P. New Global Asymptotic Robust Stability of Dynamical Delayed Neural Networks via Intervalized Interconnection Matrices. IEEE Trans. Cybern. 2022, 52, 11794–11804.
  4. Lan, J.; Wang, X.; Zhang, X. Global robust exponential synchronization of interval BAM neural networks with multiple time-varying delays. Circuits Syst. Signal Process. 2024, 43, 2147–2170.
  5. Guo, P.; Cao, Y.; Liu, X. The Stabilization of Cohen-Grossberg BAM Neural Network System. In Proceedings of the 2024 IEEE 18th International Conference on Control & Automation (ICCA), Reykjavík, Iceland, 8–21 June 2024; pp. 968–974.
  6. Liu, M.; Jiang, H.; Hu, C.; Lu, B.; Li, Z. Novel Global Asymptotic Stability and Dissipativity Criteria of BAM Neural Networks With Delays. Front. Phys. 2022, 10, 898589.
  7. Arslan, E. Novel criteria for global robust stability of dynamical neural networks with multiple time delays. Neural Netw. 2021, 142, 119–127.
  8. Ma, Y.; Lin, Y.; Dai, Y. Stability and Hopf Bifurcation Analysis of A Fractional-Order BAM Neural Network with Two Delays Under Hybrid Control. Neural Process. Lett. 2024, 56, 82.
  9. Naik, P.A.; Eskandari, Z. Nonlinear dynamics of a three-dimensional discrete-time delay neural network. Int. J. Biomath. 2024, 17, 2350057.
  10. Arunagirinathan, S.; Lee, T.H. Generalized delay-dependent reciprocally convex inequality on stability for neural networks with time-varying delay. Math. Comput. Simul. 2024, 217, 109–120.
  11. Thoiyab, N.M.; Muruganantham, P.; Rajchakit, G.; Gunasekaran, N.; Unyong, B.; Humphries, U.; Kaewmesri, P.; Lim, C.P. Global Stability Analysis of Neural Networks with Constant Time Delay via Frobenius Norm. Math. Probl. Eng. 2020, 2020, 4321312.
  12. Arik, S. New Criteria for Stability of Neutral-Type Neural Networks with Multiple Time Delays. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1504–1513.
  13. Arik, S. A modified Lyapunov functional with application to stability of neutral-type neural networks with time delays. J. Frankl. Inst. 2019, 356, 276–291.
  14. Singh, V. Global robust stability of delayed neural networks: Estimating upper limit of norm of delayed connection weight matrix. Chaos Solitons Fractals 2007, 32, 259–263.
  15. Shao, J.L.; Huang, T.Z.; Zhou, S. Some improved criteria for global robust exponential stability of neural networks with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 3782–3794.
  16. Ozcan, N.; Arik, S. Global robust stability analysis of neural networks with multiple time delays. IEEE Trans. Circuits Syst. Regul. Pap. 2006, 53, 166–176.
  17. Cao, J.; Ho, D.W. A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach. Chaos Solitons Fractals 2005, 24, 1317–1329.
  18. Arik, S. New criteria for global robust stability of delayed neural networks with norm-bounded uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 1045–1052.
  19. Hu, B.; Song, Q.; Zhao, Z. Robust state estimation for fractional-order complex-valued delayed neural networks with interval parameter uncertainties: LMI approach. Appl. Math. Comput. 2020, 373, 125033.
  20. Thoiyab, N.M.; Muruganantham, P.; Gunasekaran, N. Global Robust Stability Analysis for Hybrid BAM Neural Networks. In Proceedings of the 2021 IEEE Second International Conference on Control, Measurement and Instrumentation (CMI), Kolkata, India, 8–10 January 2021; pp. 93–98.
  21. Gan, B.; Yang, M. Further finite-time stability analysis of neural networks with proportional delay. Frankl. Open 2024, 8, 100159.
  22. Fang, Y.; Kincaid, T.G. Stability analysis of dynamical neural networks. IEEE Trans. Neural Netw. 1996, 7, 996–1006.
  23. Xu, H.; Luo, H.; Fan, X.Q. Stability of stochastic delay Hopfield neural network with Poisson jumps. Chaos Solitons Fractals 2024, 187, 115404.
  24. Zhao, H. Global asymptotic stability of Hopfield neural network involving distributed delays. Neural Netw. 2004, 17, 47–53.
  25. Liu, Y.; Zhou, K.; Zhong, S.; Shi, K.; Li, X. Parametric Stability Criteria for Delayed Recurrent Neural Networks via Flexible Delay-Dividing Method. IEEE Trans. Neural Netw. Learn. Syst. 2024.
  26. Stamov, T. Discrete bidirectional associative memory neural networks of the Cohen–Grossberg type for engineering design symmetry related problems: Practical stability of sets analysis. Symmetry 2022, 14, 216.
  27. Shao, S.; Du, B. Global Asymptotic Stability of Competitive Neural Networks with Reaction-Diffusion Terms and Mixed Delays. Symmetry 2022, 14, 2224.
  28. Kosko, B. Adaptive bidirectional associative memories. Appl. Opt. 1987, 26, 4947–4960.
  29. Shi, G. Optimal bidirectional associative memories. Int. J. Syst. Sci. 2000, 31, 751–757.
  30. Li, M.; Maimaitiaili, G. New results on finite-time projective synchronization for memristor-based hybrid delayed BAM neural networks with applications to DNA image encryption. AIMS Math. 2024, 9, 9822–9846.
  31. Chen, H.; Jiang, M.; Hu, J. Global exponential synchronization of BAM memristive neural networks with mixed delays and reaction–diffusion terms. Commun. Nonlinear Sci. Numer. Simul. 2024, 137, 108137.
  32. Tuz, M. Global Asymptotic Stability of Anti-Periodic Solutions of Time-Delayed Fractional Bam Neural Networks. Neural Process. Lett. 2024, 56, 129.
  33. Liao, X.; Wong, K.W. Robust stability of interval bidirectional associative memory neural network with time delays. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 1142–1154.
  34. Yang, W.; Yu, W.; Cao, J. Global exponential stability of impulsive fuzzy high-order BAM neural networks with continuously distributed delays. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3682–3700.
  35. Wang, D.; Huang, L.; Tang, L. Dissipativity and synchronization of generalized BAM neural networks with multivariate discontinuous activations. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3815–3827.
  36. Lou, X.; Cui, B. Stochastic Exponential Stability for Markovian Jumping BAM Neural Networks With Time-Varying Delays. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2007, 37, 713–719.
  37. Senan, S.; Arik, S. Global Robust Stability of Bidirectional Associative Memory Neural Networks with Multiple Time Delays. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2007, 37, 1375–1381.
  38. Arik, S. Global robust stability of delayed neural networks. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2003, 50, 156–160.
  39. Ensari, T.; Arik, S. New results for robust stability of dynamical neural networks with discrete time delays. Expert Syst. Appl. 2010, 37, 5925–5930.
  40. Oliveira, J.J. Global stability criteria for nonlinear differential systems with infinite delay and applications to BAM neural networks. Chaos Solitons Fractals 2022, 164, 112676.
  41. Muhammadhaji, A.; Teng, Z. Synchronization stability on the BAM neural networks with mixed time delays. Int. J. Nonlinear Sci. Numer. Simul. 2021, 22, 99–109.
Figure 1. Response of $\check{x}(t)$, $\check{u}(t)$ among the different initial states.
Figure 2. Response of $\check{x}(t)$ among the different initial states.
Figure 3. Response of $\check{u}(t)$ among the different initial states.
Figure 4. Response of $\check{x}(t)$, $\check{u}(t)$ among the different initial states.
Figure 5. Response of $\check{x}(t)$ among the different initial states.
Figure 6. Response of $\check{u}(t)$ among the different initial states.