Article

Fractional-Order Tabu Learning Neuron Models and Their Dynamics

1 Department of Mathematics, Changzhou University, Changzhou 213164, China
2 School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213159, China
3 College of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
* Authors to whom correspondence should be addressed.
Fractal Fract. 2024, 8(7), 428; https://doi.org/10.3390/fractalfract8070428
Submission received: 7 June 2024 / Revised: 15 July 2024 / Accepted: 18 July 2024 / Published: 20 July 2024
(This article belongs to the Special Issue Advances in Fractional Modeling and Computation)

Abstract

In this paper, by replacing the exponential memory kernel function of a tabu learning single-neuron model with a power-law memory kernel function, a novel Caputo fractional-order tabu learning single-neuron model and a network of two interacting fractional-order tabu learning neurons are constructed. Unlike in the integer-order tabu learning model, the order of the fractional derivative measures the neuron's memory decay rate, and the stability of the models is evaluated through the eigenvalues of the Jacobian matrix at the equilibrium point. Choosing the memory decay rate (i.e., the order of the fractional derivative) as the bifurcation parameter, it is proved that a Hopf bifurcation occurs in the fractional-order tabu learning single-neuron model, at a bifurcation value smaller than that of the integer-order model. Numerical simulations show that the fractional-order network with a low memory decay rate is capable of producing tangent bifurcation as the learning rate increases from 0 to 0.4. When the learning rate is fixed and the memory decay rate increases, the fractional-order network enters frequency synchronization first and then amplitude synchronization. During the synchronization process, the oscillation frequency of the fractional-order tabu learning two-neuron network increases with the memory decay rate. This implies that the higher the memory decay rate of the neurons, the higher the learning frequency.

1. Introduction

The human brain is a network of billions of neurons connected through synapses. Excited by external stimuli, the response of the brain is transmitted through the network in the form of electrical signals. It is therefore of significant importance to study neuron firing in order to disclose the function of the brain. To date, based on plenty of experiments and experimental data, many classical neuron models have been constructed, such as the Hodgkin–Huxley (H-H) model [1,2], FitzHugh–Nagumo (FHN) model [3,4,5], Morris–Lecar (ML) model [6,7], Hindmarsh–Rose (HR) model [8,9], Chay model [10], Rulkov model [11], and Izhikevich model [12]. These models can emulate different neurons and display different neurodynamics, such as resting states, periodic oscillations, and chaos. These neurodynamics play important roles in neural information encoding.
Tabu learning is the application of tabu search to neural networks for solving optimization problems [13]. Based on the energy distribution around the current state, tabu learning can avoid already-searched states and find new, unsearched ones, thereby improving search efficiency. In tabu learning searches, the neurons need some judgment and selection, which implies that the tabu learning neuron possesses memory. In existing tabu learning models, the memory is described by an integral of the state variable [13,14].
Tabu learning single-neuron models are two-dimensional [13] and are studied widely because of their simple mathematical structure [14,15,16,17,18,19,20,21]. Choosing the memory decay rate as the bifurcation parameter, Hopf bifurcations are shown in tabu learning neurons [14,15,17,19]. In [20], by replacing the resistive self-connection synaptic weight with a memristive self-connection synaptic weight, a memristive synaptic weight-based tabu learning neuron model is proposed. In the memristive synaptic weight-based tabu learning neuron model, there are infinitely many nonchaotic attractors composed of mono-periodic, multi-periodic, and quasi-periodic orbits. Additionally, in [18], hidden attractors are discovered in a non-autonomous tabu learning model with sinusoidal external excitation. Recently, based on the sinusoidal activation function, reference [21] proposed a two-dimensional non-autonomous tabu learning single-neuron model which can generate a class of multi-scroll chaotic attractors with parameters controlling the number of scrolls.
In the tabu learning single-neuron models mentioned above, the exponential memory kernel function $e^{-\alpha t}$ is applied. Compared to the power-law memory kernel function $t^{-\alpha}$, the exponential memory kernel function decays to zero more quickly as $t \to +\infty$. Therefore, the exponential memory kernel function results in a lower memory capacity for the states. As stated in [22], memory capacity is limited if the memory states are not truly persistent over time. To improve memory capacity, it is reasonable to replace the exponential memory kernel function of the neuron with the power-law memory kernel function. In fact, the fractional-order derivative is defined through a power-law memory kernel function, and it has been proven that the fractional-order derivative possesses a memory effect and is not a strictly local operator [23]. The order of the fractional-order derivative is related to the memory loss or the "proximity effect" of some characteristics [24]. In the following discussion, the exponential memory kernel function of the neuron is therefore replaced by the power-law memory kernel function, and a novel Caputo fractional-order tabu learning single-neuron model and a network of two interacting Caputo fractional-order tabu learning neurons are proposed. In these new fractional-order models, the physical meaning of the order of the fractional derivative is the memory decay rate of the neuron.
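The decay-rate contrast between the two kernels is easy to check numerically. The following sketch (not from the paper; the decay rate α = 0.5 is illustrative) compares $e^{-\alpha t}$ with $t^{-\alpha}$ at increasing times:

```python
from math import exp

# Compare the exponential kernel e^(-alpha*t) with the power-law
# kernel t^(-alpha) at increasing times (illustrative alpha = 0.5).
alpha = 0.5
for t in (1.0, 10.0, 100.0):
    print(f"t={t:6.1f}  exponential: {exp(-alpha * t):.3e}  power law: {t ** -alpha:.3e}")
```

At t = 100 the exponential kernel has dropped below 1e-20 while the power-law kernel is still about 0.1, which is why the power-law kernel retains far more memory of past states.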
To begin with, by choosing the memory decay rate (i.e., the order of the fractional-order derivative) as a bifurcation parameter, it is proved that Hopf bifurcation occurs in the Caputo fractional-order tabu learning single-neuron model. Secondly, the dynamics of the network of two interacting Caputo fractional-order tabu learning neurons are discussed. With a low memory decay rate, the fractional-order network shows tangent bifurcation as the learning rate increases from 0 to 0.4. When the learning rate is fixed, the network enters frequency synchronization first and then the amplitudes of the two neurons gradually become consistent as the memory decay rate increases from 0 to 1. This study shows that the memory decay rate, i.e., the order of the fractional-order derivative, has a significant impact on the dynamics of fractional-order tabu learning neuron models.
The paper is organized as follows. The Caputo’s fractional-order tabu learning single-neuron model and the network of two interacting Caputo’s fractional-order tabu learning neurons are proposed in Section 2. In Section 3, the stabilities of the models are evaluated by the eigenvalues of the Jacobian matrix at the equilibrium point. In Section 4, numerical simulations of the fractional-order models are shown. Finally, conclusions are drawn in Section 5.

2. Preliminaries and Fractional-Order Tabu Learning Models

2.1. Preliminaries on Fractional-Order Systems

First, the $\alpha$-order ($0 < \alpha < 1$) integral is defined by [23] as

$$ {}_0I_t^{\alpha} x(t) = \frac{1}{\Gamma(\alpha)} \int_0^t \frac{x(\tau)}{(t-\tau)^{1-\alpha}} \, d\tau \qquad (1) $$

where $\Gamma(z) = \int_0^{\infty} e^{-s} s^{z-1} \, ds$ is the Gamma function. Corresponding to the fractional-order integral, there is a fractional-order derivative, which has several different definitions, such as the Grünwald–Letnikov derivative, the Caputo derivative, and the Riemann–Liouville derivative. In this study, the Caputo derivative is employed. The $\alpha$-order ($0 < \alpha < 1$) derivative is defined as

$$ {}_0^C D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau \qquad (2) $$

where $f'(\tau)$ is the first-order derivative of the function $f(\tau)$. The integration in Equation (2) indicates that the Caputo derivative is non-local. Consequently, a fractional-order mathematical model can contain the memory of the system variables.
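Definition (2) can be checked numerically. The sketch below (not from the paper) discretizes the Caputo integral with the standard L1 finite-difference scheme and compares the result with the known closed form ${}_0^C D_t^{\alpha} t^2 = 2 t^{2-\alpha}/\Gamma(3-\alpha)$:

```python
from math import gamma

def caputo_l1(f, alpha, t, n=1000):
    """Approximate the Caputo derivative of order alpha (0 < alpha < 1)
    of f at time t with the classical L1 finite-difference scheme."""
    h = t / n
    # L1 weights: b_k = (k+1)^(1-alpha) - k^(1-alpha)
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    s = sum(b[k] * (f(t - k * h) - f(t - (k + 1) * h)) for k in range(n))
    return s * h ** (-alpha) / gamma(2 - alpha)

# Check against the closed form D^alpha t^2 = 2 t^(2-alpha) / Gamma(3-alpha)
alpha = 0.5
approx = caputo_l1(lambda t: t * t, alpha, 1.0)
exact = 2 * 1.0 ** (2 - alpha) / gamma(3 - alpha)
```

The L1 scheme has error of order $h^{2-\alpha}$, so with n = 1000 steps the two values agree to several decimal places.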
For the stability analysis of a fractional-order mathematical model, the following lemma is needed [25].
Lemma 1.
The fractional-order system

$$ {}_0^C D_t^{\alpha} X = f(X) \qquad (3) $$

is asymptotically stable at the equilibrium point $E_0 = (x_1^0, x_2^0, \ldots, x_n^0)$ if all the eigenvalues $\lambda$ of the Jacobian matrix $M|_{E_0}$ satisfy the condition

$$ |\arg(\lambda)| > \alpha\pi/2 \qquad (4) $$

where $\arg(\lambda)$ is the argument of $\lambda$, $X = (x_1, x_2, \ldots, x_n)^T$, $f(X) = (f_1(X), f_2(X), \ldots, f_n(X))^T$, and $f_i(X) = f_i(x_1, x_2, \ldots, x_n)$, $i = 1, 2, \ldots, n$.
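As a quick sanity check of Lemma 1 (a sketch, not from the paper), one can compute the Jacobian's eigenvalues numerically and test the argument condition; the 2×2 matrix below is an illustrative example with eigenvalues 0.1 ± i:

```python
import numpy as np

def is_stable(J, q):
    """Lemma 1: the equilibrium is asymptotically stable iff every
    eigenvalue of the Jacobian J satisfies |arg(lambda)| > q*pi/2,
    where q (0 < q < 1) is the fractional order."""
    eigs = np.linalg.eigvals(J)
    return all(abs(np.angle(lam)) > q * np.pi / 2 for lam in eigs)

# Toy Jacobian with eigenvalues 0.1 +/- 1j, i.e. |arg| = arctan(10) ~ 1.471 rad
J = np.array([[0.1, -1.0], [1.0, 0.1]])
print(is_stable(J, 0.9))   # threshold 0.9*pi/2  ~ 1.414 < 1.471 -> stable
print(is_stable(J, 0.95))  # threshold 0.95*pi/2 ~ 1.492 > 1.471 -> unstable
```

Note how the same Jacobian can be stable or unstable depending on the fractional order: lowering the order enlarges the stable sector of the complex plane.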

2.2. A Fractional-Order Tabu Learning Single-Neuron Model

A classical tabu learning single-neuron model is described by [15] as

$$ \dot{u} = -u + a f(u) + J, \qquad \dot{J} = -\alpha J - \beta f(u) \qquad (5) $$

where $u$ is the action potential of the neuron, $J$ is the tabu learning variable, $f(u)$ is the activation function, and $a$ is the self-connection strength of the neuron. In model (5), the tabu learning variable $J$ is computed by

$$ J(t) = -\beta \int_0^t e^{\alpha(\tau - t)} f(u(\tau)) \, d\tau \qquad (6) $$

where $\alpha > 0$ is the memory decay rate and $\beta > 0$ is the learning rate. As $t \to +\infty$, the exponential memory kernel function $e^{-\alpha t}$ decays to zero more quickly than the power-law memory kernel function $t^{-\alpha}$. That is to say, with the exponential memory kernel function $e^{-\alpha t}$, the memory capacity of the neuron is not truly persistent over time, and so the neuron will begin to relearn states that have been learned but forgotten. To make the memory time long enough, the exponential kernel function $e^{\alpha(\tau - t)}$ in Equation (6) is replaced by the power-law kernel function $(t - \tau)^{-\alpha}$. By doing so, the tabu learning variable $J$ is computed by

$$ J(t) = -\beta \int_0^t (t - \tau)^{-\alpha} f(u(\tau)) \, d\tau \qquad (7) $$
Equation (7) can be rewritten as

$$ J(t) = -\beta \int_0^t \frac{f(u(\tau))}{(t-\tau)^{1-(1-\alpha)}} \, d\tau = -\beta\,\Gamma(1-\alpha) \cdot \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f(u(\tau))}{(t-\tau)^{1-(1-\alpha)}} \, d\tau \qquad (8) $$

Referring to Equation (1), the tabu learning variable $J$ can be described as

$$ J(t) = -\beta\,\Gamma(1-\alpha)\; {}_0I_t^{1-\alpha} f(u(t)) \qquad (9) $$

Based on the relationship

$$ {}_0^C D_t^{\alpha} \left( {}_0I_t^{\alpha} x(t) \right) = x(t) \qquad (10) $$

we can obtain

$$ {}_0^C D_t^{1-\alpha} J(t) = -\beta\,\Gamma(1-\alpha) f(u(t)) \qquad (11) $$

Then, a novel fractional-order tabu learning single-neuron model is proposed as follows:

$$ {}_0^C D_t^{1-\alpha} u = -u + a f(u) + J, \qquad {}_0^C D_t^{1-\alpha} J = -\beta\,\Gamma(1-\alpha) f(u) \qquad (12) $$

where $\alpha$ ($0 < \alpha < 1$) is the memory decay rate and $\beta > 0$ is the learning rate.
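Model (12) has no closed-form solution, but a trajectory can be sketched with an explicit Grünwald–Letnikov discretization of the order-(1−α) derivatives. This is a numerical sketch, not the paper's Matlab code; the GL and Caputo derivatives coincide for zero initial conditions, so a small initial state is used, and the parameter values a = 1.6, β = 0.5, f = tanh are those of Section 4.1:

```python
import numpy as np
from math import gamma

def simulate_model12(alpha, a=1.6, beta=0.5, T=100.0, h=0.01, u0=0.01, J0=0.0):
    """Integrate model (12) of order q = 1 - alpha with an explicit
    Grunwald-Letnikov scheme x_i = h^q f(x_{i-1}) - sum_j c_j x_{i-j}."""
    q = 1.0 - alpha
    n = int(T / h)
    # GL binomial weights: c_0 = 1, c_j = (1 - (1+q)/j) c_{j-1}
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (1.0 + q) / j) * c[j - 1]
    u = np.empty(n + 1)
    J = np.empty(n + 1)
    u[0], J[0] = u0, J0
    hq = h ** q
    k = beta * gamma(1.0 - alpha)          # coefficient beta*Gamma(1-alpha)
    for i in range(1, n + 1):
        mem_u = np.dot(c[1:i + 1], u[i - 1::-1])   # memory of past u values
        mem_J = np.dot(c[1:i + 1], J[i - 1::-1])   # memory of past J values
        fu = np.tanh(u[i - 1])
        u[i] = hq * (-u[i - 1] + a * fu + J[i - 1]) - mem_u
        J[i] = hq * (-k * fu) - mem_J
    return u, J

u, J = simulate_model12(alpha=0.26, T=20.0)
```

With α above the bifurcation value found in Section 4.1 the trajectory should settle toward the quiescent state, while α below it should yield sustained oscillations, qualitatively reproducing Figure 1.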

2.3. A Fractional-Order Coupled Tabu Learning Two-Neuron Model

In this section, a network of two interacting fractional-order tabu learning neurons with a low memory decay rate $\alpha$ is constructed as follows:

$$ \begin{aligned} {}_0^C D_t^{1-\alpha} u_1 &= -0.1 u_1 + T_{11} f(u_1) + T_{12} f(u_2) + J_1, \\ {}_0^C D_t^{1-\alpha} u_2 &= -0.1 u_2 + T_{21} f(u_1) + T_{22} f(u_2) + J_2, \\ {}_0^C D_t^{1-\alpha} J_1 &= -\beta\,\Gamma(1-\alpha) f(u_1), \\ {}_0^C D_t^{1-\alpha} J_2 &= -\beta\,\Gamma(1-\alpha) f(u_2) \end{aligned} \qquad (13) $$

where the learning rate $\beta > 0$ varies in the interval $(0, 1]$, the activation function is $f(u_i) = \tanh(5 u_i)$, $i = 1, 2$, and the weight matrix $Q$ between the two neurons is

$$ Q = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} = \begin{pmatrix} 0.1 & 0.5 \\ -1 & 2 \end{pmatrix} \qquad (14) $$

The classical integer-order model corresponding to model (13) is displayed in [15].

3. Dynamics of the Fractional-Order Models

3.1. Stability Analysis of Model (12)

If $f(u) = 0$ has the root $u = u_0$, model (12) has an equilibrium point $E = (u_0, u_0)$. The Jacobian matrix $M$ at $E$ is

$$ M = \begin{pmatrix} a f'(u_0) - 1 & 1 \\ -\beta\,\Gamma(1-\alpha) f'(u_0) & 0 \end{pmatrix} \qquad (15) $$

The characteristic equation of the matrix $M$ is

$$ \lambda^2 - m_1 \lambda + m_2 = 0 \qquad (16) $$

where $m_1 = a f'(u_0) - 1$ and $m_2 = \beta\,\Gamma(1-\alpha) f'(u_0)$. The eigenvalues of the matrix $M$ are

$$ \lambda_1 = \frac{1}{2}\left( m_1 + \sqrt{m_1^2 - 4 m_2} \right), \qquad \lambda_2 = \frac{1}{2}\left( m_1 - \sqrt{m_1^2 - 4 m_2} \right) $$

The eigenvalues $\lambda_i$ ($i = 1, 2$) as the parameters $m_1$, $m_2$ change are displayed in Table 1, where $\Re(\lambda)$ is the real part of the eigenvalue $\lambda$.
Remark 1. (1) If $\Im(\lambda) = 0$ (where $\Im(\lambda)$ is the imaginary part of $\lambda$), $\lambda$ is a real number. For $\lambda < 0$, one has $|\arg(\lambda)| = \pi > \pi/2 > (1-\alpha)\pi/2$ ($0 < \alpha < 1$); for $\lambda > 0$, one has $|\arg(\lambda)| = 0 < (1-\alpha)\pi/2$ ($0 < \alpha < 1$).
(2) If $\Im(\lambda) \neq 0$, it is easy to see that $\tan(\arg(\lambda)) = \Im(\lambda)/\Re(\lambda)$. Then for $\Re(\lambda) < 0$, one has $|\arg(\lambda)| > \pi/2 > (1-\alpha)\pi/2$ ($0 < \alpha < 1$); for $\Re(\lambda) > 0$, one has $|\arg(\lambda)| < \pi/2$.
By the location of the eigenvalue $\lambda$ in the complex plane, the stability of model (12) (which is of order $1-\alpha$) can be evaluated as follows:
Case 1. For $m_2 < 0$, as shown in Table 1, the two eigenvalues are real and one of them lies on the positive real axis of the complex plane, i.e., $|\arg(\lambda)| = 0 < (1-\alpha)\pi/2$ ($0 < \alpha < 1$). So for $m_2 < 0$, model (12) is unstable at the equilibrium point $E$ for any $m_1$.
Case 2. For $m_2 = \beta\,\Gamma(1-\alpha) f'(u_0) = 0$, since $\alpha > 0$ and $\beta > 0$, one has $f'(u_0) = 0$. Thus $m_1 = a f'(u_0) - 1 = -1 < 0$, $\lambda_1 = 0$, and $\lambda_2 = -1 < 0$. So for $m_2 = 0$, model (12) is stable at the equilibrium point $E$.
Case 3. For $m_2 > 0$ and $m_1 < 0$, the two eigenvalues are either negative real numbers, with $|\arg(\lambda)| = \pi > (1-\alpha)\pi/2$ ($0 < \alpha < 1$), or complex with negative real parts, with $|\arg(\lambda)| > \pi/2 > (1-\alpha)\pi/2$. In either case, model (12) is stable at the equilibrium point $E$.
Case 4. For $m_2 > 0$, $m_1 > 0$, and $m_1^2 - 4 m_2 \geq 0$, the two eigenvalues are positive real numbers. Then $|\arg(\lambda)| = 0 < (1-\alpha)\pi/2$ ($0 < \alpha < 1$), and model (12) is unstable at the equilibrium point $E$.
Case 5. For $m_2 > 0$, $m_1 > 0$, and $m_1^2 - 4 m_2 < 0$, both eigenvalues have positive real parts. The argument of each eigenvalue is $\arg(\lambda) = \arctan(\Im(\lambda)/\Re(\lambda))$. Referring to Lemma 1, model (12) is stable at the equilibrium point $E$ if $|\arg(\lambda)| > (1-\alpha)\pi/2$ and unstable if $|\arg(\lambda)| < (1-\alpha)\pi/2$.
Therefore, the following conclusions can be drawn:
Theorem 1.
The stability of model (12) at the equilibrium $E$ depends on the parameters $m_1$ and $m_2$ as follows:
(1) If $m_2 < 0$, or $m_2 > 0$, $m_1 > 0$, and $m_1^2 - 4 m_2 \geq 0$, model (12) is unstable.
(2) If $m_2 > 0$ and $m_1 \leq 0$, or $m_2 = 0$, model (12) is stable.
(3) For $m_2 > 0$, $m_1 > 0$, and $m_1^2 - 4 m_2 < 0$, model (12) is stable if $\sqrt{4 m_2 - m_1^2}/m_1 > \tan((1-\alpha)\pi/2)$ and unstable if $\sqrt{4 m_2 - m_1^2}/m_1 < \tan((1-\alpha)\pi/2)$. In this case, as the order $\alpha$ increases from 0 to 1, model (12) experiences a Hopf bifurcation at $\alpha_0 = 1 - (2/\pi)\arctan\left(\sqrt{4 m_2 - m_1^2}/m_1\right)$.
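The critical order in conclusion (3) is straightforward to evaluate. The helper below is a sketch (not from the paper); the values m1 = 0.6, m2 = 1 are illustrative:

```python
from math import atan, pi, sqrt, tan

def hopf_order(m1, m2):
    """Critical order alpha_0 from Theorem 1, valid when m1 > 0, m2 > 0
    and m1^2 - 4*m2 < 0. Model (12) is stable for alpha > alpha_0."""
    assert m1 > 0 and m2 > 0 and m1 * m1 - 4 * m2 < 0
    return 1 - (2 / pi) * atan(sqrt(4 * m2 - m1 * m1) / m1)

a0 = hopf_order(0.6, 1.0)
# Just above alpha_0 the stability inequality of conclusion (3) holds,
# because tan((1 - alpha)*pi/2) decreases as alpha increases:
stable = sqrt(4 * 1.0 - 0.36) / 0.6 > tan((1 - (a0 + 0.01)) * pi / 2)
```

Just below α₀ the inequality reverses, so the equilibrium loses stability exactly at α₀.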

3.2. Stability Analysis of Model (13) with the Decay Rate α = 0.01

The Jacobian matrix corresponding to model (13) at the equilibrium point $(0, 0, 0, 0)$ is

$$ M^* = \begin{pmatrix} 5 T_{11} - 0.1 & 5 T_{12} & 1 & 0 \\ 5 T_{21} & 5 T_{22} - 0.1 & 0 & 1 \\ -5\beta\,\Gamma(0.99) & 0 & 0 & 0 \\ 0 & -5\beta\,\Gamma(0.99) & 0 & 0 \end{pmatrix} $$

Thus, the characteristic polynomial that determines the stability of the equilibrium point $(0, 0, 0, 0)$ is

$$ \det(\lambda I - M^*) = (\lambda^2 - m_1 \lambda + m_0)(\lambda^2 - m_2 \lambda + m_0) - 25\, T_{12} T_{21} \lambda^2 = 0 \qquad (17) $$

where $m_0 = 5\beta\,\Gamma(0.99)$, $m_1 = 5 T_{11} - 0.1 = 0.4$, $m_2 = 5 T_{22} - 0.1 = 9.9$, and $T_{12} T_{21} = -0.5$. Since $\beta > 0$, $\lambda = 0$ is not a root of Equation (17). Then, Equation (17) can be transformed into

$$ \left(\lambda + \frac{m_0}{\lambda}\right)^2 - (m_1 + m_2)\left(\lambda + \frac{m_0}{\lambda}\right) + m_1 m_2 + 12.5 = 0 \qquad (18) $$

Thus,

$$ \lambda + \frac{m_0}{\lambda} = \frac{(m_1 + m_2) \pm \sqrt{(m_1 - m_2)^2 - 50}}{2} \qquad (19) $$

Substituting $m_1 = 0.4$ and $m_2 = 9.9$ into Equation (19), we obtain

$$ \lambda + \frac{m_0}{\lambda} = 5.15 - \frac{\sqrt{161}}{4} = k_1, \qquad \lambda + \frac{m_0}{\lambda} = 5.15 + \frac{\sqrt{161}}{4} = k_2 \qquad (20) $$

where $k_1 \approx 1.9779$ and $k_2 \approx 8.3221$. Furthermore, we obtain

$$ \lambda^2 - k_1 \lambda + m_0 = 0, \qquad \lambda^2 - k_2 \lambda + m_0 = 0 \qquad (21) $$

Then, the roots of Equation (17) are

$$ \lambda_{1,2} = \frac{k_1 \pm \sqrt{k_1^2 - 4 m_0}}{2}, \qquad \lambda_{3,4} = \frac{k_2 \pm \sqrt{k_2^2 - 4 m_0}}{2} \qquad (22) $$
If $k_1^2 - 4 m_0 \geq 0$ and $k_2^2 - 4 m_0 \geq 0$, the eigenvalues $\lambda_i$ ($i = 1, 2, 3, 4$) are positive real numbers, so $|\arg(\lambda_i)| = 0$ and the equilibrium is unstable.
If $k_1^2 - 4 m_0 < 0$, $k_2^2 - 4 m_0 < 0$, and $0 < \beta \leq 1$, then

$$ \frac{\sqrt{4 m_0 - k_i^2}}{k_i} < \tan\left(\frac{0.99\pi}{2}\right), \qquad i = 1, 2 $$

This implies that the eigenvalues lie in the unstable zone.
Since $k_1 < k_2$, with $k_1^2 - 4 m_0 < 0$, $k_2^2 - 4 m_0 > 0$, and $0 < \beta \leq 1$, we obtain

$$ \tan(|\arg(\lambda_{1,2})|) = \frac{\sqrt{4 m_0 - k_1^2}}{k_1} < \tan\left(\frac{0.99\pi}{2}\right), \qquad \lambda_3 > 0, \quad \lambda_4 > 0 $$

In summary, the equilibrium point $(0, 0, 0, 0)$ of model (13) is unstable for any $\beta$ ($0 < \beta \leq 1$).
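The case analysis above can be cross-checked numerically (a sketch, not from the paper): build M* for a sample β, compute its eigenvalues, and verify both the factorization of Equation (20) and the instability condition of Lemma 1 with order 0.99:

```python
import numpy as np
from math import gamma, pi, sqrt

beta, alpha = 0.5, 0.01
m0 = 5 * beta * gamma(1 - alpha)          # 5*beta*Gamma(0.99)
T11, T12, T21, T22 = 0.1, 0.5, -1.0, 2.0  # weight matrix Q of model (13)
M = np.array([
    [5 * T11 - 0.1, 5 * T12,       1.0, 0.0],
    [5 * T21,       5 * T22 - 0.1, 0.0, 1.0],
    [-m0,           0.0,           0.0, 0.0],
    [0.0,           -m0,           0.0, 0.0],
])
eigs = np.linalg.eigvals(M)
# Every eigenvalue satisfies lambda + m0/lambda = k1 or k2 (Eq. (20))
k1, k2 = 5.15 - sqrt(161) / 4, 5.15 + sqrt(161) / 4
ok = all(min(abs(l + m0 / l - k1), abs(l + m0 / l - k2)) < 1e-8 for l in eigs)
# At least one eigenvalue violates |arg| > 0.99*pi/2, so the
# equilibrium (0, 0, 0, 0) is unstable, as stated above.
unstable = any(abs(np.angle(l)) < 0.99 * pi / 2 for l in eigs)
```

Repeating the check for other β in (0, 1] reproduces the same conclusion.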

4. Numerical Simulations of the Fractional-Order Models

4.1. Numerical Simulations of Model (12)

In this section, numerical simulations of model (12) are shown with $a = 1.6$, $\beta = 0.5$, and $f(u) = \tanh(u)$. In this case, the equilibrium is $E = (0, 0)$, $m_1 = 0.6$, and $m_2 = 0.5\,\Gamma(1-\alpha) > 0$. For $0 < \alpha < 1$, one has $\Gamma(1-\alpha) > 1$ and $m_1^2 - 4 m_2 = 0.36 - 2\,\Gamma(1-\alpha) < 0$. Referring to Theorem 1, the bifurcation point $\alpha_0$ can be calculated from

$$ \alpha_0 = 1 - \frac{2}{\pi} \arctan\left( \frac{\sqrt{2\,\Gamma(1-\alpha_0) - 0.36}}{0.6} \right) \qquad (23) $$

Using Matlab, Equation (23) is found to have the root $\alpha_0 \approx 0.2504$. Figure 1 shows the time history of the action potential $u$. For $\alpha = 0.24$, the action potential $u$ exhibits periodic spiking, while for $\alpha = 0.26$ it converges to the quiescent state. These numerical results are consistent with the third conclusion of Theorem 1.
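Equation (23) is a scalar fixed-point problem, so the Matlab computation can be reproduced in a few lines (a sketch; plain fixed-point iteration converges here because the right-hand side is a contraction near the root):

```python
from math import atan, gamma, pi, sqrt

# Solve alpha0 = 1 - (2/pi)*atan(sqrt(2*Gamma(1 - alpha0) - 0.36)/0.6)
# (Eq. (23)) by fixed-point iteration from an arbitrary starting guess.
alpha0 = 0.5
for _ in range(100):
    alpha0 = 1 - (2 / pi) * atan(sqrt(2 * gamma(1 - alpha0) - 0.36) / 0.6)
print(alpha0)  # close to the 0.2504 reported above
```

The iteration settles after a handful of steps, in agreement with the bifurcation point separating the spiking (α = 0.24) and quiescent (α = 0.26) regimes of Figure 1.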
Remark 2.
It is shown in [15] that model (5) undergoes a Hopf bifurcation when the memory decay rate $\alpha = 0.6$, while in the fractional-order model (12), the Hopf bifurcation occurs when the memory decay rate $\alpha \approx 0.2504$. This implies that the memory kernel function has a strong effect on the dynamics of the tabu learning single-neuron model.

4.2. Dynamics of Model (13) with α = 0.01 Induced by the Learning Rate β

As shown in Section 3.2, model (13) is unstable for all $\beta > 0$. In this case, periodic spiking and chaotic spiking occur in model (13). Figure 2 shows the bifurcation diagram of the local maxima of the variable $u_1$. Model (13) alternates between periodic spiking and chaotic spiking as $\beta$ increases from 0 to 0.4, while for $\beta > 0.4$ model (13) stays in chaotic spiking. Figure 3 shows the time history of the state $u_1$ for different values of $\beta$: there is periodic spiking for $\beta = 0.05$, $\beta = 0.305$, and $\beta = 0.35$, and chaotic spiking for $\beta = 0.301$, $\beta = 0.31$, $\beta = 0.32$, $\beta = 0.4$, and $\beta = 0.5$. This implies that model (13) undergoes tangent bifurcation as $\beta$ increases from 0 to 0.4 and shows only chaotic spiking for $\beta > 0.4$.

4.3. Dynamic Transitions of Model (13) Induced by the Memory Decay Rate α

In Section 4.1, model (12) shows different dynamics for different memory decay rates $\alpha$. Taking this into account, we chose different memory decay rates $\alpha$ for model (13) with $\beta = 0.5$.
When $\alpha = 0.01$ and $\alpha = 0.1$, model (13) is chaotic (Figure 4a1,b1). When $\alpha = 0.5$ or $\alpha = 0.9$, model (13) goes into periodic spiking (Figure 4c1,d1), and the frequency of the oscillation increases as the order $\alpha$ increases. This implies that the learning frequency of tabu learning neurons is high when the memory decay rate is high, which is consistent with the actual phenomenon. In addition, as the memory decay rate $\alpha$ increases from 0.01 to 0.9, Figure 4a2–d2 show that model (13) enters frequency synchronization first and then the amplitudes of the two neurons gradually become consistent. This implies that the memory decay rate $\alpha$ has a significant impact on the synchronization of the neurons connected in model (13).
Remark 3.
When $\beta = 0.5$, the classical integer-order model corresponding to model (13) with the memory decay rate $\alpha = 0.1$ produces periodic spiking [15], while the fractional-order model (13) with $\alpha = 0.1$ shows chaotic spiking (Figure 4b2). It can be concluded that the fractional-order model (13) has stronger nonlinearity than the corresponding classical integer-order model.

5. Conclusions

In this paper, a novel fractional-order tabu learning single-neuron mathematical model is proposed by replacing the exponential memory kernel function of the tabu learning variable with a power-law memory kernel function. In the new model, the memory decay rate is measured by the order of the fractional-order derivative. Similar to the integer-order tabu learning neuron, the fractional-order tabu learning neuron model undergoes a Hopf bifurcation as the memory decay rate increases from 0 to 1. Interestingly, the memory decay rate at which the fractional-order model undergoes the Hopf bifurcation is numerically smaller than that of the integer-order tabu learning model. This indicates that the memory capacity has a significant impact on neuron behavior. Based on this new fractional-order tabu learning model, a network of two interacting fractional-order tabu learning neurons is presented. It is found that the network with a low memory decay rate of 0.01 is unstable and undergoes tangent bifurcation as the learning rate increases from 0 to 0.4. However, when the learning rate is fixed at 0.5 and the memory decay rate increases from 0 to 1, the network enters frequency synchronization first and then the amplitudes of the two neurons gradually become consistent. At the same time, the numerical simulations show that the larger the memory decay rate, the higher the learning frequency of the fractional-order tabu learning neuron network, which coincides with the rule of fast forgetting and fast learning. This indicates that the memory decay rate plays an important role in the synchronization of a network of two fractional-order tabu learning neurons.
Of course, all the results stated above are based only on mathematical models and theoretical analysis. In future research, it is necessary to confirm that the actual firing of neurons matches the model.

Author Contributions

Conceptualization, Y.Y. and F.W.; methodology, Y.Y.; software, M.S.; validation, Z.G.; formal analysis, Y.Y.; investigation, Y.Y.; writing—original draft preparation, Y.Y.; supervision, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the grants from the National Natural Science Foundation of China under 11602035, 12172066, the Natural Science Foundation of Jiangsu Province, China under BK20201447, and the Science and Technology Innovation Talent Support Project of Jiangsu Advanced Catalysis and Green Manufacturing Collaborative Innovation Center under ACGM2022-10-02.

Data Availability Statement

All data are contained in the main text.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hodgkin, A.; Huxley, A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, Q.; Wang, Y.; Chen, B.; Li, Z.; Wang, N. Firing pattern in a memristive Hodgkin-Huxley circuit: Numerical simulation and analog circuit validation. Chaos Solitons Fractals 2023, 172, 113627. [Google Scholar] [CrossRef]
  3. Nagumo, J.; Arimoto, S.; Yoshizawa, S. An Active Pulse Transmission Line Simulating Nerve Axon. Proc. IRE 1962, 50, 2061–2070. [Google Scholar] [CrossRef]
  4. Njitacke, Z.T.; Ramadoss, J.; Takembo, C.N.; Rajagopal, K.; Awrejcewicz, J. An enhanced FitzHugh–Nagumo neuron circuit, microcontroller-based hardware implementation: Light illumination and magnetic field effects on information patterns. Chaos Solitons Fractals 2023, 167, 113014. [Google Scholar] [CrossRef]
  5. Yao, Z.; Sun, K.; He, S. Firing patterns in a fractional-order FitzHugh–Nagumo neuron model. Nonlinear Dyn. 2022, 110, 1807–1822. [Google Scholar] [CrossRef]
  6. Morris, C.; Lecar, H. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 1981, 35, 193–213. [Google Scholar] [CrossRef]
  7. Fan, W.; Chen, X.; Wu, H.; Li, Z.; Xu, Q. Firing patterns and synchronization of Morris-Lecar neuron model with memristive autapse. Int. J. Electron. Commun. (AEÜ) 2023, 158, 154454. [Google Scholar] [CrossRef]
  8. Hindmarsh, J.; Rose, M. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. B 1984, 221, 87–102. [Google Scholar]
  9. Xie, Y.; Yao, Z.; Ren, G.; Ma, J. Estimate physical reliability in Hindmarsh-Rose neuron. Phys. Lett. A 2023, 464, 128693. [Google Scholar] [CrossRef]
  10. Chay, T.R. Chaos in a three-variable model of an excitable cell. Physica D 1985, 16, 233–242. [Google Scholar] [CrossRef]
  11. Bao, H.; Li, K.; Ma, J.; Hua, Z.; Xu, Q.; Bao, B. Memristive effects on an improved discrete Rulkov neuron model. Sci. China Technol. Sci. 2023, 66, 3153–3163. [Google Scholar] [CrossRef]
  12. Izhikevich, E.M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef]
  13. Beyer, D.A.; Ogier, R.G. Tabu learning: A neural network search method for solving nonconvex optimization problems. In Proceedings of the IEEE International Joint Conference on Neural Networks, Singapore, 18–21 November 1991; pp. 953–961. [Google Scholar]
  14. Bao, B.; Hou, L.; Zhu, Y.; Wu, H.; Chen, M. Bifurcation analysis and circuit implementation for a tabu learning neuron model. Int. J. Electron. Commun. (AEÜ) 2020, 121, 153235. [Google Scholar] [CrossRef]
  15. Li, C.G.; Chen, G.R.; Liao, X.F.; Yu, J. Hopf bifurcation and Chaos in tabu learning neuron models. Int. J. Bifurc. Chaos 2005, 15, 2633–2642. [Google Scholar] [CrossRef]
  16. Xiao, M.; Cao, J. Bifurcation analysis on a discrete-time tabu learning model. J. Comput. Appl. Math. 2008, 220, 725–738. [Google Scholar] [CrossRef]
  17. Li, Y.G. Hopf bifurcation analysis in a tabu learning neuron model with two delays. ISRN Appl. Math. 2011, 2011, 636732. [Google Scholar] [CrossRef]
  18. Bao, B.; Luo, J.; Bao, H.; Chen, C.; Wu, H.; Xu, Q. A simple non-autonomous hidden chaotic system with a switchable stable node-focus. Int. J. Bifurc. Chaos 2019, 29, 1950168. [Google Scholar] [CrossRef]
  19. Li, Y.; Zhou, X.; Wu, Y.; Zhou, M. Hopf bifurcation analysis of a tabu learning two-neuron model. Chaos Soliton Fract. 2006, 29, 190–197. [Google Scholar] [CrossRef]
  20. Hou, L.P.; Bao, H.; Xu, Q.; Chen, M.; Bao, B.C. Coexisting infinitely many nonchaotic attractors in a memristive weight-based tabu learning neuron. Int. J. Bifurc. Chaos 2021, 12, 2150189. [Google Scholar] [CrossRef]
  21. Bao, H.; Ding, R.Y.; Chen, B.; Xu, Q.; Bao, B.C. Two-dimensional non-autonomous neuron model with parameter-controlled multi-scroll chaotic attractors. Chaos Soliton Fract. 2023, 169, 113228. [Google Scholar] [CrossRef]
  22. Chaudhuri, R.; Fiete, I. Computational principles of memory. Nat. Neurosci. 2016, 19, 394–403. [Google Scholar] [CrossRef] [PubMed]
  23. Podlubny, I. Fractional Differential Equations; Academic Press: Cambridge, MA, USA, 1999. [Google Scholar]
  24. Petras, I. Fractional-order memristor-based Chua’s circuit. IEEE Trans. Circuits Syst. II 2010, 57, 975–979. [Google Scholar] [CrossRef]
  25. Matignon, D. Stability results for fractional differential equations with applications to control processing. In Proceedings of the IMACS, IEEE-SMC, Lille, France, 9–12 July 1996; pp. 963–968. [Google Scholar]
Figure 1. The time history of u. (a) α = 0.24 ; (b) α = 0.26 .
Figure 2. Bifurcation diagram of the local maxima of the variable u 1 of model (13) regarding β .
Figure 3. The time history of u 1 . (a) β = 0.05 ; (b) β = 0.301 ; (c) β = 0.305 ; (d) β = 0.31 ; (e) β = 0.32 ; (f) β = 0.35 ; (g) β = 0.4 ; (h) β = 0.5 .
Figure 4. The time histories of model (13): (a1–d1) are the time histories of u1; (a2–d2) are the time histories of u1 and u2.
Table 1. The eigenvalue λ.

Parameters | m1 < 0 | m1 = 0 | m1 > 0
m2 < 0 | λ1 > 0, λ2 < 0 | λ1 > 0, λ2 < 0 | λ1 > 0, λ2 < 0
m2 = 0 | λ1 = 0, λ2 = m1 < 0 | λ1 = λ2 = 0 | λ1 = m1 > 0, λ2 = 0
m2 > 0, m1² − 4m2 ≥ 0 | λ1 < 0, λ2 < 0 | / | λ1 > 0, λ2 > 0
m2 > 0, m1² − 4m2 < 0 | ℜ(λ1) = ℜ(λ2) = m1/2 < 0 | ℜ(λ1) = ℜ(λ2) = 0 | ℜ(λ1) = ℜ(λ2) = m1/2 > 0

Yu, Y.; Gu, Z.; Shi, M.; Wang, F. Fractional-Order Tabu Learning Neuron Models and Their Dynamics. Fractal Fract. 2024, 8, 428. https://doi.org/10.3390/fractalfract8070428