1. Introduction
Fractional calculus is a mathematical topic with a history of more than 300 years. Despite this long history, its applications to physics and engineering have become a focus of interest only recently. Recent monographs and symposia proceedings have highlighted applications of fractional calculus in physics, continuum mechanics, signal processing, bioengineering, diffusion waves and electromagnetics [1,2,3,4]. The major advantage of fractional-order derivatives is that they provide an excellent instrument for describing the memory and hereditary properties of various materials and processes. For this reason, some researchers have introduced fractional calculus into neural networks to form fractional-order neural networks, which can better describe dynamical behaviors of neurons, such as “memory". It has been pointed out that fractional derivatives provide neurons with a fundamental and general computational ability that can contribute to efficient information processing, stimulus anticipation and frequency-independent phase shifts of oscillatory neuronal firing [5]. It has been suggested that the oculomotor integrator, which converts eye velocity into eye position commands, may be of fractional order [6]. It has also been demonstrated that neural network approximation taken at the fractional level results in higher rates of approximation [7]. Furthermore, fractional-order recurrent neural networks might be expected to play an important role in parameter estimation. Therefore, the incorporation of memory terms (a fractional derivative or integral operator) into neural network models is an important improvement [8], and the study of fractional-order neural networks is of significant interest.
Chaos has been a focus of intensive discussion in numerous fields during the last four decades. Moreover, it has been verified that some neural networks can exhibit chaotic dynamics. For example, experimental and theoretical studies have revealed that a mammalian brain not only can display strange attractors and other transient characteristics in the dynamical behavior of its associative memories, but also can modulate oscillatory neuronal synchronization by selective visual attention [9,10]. In recent years, the synchronization of chaotic neural networks has attracted considerable attention, due to potential applications in many fields, including secure communication, parallel image processing, biological systems, information science, etc. There are many synchronization results for integer-order neural networks; see [11,12,13] and the references therein. On the other hand, since bifurcations and chaos of fractional-order neural networks were first investigated in [14,15], some important and interesting results on fractional-order neural networks have been obtained. For instance, in [16], a fractional-order Hopfield neural model was proposed, and its stability was investigated via an energy-like function. Chaos and hyperchaos in fractional-order cellular neural networks were discussed in [17]. Yu et al. [18] investigated α-stability and α-synchronization for fractional-order neural networks. Several recent results concerning chaotic synchronization in fractional-order neural networks have been reported in [19,20,21,22].
Due to the complexity of fractional-order systems, to the best of our knowledge, there are few theoretical results on the synchronization of fractional-order neural networks; most existing results are purely numerical [19,20,21,22]. Although many synchronization results for integer-order neural networks have been obtained in the past few decades, those results and methods cannot easily be extended to the fractional-order case. It is therefore both necessary and challenging to establish theoretical sufficient criteria for the synchronization of fractional-order neural networks. Motivated by the above discussion, a simple and efficient criterion, stated in terms of an M-matrix, is derived for the synchronization of such neural networks by using the Mittag-Leffler function, some properties of fractional calculus and linear feedback control. Numerical simulations also demonstrate the effectiveness and feasibility of the proposed technique.
The rest of the paper is organized as follows. In Section 2, necessary definitions and lemmas are given, and the fractional-order network model is introduced. A sufficient criterion ensuring the synchronization of such neural networks is presented in Section 3. An example with simulations is given in Section 4. Finally, the paper is concluded in Section 5.
2. Preliminaries and System Description
In this section, some definitions of fractional calculus are recalled and some useful lemmas are introduced.
Definition 1 [1]. The fractional integral (Riemann-Liouville integral), $D_{t_0,t}^{-\alpha}$, with fractional order, $\alpha\in\mathbb{R}^{+}$, of function $x(t)$ is defined as:

$$D_{t_0,t}^{-\alpha}x(t)=\frac{1}{\Gamma(\alpha)}\int_{t_0}^{t}(t-\tau)^{\alpha-1}x(\tau)\,d\tau \quad (1)$$

where $\Gamma(\cdot)$ is the gamma function, $\Gamma(\tau)=\int_{0}^{\infty}t^{\tau-1}e^{-t}\,dt$.
Definition 2 [1]. The Riemann-Liouville derivative of fractional order α of function $x(t)$ is given as:

$${}_{RL}D_{t_0,t}^{\alpha}x(t)=\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_{t_0}^{t}\frac{x(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau \quad (2)$$

where $n-1<\alpha<n\in\mathbb{Z}^{+}$.
Definition 3 [1]. The Caputo derivative of fractional order α of function $x(t)$ is defined as follows:

$${}_{C}D_{t_0,t}^{\alpha}x(t)=\frac{1}{\Gamma(n-\alpha)}\int_{t_0}^{t}\frac{x^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau \quad (3)$$

where $n-1<\alpha<n\in\mathbb{Z}^{+}$.
Note from Equations (2) and (3) that the fractional derivative of a function depends on all of its history, while the integer-order derivative depends only on its nearby points. That is, the next state of a fractional-order system depends not only upon its current state, but also upon all of its historical states starting from the initial time. As a result, a model described by fractional-order derivatives possesses memory and heredity and describes the states of neurons more precisely. In the following, the notation $D^{\alpha}$ is chosen as the Caputo derivative ${}_{C}D_{0,t}^{\alpha}$. For $x=(x_1,\dots,x_n)^{T}\in\mathbb{R}^{n}$, the norm is defined by $\|x\|=\sum_{i=1}^{n}|x_i|$.
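The memory property above can be made concrete numerically. The following sketch (the function name and step size are illustrative, not from the paper) approximates a fractional derivative by the Grünwald-Letnikov sum, which weights the entire history of the function and, for $0<\alpha<1$, agrees with the Caputo derivative whenever $f(0)=0$:

```python
import math

def gl_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t.
    For 0 < alpha < 1 it coincides with the Caputo derivative when f(0) = 0.
    Note the sum runs over ALL past samples of f: the operator has full memory."""
    n = int(round(t / h))
    acc = 0.0
    w = 1.0  # w_j = (-1)^j * binomial(alpha, j), built by the recurrence below
    for j in range(n + 1):
        acc += w * f(t - j * h)
        w *= 1.0 - (alpha + 1.0) / (j + 1.0)
    return acc / h ** alpha

# sanity check against a known closed form: the Caputo derivative of t^2
# of order alpha is 2 t^(2-alpha) / Gamma(3 - alpha)
alpha = 0.5
approx = gl_derivative(lambda s: s * s, 1.0, alpha)
exact = 2.0 / math.gamma(3.0 - alpha)
```

Unlike a finite-difference stencil for an integer-order derivative, shortening the history window here changes the answer, which is exactly the "memory" effect discussed above.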
Definition 4 [1]. The Mittag-Leffler function with two parameters is defined as:

$$E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(k\alpha+\beta)} \quad (4)$$

where $\alpha>0$, $\beta>0$, and $z\in\mathbb{C}$. When $\beta=1$, one has $E_{\alpha}(z)=E_{\alpha,1}(z)$; further, $E_{1,1}(z)=e^{z}$.
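For moderate arguments, the series in Definition 4 can be evaluated directly by truncation; a minimal sketch (truncation length chosen here for illustration, not prescribed by the paper):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=50):
    """Truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(k*alpha + beta).
    Adequate for small |z|; large |z| needs asymptotic or integral methods."""
    return sum(z ** k / math.gamma(k * alpha + beta) for k in range(terms))
```

Two classical special cases serve as checks: $E_{1,1}(z)=e^{z}$ and $E_{2,1}(z)=\cosh(\sqrt{z})$.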
Lemma 1. Let $V(t)$ be a continuous function on $[0,+\infty)$ and satisfy:

$$D^{\alpha}V(t)\le -\lambda V(t) \quad (5)$$

Then:

$$V(t)\le V(0)E_{\alpha}(-\lambda t^{\alpha}) \quad (6)$$

where $0<\alpha<1$ and λ is a positive constant.
Proof. It follows from Equation (5) that there exists a nonnegative function, $M(t)$, such that:

$$D^{\alpha}V(t)+M(t)=-\lambda V(t) \quad (7)$$

Taking the Laplace transform on Equation (7), then one has:

$$s^{\alpha}V(s)-s^{\alpha-1}V(0)+M(s)=-\lambda V(s) \quad (8)$$

where $V(s)=\mathcal{L}\{V(t)\}$ and $M(s)=\mathcal{L}\{M(t)\}$. It then follows that:

$$V(s)=\frac{s^{\alpha-1}V(0)-M(s)}{s^{\alpha}+\lambda} \quad (9)$$

Taking the inverse Laplace transform in Equation (9), one obtains:

$$V(t)=V(0)E_{\alpha}(-\lambda t^{\alpha})-M(t)\ast\left[t^{\alpha-1}E_{\alpha,\alpha}(-\lambda t^{\alpha})\right] \quad (10)$$

Note that both $t^{\alpha-1}E_{\alpha,\alpha}(-\lambda t^{\alpha})$ and $M(t)$ are nonnegative functions; it follows that:

$$V(t)\le V(0)E_{\alpha}(-\lambda t^{\alpha}) \quad (11)$$
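Lemma 1 can be checked numerically in the equality case $D^{\alpha}V=-\lambda V$, whose exact solution is $V(0)E_{\alpha}(-\lambda t^{\alpha})$. The sketch below (step size and horizon are illustrative choices, not from the paper) integrates the equation with an explicit Grünwald-Letnikov scheme and compares against the truncated Mittag-Leffler series:

```python
import math

def ml(z, alpha, beta=1.0, terms=50):
    # truncated Mittag-Leffler series, fine for small |z|
    return sum(z ** k / math.gamma(k * alpha + beta) for k in range(terms))

def solve_caputo_linear(lam, v0, alpha, h, steps):
    """Explicit Grunwald-Letnikov scheme for D^alpha v = -lam * v, v(0) = v0.
    Iterates on z = v - v0, for which the Caputo and GL derivatives agree."""
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    z, v = [0.0], [v0]
    for k in range(1, steps + 1):
        zk = -h ** alpha * lam * v[-1] - sum(w[j] * z[k - j] for j in range(1, k + 1))
        z.append(zk)
        v.append(zk + v0)
    return v

alpha, lam, h = 0.8, 1.0, 0.0025
v = solve_caputo_linear(lam, 1.0, alpha, h, 400)  # integrate to t = 1
bound = ml(-lam * 1.0 ** alpha, alpha)            # V(0) * E_alpha(-lam * t^alpha)
```

The numerical trajectory should track the Mittag-Leffler bound and decay below $V(0)$, which is the qualitative content of the lemma.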
Lemma 2 [1]. If $0<\alpha<2$, β is an arbitrary real number, μ is such that $\pi\alpha/2<\mu<\min\{\pi,\pi\alpha\}$ and $C$ is a real constant, then:

$$|E_{\alpha,\beta}(z)|\le\frac{C}{1+|z|},\qquad \mu\le|\arg(z)|\le\pi,\ |z|\ge 0 \quad (12)$$
Definition 5 [23]. A real $n\times n$ matrix, $A=(a_{ij})$, is said to be an M-matrix if $a_{ij}\le 0$, $i\ne j$, $i,j=1,2,\dots,n$, and all successive principal minors of A are positive.
Lemma 3 [23]. Let $A=(a_{ij})$ be an $n\times n$ matrix with non-positive off-diagonal elements. Then, the following statements are equivalent:
- (1) A is a nonsingular M-matrix;
- (2) there exists a vector, $\xi>0$, such that $A\xi>0$;
- (3) there exists a vector, $\xi>0$, such that $A^{T}\xi>0$.
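The M-matrix property can be tested mechanically from the definition (non-positive off-diagonal entries plus positive leading principal minors). A minimal sketch, using Laplace expansion for the determinant since only small matrices are involved (function names are illustrative):

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def is_nonsingular_m_matrix(a):
    """Check: off-diagonal entries non-positive and every leading principal minor positive."""
    n = len(a)
    if any(a[i][j] > 0 for i in range(n) for j in range(n) if i != j):
        return False
    return all(det([r[:k] for r in a[:k]]) > 0 for k in range(1, n + 1))
```

For example, [[2, -1], [-1, 2]] passes the test, while [[1, -2], [-2, 1]] fails because its determinant is negative, even though its off-diagonal entries are non-positive.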
The dynamical behavior of a continuous fractional-order cellular neural network can be described by the following system:

$$D^{\alpha}x_i(t)=-c_i x_i(t)+\sum_{j=1}^{n}a_{ij}f_j(x_j(t))+I_i,\qquad i=1,2,\dots,n \quad (13)$$

which can also be written in the following compact form:

$$D^{\alpha}x(t)=-Cx(t)+Af(x(t))+I \quad (14)$$

where $0<\alpha<1$, $t\ge 0$, n is the number of units in a neural network, $x(t)=(x_1(t),\dots,x_n(t))^{T}\in\mathbb{R}^{n}$ corresponds to the state vector at time t, $f(x(t))=(f_1(x_1(t)),\dots,f_n(x_n(t)))^{T}$ denotes the activation function of the neurons and $C=$ diag$(c_1,\dots,c_n)$, $c_i>0$, represents the rate with which the ith unit will reset its potential to the resting state in isolation when disconnected from the network and external inputs. The weight matrix, $A=(a_{ij})_{n\times n}$, is referred to as the connection of the jth neuron to the ith neuron at time t; $I=(I_1,\dots,I_n)^{T}$ is an external bias vector.
Here, in order to obtain the main results, the following assumption is presented first.
A1. The neuron activation functions, $f_j$, are Lipschitz continuous; that is, there exist positive constants, $L_j$, such that:

$$|f_j(u)-f_j(v)|\le L_j|u-v|,\qquad \forall u,v\in\mathbb{R},\ j=1,2,\dots,n \quad (15)$$
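A network of the form above can be simulated with the same Grünwald-Letnikov discretization used for scalar equations. The sketch below is a toy two-neuron instance with tanh activations (which satisfy A1 with $L_j=1$); all parameter values are hypothetical choices for illustration, not the paper's:

```python
import math

def simulate_network(x0, C, A, I, alpha=0.9, h=0.01, steps=400):
    """Explicit Grunwald-Letnikov scheme for D^alpha x = -C x + A tanh(x) + I.
    Iterates on z = x - x0 so the Caputo and GL derivatives agree."""
    n = len(x0)
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    z, x = [[0.0] * n], [list(x0)]
    for k in range(1, steps + 1):
        xp = x[-1]
        rhs = [-C[i] * xp[i] + sum(A[i][j] * math.tanh(xp[j]) for j in range(n)) + I[i]
               for i in range(n)]
        zk = [h ** alpha * rhs[i] - sum(w[j] * z[k - j][i] for j in range(1, k + 1))
              for i in range(n)]
        z.append(zk)
        x.append([zk[i] + x0[i] for i in range(n)])
    return x

# hypothetical 2-neuron parameters (not the values used in the paper's example)
traj = simulate_network([0.1, -0.2], C=[1.0, 1.0],
                        A=[[2.0, -0.1], [-5.0, 3.0]], I=[0.0, 0.0])
```

Because the activations are bounded and the self-feedback rates $c_i$ are positive, the trajectory remains bounded, as the model description predicts.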
3. Main Results
In this section, a sufficient condition for synchronization of fractional-order neural networks is derived.
Based on the drive-response concept, we refer to system Equation (13) as the drive cellular neural network and consider a response network characterized as follows:

$$D^{\alpha}y_i(t)=-c_i y_i(t)+\sum_{j=1}^{n}a_{ij}f_j(y_j(t))+I_i+u_i(t),\qquad i=1,2,\dots,n \quad (16)$$

or, equivalently:

$$D^{\alpha}y(t)=-Cy(t)+Af(y(t))+I+u(t) \quad (17)$$

where $y(t)=(y_1(t),\dots,y_n(t))^{T}$ is the state vector of the slave system, C, A, f and I are the same as in Equation (13) and $u(t)=(u_1(t),\dots,u_n(t))^{T}$ is the external control input to be designed later.
Defining the synchronization error signal as $e(t)=y(t)-x(t)$, the error dynamics between the master system Equation (14) and the slave system Equation (17) can be expressed by:

$$D^{\alpha}e(t)=-Ce(t)+Ag(e(t))+u(t) \quad (18)$$

where $g(e(t))=f(y(t))-f(x(t))$; therefore, synchronization between master system Equation (13) and slave system Equation (16) is equivalent to the asymptotic stability of error system Equation (18) with a suitable control law, $u(t)$. To this end, the external control input, $u(t)$, can be defined as $u(t)=-Ke(t)$, where $K=$ diag$(k_1,\dots,k_n)$ is the controller gain matrix. Then, error system Equation (18) can be rewritten as:

$$D^{\alpha}e_i(t)=-(c_i+k_i)e_i(t)+\sum_{j=1}^{n}a_{ij}g_j(e_j(t)),\qquad i=1,2,\dots,n \quad (19)$$

or can be described by the following compact form:

$$D^{\alpha}e(t)=-(C+K)e(t)+Ag(e(t)) \quad (20)$$
Theorem 1. For the master-slave fractional-order chaotic neural networks Equations (14) and (17), which satisfy Assumption 1, if the controller gain matrix, K, is such that $C+K-|A|L$ is an M-matrix, where $|A|=(|a_{ij}|)_{n\times n}$ and $L=$ diag$(L_1,\dots,L_n)$, then the synchronization between systems Equations (14) and (17) is achieved.
Proof. If $e_i(t)=0$, then $D^{\alpha}|e_i(t)|=0$. If $e_i(t)>0$, then:

$$D^{\alpha}|e_i(t)|=D^{\alpha}e_i(t)\le-(c_i+k_i)|e_i(t)|+\sum_{j=1}^{n}|a_{ij}|L_j|e_j(t)| \quad (21)$$

Similarly, if $e_i(t)<0$, then:

$$D^{\alpha}|e_i(t)|=-D^{\alpha}e_i(t)\le-(c_i+k_i)|e_i(t)|+\sum_{j=1}^{n}|a_{ij}|L_j|e_j(t)| \quad (22)$$

Therefore, it follows that:

$$D^{\alpha}|e_i(t)|\le-(c_i+k_i)|e_i(t)|+\sum_{j=1}^{n}|a_{ij}|L_j|e_j(t)| \quad (23)$$

Due to $C+K-|A|L$ being an M-matrix, it follows from Lemma 3 that there are a set of positive constants, $\xi_1,\dots,\xi_n$, such that:

$$\xi_i(c_i+k_i)-\sum_{j=1}^{n}\xi_j|a_{ji}|L_i>0,\qquad i=1,2,\dots,n \quad (24)$$

Define functions:

$$F_i(\varepsilon)=\xi_i(c_i+k_i-\varepsilon)-\sum_{j=1}^{n}\xi_j|a_{ji}|L_i,\qquad i=1,2,\dots,n \quad (25)$$

Obviously:

$$F_i(0)=\xi_i(c_i+k_i)-\sum_{j=1}^{n}\xi_j|a_{ji}|L_i>0 \quad (26)$$

Therefore, since the $F_i$ are continuous, there exists a constant, $\lambda>0$, such that:

$$F_i(\lambda)=\xi_i(c_i+k_i-\lambda)-\sum_{j=1}^{n}\xi_j|a_{ji}|L_i\ge 0,\qquad i=1,2,\dots,n \quad (27)$$

Consider an auxiliary function defined by $V(t)=\sum_{i=1}^{n}\xi_i|e_i(t)|$, where the $\xi_i$ are chosen as those in Equation (27). The Caputo derivative of $V(t)$ along the solution of system Equation (19) is:

$$D^{\alpha}V(t)\le\sum_{i=1}^{n}\xi_i\Big[-(c_i+k_i)|e_i(t)|+\sum_{j=1}^{n}|a_{ij}|L_j|e_j(t)|\Big]=-\sum_{i=1}^{n}\Big[\xi_i(c_i+k_i)-\sum_{j=1}^{n}\xi_j|a_{ji}|L_i\Big]|e_i(t)| \quad (28)$$

One can see that:

$$D^{\alpha}V(t)\le-\lambda\sum_{i=1}^{n}\xi_i|e_i(t)|=-\lambda V(t) \quad (29)$$

Based on Lemma 1, it yields:

$$V(t)\le V(0)E_{\alpha}(-\lambda t^{\alpha}) \quad (30)$$

That is:

$$\sum_{i=1}^{n}\xi_i|e_i(t)|\le V(0)E_{\alpha}(-\lambda t^{\alpha}) \quad (31)$$

Let $z=-\lambda t^{\alpha}$ in Lemma 2, $\beta=1$, $\mu=\pi\alpha/2$; it follows from Lemma 2 that there exists a real constant C, such that:

$$\|e(t)\|\le\frac{CV(0)}{\min_{i}\xi_i\,(1+\lambda t^{\alpha})} \quad (32)$$

which implies that $\|e(t)\|$ converges asymptotically to zero as t tends to infinity, namely, the fractional-order chaotic neural network Equation (14) is globally synchronized with Equation (17). □
Remark 1. Up to now, many results on the synchronization of integer-order chaotic neural networks have been obtained with the help of the traditional Lyapunov direct method; however, that method and those results are not applicable to fractional-order chaotic neural networks.
Remark 2. Chaos and synchronization of fractional-order neural networks were discussed in [19,20,21,22], but only through numerical simulations. Here, a theoretical proof is provided.
Remark 3. α-synchronization for fractional-order neural networks was considered in [18]; unfortunately, the results obtained there are not correct [24].
4. Numerical Example
An illustrative example is given to demonstrate the validity of the proposed controller.
Consider a fractional-order Hopfield neural chaotic network with neurons as follows [25]:

where , diag , , and . The system satisfies Assumption 1 with . As is shown in Figure 1, the fractional-order Hopfield neural network possesses chaotic behavior when .
Figure 1.
Chaotic behaviors of fractional-order Hopfield neural network Equation (34) with fractional-order, .
The controlled response fractional Hopfield neural network is designed as follows:
The controller gain matrix, K, is chosen as K = diag , and it can be easily verified that the corresponding matrix in Theorem 1 is an M-matrix. According to Theorem 1, the synchronization between Equations (34) and (35) can be achieved. In the numerical simulations, the initial states of the drive and response systems are taken as and , respectively.
Figure 2 shows the state synchronization trajectories of the drive and response systems; the synchronization error response is depicted in Figure 3.
Figure 2.
State synchronization trajectories of drive system Equation (34) and response system Equation (35).
Figure 3.
Synchronization error time response of drive system Equation (34) and response system Equation (35).
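A drive-response simulation of the kind shown in Figures 2 and 3 can be sketched with the same Grünwald-Letnikov discretization. The parameters below are hypothetical stand-ins (the paper's matrices for Equation (34) are given in Section 4 and [25]); the gains K are chosen large enough that Theorem 1's M-matrix condition plausibly holds for tanh activations, whose Lipschitz constants are 1:

```python
import math

def gl_weights(alpha, steps):
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def simulate_pair(x0, y0, C, A, K, alpha=0.9, h=0.01, steps=400):
    """Drive D^a x = -Cx + A tanh(x); response gets the control u = -K(y - x).
    Explicit Grunwald-Letnikov scheme on the increments from the initial states."""
    n = len(x0)
    w = gl_weights(alpha, steps)
    zx, zy = [[0.0] * n], [[0.0] * n]
    x, y = [list(x0)], [list(y0)]
    for k in range(1, steps + 1):
        xp, yp = x[-1], y[-1]
        fx = [-C[i] * xp[i] + sum(A[i][j] * math.tanh(xp[j]) for j in range(n))
              for i in range(n)]
        fy = [-C[i] * yp[i] + sum(A[i][j] * math.tanh(yp[j]) for j in range(n))
              - K[i] * (yp[i] - xp[i]) for i in range(n)]
        zxk = [h ** alpha * fx[i] - sum(w[j] * zx[k - j][i] for j in range(1, k + 1))
               for i in range(n)]
        zyk = [h ** alpha * fy[i] - sum(w[j] * zy[k - j][i] for j in range(1, k + 1))
               for i in range(n)]
        zx.append(zxk); zy.append(zyk)
        x.append([zxk[i] + x0[i] for i in range(n)])
        y.append([zyk[i] + y0[i] for i in range(n)])
    return x, y

# hypothetical 2-neuron system and gains (not the paper's values)
x, y = simulate_pair([0.1, -0.2], [0.6, 0.3],
                     C=[1.0, 1.0], A=[[2.0, -0.1], [-5.0, 3.0]], K=[5.0, 5.0])
e0 = abs(0.6 - 0.1) + abs(0.3 - (-0.2))               # initial error, 1-norm
eT = sum(abs(y[-1][i] - x[-1][i]) for i in range(2))  # error at the final time
```

With the feedback active, the error norm should shrink markedly over the simulated horizon, mirroring the error decay depicted in Figure 3.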