1. Introduction
Rhythms are ubiquitous and have been shown to play an important role in living organisms [1,2]. Many lines of research address the roles of rhythms, including the synchronous flashing of fireflies [3], pacemaker cells of the heart [4], synchronization of pulse-coupled oscillators [5,6], and synchronous neural activity propagating in scale-free networks, random networks, and cortical neural networks [7,8]. The central questions in this research concern the origin and the control of rhythms, that is, understanding what causes the rhythms and how the rhythms interact with each other. Especially in neural network dynamics, an important issue is to probe the activity-dependent plasticity of neural connections entwined with rhythms [9,10,11]. Such plasticity is claimed to be at the heart of the organization of behavior [12,13,14,15].
Classic models of neural networks have achieved state-of-the-art performance in classification [16,17,18], object detection [19,20,21], and instance segmentation [22,23,24]. One of the key structures in these network architectures is convolution. Many operators used for edge detection or pattern recognition are convolved with input images to obtain different types of derivative measurements in the feature-extracting process. Input images are usually structured with geometric features. Thus, convolution can be applied to extract features from input images, which are then processed by other parts of the network architecture to determine a maximal score for outcome prediction. Here, instead of addressing geometric feature extraction, our focus is on neural network models that receive and react to rhythms. Rhythms can be features of the inputs sent to neural networks and also features of the outputs activated by assemblies of neurons in neural networks. We search for a rule to derive operators from the rhythmic firing of neurons. Such operators act on the adjustment of neural connections and hence can be convolved with the input rhythm to extract features and to propagate features for rhythm formation.
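For context, the following minimal Python sketch (with an arbitrary one-dimensional operator and a toy rhythm, not drawn from any of the cited models) illustrates the template-matching role of convolution alluded to above: sliding an operator over a binary rhythm yields a feature that peaks where the rhythm is present.

```python
import numpy as np

# A binary rhythm: a neuron firing every 3 time steps (rhythm 3, phase 0).
signal = np.array([1 if t % 3 == 0 else 0 for t in range(12)])

# An illustrative operator tuned to "fire, then stay silent for two steps".
kernel = np.array([1.0, -0.5, -0.5])

# Sliding the operator over the rhythm measures, at each offset, how well the
# local pattern matches the operator (correlation, i.e., unflipped convolution).
feature = np.correlate(signal, kernel, mode="valid")
print(feature)  # peaks (value 1.0) exactly at the offsets where the rhythm starts
```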
Our point of departure is the decirculation process in neural network dynamics [25,26]. The decirculation process describes the adaptation of network structure that is crucial for evolutionary neural networks to proceed from one circulating state to another. Two sorts of plasticity operators have been derived to modify network structure for the decirculation process [27]. One measures synchronous neural activity in a circulating state, whereas the other measures self-sustaining neural activity in a circulating state. They meet the neurobiological concept called Hebbian synaptic plasticity [12,13,14]. Such plasticity operation reflects an internal control in neural network dynamics, enabling neurons to produce rhythmic, oscillatory activity [27]. Recently, the concept of the decirculation process has been extended to the concept of neural path pruning [28]. There, by setting the beginning and the ending of a desired flow of neural firing states, a pruning map can be constructed to indicate critical neuronal connections which have significant controllability in eliminating the flow of neural firing states. On this basis, a simulation result shows that a mobile robot guided by neural network dynamics can be exploited to change, with rewards and punishments, the neuronal connections determined to be critical by neural path pruning. The mobile robot can thereby achieve good fault-tolerant tracking behavior (adaptive matching of sensory inputs to motor outputs) [28].
Neural path pruning opens an alternative way to derive operators that convolve with signals and control neural network dynamics. This motivates us to formulate pruning maps with rhythmic neural firing, which may induce a framework of feedforward neural networks for rhythm formation.
2. Theoretical Framework
The model description follows. Let $\{0,1\}^n$ be the $n$-dimensional state space consisting of all vectors $x = (x_1, x_2, \ldots, x_n)^{T}$, with each component $x_i$ being 0 or 1. Consider a dynamical system of $n$ coupled neurons, which is modeled by the equation [25,26]:
$$x_i(t+1) = \begin{cases} f_i(x(t)) & \text{if } i \in U(t),\\ x_i(t) & \text{if } i \notin U(t), \end{cases} \qquad (1)$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^{T}$ is a vector of neural firing states at time $t$, with each component $x_i(t)$ representing the firing state of neuron $i$ at time $t$; $A = [a_{ij}]$ is a coupling weight matrix of the $n$ coupled neurons (be they agents in a recurrent or feedforward neural network), with each entry $a_{ij}$ representing the coupling weight underlying the connection from $j$ to $i$; $U(t)$ denotes an ensemble of neurons which adjust their states at time $t$; and $F_b = (f_1, f_2, \ldots, f_n)$ is a transition function whose $i$th component is defined by $f_i(x) = 1$ if $\sum_{j=1}^{n} a_{ij} x_j \geq b_i$ and $f_i(x) = 0$ otherwise, where $b_i$ is a firing threshold of neuron $i$ for $i = 1, 2, \ldots, n$ and the function $\mathbb{1}$ is the Heaviside function
$$\mathbb{1}(s) = \begin{cases} 1 & \text{if } s \geq 0,\\ 0 & \text{if } s < 0, \end{cases} \qquad (2)$$
so that $f_i(x) = \mathbb{1}\big(\sum_{j=1}^{n} a_{ij} x_j - b_i\big)$.
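For illustration, here is a minimal Python sketch of the update rule in Equations (1) and (2); the weight matrix, thresholds, and updating ensemble are arbitrary choices for this sketch, not taken from [25,26].

```python
import numpy as np

def step(x, A, b, U):
    """One update of Equation (1): each neuron in U applies the Heaviside
    transition of Equation (2); all other neurons keep their current state."""
    x_next = x.copy()
    for i in U:
        # Fire iff the weighted input reaches the firing threshold.
        x_next[i] = 1 if A[i] @ x - b[i] >= 0 else 0
    return x_next

# Illustrative 3-neuron example; weights and thresholds are arbitrary.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
b = np.array([0.5, 0.5, 0.5])
x = np.array([1, 0, 0])

for t in range(6):
    print(t, x)
    x = step(x, A, b, U=range(3))  # synchronous updating: U(t) = all neurons
```

In this toy example the three neurons settle into a period-3 loop, i.e., a circulating state of the dynamics.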
Consider a flow $F$ of neural firing states in $\{0,1\}^n$, that is, a prescribed finite sequence $x(t_0), x(t_0+1), \ldots, x(t_1)$ of states in $\{0,1\}^n$ for some $t_0 < t_1$. Specifically, if $x(t_1) = x(t_0)$ within $F$, then $F$ is said to be a loop of neural firing states in $\{0,1\}^n$. For each neuron and each time step of the flow, an integer is assigned according to the rule given in [28]. The matrix $M_F$ constructed from these integers is called the pruning map of $F$. The pruning map $M_F$ induces the linear functional $C \mapsto \langle M_F, C \rangle$, which is defined on the Hilbert space $\mathcal{M}_n$ of all real $n \times n$ matrices endowed with the Hilbert–Schmidt inner product $\langle A, B \rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_{ij}$ for each $A = [a_{ij}]$ and $B = [b_{ij}]$ in $\mathcal{M}_n$. Let $\langle x, y \rangle$ be the usual inner product of $x$ and $y$ in $\mathbb{R}^n$.
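As a small worked example (arbitrary matrices, purely illustrative), the Hilbert–Schmidt inner product above is the entrywise sum of products, which equals the trace form:

```python
import numpy as np

def hs_inner(A, B):
    """Hilbert-Schmidt inner product of real n-by-n matrices:
    the sum of entrywise products, equal to trace(A @ B.T)."""
    return float(np.sum(A * B))

# Illustrative matrices (arbitrary values).
A = np.array([[1.0, 2.0], [0.0, -1.0]])
B = np.array([[0.5, 0.0], [3.0, 1.0]])

print(hs_inner(A, B))       # 0.5 + 0 + 0 - 1 = -0.5
print(np.trace(A @ B.T))    # same value in trace form
```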
With these notions, a basic theorem reveals whether the neural connections identified by the pruning map are crucial to eliminating the flow of neural firing states in the dynamics of the neural network. It was stated in [27,28] as follows.
Theorem 1. Let $F$ be a flow of neural firing states in $\{0,1\}^n$. If the coupling weight matrix $A$ and the threshold vector $b$ satisfy the pruning inequality stated in [27,28], then for any initial neural firing state and any updating ensemble, the flow encoded by Equation (1) cannot behave in $F$.

Theorem 1 is based on the concept of neural path pruning. It points out a regulatory regime for network formation. To show this, consider a flow
$F$ and a coupling weight matrix $A$. If we choose a coupling operator $\Delta$ such that $\langle M_F, \Delta \rangle > 0$ (respectively, $\langle M_F, \Delta \rangle < 0$), then $\langle M_F, A + \Delta \rangle > \langle M_F, A \rangle$ (respectively, $\langle M_F, A + \Delta \rangle < \langle M_F, A \rangle$). Since the quantity on the other side of the pruning inequality is determined only by the flow $F$ and the threshold $b$, Theorem 1 coupled with the inequality in Equation (7) (respectively, the inequality in Equation (8)) suggests that, given the dynamical system in Equation (1), the change of the coupling weight matrix from $A$ to $A + \Delta$ may enhance (respectively, inhibit) the effect of flow elimination on $F$.
We go further to probe the pruning map with a desired flow of rhythmic neural firing. Each entry in the pruning map may be represented in terms of a quantity related to the firing rhythms of individual neurons. In so doing, we can directly extract information from the pruning map concerning how the rhythms interact with each other. Specifically, for feedforward neural networks, the pruning map provides a recipe to probe the interaction of rhythms between layers, which may reveal a consolidated rule to keep neurons firing in their specific rhythms consistently.
3. Main Results
Assume that neuron $i$ fires in rhythm $r_i$ for $i = 1, 2, \ldots, n$. Let $T = \mathrm{lcm}(r_1, r_2, \ldots, r_n)$, where $\mathrm{lcm}(r_1, r_2, \ldots, r_n)$ denotes the least common multiple of $r_1, r_2, \ldots, r_n$. For each $i$, let $p_i$ be a nonnegative integer less than $r_i$, denoting the initial phase of rhythmic neural firing. Thus, the flow with neurons firing in rhythm $r_i$ and in phase $p_i$ can be described as
$$F = \{x(t)\}_{t=0}^{T-1}, \qquad (10)$$
where $x_i(t) = 1$ if $t - \lfloor t / r_i \rfloor\, r_i = p_i$ for $i = 1, 2, \ldots, n$ and $t = 0, 1, \ldots, T-1$; otherwise, $x_i(t) = 0$. Here, the notation $\lfloor \cdot \rfloor$ denotes the largest integer less than or equal to its argument.
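A short Python sketch of the flow in Equation (10) follows; the rhythm and phase values are illustrative, and the flow simply marks neuron $i$ as firing at the time steps congruent to its phase modulo its rhythm.

```python
import numpy as np
from math import lcm

def rhythmic_flow(r, p):
    """Binary firing states of Equation (10): neuron i fires at the
    time steps congruent to its phase p[i] modulo its rhythm r[i]."""
    T = lcm(*r)                          # common period of all rhythms
    X = np.zeros((T, len(r)), dtype=int)
    for t in range(T):
        for i, (ri, pi) in enumerate(zip(r, p)):
            X[t, i] = 1 if t % ri == pi else 0
    return X

# Two neurons firing in rhythms 2 and 3 with phases 0 and 1.
print(rhythmic_flow(r=[2, 3], p=[0, 1]))
```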
Theorem 2 provides a matrix decomposition formula for the pruning map underlying rhythmic neural firing.
Theorem 2. Let $F$ be a flow of rhythmic neural firing states given in Equation (10). For each pair $(i, j)$, denote by $\ell_{ij}$ the least common multiple of $r_i$ and $r_j$, and by $g_{ij}$ the greatest common divisor of $r_i$ and $r_j$. Let $q_{ij}$ denote the nonnegative integer less than $g_{ij}$ such that the stated congruence between the phases $p_i$ and $p_j$ holds. Then, the pruning map can be formulated as the sum of a symmetric matrix and a non-symmetric matrix whose non-zero entries are given in terms of $T$, $\ell_{ij}$, $g_{ij}$, and $q_{ij}$.

Proof. To prove Theorem 2, note that if
, then
and
Denote by
for
. Since
if and only if
for
, the
-entry of the pruning map
is reduced by
To compute Equation (18) for each pair of
, we estimate the numbers of elements in the intersections
and
Denote by
. Consider the spanning class
(respectively,
) consisting of all
(respectively,
), where
k runs through all the integers. Let
and
Define the mapping
by
Then,
is a homomorphism of
onto
. We claim that the kernel of
, denoted by
, is
It is readily seen that
for each
, so
, which implies that
On the other hand, suppose
. Then,
for some
. Since
is the greatest common divisor of
and
, there exist
such that
This shows that
and hence
, proving Equation (24). Thus,
is isomorphic to
. Denote by
. Since
and the order of the kernel of
is
, we may explicitly rewrite
by
On the other hand, since
there exists a bijective mapping
on
with
such that
for each
and
. Recall that
for some
. By Equation (30), we have
and hence by Equation (31), we have
which implies that there exists
such that
for each
and
. Since
and
for each
and
, it follows from Equation (34) that
Since
, we may rewrite Equation (35) by
With the inequalities in Equations (34) and (36) established by the construction of
, we now proceed to compute Equations (19) and (20). Let
. For the case of
, since
we have
Furthermore, since
we have
Combining Equations (38) and (40) shows that
Consider a partition of
given by
Fix
. Since
for each
, we have
Furthermore, since
for each
, we have
By Equations (42), (44) and (46), we may rearrange the intersection of elements in Equation (41) by
Denote by
the number of elements in the set
E. Since
and
for each
, it follows from Equations (34) and (36) that
for each
. It is readily seen that if
then
. Conversely, if
, then
for some
. When
, it follows that
When
, it follows that
Since
, we conclude that there exists only one
such that
. Hence,
Thus, the equality in Equation (48) can be rewritten by
for each
. Combining Equations (47) and (53) gives
for the case of
. Now, we turn to the case of
. Since
and
, we have
Since Equations (40) and (44) hold also for the case
, they imply that
The partition in Equation (42) and the inclusions in Equations (55) and (56) together imply that
Fix
. Since
and
for each
, it follows from Equations (34) and (36) that
It is readily seen that if
then
. Conversely, if
, then
for some
. When
, it follows that
When
, it follows that
This implies that
. Hence,
Thus, the equality in Equation (58) can be rewritten by
Combining Equations (57) and (63) gives
for the case of
. Now, we split the argument into three cases.
Case 1. By Equations (18), (19), (20), (54), and (64), we have
Case 2. Then, exactly one of the following situations holds: Suppose that . Then, By Equations (18), (19), (20), (54), and (64), we have
Suppose that . Then, By Equations (18), (19), (20), (54), and (64), we have
Case 3. Suppose that . Then, By Equations (18), (19), (20), (54), and (64), we have
Suppose that . Then, By Equations (18), (19), (20), (54), and (64), we have
Suppose that and . Then, by Equations (18), (19), (20), (54), and (64), we have
This completes the proof of Theorem 2. □
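The counting behind Theorem 2 concerns how often two rhythms fire at the same (or consecutive) time steps. The following brute-force Python check is an illustrative aside, not the derivation above: it compares a direct count of coincidences with the elementary lcm/gcd prediction (within one common period, two rhythms meet exactly once if and only if their lagged phases agree modulo the greatest common divisor, and never otherwise).

```python
from math import gcd, lcm

def coincidences(r_i, p_i, r_j, p_j, lag=0):
    """Brute-force count, over one common period, of time steps t at which
    neuron i fires (t = p_i mod r_i) and neuron j fires at t + lag."""
    T = lcm(r_i, r_j)
    return sum(1 for t in range(T)
               if t % r_i == p_i and (t + lag) % r_j == p_j)

def predicted(r_i, p_i, r_j, p_j, lag=0):
    """Chinese-remainder prediction: one coincidence per common period
    iff the lagged phases agree modulo gcd(r_i, r_j)."""
    return 1 if (p_i + lag - p_j) % gcd(r_i, r_j) == 0 else 0

for lag in (0, 1):
    print(lag,
          coincidences(4, 1, 6, 3, lag),
          predicted(4, 1, 6, 3, lag))
```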
As an implication, we consider a feedforward neural network consisting of many layers of neurons firing in rhythm. Suppose that there are $n_k$ neurons in layer $k$ for each $k = 1, 2, \ldots, K$, where layer 1 denotes the input layer and layer $K$ denotes the output layer of the feedforward neural network. Thus, the coupling weight underlying the connection from the $\ell$th neuron in layer $k$ to the $m$th neuron in layer $k+1$ corresponds to the entry $a_{ij}$ of the coupling weight matrix $A$, where $j$ indexes the $\ell$th neuron in layer $k$ and $i$ indexes the $m$th neuron in layer $k+1$. Accordingly, the coupling weight matrix $A$ can be rewritten in a layer-by-layer block form.
Specifically, we may equip the input layer with a two-layer substructure, with the firing thresholds and the updating ensemble chosen so that each neuron $i$ in the input layer can fire in rhythm $r_i$ and in phase $p_i$. Neurons in the posterior layer receive signals from neurons in the prior layer and fire correspondingly if the incoming signals are greater than the firing thresholds. Note that since neurons in the prior layer fire in rhythm, the firing thresholds can be adjusted to generate certain rhythmic firing patterns or combinations of rhythmic firing patterns in the posterior layer.
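As a toy illustration of this sieving mechanism (with weights, thresholds, and rhythms chosen only for the example), a posterior neuron that receives two rhythmic inputs can be made to fire precisely at their coincidences:

```python
import numpy as np

# Two input neurons fire in rhythms 2 and 3; a posterior neuron fires only
# when both inputs fire together, which happens in rhythm lcm(2, 3) = 6.
T = 12
rhythms, phases = [2, 3], [0, 0]

x_in = np.array([[1 if t % r == p else 0 for r, p in zip(rhythms, phases)]
                 for t in range(T)])       # shape (T, 2)

w = np.array([1.0, 1.0])   # coupling weights onto the posterior neuron
b = 1.5                    # threshold sieves out the coincidence pattern

x_out = (x_in @ w >= b).astype(int)
print(x_out)               # fires at t = 0 and t = 6: rhythm 6, phase 0
```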
Figure 1 depicts three layers of such a feedforward neural network, in which the coupling weight matrix
A is defined by
the firing threshold is defined by
, the updating is defined initially by
, and
for each
Given an initial neural firing state
, the dynamical Equation (1) ensures that neurons
in the input layer fire in rhythm
and in phase
, respectively. In addition, since
neurons
in layer 2 fire in rhythm
and in phase
, respectively. This is feasible when the summation of incoming signals from neurons
to 6 (respectively, from neurons
to 7) is largest, and then the firing threshold of neuron 6 (respectively, neuron 7) can be adjusted to sieve out such a rhythmic firing pattern. A similar result holds for neuron 8 in layer 3, which fires in rhythm
and in phase
according to
Denote by
the resulting flow of rhythmic neural firing states from time
to
. According to the formula of the pruning map proved in Theorem 2, we see that
and hence
Then, by neural path pruning described in Equations (7) and (8), the coupling operator can be given by Equation (95), which maintains the rhythmic firing pattern of neuron 6 (respectively, neuron 7 and neuron 8) by keeping the largest summation of incoming signals from neurons
to 6 (respectively, from neurons
to 7 and from neuron 7 to 8). As illustrated by the orange, green, and purple boxes in
Figure 1, the coupling operator in Equation (95) fits well with the neurobiological concept of Hebbian synaptic plasticity, meaning that when neurons
(respectively, neurons
and neuron 7) repeatedly or persistently take part in firing neuron 6 (respectively, neuron 7 and neuron 8), the coupling weights between them are strengthened. Such activity-dependent plasticity between layers of neurons tends to keep neurons firing in rhythm in the posterior layer.
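The coupling operator itself is given by Equation (95); the following sketch is only a rough, illustrative stand-in for the Hebbian character described above, strengthening the weight from a presynaptic neuron that takes part in firing the postsynaptic neuron and weakening it otherwise. The update rule and the learning rate are assumptions for illustration, not the operator of Equation (95).

```python
import numpy as np

def hebbian_update(A, x_pre, x_post, eta=1.0):
    """Illustrative Hebbian-style adjustment: the weight from presynaptic
    neuron j to postsynaptic neuron i is increased when j's firing is
    followed by i's firing, and decreased when j fires but i stays silent."""
    # outer(post, pre): rows index postsynaptic neurons, columns presynaptic.
    delta = np.outer(x_post, x_pre) - np.outer(1 - x_post, x_pre)
    return A + eta * delta

# Presynaptic neurons 0 and 1 fire; postsynaptic neuron 0 fires, neuron 1 stays silent.
A = np.zeros((2, 3))
x_pre = np.array([1, 1, 0])
x_post = np.array([1, 0])
print(hebbian_update(A, x_pre, x_post))
# weights onto the firing neuron are strengthened; onto the silent neuron, weakened
```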
An experimental setting is defined as follows, showing the performance of rhythm formation in feedforward neural networks. The input rhythm and the layer architecture of feedforward neural networks are specified as in
Figure 1. Initially, the coupling weights from prior layers to posterior layers and the firing thresholds of neurons are selected randomly from the interval
. When a neuron in layer 2 or 3 fires dynamically, its firing threshold is adjusted by a positive value 1; otherwise, by a negative value
. In addition, the coupling weights are adjusted according to the coupling operators (with intensity 1 or
) derived from neural path pruning. The transition function of a neuron
i is selected to be the Heaviside function defined by Equation (2) or the sigmoid function
defined by
where
is a positive real number. For each round of simulation, we say that rhythm formation occurs in the feedforward neural network if the neuron in layer 3 can persistently fire in rhythm for a certain number of time steps. Specifically, a criterion is defined by
k times of neural firing (with the firing state
) in rhythm
r with
during time steps
to 2000.
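A rough end-to-end sketch of one simulation round is given below. The toy architecture, the sigmoid parameterization, the adjustment intensities, and the rhythm-checking criterion are illustrative assumptions standing in for the unspecified details, not the exact experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s, eps=0.5):
    # One common sigmoid parameterization (an assumption); eps > 0 sets the steepness.
    return 1.0 / (1.0 + np.exp(-s / eps))

def fires_in_rhythm(trace, r, k):
    """Criterion sketch (an assumed reading): the output neuron fires at least
    k times in a row at time steps spaced exactly r apart."""
    times = [t for t, v in enumerate(trace) if v == 1]
    best, run = (1, 1) if times else (0, 0)
    for a, b in zip(times, times[1:]):
        run = run + 1 if b - a == r else 1
        best = max(best, run)
    return best >= k

# Toy round: two input neurons (rhythms 2 and 3) feed one output neuron.
w = rng.uniform(0.0, 1.0, size=2)   # random initial coupling weights
theta = rng.uniform(0.0, 1.0)       # random initial firing threshold
trace = []
for t in range(60):
    x_in = np.array([1 if t % 2 == 0 else 0, 1 if t % 3 == 0 else 0])
    y = 1 if sigmoid(w @ x_in - theta) >= 0.5 else 0
    trace.append(y)
    # Threshold and weight adjustments with illustrative intensities +1 / -1.
    theta += 1.0 if y == 1 else -1.0
    w += (1.0 if y == 1 else -1.0) * x_in

print("output firing trace:", trace)
print("rhythm 6 formed:", fires_in_rhythm(trace, r=6, k=5))
```

Whether the criterion is met in this toy round depends on the random draw of weights and thresholds; the full experiment averages such rounds to estimate success rates.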
Figure 2 shows the success rates for rhythm formation in feedforward neural networks, with neurons activating via the Heaviside functions (red line,
rounds of simulation) or the sigmoid functions with a selected
(blue line, 1000 rounds of simulation per
). It reveals that rhythm formation can occur robustly in feedforward neural networks, even when the transition functions are changed from the Heaviside functions to the sigmoid functions.