Article

Formulation of Pruning Maps with Rhythmic Neural Firing

by Feng-Sheng Tsai, Yi-Li Shih, Chin-Tzong Pang and Sheng-Yi Hsu
1 Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung 40402, Taiwan
2 Research Center for Interneural Computing, China Medical University Hospital, Taichung 40447, Taiwan
3 Department of Information Management, Yuan Ze University, Chung-Li 32003, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(12), 1247; https://doi.org/10.3390/math7121247
Submission received: 22 November 2019 / Revised: 13 December 2019 / Accepted: 13 December 2019 / Published: 17 December 2019

Abstract: Rhythmic neural firing is thought to underlie the operation of neural function. This has motivated the construction of dynamical network models to investigate how rhythms interact with each other. Recently, an approach to neural path pruning has been proposed for dynamical network systems, in which critical neuronal connections are identified and adjusted according to pruning maps, enabling neurons to produce rhythmic, oscillatory activity in simulation. Here, we construct a class of homomorphisms based on the different rhythms of neural firing in network dynamics. Armed with these homomorphisms, the pruning maps can be expressed simply in terms of the interacting rhythms of neural firing, allowing a concrete analysis of the coupling operators that control network dynamics. This formulation of pruning maps is applied to probe the consolidation of rhythmic patterns between layers of neurons in feedforward neural networks.

1. Introduction

Rhythms are ubiquitous and have been shown to play an important role in living organisms [1,2]. Several lines of research have studied the roles of rhythms, including synchronous flashing of fireflies [3], pacemaker cells of the heart [4], synchronization of pulse-coupled oscillators [5,6], and synchronous neural activity propagating in scale-free networks, random networks, and cortical neural networks [7,8]. The central questions in this research concern the origin and the control of rhythms, that is, understanding what causes rhythms and how rhythms interact with each other. Especially in neural network dynamics, an important issue is to probe the activity-dependent plasticity of neural connections entwined with rhythms [9,10,11]. Such plasticity is claimed to be at the heart of the organization of behavior [12,13,14,15].
Classic models of neural networks have achieved state-of-the-art performance in classification [16,17,18], object detection [19,20,21], and instance segmentation [22,23,24]. One of the key structures in these network architectures is convolution. Many operators used for edge detection or pattern recognition are convolved with input images to obtain different types of derivative measurements in the feature-extraction process. Input images are usually structured with geometric features. Thus, convolution can be applied to extract features from input images, which are then processed by other parts of the network architecture to determine a maximal score for outcome prediction. Here, instead of geometric feature extraction, our focus is on neural network models that receive, and reactivate with, rhythms. Rhythms can be the features of inputs sent to neural networks and also the features of outputs activated by assemblies of neurons in neural networks. We search for a rule to derive operators from the rhythmic firing of neurons. Such operators act on the adjustment of neural connections and hence can be convolved with the input rhythm to extract features and to propagate them for rhythm formation.
Our point of departure is the decirculation process in neural network dynamics [25,26]. The decirculation process describes an adaptation of network structure that is crucial for evolutionary neural networks to proceed from one circulating state to another. Two sorts of plasticity operators have been derived to modify network structure for the decirculation process [27]: one measures synchronous neural activity in a circulating state, whereas the other measures self-sustaining neural activity in a circulating state. They accord with the neurobiological concept of Hebbian synaptic plasticity [12,13,14]. Such plasticity operation reflects an internal control in neural network dynamics, enabling neurons to produce rhythmic, oscillatory activity [27]. Recently, the concept of the decirculation process has been extended to the concept of neural path pruning [28]. There, by setting the beginning and the ending of a desired flow of neural firing states, a pruning map can be constructed to indicate critical neuronal connections, which have significant controllability in eliminating the flow of neural firing states. On this basis, a simulation showed that a mobile robot guided by neural network dynamics can change the neuronal connections determined as critical by neural path pruning, using rewards and punishments, and thereby achieve fault-tolerant tracking behavior (adaptive matching of sensory inputs to motor outputs) [28].
Neural path pruning paves an alternative way to derive operators that convolve with signals and control neural network dynamics. This motivates us to formulate pruning maps with rhythmic neural firing, which may induce a framework of feedforward neural networks for rhythm formation.

2. Theoretical Framework

The model description follows. Let $\{0,1\}^n$ be the $n$-dimensional state space consisting of all vectors $x = (x_1, x_2, \dots, x_n)$, with each component $x_i$ being 0 or 1. Consider a dynamical system of $n$ coupled neurons, which is modeled by the equation [25,26]:

$$x(t+1) = H_A(x(t), s(t)), \quad t = 0, 1, \dots, \qquad (1)$$

where $x(t) = (x_1(t), x_2(t), \dots, x_n(t)) \in \{0,1\}^n$ is a vector of neural firing states at time $t$, with each component $x_i(t)$ representing the firing state of neuron $i$ at time $t$; $A = (a_{ij}) \in M_n(\mathbb{R})$ is a coupling weight matrix of the $n$ coupled neurons (be they agents in a recurrent or feedforward neural network), with each entry $a_{ij}$ representing the coupling weight underlying the connection from neuron $j$ to neuron $i$; $s(t) \subseteq \{1, 2, \dots, n\}$ denotes an ensemble of neurons which adjust their states at time $t$; and $H_A(\cdot, s(t))$ is a transition function whose $i$th component is defined by

$$[H_A(x, s(t))]_i = \mathbb{1}\Bigl(\sum_{j=1}^{n} a_{ij} x_j - b_i\Bigr) \quad \text{if } i \in s(t), \qquad (2)$$

otherwise $[H_A(x, s(t))]_i = x_i$, where $b_i \in \mathbb{R}$ is the firing threshold of neuron $i$ for $i = 1, 2, \dots, n$ and the function $\mathbb{1}$ is the Heaviside function. Consider a flow $\Omega = [x^0, x^1, \dots, x^p]$ of neural firing states in $\{0,1\}^n$, where $p > 1$, $x^0, x^1, \dots, x^p \in \{0,1\}^n$, and $x^0 \neq x^i$ for some $i \in \{1, 2, \dots, p\}$. Specifically, if $x^0 = x^p$ within $\Omega$, then $\Omega$ is said to be a loop of neural firing states in $\{0,1\}^n$. For each $i, j = 1, 2, \dots, n$, an integer, denoted $c_{ij}(\Omega)$, is assigned according to the rule:
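As a concrete illustration, the following Python sketch implements the update rule of Equations (1) and (2). It is a minimal reading of the model, assuming the convention that the Heaviside function returns 1 at zero; all function names are ours:

    import numpy as np

    def step(x, A, b, s):
        """One step of Equation (1): only neurons in the ensemble s
        recompute their states via Equation (2); the rest hold."""
        x_new = x.copy()
        for i in s:
            # Heaviside of the weighted input minus the firing threshold
            x_new[i] = 1 if A[i] @ x - b[i] >= 0 else 0
        return x_new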
$$c_{ij}(\Omega) = x_j^0 (x_i^0 - x_i^1) + x_j^1 (x_i^1 - x_i^2) + \cdots + x_j^{p-1} (x_i^{p-1} - x_i^p). \qquad (3)$$

The matrix $C(\Omega) = (c_{ij}(\Omega))$ is called the pruning map of $\Omega$. Thus,

$$C(\Omega) = \begin{pmatrix} \sum_{m=1}^{p} x_1^{m-1}(x_1^{m-1} - x_1^m) & \cdots & \sum_{m=1}^{p} x_n^{m-1}(x_1^{m-1} - x_1^m) \\ \vdots & \ddots & \vdots \\ \sum_{m=1}^{p} x_1^{m-1}(x_n^{m-1} - x_n^m) & \cdots & \sum_{m=1}^{p} x_n^{m-1}(x_n^{m-1} - x_n^m) \end{pmatrix}. \qquad (4)$$

The pruning map $C(\Omega)$ induces the linear functional $A \mapsto \langle A, C(\Omega)\rangle$, which is defined on the Hilbert space $M_n(\mathbb{R})$ of all real $n \times n$ matrices endowed with the Hilbert–Schmidt inner product $\langle \cdot, \cdot \rangle$, where $\langle A, C(\Omega)\rangle = \sum_{i,j} a_{ij} c_{ij}(\Omega)$ for each $A = (a_{ij}) \in M_n(\mathbb{R})$. Let $\langle x, y \rangle$ be the usual inner product of $x$ and $y$ in $\mathbb{R}^n$. With these notions, a basic theorem determines whether the neural connections identified by the pruning map are crucial to eliminating the flow of neural firing states in the dynamics of the neural network. It was stated in [27,28] as follows.
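The pruning map and the induced linear functional are straightforward to compute. A sketch, reusing the NumPy import above (function names are ours):

    def pruning_map(flow):
        """Pruning map C(Omega) of a flow [x^0, ..., x^p], per Equation (3)."""
        X = np.asarray(flow, dtype=int)   # row m holds the state x^m
        D = X[:-1] - X[1:]                # row m-1 holds x^{m-1} - x^m
        return D.T @ X[:-1]               # C[i,j] = sum_m x_j^{m-1} (x_i^{m-1} - x_i^m)

    def hs_inner(A, C):
        """Hilbert-Schmidt inner product <A, C> = sum_{i,j} a_ij c_ij."""
        return float(np.sum(A * C))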
Theorem 1.
Let $\Omega = [x^0, x^1, \dots, x^p]$ be a flow of neural firing states in $\{0,1\}^n$. If $A \in M_n(\mathbb{R})$ and $b \in \mathbb{R}^n$ satisfy

$$\langle A, C(\Omega)\rangle > \langle b, x^0 - x^p\rangle, \qquad (5)$$

then for any initial neural firing state $x(0) \in \{0,1\}^n$ and any updating $s(t) \subseteq \{1, 2, \dots, n\}$, $t = 0, 1, \dots$, the flow $x(t)$ encoded by Equation (1) cannot behave as

$$x(T) = x^0,\ x(T+1) = x^1,\ \dots,\ x(T+p) = x^p \qquad (6)$$

for any $T = 0, 1, \dots$.
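Theorem 1 lends itself to a numerical sanity check: pick a flow, draw $A$ and $b$, and whenever the inequality in Equation (5) holds, verify by simulation that the flow segment never appears. A sketch under these assumptions, reusing step and pruning_map from above (the random updating is one arbitrary choice of ours; the caller is responsible for checking Equation (5) via hs_inner first):

    from collections import deque

    def never_realizes(flow, A, b, steps=1000, seed=0):
        """Simulate Equation (1) from x(0) = x^0 with random updating and
        report whether the window x(T), ..., x(T+p) ever equals the flow."""
        rng = np.random.default_rng(seed)
        n, p = len(flow[0]), len(flow) - 1
        x = np.array(flow[0])
        window = deque([x.copy()], maxlen=p + 1)
        for _ in range(steps):
            s = [i for i in range(n) if rng.random() < 0.5]
            x = step(x, A, b, s)
            window.append(x.copy())
            if len(window) == p + 1 and all(
                    np.array_equal(w, f) for w, f in zip(window, flow)):
                return False    # the forbidden flow occurred
        return True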
Theorem 1 is based on the concept of neural path pruning. It points out a regulatory regime for network formation. To see this, consider a flow $\Omega = [x^0, x^1, \dots, x^p]$ and a coupling weight matrix $A = (a_{ij}) \in M_n(\mathbb{R})$. If we choose a coupling operator $D = (d_{ij}) \in M_n(\mathbb{R})$ such that $\langle D, C(\Omega)\rangle \geq 0$ (respectively, $\langle D, C(\Omega)\rangle \leq 0$), then

$$\langle A + D, C(\Omega)\rangle \geq \langle A, C(\Omega)\rangle \qquad (7)$$

$$(\text{resp.,}\ \langle A + D, C(\Omega)\rangle \leq \langle A, C(\Omega)\rangle). \qquad (8)$$

Since the quantity $\langle b, x^0 - x^p\rangle$ is determined only by the flow $\Omega$ and the threshold $b$, Theorem 1 coupled with the inequality in Equation (7) (respectively, the inequality in Equation (8)) suggests that, given the dynamical system

$$x(t+1) = H_{A+D}(x(t), s(t)), \quad t = 0, 1, \dots, \qquad (9)$$

the change of the coupling weight matrix from $A$ to $A + D$ may enhance (respectively, inhibit) the effect of flow elimination on $\Omega$.
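One simple way to realize both regimes is to take $D$ proportional to the pruning map itself, since $\langle C(\Omega), C(\Omega)\rangle \geq 0$. This choice is ours for illustration, not the paper's prescription:

    flow = [np.array([1, 0]), np.array([0, 1]), np.array([1, 0])]   # a toy loop
    eps = 0.1                  # step size; our arbitrary choice
    C = pruning_map(flow)
    D_enhance = eps * C        # <D, C(Omega)> = eps*||C||^2 >= 0: promotes elimination
    D_inhibit = -eps * C       # <D, C(Omega)> <= 0: protects the flow Omega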
We go further to probe the pruning map with a desired flow of rhythmic neural firing. Each entry in the pruning map may be represented in terms of a quantity related to the firing rhythms of individual neurons. In so doing, we can directly extract information from the pruning map concerning how the rhythms interact with each other. Specifically, for feedforward neural networks, the pruning map provides a recipe to probe the interaction of rhythms between layers, which may reveal a consolidated rule to keep neurons firing in their specific rhythms consistently.

3. Main Results

Assume that neuron $i$ fires in rhythm $r_i \in \mathbb{N}$ for $i = 1, 2, \dots, n$. Let $p = \operatorname{lcm}(r_1, r_2, \dots, r_n)$, where $\operatorname{lcm}(r_1, r_2, \dots, r_n)$ denotes the least common multiple of $r_1, r_2, \dots, r_n$. For each $i = 1, 2, \dots, n$, let $m_i$ be a nonnegative integer less than $r_i$, denoting the initial phase of rhythmic neural firing. Thus, the flow with neurons firing in rhythms $r_1, r_2, \dots, r_n$ and in phases $m_1, m_2, \dots, m_n$ can be described as

$$\Omega = [x^0, x^1, \dots, x^p], \qquad (10)$$

where $x_i^m = 1$ if $m = m_i + k r_i$ for $k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor$ and $i = 1, 2, \dots, n$; otherwise, $x_i^m = 0$. Here, the notation $\lfloor r \rfloor$ denotes the largest integer less than or equal to $r \in \mathbb{R}$. Theorem 2 gives a matrix decomposition formula for the pruning map underlying rhythmic neural firing.
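The flow of Equation (10) is easy to generate. A sketch (math.lcm requires Python 3.9+; names are ours):

    from math import lcm

    def rhythmic_flow(r, m):
        """Flow [x^0, ..., x^p] of Equation (10): x_i^t = 1 iff
        t = m_i + k*r_i for some k >= 0, where p = lcm(r_1, ..., r_n)."""
        n, p = len(r), lcm(*r)
        return [np.array([1 if (t - m[i]) % r[i] == 0 else 0
                          for i in range(n)])
                for t in range(p + 1)]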
Theorem 2.
Let $\Omega = [x^0, x^1, \dots, x^p]$ be the flow of rhythmic neural firing states given in Equation (10). For each $i, j = 1, 2, \dots, n$, denote by $\operatorname{lcm}(r_i, r_j)$ the least common multiple of $r_i$ and $r_j$, and by $\gcd(r_i, r_j)$ the greatest common divisor of $r_i$ and $r_j$. Let $[m_i]_{\gcd(r_i, r_j)}$ denote the nonnegative integer less than $\gcd(r_i, r_j)$ such that

$$[m_i]_{\gcd(r_i, r_j)} \equiv m_i \pmod{\gcd(r_i, r_j)}. \qquad (11)$$

Then, the pruning map $C(\Omega)$ can be formulated as $C(\Omega) = \hat{C}(\Omega) + \check{C}(\Omega)$, where $\hat{C}(\Omega) = (\hat{c}_{ij}(\Omega))$ is a symmetric matrix whose non-zero entries are given by

$$\hat{c}_{ij}(\Omega) = \hat{c}_{ji}(\Omega) = -p/\operatorname{lcm}(r_i, r_j) \quad \text{if } \gcd(r_i, r_j) = 2 \text{ and } [m_j + 1]_{\gcd(r_i, r_j)} = [m_i]_{\gcd(r_i, r_j)}, \qquad (12)$$

$$\hat{c}_{ij}(\Omega) = \hat{c}_{ji}(\Omega) = p/\operatorname{lcm}(r_i, r_j) \quad \text{if } \gcd(r_i, r_j) = 2 \text{ and } [m_j]_{\gcd(r_i, r_j)} = [m_i]_{\gcd(r_i, r_j)}, \qquad (13)$$

and $\check{C}(\Omega) = (\check{c}_{ij}(\Omega))$ is a non-symmetric matrix whose non-zero entries are given by

$$\check{c}_{ij}(\Omega) = -p/\operatorname{lcm}(r_i, r_j) \quad \text{if } \gcd(r_i, r_j) > 2 \text{ and } [m_j + 1]_{\gcd(r_i, r_j)} = [m_i]_{\gcd(r_i, r_j)}, \qquad (14)$$

$$\check{c}_{ij}(\Omega) = \check{c}_{ji}(\Omega) = p/\operatorname{lcm}(r_i, r_j) \quad \text{if } \gcd(r_i, r_j) > 2 \text{ and } [m_j]_{\gcd(r_i, r_j)} = [m_i]_{\gcd(r_i, r_j)}. \qquad (15)$$
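Before turning to the proof, the decomposition can be spot-checked numerically: the entries predicted by Theorem 2 should agree with the direct computation via Equation (3). A sketch of the predicted entries, with both parts folded into one matrix (names are ours; the sign conventions follow Equations (12)-(15) as restored above):

    from math import gcd

    def theorem2_C(r, m):
        """C(Omega) as predicted by Theorem 2 for rhythms r and phases m."""
        n, p = len(r), lcm(*r)
        C = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(n):
                g, l = gcd(r[i], r[j]), lcm(r[i], r[j])
                if g >= 2 and (m[j] + 1) % g == m[i] % g:
                    C[i, j] -= p // l     # Equations (12) and (14)
                if g >= 2 and m[j] % g == m[i] % g:
                    C[i, j] += p // l     # Equations (13) and (15)
        return C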
Proof. 
To prove Theorem 2, note that if $m_j > 0$, then $r_j - 1 \geq m_j > 0$ and

$$(\operatorname{lcm}(r_1, r_2, \dots, r_n) - 1)/r_j \geq (p - m_j)/r_j \geq (\operatorname{lcm}(r_1, r_2, \dots, r_n) - r_j + 1)/r_j. \qquad (16)$$

Thus, we have

$$\lfloor (p - m_j)/r_j \rfloor = \begin{cases} \operatorname{lcm}(r_1, r_2, \dots, r_n)/r_j - 1 & \text{if } m_j \neq 0, \\ \operatorname{lcm}(r_1, r_2, \dots, r_n)/r_j & \text{if } m_j = 0. \end{cases} \qquad (17)$$

Denote $k_j = \operatorname{lcm}(r_1, r_2, \dots, r_n)/r_j - 1$ for $j = 1, 2, \dots, n$. Since $x_j^m = 1$ if and only if $m = m_j + k r_j$ for $k = 0, 1, \dots, \lfloor (p - m_j)/r_j \rfloor$, the $ij$-entry of the pruning map $C(\Omega)$ reduces to

$$c_{ij}(\Omega) = \sum_{m=1}^{p} x_j^{m-1} (x_i^{m-1} - x_i^m) = \sum_{k=0}^{k_j} x_j^{m_j + k r_j} \bigl(x_i^{m_j + k r_j} - x_i^{m_j + k r_j + 1}\bigr) = \sum_{k=0}^{k_j} \bigl(x_i^{m_j + k r_j} - x_i^{m_j + k r_j + 1}\bigr). \qquad (18)$$
To compute Equation (18) for each pair $i, j = 1, 2, \dots, n$, we estimate the numbers of elements in the intersections

$$\{m_j + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} \qquad (19)$$

and

$$\{m_j + k r_j + 1;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\}. \qquad (20)$$

Denote $g_{ij} = \gcd(r_i, r_j)$. Consider the spanning class $\langle a \rangle_{r_i}$ (respectively, $\langle a \rangle_{g_{ij}}$) consisting of all $a + k r_i$ (respectively, $a + k g_{ij}$), where $k$ runs through all the integers. Let

$$\mathbb{Z}_{r_i} = \{\langle 0 \rangle_{r_i}, \langle 1 \rangle_{r_i}, \dots, \langle r_i - 1 \rangle_{r_i}\} \qquad (21)$$

and

$$\mathbb{Z}_{g_{ij}} = \{\langle 0 \rangle_{g_{ij}}, \langle 1 \rangle_{g_{ij}}, \dots, \langle g_{ij} - 1 \rangle_{g_{ij}}\}. \qquad (22)$$

Define the mapping $\Psi : \mathbb{Z}_{r_i} \to \mathbb{Z}_{g_{ij}}$ by

$$\Psi(\langle a \rangle_{r_i}) = \langle a \rangle_{g_{ij}}. \qquad (23)$$
Then, $\Psi$ is a homomorphism of $\mathbb{Z}_{r_i}$ onto $\mathbb{Z}_{g_{ij}}$. We claim that the kernel of $\Psi$, denoted $\operatorname{Ker} \Psi$, is

$$\operatorname{Ker} \Psi = \{\langle k r_j \rangle_{r_i};\ k \in \mathbb{Z}\}. \qquad (24)$$

It is readily seen that $k r_j \equiv 0 \pmod{g_{ij}}$ for each $k \in \mathbb{Z}$, so $\langle k r_j \rangle_{g_{ij}} = \langle 0 \rangle_{g_{ij}}$, which implies that

$$\operatorname{Ker} \Psi \supseteq \{\langle k r_j \rangle_{r_i};\ k \in \mathbb{Z}\}. \qquad (25)$$

On the other hand, suppose $\langle a \rangle_{r_i} \in \operatorname{Ker} \Psi$. Then, $a = \delta g_{ij}$ for some $\delta \in \mathbb{Z}$. Since $g_{ij}$ is the greatest common divisor of $r_i$ and $r_j$, there exist $\alpha, \beta \in \mathbb{Z}$ such that

$$g_{ij} = \alpha r_i + \beta r_j. \qquad (26)$$

This shows that

$$a = \delta \alpha r_i + \delta \beta r_j, \qquad (27)$$

and hence $a \equiv \delta \beta r_j \pmod{r_i}$, proving Equation (24). Thus, $\mathbb{Z}_{r_i} / \{\langle k r_j \rangle_{r_i};\ k \in \mathbb{Z}\}$ is isomorphic to $\mathbb{Z}_{g_{ij}}$. Denote $l_{ij} = \operatorname{lcm}(r_i, r_j)$. Since

$$l_{ij} = r_i r_j / g_{ij} \quad \text{and} \quad l_{ij} \mid p, \qquad (28)$$

and the order of the kernel of $\Psi$ is $|\operatorname{Ker} \Psi| = |\mathbb{Z}_{r_i}| / |\mathbb{Z}_{g_{ij}}| = r_i / g_{ij}$, we may explicitly rewrite $\operatorname{Ker} \Psi$ as

$$\operatorname{Ker} \Psi = \{\langle 0 \rangle_{r_i}, \langle r_j \rangle_{r_i}, \langle 2 r_j \rangle_{r_i}, \dots, \langle (r_i/g_{ij} - 1) r_j \rangle_{r_i}\} = \{\langle l_{ij} \rangle_{r_i}, \langle l_{ij} + r_j \rangle_{r_i}, \langle l_{ij} + 2 r_j \rangle_{r_i}, \dots, \langle l_{ij} + (r_i/g_{ij} - 1) r_j \rangle_{r_i}\} = \cdots = \{\langle p - l_{ij} \rangle_{r_i}, \langle p - l_{ij} + r_j \rangle_{r_i}, \langle p - l_{ij} + 2 r_j \rangle_{r_i}, \dots, \langle p - l_{ij} + (r_i/g_{ij} - 1) r_j \rangle_{r_i}\}. \qquad (29)$$
On the other hand, since

$$\operatorname{Ker} \Psi = \{\langle 0 \rangle_{r_i}, \langle g_{ij} \rangle_{r_i}, \langle 2 g_{ij} \rangle_{r_i}, \dots, \langle (r_i/g_{ij} - 1) g_{ij} \rangle_{r_i}\}, \qquad (30)$$

there exists a bijective mapping $\sigma$ on $\{0, 1, \dots, r_i/g_{ij} - 1\}$ with $\sigma(0) = 0$ such that

$$\langle h l_{ij} + k r_j \rangle_{r_i} = \langle \sigma(k) g_{ij} \rangle_{r_i} \qquad (31)$$

for each $h = 0, 1, \dots, p/l_{ij} - 1$ and $k = 0, 1, \dots, r_i/g_{ij} - 1$. Recall that $m_j = [m_j]_{g_{ij}} + \delta g_{ij}$ for some $\delta \in \mathbb{Z}$. By Equation (30), we have

$$\langle m_j \rangle_{r_i} + \operatorname{Ker} \Psi = \langle [m_j]_{g_{ij}} + \delta g_{ij} \rangle_{r_i} + \operatorname{Ker} \Psi = \langle [m_j]_{g_{ij}} \rangle_{r_i} + \operatorname{Ker} \Psi, \qquad (32)$$

and hence by Equation (31), we have

$$\langle m_j + h l_{ij} + k r_j \rangle_{r_i} = \langle [m_j]_{g_{ij}} + \sigma(k) g_{ij} \rangle_{r_i}, \qquad (33)$$

which implies that there exists $\lambda \in \mathbb{Z}$ such that

$$m_j + h l_{ij} + k r_j = [m_j]_{g_{ij}} + \sigma(k) g_{ij} + \lambda r_i \qquad (34)$$

for each $h = 0, 1, \dots, p/l_{ij} - 1$ and $k = 0, 1, \dots, r_i/g_{ij} - 1$. Since $h l_{ij} \leq m_j + h l_{ij} + k r_j < (h+1) l_{ij}$ and $0 \leq [m_j]_{g_{ij}} + \sigma(k) g_{ij} < r_i$ for each $h = 0, 1, \dots, p/l_{ij} - 1$ and $k = 0, 1, \dots, r_i/g_{ij} - 1$, it follows from Equation (34) that

$$(h l_{ij} - r_i)/r_i < \bigl(m_j + h l_{ij} + k r_j - [m_j]_{g_{ij}} - \sigma(k) g_{ij}\bigr)/r_i = \lambda < (h+1) l_{ij}/r_i. \qquad (35)$$

Since $\lambda \in \mathbb{Z}$, we may rewrite Equation (35) as

$$h l_{ij}/r_i \leq \lambda < (h+1) l_{ij}/r_i. \qquad (36)$$
With Equations (34) and (36) established by the construction of $\Psi$, we now proceed to compute Equations (19) and (20). Let $\tau \in \{0, 1\}$. For the case $m_i \neq 0$, since

$$\lfloor (p - m_i)/r_i \rfloor = p/r_i - 1 = (p/l_{ij} - 1 + 1)\, l_{ij}/r_i - 1, \qquad (37)$$

we have

$$\{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} = \bigcup_{h=0}^{p/l_{ij} - 1} \{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\}. \qquad (38)$$

Furthermore, since

$$k_j r_j = p - r_j = (p/l_{ij} - 1)\, l_{ij} + (r_i/g_{ij} - 1)\, r_j, \qquad (39)$$

we have

$$\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} = \bigcup_{h=0}^{p/l_{ij} - 1} \{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\}. \qquad (40)$$

Combining Equations (38) and (40) shows that

$$\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} = \Bigl(\bigcup_{h=0}^{p/l_{ij} - 1} \{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\}\Bigr) \cap \Bigl(\bigcup_{h=0}^{p/l_{ij} - 1} \{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\}\Bigr). \qquad (41)$$
Consider a partition of $\{\tau, \tau + 1, \dots, \tau + p - 1\}$ given by

$$\{\tau, \tau + 1, \dots, \tau + p - 1\} = \bigcup_{h=0}^{p/l_{ij} - 1} \{h l_{ij} + \tau, h l_{ij} + 1 + \tau, \dots, (h+1) l_{ij} - 1 + \tau\}. \qquad (42)$$

Fix $h \in \{0, 1, \dots, p/l_{ij} - 1\}$. Since

$$0 \leq m_j \leq m_j + k r_j \leq m_j + (r_i/g_{ij} - 1) r_j = m_j + l_{ij} - r_j \leq l_{ij} - 1 \qquad (43)$$

for each $k = 0, 1, \dots, r_i/g_{ij} - 1$, we have

$$\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \subseteq \{h l_{ij} + \tau, h l_{ij} + 1 + \tau, \dots, (h+1) l_{ij} - 1 + \tau\}. \qquad (44)$$

Furthermore, since

$$1 + h l_{ij} \leq m_i + h l_{ij} \leq m_i + \lambda_h r_i \leq m_i + \bigl((h+1) l_{ij}/r_i - 1\bigr) r_i = m_i + (h+1) l_{ij} - r_i \leq (h+1) l_{ij} - 1 \qquad (45)$$

for each $\lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1$, we have

$$\{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\} \subseteq \{h l_{ij} + 1, h l_{ij} + 2, \dots, (h+1) l_{ij} - 1\} = \{h l_{ij}, h l_{ij} + 1, \dots, (h+1) l_{ij} - 1\} \cap \{h l_{ij} + 1, h l_{ij} + 2, \dots, (h+1) l_{ij}\} \subseteq \{h l_{ij} + \tau, h l_{ij} + 1 + \tau, \dots, (h+1) l_{ij} - 1 + \tau\}. \qquad (46)$$

By Equations (42), (44), and (46), we may rearrange the intersection in Equation (41) as

$$\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} = \bigcup_{h=0}^{p/l_{ij} - 1} \bigl(\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\}\bigr). \qquad (47)$$
Denote by $|E|$ the number of elements in the set $E$. Since $0 < m_i < r_i$ and $0 \leq [m_j]_{g_{ij}} + \sigma(k) g_{ij} < r_i$ for each $k = 0, 1, \dots, r_i/g_{ij} - 1$, it follows from Equations (34) and (36) that

$$\bigl|\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, \dots, (h+1) l_{ij}/r_i - 1\}\bigr| = \Bigl|\bigcup_{k=0}^{r_i/g_{ij} - 1} \{[m_j]_{g_{ij}} + \tau + \sigma(k) g_{ij}\} \cap \{m_i\}\Bigr| = \bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{m_i\}\bigr| \qquad (48)$$

for each $h = 0, 1, \dots, p/l_{ij} - 1$. It is readily seen that if

$$\bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{m_i\}\bigr| = 1, \qquad (49)$$

then $[m_i]_{g_{ij}} = [[m_j]_{g_{ij}} + \tau]_{g_{ij}} = [m_j + \tau]_{g_{ij}}$. Conversely, if $[m_i]_{g_{ij}} = [m_j + \tau]_{g_{ij}}$, then $m_i = [m_j]_{g_{ij}} + \tau + \delta g_{ij}$ for some $\delta \in \mathbb{Z}$. When $\tau = 0$, it follows that

$$0 \leq [m_j]_{g_{ij}} + \tau + \delta g_{ij} < r_i \quad \text{holds only for}\ \delta = 0, 1, \dots, r_i/g_{ij} - 1. \qquad (50)$$

When $\tau = 1$, it follows that

$$0 < [m_j]_{g_{ij}} + \tau + \delta g_{ij} \leq r_i \quad \text{holds only for}\ \delta = 0, 1, \dots, r_i/g_{ij} - 1. \qquad (51)$$

Since $0 < m_i < r_i$, we conclude that there exists exactly one $\delta \in \{0, 1, \dots, r_i/g_{ij} - 1\}$ such that $m_i = [m_j]_{g_{ij}} + \tau + \delta g_{ij}$. Hence,

$$\bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{m_i\}\bigr| = 1 \quad \text{if}\ [m_i]_{g_{ij}} = [m_j + \tau]_{g_{ij}}. \qquad (52)$$

Thus, the equality in Equation (48) can be rewritten as

$$\bigl|\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{m_i + \lambda_h r_i;\ \lambda_h = h l_{ij}/r_i, \dots, (h+1) l_{ij}/r_i - 1\}\bigr| = \begin{cases} 1 & \text{if}\ [m_i]_{g_{ij}} = [m_j + \tau]_{g_{ij}}, \\ 0 & \text{otherwise} \end{cases} \qquad (53)$$

for each $h = 0, 1, \dots, p/l_{ij} - 1$. Combining Equations (47) and (53) gives

$$\bigl|\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\}\bigr| = \begin{cases} p/l_{ij} & \text{if}\ [m_i]_{g_{ij}} = [m_j + \tau]_{g_{ij}}, \\ 0 & \text{otherwise} \end{cases} \qquad (54)$$
for the case $m_i \neq 0$. Now, we turn to the case $m_i = 0$. Since $\lfloor (p - m_i)/r_i \rfloor = p/r_i$ and $\tau \in \{0, 1\}$, we have

$$\{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} = \Bigl(\bigcup_{h=0}^{p/l_{ij} - 1} \{(\lambda_h + \tau) r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\}\Bigr) \cup \{(1 - \tau) p\} \subseteq \Bigl(\bigcup_{h=0}^{p/l_{ij} - 1} \{h l_{ij} + \tau, h l_{ij} + 1 + \tau, \dots, (h+1) l_{ij} - 1 + \tau\}\Bigr) \cup \{(1 - \tau) p\}. \qquad (55)$$

Since Equations (40) and (44) hold also for the case $m_i = 0$, they imply that

$$\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} = \bigcup_{h=0}^{p/l_{ij} - 1} \{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \subseteq \bigcup_{h=0}^{p/l_{ij} - 1} \{h l_{ij} + \tau, h l_{ij} + 1 + \tau, \dots, (h+1) l_{ij} - 1 + \tau\}. \qquad (56)$$

The partition in Equation (42) and the inclusions in Equations (55) and (56) together imply that

$$\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\} = \bigcup_{h=0}^{p/l_{ij} - 1} \bigl(\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{(\lambda_h + \tau) r_i;\ \lambda_h = h l_{ij}/r_i, h l_{ij}/r_i + 1, \dots, (h+1) l_{ij}/r_i - 1\}\bigr). \qquad (57)$$

Fix $h \in \{0, 1, \dots, p/l_{ij} - 1\}$. Since $\tau r_i \in \{0, r_i\}$ and $0 \leq [m_j]_{g_{ij}} + \sigma(k) g_{ij} < r_i$ for each $k = 0, 1, \dots, r_i/g_{ij} - 1$, it follows from Equations (34) and (36) that

$$\bigl|\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{(\lambda_h + \tau) r_i;\ \lambda_h = h l_{ij}/r_i, \dots, (h+1) l_{ij}/r_i - 1\}\bigr| = \Bigl|\bigcup_{k=0}^{r_i/g_{ij} - 1} \{[m_j]_{g_{ij}} + \tau + \sigma(k) g_{ij}\} \cap \{\tau r_i\}\Bigr| = \bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{\tau r_i\}\bigr|. \qquad (58)$$
It is readily seen that if

$$\bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{\tau r_i\}\bigr| = 1, \qquad (59)$$

then $[m_j + \tau]_{g_{ij}} = [[m_j]_{g_{ij}} + \tau]_{g_{ij}} = [\tau r_i]_{g_{ij}} = 0$. Conversely, if $[m_j + \tau]_{g_{ij}} = 0$, then $[m_j]_{g_{ij}} + \tau = \delta g_{ij}$ for some $\delta \in \mathbb{Z}$. When $\tau = 0$, it follows that

$$0 \leq [m_j]_{g_{ij}} = \delta g_{ij} < g_{ij} \quad \text{holds only for}\ \delta = 0. \qquad (60)$$

When $\tau = 1$, it follows that

$$0 < [m_j]_{g_{ij}} + 1 = \delta g_{ij} \leq g_{ij} \quad \text{holds only for}\ \delta = 1. \qquad (61)$$

This implies that $[m_j]_{g_{ij}} + \tau = \tau g_{ij}$. Hence,

$$\bigl|\{[m_j]_{g_{ij}} + \tau, [m_j]_{g_{ij}} + \tau + g_{ij}, \dots, [m_j]_{g_{ij}} + \tau + r_i - g_{ij}\} \cap \{\tau r_i\}\bigr| = \bigl|\{\tau g_{ij}, (\tau + 1) g_{ij}, \dots, r_i + (\tau - 1) g_{ij}\} \cap \{\tau r_i\}\bigr| = 1 \quad \text{if}\ [m_j + \tau]_{g_{ij}} = 0. \qquad (62)$$

Thus, the equality in Equation (58) can be rewritten as

$$\bigl|\{m_j + \tau + h l_{ij} + k r_j;\ k = 0, 1, \dots, r_i/g_{ij} - 1\} \cap \{(\lambda_h + \tau) r_i;\ \lambda_h = h l_{ij}/r_i, \dots, (h+1) l_{ij}/r_i - 1\}\bigr| = \begin{cases} 1 & \text{if}\ [m_j + \tau]_{g_{ij}} = 0, \\ 0 & \text{otherwise.} \end{cases} \qquad (63)$$

Combining Equations (57) and (63) gives

$$\bigl|\{m_j + \tau + k r_j;\ k = 0, 1, \dots, k_j\} \cap \{m_i + k r_i;\ k = 0, 1, \dots, \lfloor (p - m_i)/r_i \rfloor\}\bigr| = \begin{cases} p/l_{ij} & \text{if}\ [m_i]_{g_{ij}} = [m_j + \tau]_{g_{ij}}, \\ 0 & \text{otherwise} \end{cases} \qquad (64)$$
for the case $m_i = 0$. Now, we split the argument into three cases.

Case 1. $g_{ij} = 1$. Then,

$$[m_j]_{g_{ij}} = [m_j + 1]_{g_{ij}} = [m_i + 1]_{g_{ij}} = [m_i]_{g_{ij}}. \qquad (65)$$

By Equations (18), (19), (20), (54), and (64), we have

$$c_{ij}(\Omega) = c_{ji}(\Omega) = p/l_{ij} - p/l_{ij} = 0. \qquad (66)$$

Case 2. $g_{ij} = 2$. Then, exactly one of the following situations holds:

$$[m_j]_{g_{ij}} = [m_i]_{g_{ij}} \quad \text{or} \quad [m_j + 1]_{g_{ij}} = [m_i]_{g_{ij}}. \qquad (67)$$

Suppose that $[m_j]_{g_{ij}} = [m_i]_{g_{ij}}$. Then,

$$[m_j + 1]_{g_{ij}} \neq [m_i]_{g_{ij}} \quad \text{and} \quad [m_i + 1]_{g_{ij}} \neq [m_j]_{g_{ij}}. \qquad (68)$$

By Equations (18), (19), (20), (54), and (64), we have

$$c_{ij}(\Omega) = c_{ji}(\Omega) = p/l_{ij}. \qquad (69)$$

Suppose that $[m_j + 1]_{g_{ij}} = [m_i]_{g_{ij}}$. Then,

$$[m_j]_{g_{ij}} \neq [m_i]_{g_{ij}} \quad \text{and} \quad [m_i + 1]_{g_{ij}} = [m_j]_{g_{ij}}. \qquad (70)$$

By Equations (18), (19), (20), (54), and (64), we have

$$c_{ij}(\Omega) = c_{ji}(\Omega) = -p/l_{ij}. \qquad (71)$$

Case 3. $g_{ij} > 2$. Suppose that $[m_j]_{g_{ij}} = [m_i]_{g_{ij}}$. Then,

$$[m_j + 1]_{g_{ij}} \neq [m_i]_{g_{ij}} \quad \text{and} \quad [m_i + 1]_{g_{ij}} \neq [m_j]_{g_{ij}}. \qquad (72)$$

By Equations (18), (19), (20), (54), and (64), we have

$$c_{ij}(\Omega) = c_{ji}(\Omega) = p/l_{ij}. \qquad (73)$$

Suppose that $[m_j + 1]_{g_{ij}} = [m_i]_{g_{ij}}$. Then,

$$[m_j]_{g_{ij}} \neq [m_i]_{g_{ij}} \quad \text{and} \quad [m_i + 1]_{g_{ij}} \neq [m_j]_{g_{ij}}. \qquad (74)$$

By Equations (18), (19), (20), (54), and (64), we have

$$c_{ij}(\Omega) = -p/l_{ij} \quad \text{and} \quad c_{ji}(\Omega) = 0. \qquad (75)$$

Suppose that $[m_j]_{g_{ij}} \neq [m_i]_{g_{ij}}$ and $[m_j + 1]_{g_{ij}} \neq [m_i]_{g_{ij}}$. Then, by Equations (18), (19), (20), (54), and (64), we have $c_{ij}(\Omega) = 0$.

This completes the proof of Theorem 2. □
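As a quick numerical confirmation of Theorem 2, the sketches above can be run on the rhythms and phases of the example below; under our reconstruction, the assertion passes:

    r, m = [1, 3, 3, 4, 6], [0, 1, 2, 3, 4]
    assert np.array_equal(pruning_map(rhythmic_flow(r, m)), theorem2_C(r, m))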
As an implication, we consider a feedforward neural network consisting of many layers of neurons firing in rhythm. Suppose that there are $\ell_k$ neurons in layer $k$ for each $k = 1, 2, \dots, \bar{k}$, where layer 1 denotes the input layer and layer $\bar{k}$ denotes the output layer of the feedforward neural network. Thus, the coupling weight underlying the connection from the $\ell$th neuron in layer $\tilde{k}$ to the $\ell'$th neuron in layer $\tilde{k} + 1$ corresponds to the entry $a_{ij}$, where $i = \sum_{k=1}^{\tilde{k}} \ell_k + \ell'$ and $j = \sum_{k=1}^{\tilde{k}-1} \ell_k + \ell$, in the coupling weight matrix $A$. Thus, the coupling weight matrix $A$ can be rewritten in the block form

$$A = \begin{pmatrix} A_{11} & 0 & 0 & \cdots & 0 \\ A_{21} & 0 & 0 & \cdots & 0 \\ 0 & A_{32} & 0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & A_{\bar{k}, \bar{k}-1} & 0 \end{pmatrix}, \qquad (76)$$

where $A_{11} = (a_{ij})_{1 \leq i, j \leq \ell_1}$ collects the within-layer connections of the input layer and each block $A_{k+1, k}$ collects the connections from layer $k$ to layer $k+1$.
Specifically, we may equip the input layer with a two-layer substructure, given by

$$\begin{pmatrix} a_{11} & \cdots & a_{1 \ell_1} \\ \vdots & & \vdots \\ a_{\ell_1 1} & \cdots & a_{\ell_1 \ell_1} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & -1 & 0 & \cdots & 0 \\ 1 & 0 & -1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 1 & 0 & 0 & \cdots & -1 \end{pmatrix}, \qquad (77)$$

with the firing threshold satisfying $1 > b_i > 0$ for each neuron $i = 1, 2, \dots, \ell_1$ and the updating $s(t)$ satisfying $i \in s(t) \cap s(t+1)$ if $t = m_i + \lambda r_i - 1$; otherwise, $i \notin s(t) \cap s(t+1)$, for each $i = 1, 2, \dots, \ell_1$ and $\lambda = 0, 1, \dots$. Let $m_1 = 0$ and $r_1 = 1$. Then, neuron $i$ in the input layer can fire in rhythm $r_i$ and in phase $m_i$ for each $i = 1, 2, \dots, \ell_1$. Neurons in the posterior layer receive signals from neurons in the prior layer and fire correspondingly if the incoming signals exceed the firing thresholds. Note that since neurons in the prior layer fire in rhythm, the firing thresholds can be adjusted to generate certain rhythmic firing patterns, or combinations of rhythmic firing patterns, in the posterior layer. Figure 1 depicts three layers of such a feedforward neural network, in which the coupling weight matrix $A$ is defined by
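Under the sign conventions reconstructed above, the input-layer mechanism is a per-neuron toggle: the self-weight $-1$ flips a neuron's state whenever it is updated, and the scheduled updating turns each neuron on at $t = m_i + \lambda r_i$ and off one step later. A sketch reusing step from Section 2 (0-based indices; all names are ours):

    def input_layer_updating(r, m, t):
        """Ensemble s(t) for the input layer: neuron i is updated at
        t = m_i + k*r_i - 1 (switch on) and at t = m_i + k*r_i (switch off)."""
        return [i for i in range(len(r))
                if (t + 1 - m[i]) % r[i] == 0 or (t - m[i]) % r[i] == 0]

    r, m = [1, 3, 3, 4, 6], [0, 1, 2, 3, 4]     # rhythms/phases of Figure 1
    n = len(r)
    A = np.zeros((n, n)); A[:, 0] = 1.0
    np.fill_diagonal(A, -1.0); A[0, 0] = 1.0    # two-layer substructure (77)
    b = np.full(n, 0.5)
    x = np.zeros(n, dtype=int); x[0] = 1
    for t in range(24):
        # each input neuron should sit exactly on its rhythm and phase
        assert all(x[i] == int((t - m[i]) % r[i] == 0) for i in range(n))
        x = step(x, A, b, input_layer_updating(r, m, t))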
$$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 5 & 3 & -1 & 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 3 & 0 \end{pmatrix}, \qquad (78)$$

the firing threshold is defined by $b = (0.5, 0.5, 0.5, 0.5, 0.5, 5, 2.5, 2.5)$, the updating is defined initially by $s(0) = \{1, 2, 6, 7, 8\}$, and

$$s(12\tau + 1) = \{1, 2, 3, 6, 7, 8\}, \quad s(12\tau + 2) = \{1, 3, 4, 6, 7, 8\}, \quad s(12\tau + 3) = \{1, 2, 4, 5, 6, 7, 8\}, \qquad (79)$$

$$s(12\tau + 4) = \{1, 2, 3, 5, 6, 7, 8\}, \quad s(12\tau + 5) = \{1, 3, 6, 7, 8\}, \quad s(12\tau + 6) = \{1, 2, 4, 6, 7, 8\}, \qquad (80)$$

$$s(12\tau + 7) = \{1, 2, 3, 4, 6, 7, 8\}, \quad s(12\tau + 8) = \{1, 3, 6, 7, 8\}, \quad s(12\tau + 9) = \{1, 2, 5, 6, 7, 8\}, \qquad (81)$$

$$s(12\tau + 10) = \{1, 2, 3, 4, 5, 6, 7, 8\}, \quad s(12\tau + 11) = \{1, 3, 4, 6, 7, 8\}, \quad s(12\tau + 12) = \{1, 2, 6, 7, 8\} \qquad (82)$$

for each $\tau = 0, 1, \dots$.
Given an initial neural firing state $x(0) = (1, 0, 0, 0, 0, 0, 0, 0)$, the dynamical Equation (1) ensures that neurons $1, 2, \dots, 5$ in the input layer fire in rhythms $r_1 = 1$, $r_2 = 3$, $r_3 = 3$, $r_4 = 4$, $r_5 = 6$ and in phases $m_1 = 0$, $m_2 = 1$, $m_3 = 2$, $m_4 = 3$, $m_5 = 4$, respectively. In addition, since

$$x_6(t+1) = \mathbb{1}\bigl(5 x_2(t) + 3 x_3(t) - x_4(t) + x_5(t) - 5\bigr), \qquad (83)$$

$$x_7(t+1) = \mathbb{1}\bigl(x_2(t) - 2 x_3(t) + 2 x_4(t) - 2.5\bigr), \qquad (84)$$

neurons 6 and 7 in layer 2 fire in rhythms $r_6 = 6$, $r_7 = 12$ and in phases $m_6 = 5$, $m_7 = 8$, respectively. This is feasible because the summation of incoming signals to neuron 6 (respectively, neuron 7) is largest exactly when neurons 2 and 5 (respectively, neurons 2 and 4) fire together, so the firing threshold of neuron 6 (respectively, neuron 7) can be adjusted to sieve out such a rhythmic firing pattern. A similar result holds for neuron 8 in layer 3, which fires in rhythm $r_8 = 12$ and in phase $m_8 = 9$ according to

$$x_8(t+1) = \mathbb{1}\bigl(x_6(t) + 3 x_7(t) - 2.5\bigr). \qquad (85)$$
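The whole eight-neuron example can be replayed to confirm these rhythms. The sketch below relies on our sign reconstruction of Equation (78) and takes the Heaviside strictly at the threshold ($\mathbb{1}(0) = 0$), which matters for the boundary case $5 x_2 - 5 = 0$ at $t \equiv 1 \pmod{12}$; indices are 0-based, and input_layer_updating is reused from above:

    def step_strict(x, A, b, s):
        """Equation (1) with 1(z) = 1 only for z > 0."""
        y = x.copy()
        for i in s:
            y[i] = 1 if A[i] @ x - b[i] > 0 else 0
        return y

    A8 = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                   [1,-1, 0, 0, 0, 0, 0, 0],
                   [1, 0,-1, 0, 0, 0, 0, 0],
                   [1, 0, 0,-1, 0, 0, 0, 0],
                   [1, 0, 0, 0,-1, 0, 0, 0],
                   [0, 5, 3,-1, 1, 0, 0, 0],
                   [0, 1,-2, 2, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 1, 3, 0]], dtype=float)
    b8 = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 5.0, 2.5, 2.5])
    r8, m8 = [1, 3, 3, 4, 6, 6, 12, 12], [0, 1, 2, 3, 4, 5, 8, 9]
    x = np.zeros(8, dtype=int); x[0] = 1
    for t in range(48):
        # every neuron should sit exactly on its rhythm and phase
        assert all(x[i] == int((t - m8[i]) % r8[i] == 0) for i in range(8))
        s = input_layer_updating(r8[:5], m8[:5], t) + [5, 6, 7]
        x = step_strict(x, A8, b8, s)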
Denote by $\Omega$ the resulting flow of rhythmic neural firing states from time $\tau$ to $\tau + 11$. According to the formula of the pruning map proved in Theorem 2, we see that

$$\gcd(r_6, r_2) = 3 \quad \text{and} \quad [m_2 + 1]_{\gcd(r_6, r_2)} = [m_6]_{\gcd(r_6, r_2)} = 2, \qquad (86)$$

$$\gcd(r_6, r_3) = 3 \quad \text{and} \quad [m_3]_{\gcd(r_6, r_3)} = [m_6]_{\gcd(r_6, r_3)} = 2, \qquad (87)$$

$$\gcd(r_6, r_4) = 2 \quad \text{and} \quad [m_4]_{\gcd(r_6, r_4)} = [m_6]_{\gcd(r_6, r_4)} = 1, \qquad (88)$$

$$\gcd(r_6, r_5) = 6 \quad \text{and} \quad [m_5 + 1]_{\gcd(r_6, r_5)} = [m_6]_{\gcd(r_6, r_5)} = 5, \qquad (89)$$

$$\gcd(r_7, r_2) = 3 \quad \text{and} \quad [m_2 + 1]_{\gcd(r_7, r_2)} = [m_7]_{\gcd(r_7, r_2)} = 2, \qquad (90)$$

$$\gcd(r_7, r_3) = 3 \quad \text{and} \quad [m_3]_{\gcd(r_7, r_3)} = [m_7]_{\gcd(r_7, r_3)} = 2, \qquad (91)$$

$$\gcd(r_7, r_4) = 4 \quad \text{and} \quad [m_4 + 1]_{\gcd(r_7, r_4)} = [m_7]_{\gcd(r_7, r_4)} = 0, \qquad (92)$$

$$\gcd(r_8, r_7) = 12 \quad \text{and} \quad [m_7 + 1]_{\gcd(r_8, r_7)} = [m_8]_{\gcd(r_8, r_7)} = 9, \qquad (93)$$

and hence

$$c_{62}(\Omega) < 0, \quad c_{63}(\Omega) > 0, \quad c_{64}(\Omega) > 0, \quad c_{65}(\Omega) < 0, \quad c_{72}(\Omega) < 0, \quad c_{73}(\Omega) > 0, \quad c_{74}(\Omega) < 0, \quad c_{87}(\Omega) < 0. \qquad (94)$$
Then, by the neural path pruning described in Equations (7) and (8), the coupling operator can be given by

$$d_{62} > 0, \quad d_{63} < 0, \quad d_{64} < 0, \quad d_{65} > 0 \quad \text{and} \quad d_{72} > 0, \quad d_{73} < 0, \quad d_{74} > 0, \quad d_{87} > 0, \qquad (95)$$

which maintains the rhythmic firing pattern of neuron 6 (respectively, neurons 7 and 8) by keeping the summation of incoming signals from neurons 2 and 5 to neuron 6 (respectively, from neurons 2 and 4 to neuron 7, and from neuron 7 to neuron 8) the largest. As illustrated by the orange, green, and purple boxes in Figure 1, the coupling operator in Equation (95) fits well with the neurobiological concept of Hebbian synaptic plasticity: when neurons 2 and 5 (respectively, neurons 2 and 4, and neuron 7) repeatedly or persistently take part in firing neuron 6 (respectively, neuron 7 and neuron 8), the coupling weights between them are strengthened. Such activity-dependent plasticity between layers of neurons tends to keep neurons in the posterior layer firing in rhythm.
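A hedged sketch of this plasticity step: collect one period of the flow, form the pruning map, and nudge the weights into layers 2 and 3 opposite the signs of their pruning-map entries. This reproduces the signs of Equation (95) and keeps $\langle A + D, C(\Omega)\rangle \leq \langle A, C(\Omega)\rangle$ as in Equation (8); the learning rate 0.1 is our arbitrary choice, and the variables come from the sketch above:

    states, x = [], np.zeros(8, dtype=int)
    x[0] = 1
    for t in range(25):
        states.append(x.copy())
        s = input_layer_updating(r8[:5], m8[:5], t) + [5, 6, 7]
        x = step_strict(x, A8, b8, s)
    flow = states[12:25]               # x(12), ..., x(24): one full period
    C = pruning_map(flow)              # sign pattern of Equation (94)
    D = np.zeros((8, 8))
    D[5:] = -0.1 * np.sign(C[5:])      # adjust only connections into neurons 6, 7, 8
    A8 = A8 + D                        # <A8 + D, C> <= <A8, C>: protects the rhythm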
An experimental setting is defined as follows to show the performance of rhythm formation in feedforward neural networks. The input rhythms and the layer architecture of the feedforward neural network are specified as in Figure 1. Initially, the coupling weights from prior layers to posterior layers and the firing thresholds of neurons are selected randomly from the interval $[-0.5, 0.5]$. When a neuron in layer 2 or 3 fires dynamically, its firing threshold is adjusted by a positive value 1; otherwise, it is adjusted by a negative value $-0.1$. In addition, the coupling weights are adjusted according to the coupling operators (with intensity 1 or $-1$) derived from neural path pruning. The transition function of a neuron $i$ is selected to be either the Heaviside function defined by Equation (2) or the sigmoid function $S$ defined by

$$x_i(t+1) = S\Bigl(\sum_{j=1}^{n} a_{ij} x_j(t) - b_i\Bigr) = \frac{1}{1 + e^{-\alpha \bigl(\sum_{j=1}^{n} a_{ij} x_j(t) - b_i\bigr)}}, \quad t = 0, 1, \dots, \qquad (96)$$

where $\alpha$ is a positive real number. For each round of simulation, we say that rhythm formation occurs in the feedforward neural network if the neuron in layer 3 persistently fires in rhythm for a sufficient number of time steps. Specifically, the criterion is $k$ firings (with firing state $x_8(t) > 0.8$) in rhythm $r$ with $k \cdot r > 500$ during time steps $t = 1001$ to $2000$. Figure 2 shows the success rates for rhythm formation in feedforward neural networks, with neurons activating via the Heaviside function (red line, 100,000 rounds of simulation) or the sigmoid function with $\alpha = 0.01, 0.02, \dots, 0.99, 1, 2, \dots, 500$ (blue line, 1000 rounds of simulation per $\alpha$). The results reveal that rhythm formation occurs robustly in feedforward neural networks, even when the transition functions are changed from Heaviside to sigmoid.
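The experiment can be reproduced schematically. The sketch below is a loose reading of the protocol: the random draws, the exact adaptation schedule, and the rhythm detector are our guesses, not the authors' code, and it reuses pruning_map from Section 2:

    def run_round(alpha=None, T=2000, eta=1.0, seed=0):
        """One simulation round; returns True if rhythm formation occurs."""
        rng = np.random.default_rng(seed)
        mask = np.zeros((8, 8), dtype=bool)
        mask[5, 1:5] = mask[6, 1:5] = mask[7, 5:7] = True   # architecture of Figure 1
        W = rng.uniform(-0.5, 0.5, (8, 8)) * mask
        th = np.concatenate([np.full(5, 0.5), rng.uniform(-0.5, 0.5, 3)])
        r, m = [1, 3, 3, 4, 6], [0, 1, 2, 3, 4]
        x, hist = np.zeros(8), []
        x[0] = 1.0
        for t in range(T):
            hist.append(x.copy())
            x_new = x.copy()
            for i in range(5):                    # input layer: clamped to its rhythm
                x_new[i] = float((t + 1 - m[i]) % r[i] == 0)
            z = W[5:] @ x - th[5:]
            x_new[5:] = ((z > 0).astype(float) if alpha is None
                         else 1.0 / (1.0 + np.exp(-alpha * z)))
            th[5:] += np.where(x_new[5:] > 0.8, 1.0, -0.1)   # threshold adaptation
            if t >= 23 and t % 12 == 11:          # once per period: pruning-map update
                C = pruning_map([h.round() for h in hist[t - 12:t + 1]])
                W -= eta * np.sign(C) * mask      # Equation (8): protect the flow
            x = x_new
        fires = [t for t in range(1001, 2000) if hist[t][7] > 0.8]
        if len(fires) < 2:
            return False
        gaps = np.diff(fires)
        # success: k firings in a steady rhythm r with k*r > 500
        return gaps.min() == gaps.max() and len(fires) * gaps[0] > 500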

4. Conclusions

For neurons firing in rhythm, the pruning map can be formulated in terms of quantities related to rhythmic neural firing. This formulation has the potential to feed crucial control of coupling weights to support the consolidation of rhythms in feedforward neural networks. It reveals a learning rule for feedforward neural networks which fits well with Hebbian synaptic plasticity for rhythmic pattern formation. Such neural network models can be employed to relate different kinds of rhythms from input signals to output signals, and hence may be applied to issues concerning robot control or pattern recognition with rhythms. We hope that the analysis of pruning maps will stimulate further studies of the underlying principles of neural networks, concerning how rhythms interact with each other and organize themselves, providing a theoretical framework that mimics the roles of rhythms in the information-processing systems of living organisms.

Author Contributions

Conceptualization, F.-S.T. and S.-Y.H.; Formal analysis, F.-S.T. and S.-Y.H.; and Writing—review and editing, F.-S.T., Y.-L.S., C.-T.P., and S.-Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Ministry of Science and Technology, Taiwan and in part by the China Medical University under Grant CMU106-N-21.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Strogatz, S.H.; Stewart, I. Coupled oscillators and biological synchronization. Sci. Am. 1993, 269, 68–75.
2. Glass, L. Synchronization and rhythmic processes in physiology. Nature 2001, 410, 277–284.
3. Smith, H.M. Synchronous flashing of fireflies. Science 1935, 82, 151–152.
4. Peskin, C.S. Mathematical Aspects of Heart Physiology; Courant Institute of Mathematical Sciences, New York University: New York, NY, USA, 1975.
5. Mirollo, R.E.; Strogatz, S.H. Synchronization of pulse-coupled biological oscillators. SIAM J. Appl. Math. 1990, 50, 1645–1662.
6. Golubitsky, M.; Stewart, I.; Török, A. Patterns of synchrony in coupled cell networks with multiple arrows. SIAM J. Appl. Dyn. Syst. 2005, 4, 78–100.
7. Diesmann, M.; Gewaltig, M.-O.; Aertsen, A. Stable propagation of synchronous spiking in cortical neural networks. Nature 1999, 402, 529–533.
8. Grinstein, G.; Linsker, R. Synchronous neural activity in scale-free network models versus random network models. Proc. Natl. Acad. Sci. USA 2005, 102, 9948–9953.
9. Schechter, B. How the brain gets rhythm. Science 1996, 274, 339.
10. Soto-Treviño, C.; Thoroughman, K.A.; Marder, E.; Abbott, L.F. Activity-dependent modification of inhibitory synapses in models of rhythmic neural networks. Nat. Neurosci. 2001, 4, 297–303.
11. Buzsáki, G. Rhythms of the Brain; Oxford University Press: New York, NY, USA, 2006.
12. Hebb, D.O. The Organization of Behavior; Wiley: New York, NY, USA, 1949.
13. Adams, P. Hebb and Darwin. J. Theoret. Biol. 1998, 195, 419–438.
14. Sejnowski, T.J. The book of Hebb. Neuron 1999, 24, 773–776.
15. Sumbre, G.; Muto, A.; Baier, H.; Poo, M. Entrained rhythmic activities of neuronal ensembles as perceptual memory of time interval. Nature 2008, 456, 102–106.
16. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
18. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
19. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
20. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
21. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2961–2969.
22. Zhang, W.; Li, R.; Deng, H.; Wang, L.; Lin, W.; Ji, S.; Shen, D. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage 2015, 108, 214–224.
23. Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; de Vries, L.S.; Benders, M.J.; Išgum, I. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 2016, 35, 1252–1261.
24. Dolz, J.; Desrosiers, C.; Ayed, I.B. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. Neuroimage 2018, 170, 456–470.
25. Shih, M.-H.; Tsai, F.-S. Growth dynamics of cell assemblies. SIAM J. Appl. Math. 2009, 69, 1110–1161.
26. Shih, M.-H.; Tsai, F.-S. Decirculation process in neural network dynamics. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1677–1689.
27. Shih, M.-H.; Tsai, F.-S. Operator control of interneural computing machines. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1986–1998.
28. Tsai, F.-S.; Hsu, S.-Y.; Shih, M.-H. Adaptive tracking control for robots with an interneural computing scheme. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 832–844.
Figure 1. Rhythmic firing in a feedforward neural network.
Figure 2. Successful rates for generating rhythmic firing in feedforward neural networks.

