Article

Self-Organization with Constraints—A Mathematical Model for Functional Differentiation

1  Department of Mathematics, Faculty of Science, Hokkaido University, Sapporo 060-0810, Japan
2  Department of Computer Science and Engineering, Fukuoka Institute of Technology, Fukuoka 811-0214, Japan
*  Author to whom correspondence should be addressed.
Entropy 2016, 18(3), 74; https://doi.org/10.3390/e18030074
Submission received: 21 October 2015 / Revised: 10 February 2016 / Accepted: 22 February 2016 / Published: 26 February 2016
(This article belongs to the Special Issue Information and Self-Organization)

Abstract

This study proposes mathematical models for functional differentiation, viewed as self-organization under external constraints. From the viewpoint of system development, we investigate how system components emerge in the presence of constraints that act on the whole system. Cell differentiation in embryos and functional differentiation of cortical modules are typical examples of this phenomenon. As case studies, we deal with three mathematical models that yield components via such global constraints: the genesis of neuronal elements, the genesis of functional modules, and the genesis of neuronal interactions. The overall development of a system may follow a certain variational principle.

1. Introduction

In this study, we propose a theory for the differentiation of system components (i.e., elements) caused by a constraint that acts on a whole system. We describe three mathematical models for functional differentiation in the brain: the first for the genesis of neuron-like excitable components, the second for the genesis of cortical modules, and the third for the genesis of neuronal interactions, thereby emphasizing the importance of the constrained dynamics of self-organization.
First, let us briefly review the conventional theory of self-organization. We omit the long-standing discussions of the significance of self-organization in philosophy and the social sciences, and describe only the phenomena and theories of self-organization in natural science and engineering. As far as we know, such studies of self-organization began in association with the cybernetics movement of the 1940s and 1950s, in which a theory of self-organization was developed within the construction of control theory [1]. For example, Ashby proposed the principle of the self-organizing dynamic system [2], where a dynamic system differs conceptually from an (autonomous) dynamical system in that the former includes input, whereas the latter has none; in particular, changes of input are treated only as bifurcations in a family of dynamical systems. He stated that the asymptotic state of any deterministic dynamic system is an attractor, where each subsystem interacts with other subsystems that play the role of the environment for that subsystem, thus forming a controlled overall system. Von Foerster proposed the principle of order from noise, emphasizing the importance of random fluctuations for producing macroscopic ordered and controlled motion [3].
From the 1960s to the 1980s, two scientific leaders in physics and chemistry, Haken and Prigogine, led the scientific revolution of self-organization in far-from-equilibrium systems. These scientists faced the challenge of constructing theories of nonequilibrium statistical physics and nonequilibrium thermodynamics, respectively, in which “nonequilibrium” meant “far from equilibrium”. In fact, Prigogine extended thermodynamics to nonlinear and far-from-equilibrium systems in terms of the variational principle of minimum entropy production [4]. Because energy dissipation is a prerequisite in far-from-equilibrium systems, the concept of entropy flow associated with energy dissipation must be introduced. In this respect, the entropy production σ is defined as the sum of the rate of change of the internal entropy of the system in question and the outflow of entropy from the system to the environment, as follows:
$$\sigma = \frac{dS_{\mathrm{int}}}{dt} + J_S^{\mathrm{out}}$$ (1)
Another significant approach was taken by Haken, who extended equilibrium phase transitions to far-from-equilibrium systems and thereby introduced the slaving principle [5]. In fact, Haken extended the Ginzburg–Landau (GL) formulation to far-from-equilibrium and multicomponent systems; the original GL equation is given as the equation of motion of the order parameter D (see [5] for more details):
$$\frac{\partial D}{\partial t} = \alpha D - W^2 |D|^2 D$$ (2)
Many modes appear at each bifurcation point, which corresponds to a critical point of a phase transition; however, a few modes enslave the many others, which implies the appearance of order parameters out of fluctuations. The slaving principle therefore extends the center manifold theorem of bifurcation theory to systems with noise [6]. However, the slaving principle does not hold once chaos appears; thus, whether it holds can serve as an index of chaotic motion.
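As a minimal illustration of the slaving principle (a textbook-style sketch, not a model from the present paper), consider an unstable order-parameter mode $q_1$ coupled to a strongly damped mode $q_2$:
$$\dot{q}_1 = \lambda_1 q_1 - q_1 q_2, \qquad \dot{q}_2 = -\lambda_2 q_2 + q_1^2, \qquad 0 < \lambda_1 \ll \lambda_2$$
Adiabatic elimination of the fast mode ($\dot{q}_2 \approx 0$) gives $q_2 \approx q_1^2/\lambda_2$, so that $\dot{q}_1 \approx \lambda_1 q_1 - q_1^3/\lambda_2$: the damped mode is enslaved, and only the order parameter $q_1$ survives near the instability.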
The occurrence of macroscopic ordered motion via cooperative and/or competitive interactions between the microscopic components of a system, namely interactions at the atomic or molecular level, is characteristic of the self-organizing phenomena addressed by these theories. In other words, the theories mentioned above treat the way in which spatiotemporal patterns emerge as macroscopic ordered states from microscopic random motion under far-from-equilibrium conditions. They succeeded in describing, for example, target patterns, spiral patterns, propagating waves, and periodic and chaotic oscillations in chemical reaction systems (such as the Belousov–Zhabotinsky reaction), hydrodynamic systems (such as Bénard thermal convection and Taylor–Couette flow), and optical systems (such as laser oscillation).
Another aspect is highlighted when considering typical communication problems, as the brain activity of each communicating person may change according to the purpose of the communication (see, for example, [7,8,9,10]). This aspect can be formulated within a framework of functional differentiation, in which functional elements (or components, or subsystems) are produced by a certain constraint acting on the whole system [11,12,13]. In fact, functional differentiation of the brain occurs not only via genetic factors, but also via dynamic interactions between the brain and its environments [14,15,16,17]. Pattee [18] discriminated constraints from dynamics: dynamics occurs via interactions of the elements of a system and via external forces, whereas a constraint is imposed intentionally from outside the system, thereby controlling the system dynamics. Furthermore, he introduced a conceptual test to discriminate dynamics from constraints in terms of rate dependence and rate independence, respectively. Here, we use the term constraint in a sense similar to Pattee's. In neonatal and subsequent periods of development, an individual brain undergoes structural and functional differentiation, which can be promoted by environmental factors such as the intentions and actions of surrounding people, as well as by physical stimuli such as object patterns. These intentions and actions, which convey a part of the environmental meaning, can become constraints on the self-organization of neural dynamics in individual brains. Conversely, physical stimuli can be treated as time-dependent inputs (see Equation (3)).
Based on these ideas, we introduce constraints by which the system dynamics is optimized. Let $(\phi, \Omega)$ be a dynamical system, where $\Omega$ is an $m$-dimensional state space (or phase space) and $\phi$ is a flow or a group action acting on $\Omega$. Let $x$ be a state variable defined in $\Omega$. Then, a dynamical system can be represented by the equations of motion $\frac{dx}{dt} = f(x)$, where $t$ is a parameter defined outside the space $\Omega$, usually interpreted as time, and $f$ is a dynamical law or a vector field. A dynamical system may possess further parameters, called bifurcation parameters, which can be controlled from outside; such bifurcation parameters represent environmental conditions and should be viewed as distinct from constraints [19]. The equations are then rewritten as $\frac{dx}{dt} = f(x, \lambda)$, including the bifurcation parameters $\lambda$, thus representing a family of dynamical systems. However, the system we wish to consider here should also include environmental variables, which may receive feedback from the system variables and which we express as $G(x, t)$. The feedback may occur via system order parameters (see, for example, [19]) or via constraints. A possible formulation is given by Equations (3) and (4).
$$\frac{dx}{dt} = f(x, \lambda) + G(x, t)$$ (3)
under the given constraint $C$, which denotes the intention of the outside. In particular, here we restrict the constraint to integrable functions, such as certain information quantities, which are supposed to be derived from intention. If this supposition is allowed, an overall formula can be provided by the variational principle:
$$\delta L = \delta \int_0^T \left\{ C + \mu \left( \frac{dx}{dt} - f(x, \lambda) - G(x, t) \right) \right\} dt = 0$$ (4)
where $\mu$ is a Lagrange multiplier. In usual mechanics with a physical constraint, such as the movement of a ball on a playground slide, a Lagrange multiplier is introduced to allow a particle to move along such a boundary. In optimal control problems, however, a Lagrange multiplier can be introduced so that a dynamical system with external inputs is satisfied, and variations can be taken to optimize the external constraint under the restriction of the dynamics given by Equation (3) [20,21]. Thus, a Lagrange multiplier may be a function not only of the state variable $x$, but also of its rate $\dot{x}$ and time $t$, and equations of motion for such a multiplier may be derived. In the present paper, we use the maximum transmission of information as a constraint. In relation to the present approach, in the recent development of a synergetic computer, Haken and Portugali [22] introduced the notion of information adaptation to realize pattern recognition based on pattern formation in synergetic networks, using attention parameters as a constraint.
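To make the role of the multiplier concrete, a minimal sketch follows, assuming for illustration that the constraint $C$ depends only on the state $x$ and that $\mu$ is treated as an independent variational variable; this is standard Euler–Lagrange bookkeeping applied to Equation (4), not a derivation taken from the original study. Varying Equation (4) with respect to $\mu$ recovers the dynamics of Equation (3), whereas varying with respect to $x$ yields an equation of motion for the multiplier:
$$\frac{d\mu}{dt} = \frac{\partial C}{\partial x} - \mu\left(\frac{\partial f}{\partial x} + \frac{\partial G}{\partial x}\right)$$
This is the costate (adjoint) equation familiar from optimal control, consistent with the statement above that equations of motion for such a multiplier may be derived.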
It should be noted that, in the self-organization process, various types of synaptic learning, such as Hebbian learning and winner-take-all algorithms, may take place in addition to the constrained dynamics in the sense mentioned above. These learning algorithms may provide internal constraints on the dynamics; such internal constraints are supposed to be contained in the dynamics given by Equation (3). A typical internal constraint on neural dynamics was intensively investigated by von der Malsburg in studies of the self-organization of orientation-sensitive cells in the primary visual cortex [23]. In that model, decisive effects on the neural differentiation of orientation specificity are provided by a synaptic constraint: for each neuron, the total strength of its synaptic couplings is kept constant. This internal constraint is reasonable when almost the same quantity of nutrition is supplied to each neuron in a spatially uniform circumstance. Furthermore, Kohonen invented the self-organizing map (SOM), which contains at least two different subsystems: a competitive neural network with a winner-take-all algorithm, and a plasticity-control subsystem, which produces feature-specific differentiation of input data [24]. Amari also developed a mathematical model of topographic mapping by introducing a generalized Hebb synapse, plasticity of inhibitory interneurons as well as of excitatory neurons, and competitive neural networks based on mutual inhibition [25]. Thus, competitive neural networks impose a local constraint on the activity of elementary neurons, thereby enhancing the feature differences of input data, which gives rise to differentiation. Beyond neural systems, a similar differentiation triggered by internal constraints has been observed in bacterial populations and reproduced by theoretical models [26,27]. In these studies, Kaneko and Yomo, and Furusawa and Kaneko, found that fluctuations are enhanced so as to realize cell differentiation through cell divisions, triggered by an internal constraint such as keeping the total nutrition in each cell constant.
In view of these aspects, self-organization may exhibit features different from the typical macroscopic pattern formation formulated by, for instance, coupled dynamical systems and reaction-diffusion systems (see Figure 1). In this paper, we propose mathematical models showing the differentiation of system components under a constraint acting on the whole system, i.e., at the macroscopic level. In our model study, a genetic algorithm was used to compute the development of both the interactions and the states of the dynamical systems. In this computational process, the maximum transmission of information was applied as a constraint, i.e., as a “variational” principle operating on the development of the system. In the subsequent sections, we present the computational results of our mathematical models for neural differentiation, and review (with some comments) a mathematical formulation of ephaptic couplings, which may provide a possible neural mechanism of self-consistent dynamics with constraints.

2. Mathematical Model for the Differentiation of Neurons

To elucidate the neural mechanism underlying the genesis of neurons, we have tried to generate a mathematical model of the differentiation of neurons in terms of the development of dynamical systems under a constraint [28,29]. As a case study, we used the one-dimensional map given by Equation (5), which is viewed as an elementary unit of the system.
$$x(t+1) = \tanh\big(\gamma_1 (x(t) - \alpha_1)\big) - \omega \tanh\big(\gamma_2 (x(t) - \alpha_2)\big) + J$$ (5)
where $x \in (-\infty, \infty)$ is a state variable, $t$ is a discrete time step, and the parameters $\gamma_i\ (i = 1, 2)$, $\alpha_i\ (i = 1, 2)$, $\omega$, and $J$ are real numbers.
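For concreteness, a minimal sketch of one elementary unit follows; the parameter values are hypothetical placeholders, not the values obtained in the simulations of this section.

```python
import numpy as np

def unit_map(x, gamma1, gamma2, alpha1, alpha2, omega, J):
    """One iteration of the elementary unit, Equation (5)."""
    return np.tanh(gamma1 * (x - alpha1)) - omega * np.tanh(gamma2 * (x - alpha2)) + J

# Hypothetical "gene" (gamma1, gamma2, alpha1, alpha2, omega, J); illustrative values only
gene = (5.0, 8.0, 0.0, 0.5, 1.2, 0.1)

x = 0.01
trajectory = []
for t in range(200):
    x = unit_map(x, *gene)
    trajectory.append(x)
```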
The total system computed here is constituted by unidirectional nearest-neighbor coupling of these units, but it can be extended to include feedback couplings. In this paper, we show only the computational results for unidirectional couplings with an open boundary condition. An input time series is added to this system. The coupled system is then described by Equation (6).
$$x^{(i)}(t+1) = \tanh\big(\gamma_1 (x^{(i)}(t) - \alpha_1)\big) - \omega \tanh\big(\gamma_2 (x^{(i)}(t) - \alpha_2)\big) + d\big(x^{(i-1)}(t) - x^{(i)}(t)\big)$$ (6)
where $x^{(i)}(t)$ denotes the state of the $i$-th unit at time $t$, and $d$ is the coupling strength (which was kept constant in each simulation).
We applied a genetic algorithm to these networks with an external input, using a chaotic time series as the input. Because the bifurcation parameters of a dynamical system are provided as environmental variables that are kept constant during the state changes of the system, the set of these parameters $(\gamma_1, \gamma_2, \alpha_1, \alpha_2, \omega, J)$ is viewed as a “gene” of the dynamical system in question. In this model, as the external constraint, we used the maximum transmission of information from the input data to all units of the coupled system. In the present model, the external constraint in Equation (4) changes in time much more slowly than the system dynamics itself.
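A sketch of the coupled system of Equation (6) driven by a chaotic input is shown below. The logistic map is used here purely as an example of a chaotic input series, and the coupling strength d, the gene, and the system size are placeholders rather than the settings of the reported simulations.

```python
import numpy as np

def unit_map(x, gamma1, gamma2, alpha1, alpha2, omega, J):
    # Local dynamics of each unit, Equation (5)
    return np.tanh(gamma1 * (x - alpha1)) - omega * np.tanh(gamma2 * (x - alpha2)) + J

def step_chain(x, g, d, gene):
    """One update of the unidirectionally coupled chain, Equation (6).

    x    : array of unit states x^(i)(t)
    g    : external input value fed to unit 0
    d    : coupling strength (fixed during a simulation)
    gene : parameter tuple (gamma1, gamma2, alpha1, alpha2, omega, J)
    """
    x_new = np.empty_like(x)
    prev = g                       # unit 0 receives the external input
    for i in range(len(x)):
        x_new[i] = unit_map(x[i], *gene) + d * (prev - x[i])
        prev = x[i]                # open boundary, nearest-neighbor, one direction
    return x_new

# Hypothetical settings
gene = (5.0, 8.0, 0.0, 0.5, 1.2, 0.1)
d = 0.3
n_units, T = 10, 1000

x = np.zeros(n_units)
g = 0.4                            # logistic-map input as an example chaotic series
series = np.zeros((T, n_units))
for t in range(T):
    g = 3.9 * g * (1.0 - g)
    x = step_chain(x, g, d, gene)
    series[t] = x
```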
Then, for each network, we calculated the time-dependent mutual information [30] between the input chaotic time series and the time series of each structural unit of the network. The time-dependent mutual information between the input time series $g(t)$ and the $i$-th unit is defined in the following way.
$$I\big(g(t); x^{(i)}(t)\big)(t) = -\sum_k p(k)\log p(k) + \sum_{l,k} p(l)\, p^{(t)}(k|l)\log p^{(t)}(k|l)$$ (7)
where $k$ denotes a state of $x^{(i)}(t)$ and $p(k)$ represents the stationary probability of $x^{(i)}(t)$, $l$ denotes a state of the input time series $g(t)$, and $p^{(t)}(k|l)$ is the conditional probability of $x^{(i)}(t)$ taking the state $k$ at $t$ time steps after $g(t)$ takes the state $l$. In general, the mutual information between two states quantifies the information shared between them; thus, mutual information does not necessarily imply transmitted information. However, in a unidirectionally coupled system with inputs, such as the present one, the shared information calculated by mutual information consists mainly of the information transmitted from one component to the other.
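A histogram-based sketch of this quantity is given below; it uses the equivalent form $I = \sum_{k,l} p(k,l)\log\frac{p(k,l)}{p(k)p(l)}$ of Equation (7), and the binning and lag range are illustrative choices, not those of the original computation.

```python
import numpy as np

def lagged_mutual_information(g, x, lag, n_bins=16):
    """Estimate I(g(t); x(t+lag)) between two scalar time series by binning."""
    g, x = np.asarray(g), np.asarray(x)
    if lag > 0:
        g, x = g[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(g, x, bins=n_bins)
    p_joint = joint / joint.sum()
    p_g = p_joint.sum(axis=1, keepdims=True)   # marginal of the input
    p_x = p_joint.sum(axis=0, keepdims=True)   # marginal of the unit state
    nz = p_joint > 0
    return np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_g @ p_x)[nz]))

def max_lagged_mi(g, x, max_lag=20):
    """Maximum over lags, as used for the selection criterion (illustrative lag range)."""
    return max(lagged_mutual_information(g, x, lag) for lag in range(1, max_lag + 1))
```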
Then, we recorded its maximum value over time and its maximum value over units. Subsequently, in the genetic algorithm, we applied copy, crossover, and mutation operations to the above-mentioned set of parameters $(\gamma_1, \gamma_2, \alpha_1, \alpha_2, \omega, J)$, followed by the selection of dynamical systems that allowed more information transmission than in the previous step of the development. The coupling strength was fixed during each simulation. However, depending on its value, the computational results were classified into three types of information channels, as follows.
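The selection step can be sketched as follows; population size, mutation scale, and generation count are placeholders, and the fitness routine is assumed to wire together the chain simulation and the mutual-information estimate sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(gene, scale=0.1):
    # Gaussian perturbation of the parameter set (gamma1, gamma2, alpha1, alpha2, omega, J)
    return tuple(p + rng.normal(0.0, scale) for p in gene)

def crossover(gene_a, gene_b):
    # Uniform crossover: each parameter is inherited from one of the two parents
    return tuple(a if rng.random() < 0.5 else b for a, b in zip(gene_a, gene_b))

def evolve(population, fitness, n_generations=100):
    """Select the genes whose coupled system transmits the most input information.

    `fitness(gene)` is assumed to simulate the coupled chain for that gene and
    return the maximum time-lagged mutual information over units and lags.
    """
    for _ in range(n_generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]          # selection (copy)
        children = []
        while len(parents) + len(children) < len(population):
            i, j = rng.integers(len(parents), size=2)
            children.append(mutate(crossover(parents[i], parents[j])))
        population = parents + children               # crossover + mutation
    return max(population, key=fitness)
```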
(a) The case of strong couplings
A dynamical system that was constructed using a constant function was finally selected (see Figure 2a). This type of dynamical system possesses a single stable fixed point. Any external signals were successfully transmitted over all elementary units of the network without any deformation. Thus, this type of network can be viewed as a static channel.
(b) The case of intermediate strength couplings
An excitable dynamical system constructed using a step function was finally selected (see Figure 2b). This type of dynamical system possesses three fixed points: one stable and two unstable. One of the unstable fixed points plays the role of a threshold, in the sense that if the initial condition is set below the threshold, the trajectory is monotonically attracted to the stable fixed point; otherwise, it is attracted to that point only after a large excursion around the other unstable fixed point. Thus, the stable fixed point can be viewed as an equilibrium membrane potential, and the large excursion of the trajectory as a transient impulse. Furthermore, the latter type of trajectory can overshoot the stable fixed point after the large excursion, before being attracted to it. Therefore, this excitable dynamical system can be viewed as an active channel, such as is observed in conventional neurons.
(c) The case of weak couplings
An oscillatory dynamical system was finally selected (see Figure 2c). In the present model, this type of dynamical system possessed a period two periodic trajectory. Thus, it can be viewed as an oscillatory neuron.
The evolution model treated here was restricted to a certain subspace of function space, one spanning only a narrow class of functions, even compared with polynomials. Nevertheless, it seems reasonable to regard the present model as showing rather universal characteristics, despite this restriction. One explanation for this is that the finally evolved dynamical systems are optimal for producing weakly chaotic behaviors in the overall dynamics of the coupled systems, which allows effective information transmission regardless of coupling strength. Another explanation is that dynamical systems constructed from polynomial functions cannot survive under the present constraint, because their coupled systems cannot transmit the information of a dynamically changing input: the parameter range that allows effective transmission of input information is extremely narrow. One could extend the present model to develop dynamics in a wider functional subspace; however, this entails several technical problems, for example, that it is not easy to define automatically the domain of definition necessary to restrict the overall dynamics to a certain finite domain.
In the present model, which was constructed from a coupled one-dimensional map, the elementary unit of the overall system was a one-dimensional map. The simulations showed an evolutionary dynamics in which the dynamical law of each elementary unit changed according to the external constraint, namely the maximum transmission of input information. The dynamical system finally selected was an excitable map or an oscillatory map, with the exception of a trivial case. These simulation results may provide an explanation for the generation of excitable or oscillatory systems, such as neurons, in biological evolution.

3. Mathematical Model for the Differentiation of Cortical Modules

In this section, we describe a mathematical model that produced distinct modules from probabilistically uniform modules through symmetry breaking caused by the appearance of chaotic behaviors. In the proposed model, a phase model of oscillators was used as the elementary unit of the system; however, we show that these elementary units do not necessarily evolve into functional units. This study aimed at elucidating the neural mechanism underlying the functional differentiation of cortical areas, known as Brodmann's areas [31]; the cortex consists of about 10,000 functional modules [32,33]. It is well known that the mammalian neocortex consists of almost uniform modules, each of which contains about 5,000 excitatory pyramidal neurons and several kinds of inhibitory neurons. In spite of this uniformity, cortical modules have distinct functions; hence, it is highly probable that functional differentiation is caused by the presence of asymmetric couplings between modules. Such cortical asymmetric couplings possess the following generic characteristics: ascending couplings project from superficial layers, such as layers I–III, to a middle layer, such as layer IV, whereas descending couplings project from deep layers, such as layers V and VI, to both superficial and deep layers [34].
To observe the process described above, we constructed a mathematical model of the structural differentiation of two modules, consisting of a number of units assumed to be uniform in a probabilistic sense [35]. In fact, before the development of the modules, the coupling probabilities between units within each module and between modules were determined randomly, so that the system could be viewed as a single module at the beginning of development. For brevity, we formally divided the system into two probabilistically identical modules and observed the formation of feature differences in the couplings between elementary units. We adopted a Weyl transformation as a unit, and the couplings were provided by a sinusoidal function, similar to a discrete-time version of the Kuramoto model [36], as follows:
$$\theta_{t+1}^{(i,k)} = \omega^{(i,k)} + \theta_t^{(i,k)} + \frac{\alpha}{N p_c} \sum_{(j,l) \in G(i,k)} \sin\big(\theta_t^{(j,l)} - \theta_t^{(i,k)} - \psi_{kl}^{ij}\big) + \sigma_\beta\, \beta_t^{(i,k)}$$ (8)
for the $k$-th unit in the $i$-th module. Here, $\psi$ is randomly assigned one of the four possible values $\{0, \pi/2, \pi, 3\pi/2\}$ according to given probabilities, $\beta$ denotes Gaussian noise, and $\sigma_\beta$ is the strength of the Gaussian noise. Moreover, $\alpha$ is the coupling strength between units, $N$ is the total number of units, and $p_c$ is the overall coupling probability of units, i.e., the ratio of coupled units to the total number of units. In this model, as the external constraint, we used the maximum transmission of information. In the present model, the external constraint in Equation (4) changes in time much more slowly than the system dynamics itself.
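A minimal sketch of one update of Equation (8) is given below. The module size, coupling probability, noise level, and uniform phase-shift assignment are illustrative simplifications: in the actual model, the within-module and between-module coupling probabilities and the phase-shift probabilities are exactly the quantities modified during development.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 40                       # total number of units (two modules of 20; illustrative)
p_c = 0.2                    # overall coupling probability
alpha, sigma_beta = 1.0, 0.05
omega = rng.normal(0.5, 0.1, N)                          # natural frequency increments
A = (rng.random((N, N)) < p_c).astype(float)             # adjacency: who couples to whom
np.fill_diagonal(A, 0.0)
psi = rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], size=(N, N))  # phase shifts

def step(theta):
    """One time step of Equation (8) for all units."""
    diff = theta[None, :] - theta[:, None] - psi         # theta_j - theta_i - psi_ij
    coupling = (alpha / (N * p_c)) * np.sum(A * np.sin(diff), axis=1)
    noise = sigma_beta * rng.normal(size=N)
    return np.mod(theta + omega + coupling + noise, 2 * np.pi)

theta = rng.uniform(0, 2 * np.pi, N)
for t in range(1000):
    theta = step(theta)
```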
Then, by changing these probabilities, the product of transfer entropies defined by Equations (9) and (10) was calculated, and the system with the maximum product of transfer entropies was selected at each stage of the development.
$$T \equiv T_{\Theta^{(1)} \to \Theta^{(2)}} \cdot T_{\Theta^{(2)} \to \Theta^{(1)}}$$ (9)
$$T_{X \to Y}(\tau) = H\big(Y(t+\tau)\,\big|\,Y(t)\big) - H\big(Y(t+\tau)\,\big|\,Y(t), X(t)\big)$$ (10)
where $H(A|B)$ denotes the conditional entropy of $A$ given $B$.
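Transfer entropy as in Equations (9) and (10) can be estimated from discretized phase (or mean-phase) time series as sketched below; the coarse binning and the unit lag are illustrative simplifications rather than the settings of the original study.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=8, tau=1):
    """Estimate T_{X->Y}(tau) = H(Y_{t+tau} | Y_t) - H(Y_{t+tau} | Y_t, X_t)."""
    xb = np.digitize(x, np.linspace(np.min(x), np.max(x), n_bins))
    yb = np.digitize(y, np.linspace(np.min(y), np.max(y), n_bins))
    triples = list(zip(yb[tau:], yb[:-tau], xb[:-tau]))   # (y_{t+tau}, y_t, x_t)
    n = len(triples)

    def entropy(counts):
        p = np.array(list(counts.values()), dtype=float) / n
        return -np.sum(p * np.log(p))

    H_yy  = entropy(Counter((a, b) for a, b, _ in triples))   # H(Y_{t+tau}, Y_t)
    H_y   = entropy(Counter(b for _, b, _ in triples))        # H(Y_t)
    H_yyx = entropy(Counter(triples))                         # H(Y_{t+tau}, Y_t, X_t)
    H_yx  = entropy(Counter((b, c) for _, b, c in triples))   # H(Y_t, X_t)
    return (H_yy - H_y) - (H_yyx - H_yx)
```

With the two modules' mean-phase series, the selection criterion of Equation (9) would then be the product transfer_entropy(theta1, theta2) * transfer_entropy(theta2, theta1).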
During the development of the system, we fixed the overall coupling probability of units, the number of units, and the additive Gaussian noise, whereas the other coupling probabilities and the probabilities of the phase shift $\psi$ in the coupling terms were subject to change. The set of these changeable parameters was viewed as the gene in the genetic algorithm. Subsequently, we observed the differentiation of the physical properties of the modules, as follows. In one module, say module 1, most couplings were in phase, although other coupling types remained; in contrast, in the other module, say module 2, all surviving couplings were in phase. All the couplings from module 1 to module 2 were in phase, whereas the couplings in the opposite direction were antiphase. The number of couplings from module 1 to module 2 was much larger than that in the opposite direction. The final form of differentiation is evolutionarily stable in the sense that the system would evolve in a similar way and reach the same selected states as long as the presently fixed values of parameters, or the varieties of fixed parameters, are not changed. For example, the finally selected dynamics could change if some of the fixed parameters were changed after the evolutionary dynamics had stabilized. Furthermore, it is expected that the number of elementary units in each module and the number of modules itself will change in the presence of input data. This problem will be the subject of future studies.
These computational results imply the appearance of a hierarchical structure of modules: module 1 governs module 2, i.e., module 1 corresponds to upper layers and module 2 to lower layers (see Figure 3). This implication comes from another observation in synergetics: slaving modes behave cooperatively to form a few order parameters, whereas slaved modes do not behave in such a way and show a greater variety of interactions [5,6]. Thus, the developed system may express a conscious mind for module 1 and a more unconscious mind (which might play a role in the direct interaction with the environment) for module 2. In other words, modules 1 and 2 may behave like higher and lower levels of functional units of the brain-body complex, respectively.
We also investigated the overall dynamics of the coupled module system. To observe the macroscopic dynamics, we defined two order parameters in the following way:
$$R^{(i)}(t)\, \exp\big(\sqrt{-1}\, \Theta^{(i)}(t)\big) = \frac{1}{N} \sum_{k=1}^{N} \exp\big(\sqrt{-1}\, \theta_t^{(i,k)}\big)$$ (11)
$$\Phi(t) = \Theta^{(2)}(t) - \Theta^{(1)}(t) \pmod{2\pi}$$ (12)
Here, the phase coherence $R^{(i)}(t)$ and the mean phase $\Theta^{(i)}(t)$ can serve as order parameters, although we used the relative mean phase defined by Equation (12) instead of the mean phase of each module. The dynamics observed via these order parameters showed weakly “chaotic” behaviors, but included extremely slow oscillations with periods of the order of a few seconds, compared with the time scale of around 200 ms of the fundamental coherent oscillations. Similar slow oscillations with a period of a few seconds have been observed in the hippocampal CA1 of rats in several sleeping and running states [37]. Another similar and interesting behavior has been observed in the dynamics of the default mode network, although there the time scale of modulation spans around 20 min [38]. A typical functional differentiation via structural differentiation is observed in the hippocampus [39]: in reptiles, the hippocampus consists of unstructured, probabilistically uniform couplings of small and large neurons, whereas the hippocampus of mammals consists mainly of the differentiated CA1 and CA3 areas. In contrast to uniform couplings, the CA3 area possesses recurrent connections, whereas the CA1 area receives synaptic connections from CA3 and sends its axons to other areas in the limbic system and to the neocortex, rather than back to CA3.
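Numerically, the order parameters of Equations (11) and (12) amount to a complex mean over each module's phases, as in the following sketch (module membership is illustrative):

```python
import numpy as np

def module_order_parameter(theta_module):
    """Return (R, Theta) of Equation (11) for one module's phase array."""
    z = np.mean(np.exp(1j * theta_module))
    return np.abs(z), np.angle(z)

def relative_mean_phase(theta_mod1, theta_mod2):
    """Phi(t) of Equation (12): relative mean phase between the two modules."""
    _, Theta1 = module_order_parameter(theta_mod1)
    _, Theta2 = module_order_parameter(theta_mod2)
    return np.mod(Theta2 - Theta1, 2 * np.pi)
```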

4. Redefined Neural Behaviors via Ephaptic Couplings: A Tractable System Governed by Self-Organization with Constraints

The two mathematical models described in Section 2 and Section 3 provide a possible neural mechanism by which components, or subsystems, emerge via a constraint operating on a whole system. For the systems described above, only numerical studies have been conducted. The following mathematical model of ephaptic couplings between neurons represents an example of a similar system with a more definite description of the structure of the system in terms of the Fredholm condition. Moreover, the study of neural systems with ephaptic couplings reveals the important role of neural fields in the formation of functional components that are not necessarily identified with the elementary units of self-organizing systems [40,41,42]. For example, a neuron can be an elementary unit of a neural system but may not become a functional unit, as functional units may be formed at a larger scale, e.g., in cell assemblies [42].
Markin provided a model of the ephaptic coupling between two identical neurons based on the equivalent electric circuit that represents such a physical coupling, as follows [43]:
$$\frac{r_2 + r_3}{\gamma} \frac{\partial^2 V_1}{\partial x^2} - c_1 \frac{\partial V_1}{\partial t} = j_{\mathrm{ion}1} + \frac{r_3}{\gamma} \frac{\partial^2 V_2}{\partial x^2}$$ (13)
$$\frac{r_1 + r_3}{\gamma} \frac{\partial^2 V_2}{\partial x^2} - c_2 \frac{\partial V_2}{\partial t} = j_{\mathrm{ion}2} + \frac{r_3}{\gamma} \frac{\partial^2 V_1}{\partial x^2}$$ (14)
where $V_1$ and $V_2$ denote the membrane potentials of the two neurons; $r_1$ and $r_2$ are the resistivities of these neurons; $r_3$ is the ionic resistivity of the external medium; $\gamma = r_1 r_2 + r_2 r_3 + r_3 r_1$; $c_1$ and $c_2$ are the capacitances; $j_{\mathrm{ion}1}$ and $j_{\mathrm{ion}2}$ are the ionic currents; and $t$ and $x$ denote the time and space variables, respectively. Because the two neurons are assumed to be identical, we set $r_1 = r_2$ and $c_1 = c_2$.
In the following, we provide the essential part of the formulation, following Scott [44]. The coupling constant can be expressed as $\alpha \equiv \frac{r_3}{r_1 + r_3} = \frac{r_3}{r_2 + r_3}$. Simple calculations of the coefficients in Equations (13) and (14) with $r_1 = r_2$ lead to $\frac{r_1 + r_3}{\gamma} = \frac{r_1 + r_3}{r_1(r_1 + r_3) + r_1 r_3} = \frac{1}{r_1}\left(\frac{1}{1+\alpha}\right)$ and $\frac{r_3}{\gamma} = \frac{1}{r_1}\left(\frac{\alpha}{1+\alpha}\right)$. Rewriting Equations (13) and (14) then yields the following equations:
$$\frac{1}{1+\alpha} \frac{\partial^2 V_1}{\partial x^2} - \frac{\alpha}{1+\alpha} \frac{\partial^2 V_2}{\partial x^2} - r_1 c_1 \frac{\partial V_1}{\partial t} = r_1\, j_{\mathrm{ion}1}$$ (15)
$$\frac{1}{1+\alpha} \frac{\partial^2 V_2}{\partial x^2} - \frac{\alpha}{1+\alpha} \frac{\partial^2 V_1}{\partial x^2} - r_1 c_1 \frac{\partial V_2}{\partial t} = r_1\, j_{\mathrm{ion}2}$$ (16)
Without loss of generality, changing the time scale as $t \to r_1 c_1 t$ brings the coefficients of the time derivatives in Equations (15) and (16) to one. We write $f(V_k) = r_k\, j_{\mathrm{ion}k}$ ($k = 1, 2$), as $r_k\, j_{\mathrm{ion}k}$ has the dimension of voltage indicating the membrane potential of each neuron. By expanding the abovementioned coefficients in powers of $\alpha$ under the assumption that $\alpha$ is small, we obtain the following equations up to first order in $\alpha$.
$$(1-\alpha) \frac{\partial^2 V_1}{\partial x^2} - \alpha \frac{\partial^2 V_2}{\partial x^2} - \frac{\partial V_1}{\partial t} \approx f(V_1)$$ (17)
$$(1-\alpha) \frac{\partial^2 V_2}{\partial x^2} - \alpha \frac{\partial^2 V_1}{\partial x^2} - \frac{\partial V_2}{\partial t} \approx f(V_2)$$ (18)
Let us assume the existence of two traveling waves that are moving synchronously at two leading edges, expressed as:
$$V_k(x, t) = V_k(z) = V_k(x - vt), \qquad k = 1, 2$$ (19)
Here, we performed the transformation of variables $z = x - vt$. Then, we obtained the following equations by replacing the partial derivatives with respect to $x$ and $t$ with derivatives with respect to the single variable $z$:
$$(1-\alpha) \frac{d^2 V_1}{dz^2} - \alpha \frac{d^2 V_2}{dz^2} + v \frac{dV_1}{dz} = f(V_1)$$ (20)
$$(1-\alpha) \frac{d^2 V_2}{dz^2} - \alpha \frac{d^2 V_1}{dz^2} + v \frac{dV_2}{dz} = f(V_2)$$ (21)
For sufficiently small $\alpha$, we expanded the variables in powers of $\alpha$ as $V_k = V_{k0} + \alpha V_{k1} + \alpha^2 V_{k2} + \cdots$ ($k = 1, 2$) and $v = v_0 + \alpha v_1 + \alpha^2 v_2 + \cdots$. We substituted these power series expansions for $V_k$ ($k = 1, 2$) and $v$ into Equations (20) and (21), and equated terms of the same order in $\alpha$.
For the zeroth-order equation, we obtained:
$$\frac{d^2 V_{k0}}{dz^2} + v_0 \frac{dV_{k0}}{dz} = f(V_{k0}), \qquad k = 1, 2$$ (22)
which can be solved if the functional form of f is explicitly given.
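As an illustration only (a hypothetical choice of nonlinearity, not necessarily the one used in [43,44]), taking the cubic $f(V) = V(V - a)(V - 1)$ with $0 < a < 1$ in Equation (22) yields the classical front solution
$$V_{k0}(z) = \frac{1}{1 + \exp\!\big(z/\sqrt{2}\big)}, \qquad v_0 = \frac{1 - 2a}{\sqrt{2}},$$
which can be checked by direct substitution; any other explicit form of $f$ would be handled in the same way.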
For the first-order equation, we obtained:
$$\frac{d^2 V_{k1}}{dz^2} + v_0 \frac{dV_{k1}}{dz} - f'(V_{k0})\, V_{k1} = \frac{d^2 V_{10}}{dz^2} + \frac{d^2 V_{20}}{dz^2} - v_1 \frac{dV_{k0}}{dz}, \qquad k = 1, 2$$ (23)
These equations can be written in the following way, using the differential operators $L_1$ and $L_2$:
$$L_1 V_{11} = F_1(V_{10}, V_{20}, v_1)$$ (24)
$$L_2 V_{21} = F_2(V_{10}, V_{20}, v_1)$$ (25)
We introduced the adjoint operator $L^{+}$ of the differential operator $L$, defined by $(Lu, w) = (u, L^{+}w)$, where $(u, w)$ denotes the inner product defined by an integral over $z$. Performing partial integration of $(Lu, w)$ under the condition that $u(z), w(z) \to 0$ as $z \to \pm\infty$, the adjoint operator is expressed explicitly as:
$$L_k^{+} = \frac{d^2}{dz^2} - v_0 \frac{d}{dz} - f'(V_{k0})$$ (26)
In this model, as an external constraint, we demanded the existence of traveling waves. Therefore, the external constraint in Equation (4) takes the form of the following Fredholm conditions.
The solvability condition, i.e., the Fredholm condition, of the equations of motion is provided by:
$$\int_{-\infty}^{\infty} g_k\, F_k\, dz = 0$$ (27)
for $g_k(z)$ such that $L_k^{+} g_k(z) = 0$ with $g_k(z) \to 0$ as $z \to \pm\infty$.
In other words, the interaction terms given by the right-hand sides of the equations of motion, Equations (24) and (25), must be orthogonal to the null space of the adjoint operator, thereby allowing traveling waves. In the context of the selective development of dynamical systems mentioned above, the interactions cannot be free; rather, they must change to satisfy the constraint of the solvability condition. Thus, each neuronal equation must change so as to subserve this constraint on the neural interactions. The constraint in this case may stem from the demand, or intention, of the outside that traveling-wave solutions should exist in this interacting system. This example shows how the dynamics of neural subsystems change according to the change in interactions caused by constraints acting on the whole neural system. The mathematical formulation of the generation of components under constraints that act on a whole system may also possess a structure similar to that of the present formulation.
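A sketch of how the condition works in practice, under the decay assumptions stated above: differentiating Equation (22) with respect to $z$ shows that $dV_{k0}/dz$ lies in the null space of $L_k$, and a direct calculation then shows that
$$g_k(z) = e^{v_0 z}\, \frac{dV_{k0}}{dz}$$
satisfies $L_k^{+} g_k = 0$. Inserting this $g_k$ into Equation (27), with $F_k$ taken from Equation (23), fixes the first-order correction to the wave speed,
$$v_1 = \frac{\displaystyle\int_{-\infty}^{\infty} e^{v_0 z}\, \frac{dV_{k0}}{dz} \left(\frac{d^2 V_{10}}{dz^2} + \frac{d^2 V_{20}}{dz^2}\right) dz}{\displaystyle\int_{-\infty}^{\infty} e^{v_0 z} \left(\frac{dV_{k0}}{dz}\right)^{2} dz},$$
provided the integrals converge. This is the concrete sense in which the interaction terms must be orthogonal to the null space of the adjoint operator.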

5. Summary and Discussion

We described the dynamics of functional differentiation by investigating three mathematical models of self-organization with external constraints. We obtained the distinct characteristics of self-organizing behaviors that appear in the genesis of neuronal components, such as neurons and cortical modules. In Section 2, we showed numerically the dynamic behaviors of the emergence of neuronal components in the evolutionary development of unidirectionally coupled dynamical systems. In Section 3, we showed numerically the emergence of functional modules caused by symmetry breaking in randomly coupled neural networks. The numerical results obtained from these models may point to a neural mechanism of functional differentiation. The overall dynamics was identified as exhibiting weakly chaotic behaviors, including chaotic itinerancy (see, for example, [45] for neural chaotic itinerancy), as well as chaotic transitions between synchronization and desynchronization [46]. Such specific transitions, which have been observed in neural systems and were typically described by Gray et al. [47], may represent one of the fundamental dynamic processes underlying cognitive behaviors in terms of the correlation principle [48]. This principle was proposed by Singer [48] based on his finding of coherent neural oscillations. Clarifying the relationship between cortical coherent oscillations and constrained dynamics, as treated in this paper, will be very valuable.
In these evolutionary processes, we used a variational principle, namely the selection of dynamical systems that allow the maximum transmission of information: conditional mutual information or transfer entropy. Similar dynamical developments have been addressed elsewhere: neural Darwinism by Edelman [49] and the optimization of free energy by Friston [50]. Finally, we discussed a solvability condition for a coupled neuron system with ephaptic coupling, which suggests the existence of a variational-principle formalism that can be solved to satisfy constraints acting on a whole system.

Acknowledgments

This work was partially supported by JSPS KAKENHI Grants 26280093 and 26540123. This work was also supported by a Grant-in-Aid for Scientific Research on Innovative Areas “Understanding of Human Nature with a Basis of Nonlinear Oscillations” (15H05878) from the Ministry of Education, Culture, Sports, Science, and Technology, Japan.

Author Contributions

Ichiro Tsuda prepared the general research plan, and wrote the article. Yutaka Yamaguti conducted the computer simulation described in Section 3. Hiroshi Watanabe conducted the computer simulation described in Section 2. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pias, C. (Ed.) Cybernetics|Kybernetik. The Macy Conferences 1946–1953. Bd. 1 Transactions/Protokolle; Diaphanes: Zurich, Switzerland; Berlin, Germany, 2003; (In English/German).
  2. Ashby, W.R. Dynamics of the cerebral cortex: Automatic Development of Equilibrium in Self-Organizing Systems. Psychometrika 1947, 12, 135–140. [Google Scholar] [CrossRef]
  3. Von Foerster, H. On self-organizing systems and their environments. In Self-Organizing Systems; Yovits, M.C., Cameron, S., Eds.; Pergamon Press: London, UK, 1960; pp. 31–50. [Google Scholar]
  4. Nicolis, G.; Prigogine, I. Self-organization in Nonequilibrium Systems; Wiley: New York, NY, USA, 1977. [Google Scholar]
  5. Haken, H. Synergetics; Springer-Verlag: Berlin, Germany, 1977. [Google Scholar]
  6. Haken, H. Advanced Synergetics; Springer-Verlag: Berlin, Germany, 1983. [Google Scholar]
  7. Kelso, J.A.S.; Dumas, G.; Tognoli, E. Outline of a general theory of behavior and brain coordination. Neural. Netw. 2013, 37, 120–131. [Google Scholar] [CrossRef] [PubMed]
  8. Tognoli, E.; Kelso, J.A.S. The metastable brain. Neuron 2014, 81, 35–48. [Google Scholar] [CrossRef] [PubMed]
  9. Kawasaki, M.; Yamada, Y.; Ushiku, Y.; Miyauchi, E.; Yamaguchi, Y. Inter-brain synchronization during coordination of speech rhythm in human-to-human social interaction. Sci. Rep. 2013, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Tsuda, I.; Yamaguchi, Y.; Hashimoto, T.; Okuda, J.; Kawasaki, M.; Nagasaka, Y. Study of the neural dynamics for understanding communication in terms of complex hetero systems. Neurosci. Res. 2015, 90, 51–55. [Google Scholar] [CrossRef] [PubMed]
  11. Tsuda, I. A hermeneutic process of the brain. Prog. Theor. Phys. 1984, 79, 241–259. [Google Scholar] [CrossRef]
  12. Tsuda, I. Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav. Brain Sci. 2001, 24, 793–810. [Google Scholar] [CrossRef] [PubMed]
  13. Rosen, R. Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life; Columbia University Press: New York, NY, USA, 1991. [Google Scholar]
  14. Van Essen, D.C. Visual Cortex. In Cerebral Cortex; Peters, A., Jones, E.G., Eds.; Prenum Press: New York, NY, USA, 1985; Volume 3, pp. 259–330. [Google Scholar]
  15. Sur, M.; Pallas, S.L.; Roe, A.W. Cross-modal plasticity in cortical development: Differentiation and Specification of Sensory Neocortex. Trends Neurosci. 1990, 13, 227–233. [Google Scholar] [CrossRef]
  16. Treves, A. Phase transitions that made us mammals. Lect. Notes Comp. Sci. 2004, 3146, 55–70. [Google Scholar]
  17. Szentagothai, J.; Erdi, P. Self-organization in the nervous system. J. Social Biol. Struct. 1989, 12, 367–384. [Google Scholar] [CrossRef]
  18. Pattee, H.H. The complementarity principle in biological and social structures. J. Social Biol. Struct. 1978, 1, 191–200. [Google Scholar] [CrossRef]
  19. Tschacher, W.; Haken, H. Intentionality in non-equilibrium systems? The functional aspects of self-organized pattern formation. New Ideas Psychol. 2007, 25, 1–15. [Google Scholar] [CrossRef]
  20. Wilson, D.; Moehlis, J. Extending phase reduction to excitable media: Theory and Applications. SIAM Rev. 2015, 57, 201–222. [Google Scholar] [CrossRef]
  21. Yoshimura, Y.; Tomita, N.; Makino, Y.; Yano, M. Autonomous control of reaching movement by “mobility” measure. J. Robot. Mechatr. 2007, 19, 448–458. [Google Scholar]
  22. Haken, H.; Portugali, J. Information Adaptation: The Interplay between Shannon Information and Semantic Information in Cognition; Springer-Verlag: Cham, Switzerland; Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2015. [Google Scholar]
  23. Von der Malsburg, C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 1973, 14, 85–100. [Google Scholar] [CrossRef] [PubMed]
  24. Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. 1982, 43, 59–69. [Google Scholar] [CrossRef]
  25. Amari, S. Topographic organization of nerve fields. Bull. Math. Biol. 1980, 42, 339–364. [Google Scholar] [CrossRef] [PubMed]
  26. Kaneko, K.; Yomo, T. Isologous Diversification for Robust Development of Cell Society. J. Theor. Biol. 1999, 199, 243–256. [Google Scholar] [CrossRef] [PubMed]
  27. Furusawa, C.; Kaneko, K. Theory of Robustness of Irreversible Differentiation in a Stem Cell System: Chaos Hypothesis. J. Theor. Biol. 2001, 209, 395–416. [Google Scholar] [CrossRef] [PubMed]
  28. Watanabe, H.; Ito, T.; Tsuda, I. Making a Neuron Model: A Mathematical Approach. In Proceedings of the 11th Meeting of “Mechanisms of Brain and Mind”, Niseko, Japan, 11–13 January 2011.
  29. Tsuda, I.; Yamaguti, Y.; Watanabe, H. Modeling the Genesis of Components in the Networks of Interacting Units. In Proceedings of the ICCN 2013, Sigtuna, Sweden, 23–27 June 2013.
  30. Matsumoto, K.; Tsuda, I. Calculation of information flow rate from mutual information. J. Phys. A Math. Gen. 1988, 21, 1405–1414. [Google Scholar] [CrossRef]
  31. Brodmann, K. Vergleichende Lokalisationslehre der Grosshirnrinde; Barth: Leipzig, Germany, 1909. (In German) [Google Scholar]
  32. Szentagothai, J. The neuron network of the cerebral cortex: A Functional Interpretation, the Ferrier Lecture in 1977. Proc. R. Soc. Lond. 1978, 201, 219–248. [Google Scholar] [CrossRef] [PubMed]
  33. Eccles, J.C. The modular operation of the cerebral neocortex considered as the material basis of mental events. Neuroscience 1981, 6, 1839–1856. [Google Scholar] [CrossRef]
  34. Felleman, D.J.; Van Essen, D.C. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1991, 1, 1–47. [Google Scholar] [CrossRef] [PubMed]
  35. Yamaguti, Y.; Tsuda, I. Mathematical Modeling for Evolution of Heterogeneous Modules in the Brain. Neural Netw. 2015, 62, 3–10. [Google Scholar] [CrossRef] [PubMed]
  36. Kuramoto, Y. Chemical Oscillations, Waves and Turbulence. Springer Series in Synergetics; Springer-Verlag: New York, NY, USA, 1984. [Google Scholar]
  37. Molter, C.; O’Neill, J.; Yamaguchi, Y.; Hirase, H.; Leinekugel, X. Rhythmic modulation of theta oscillations supports encoding of spatial and behavioral information in the rat hippocampus. Neuron 2012, 75, 889–903. [Google Scholar] [CrossRef] [PubMed]
  38. Fox, M.D.; Snyder, A.Z.; Vincent, J.L.; Corbetta, M.; van Essen, D.C.; Raichle, M.E. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc. Natl. Acad. Sci. USA 2005, 102, 9673–9678. [Google Scholar] [CrossRef] [PubMed]
  39. Treves, A. Computational constraints between retrieving the past and predicting the future, and the CA3-CA1 differentiation. Hippocampus 2004, 14, 539–556. [Google Scholar] [CrossRef] [PubMed]
  40. Pribram, K.H. Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology; Prentice-Hall, Inc.: Englewood Cliffs, NJ, USA, 1971. [Google Scholar]
  41. Freeman, W.J.; Kozma, R. Scale-free cortical planar networks. In Handbook of Large-Scale Random Networks; Springer-Verlag: Heidelberg, Germany, 2015; pp. 1–48. [Google Scholar]
  42. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 2002. [Google Scholar]
  43. Markin, V.S. Electrical interactions of parallel nonmyelinated fibers I. Change in excitability of the adjacent fiber. Biophysics 1970, 15, 122–133. [Google Scholar]
  44. Scott, A. Neuroscience. A Mathematical Primer; Springer-Verlag: New York, NY, USA, 2002. [Google Scholar]
  45. Tsuda, I. Chaotic itinerancy and its roles in cognitive neurodynamics. Curr. Opin. Neurobiol. 2015, 31, 67–71. [Google Scholar] [CrossRef] [PubMed]
  46. Tsuda, I.; Fujii, H.; Tadokoro, S.; Yasuoka, T.; Yamaguti, Y. Chaotic itinerancy as a mechanism of irregular changes between synchronization and desynchronization in a neural network. J. Integr. Neurosci. 2004, 3, 159–182. [Google Scholar] [CrossRef] [PubMed]
  47. Gray, C.M.; Engel, A.K.; Koenig, P.; Singer, W. Synchronization of oscillatory neuronal responses in cat striate cortex: Temporal Properties. Vis. Neurosci. 1992, 8, 337–347. [Google Scholar] [CrossRef] [PubMed]
  48. Singer, W.; Gray, C. Visual feature integration and the temporal correlation hypothesis. Ann. Rev. Neurosci. 1995, 18, 555–586. [Google Scholar] [CrossRef] [PubMed]
  49. Edelman, G. Neural Darwinism: The Theory of Neuronal Group Selection; Basic Books: New York, NY, USA, 1987. [Google Scholar]
  50. Friston, K. The free-energy principle: A Unified Brain Theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) A typical feature of the emergence of order parameters at the macroscopic level via interactions between elementary units at the microscopic level; (b) Another feature of self-organization, which promotes the emergence of components or subsystems at the microscopic or mesoscopic level via constraints at the macroscopic level. In this paper, we treat this type of feature of self-organization.
Figure 2. Dynamical systems that were finally selected. (a) The most stable dynamical system; (b) An excitable dynamical system; (c) An oscillatory dynamical system. See text for a detailed description.
Figure 3. Randomly coupled oscillators are developed to be differentiated into two distinct modules after the full development of couplings under a constraint of maximum transmission of information. In-phase couplings are denoted by blue arrowed-solid lines, anti-phase couplings by red arrowed dotted lines, and other types of couplings by arrowed dotted lines in other colors.
