
Retinal Processing: Insights from Mathematical Modelling

INRIA Biovision Team and Neuromod Institute, Université Côte d'Azur, 2004 Route des Lucioles, BP 93, 06902 Valbonne, France
J. Imaging 2022, 8(1), 14; https://doi.org/10.3390/jimaging8010014
Submission received: 23 November 2021 / Revised: 11 January 2022 / Accepted: 12 January 2022 / Published: 17 January 2022

Abstract: The retina is the entrance of the visual system. Although based on common biophysical principles, the dynamics of retinal neurons are quite different from their cortical counterparts, raising interesting problems for modellers. In this paper, I address some mathematically stated questions in this spirit, discussing, in particular: (1) How could lateral amacrine cell connectivity shape the spatio-temporal spike response of retinal ganglion cells? (2) How could spatio-temporal stimulus correlations and retinal network dynamics shape the spike train correlations at the output of the retina? These questions are addressed by first introducing a mathematically tractable model of the layered retina, integrating amacrine cells' lateral connectivity and piecewise linear rectification, and allowing the computation of the retinal ganglion cells' receptive field, together with the voltage and spike correlations of retinal ganglion cells resulting from the amacrine cell network. Then, I review some recent results showing how the concepts of spatio-temporal Gibbs distributions and linear response theory can be used to characterize the collective spike response, to a spatio-temporal stimulus, of a set of retinal ganglion cells coupled via effective interactions corresponding to the amacrine cell network. On these bases, I briefly discuss several potential consequences of these results at the cortical level.

1. Introduction

Let us start with a very simple experiment. Look around you... That’s it, the experiment is over. A very ordinary experience, isn’t it? Is it really though? Let us first point out that looking around you to see, that is, having the sense of sight, is indeed ordinary—except for those who have partially or totally lost their ability to see. We will come back to this point at the end of the paper. Now, excluding visual impairments, vision is everything but ordinary.
Think of it. A flux of photons, with frequencies in the visible spectrum range, emitted by the external world around us enters into our eyes, then “something” happens, and we see. Thanks to constant progress in experimental and theoretical neuroscience, we understand better and better this “something”, the mechanisms of vision, although our view of it is far from being complete. In particular, in these times of artificial intelligence, bio-inspired computing, computer vision, it might be helpful to understand how our brain is able to handle the complex visual information coming from the external world so rapidly and efficiently with an energy consumption of the order of a few Watts.
Certainly, the retina plays a central role in this process. It has been known for a long time that the retina is definitely not a camera. The retina is smart [1] and it has to be. Think especially of the difference of scale between the retina and the visual cortex, in terms of size but also of numbers of neurons and synapses. As everything that goes to the visual cortex comes from the retina, this little membrane, at the back of the eye, half a millimetre thick, with an area of the order of a cm² (for humans), has, to some extent, to filter the visual information, leaving out "irrelevant details" and capturing crucial events, and then signal them appropriately to the brain via spike trains. As a matter of fact, the question(s) of "efficiently" encoding information by spikes has been the subject of many fascinating papers [2,3,4], especially the seminal paper by Barlow [5], with concepts such as reducing redundancy, information compression and efficient coding. These concepts are regularly updated with recent experimental and theoretical investigations [6,7,8,9,10,11,12,13,14,15]. We come back to this point at the end of the paper too.
The retina has, roughly, the following structure. For more details, see, e.g., [16] or https://webvision.med.utah.edu/book/part-i-foundations/simple-anatomy-of-the-retina/ (accessed on 22 November 2021). It is organized into five neuronal types: photoreceptors, rods and cones (P), horizontal cells (H cells), bipolar cells (B cells), amacrine cells (A cells), and retinal ganglion cells (RG cells), to which are added glial cells (Müller cells). These neuron types are connected by chemical and electric synapses, in specific functional circuits or "pathways" (like the rod–cone pathway [17,18]), which are key to the retina's capacity to convert the light coming from a visual scene into spike patterns sent to the visual cortex, through the Lateral Geniculate Nucleus (LGN), via the optic nerve made of RG cell axons. In particular, the retina contains very specific synapses, like the ribbon synapse, enabling neurons to transmit light signals from photoreceptors to B cells over a dynamic range of several orders of magnitude in intensity [19]. Roughly, two main connectivity structures can be distinguished. The first is feed-forward: the P–B–G path, which leads from phototransduction to the spike trains emitted by the RG cells towards the cortex. The second is lateral: through the H cells, at the origin of the Center-Surround structure of the receptive fields, and through the A cells, whose role is still poorly understood and which are one of the main objects of study of this paper.
The structure of the retina and its behaviour are thus well studied on the experimental side. There are comparatively fewer modelling studies, although important work has been done on retinal coding [20,21,22,23,24], biophysically detailed models [25,26,27] and generalized linear models applied to retinal coding [28,29,30]. Several powerful software packages have been designed to model the retina at different scales, such as COREM [31], Convis [32], and Isetbio https://github.com/isetbio/isetbio/wiki (accessed on 22 November 2021). The Virtual Retina simulator, developed by A. Wohrer and P. Kornprobst [33] at INRIA, was one of the first of these simulators and has given rise to subsequent simulators in our group, the platform PRANAS [34], https://team.inria.fr/biovision/pranas-software/ (accessed on 22 November 2021) and, more recently, Macular https://team.inria.fr/biovision/macular-software/ (accessed on 22 November 2021). There are far fewer mathematical results on how the retinal structure, especially lateral A cell connectivity, shapes the spike response to spatio-temporal stimuli [35,36,37].
One of the goals of this paper is to elicit reflections in this direction, grounded on mathematical developments fed by the recent progress in the knowledge of retinal physiology and structure. This is a humble and partial point of view, resulting from my collaboration with neurobiologists who are experts in the retina. The paper contains new results: essentially, the mathematically tractable model of the layered retina integrating amacrine cells' lateral connectivity and the mathematical framework to handle piecewise linear rectification presented in Section 2, the study of rectification effects on retinal ganglion cells' receptive fields (Section 3.1), the study of voltage and spike correlations of retinal ganglion cells (Section 3.2), and the discussion about the mixed effect of network and stimulus on spike correlations in Section 3.3. It also contains already published material, essentially the framework and results dealing with Gibbs distributions and linear response (Section 2.2.1 and Section 3.3).
The goal is to draw a common thread about the potential role of amacrine cells, from the retinal response to spatio-temporal stimuli to spike coding. More precisely, I am addressing the following problems on mathematical grounds. In the main text, I focus on the neuroscience modelling perspective, whereas, in the Conclusions section, I discuss potential consequences of these results outside the field of neuroscience.
  • Problem 1. How does the structure of the retina, in particular, amacrine lateral connectivity, condition the retinal response to dynamic stimuli?
The problem can be addressed at two levels. Level 1. Single cell response to stimuli. The individual response of ganglion cells is usually expressed in terms of their receptive field. This notion is, on the one hand, phenomenological: it is observed that each ganglion cell responds preferentially to stimuli, localized in space, with a characteristic spatio-temporal structure. For example, an ON-Center cell preferentially responds to an increase in luminance in a circular area corresponding to the central part of the receptive field. This notion is also expressed mathematically by a kernel $K_G$, i.e., a function of space and time, so that the response of an RG cell to a spatio-temporal stimulus $S(x, y, t)$ takes the form:
$$\left[K_G \overset{x,y,t}{*} S\right](t) = \int_{x=-\infty}^{+\infty} \int_{y=-\infty}^{+\infty} \int_{s=-\infty}^{t} K_G\left(x - x_C,\, y - y_C,\, t - s\right)\, S(x, y, s)\, dx\, dy\, ds, \tag{1}$$
where $\overset{x,y,t}{*}$ means space ($x, y$)–time ($t$) convolution, and $x_C, y_C$ are the coordinates of the RF center. The integrals are well defined since the kernel decreases fast enough at infinity, in space and time, to guarantee convergence. The upper bound in time, $t$, expresses causality, whereas the lower bound, $-\infty$, implicitly assumes that the stimulus has been applied in a distant past compared to $t$, quite longer than the characteristic times involved in the RG cell response.
Equation (1) corresponds to a linear response. It is therefore only valid for stimuli of low amplitude in voltage. More generally, the voltage response to the stimulus is a functional of the stimulus that one can, under well-posed mathematical conditions, write as a Volterra expansion [21], (1) being the lowest-order (linear) term. Unfortunately, higher-order terms are essentially inaccessible experimentally, and one usually constrains instead the nonlinearity of the response under other modalities. In particular, taking into account that the response of a ganglion cell to a stimulus is, ultimately, a sequence of spikes, one writes the probability density of emitting a spike between $t$ and $t + dt$ in the form $f\left(\left[K_G \overset{x,y,t}{*} S\right](t) + b\right)$, where $f$ is a nonlinear, positive, increasing function (typically, a sigmoid), and $b$ is a threshold constraining the level of activity of the RG cell in the absence of stimulation. This procedure defines an inhomogeneous Poisson process called the linear-nonlinear Poisson (LNP) model [38,39,40] (a numerical sketch of this pipeline is given after the questions below). Experimentally, the kernel $K_G$ is determined by the Spike-Triggered Average or Spike-Triggered Correlation technique, studying the response to a white noise [38]. The nonlinearity is then determined, typically, by the Levenberg–Marquardt method [41]. This modelling raises, however, the following questions:
(i)
How is the kernel $K_G$ of the RG cells constrained by the structure/dynamics of the upper layers of retinal cells?
(ii)
The form (1) implicitly assumes that $K_G$ does not depend on the stimulus. Can one write mathematical conditions that guarantee such an independence?
(iii)
To what extent is the notion of Ganglion cell Receptive Field compatible with the nonlinear effects reported in retinal neurons and synapses, such as voltage rectification or gain control?
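The LNP pipeline announced above can be made concrete in a few lines of code. The following Python sketch is illustrative only: the difference-of-Gaussians spatial kernel, the alpha-shaped temporal profile, the sigmoid and all parameter values are placeholder assumptions, not the fitted quantities of [38,39,40,41].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative separable kernel K_G: difference-of-Gaussians in space,
# alpha-shaped profile in time (all shapes and parameters are placeholders).
def dog(x, y, sc=1.0, ss=2.0):
    r2 = x**2 + y**2
    return np.exp(-r2 / (2 * sc**2)) / sc**2 - 0.5 * np.exp(-r2 / (2 * ss**2)) / ss**2

nx, nt, dt = 21, 60, 1e-3                      # spatial grid, kernel bins, bin (s)
xs = np.linspace(-5.0, 5.0, nx)
X, Y = np.meshgrid(xs, xs)
tau = np.arange(nt) * dt
K = dog(X, Y)[None, :, :] * ((tau / 0.02) * np.exp(1 - tau / 0.02))[:, None, None]

S = rng.standard_normal((500, nx, nx))         # white-noise stimulus movie S(x, y, t)

# Linear stage: causal space-time convolution, a discrete version of Equation (1).
drive = dt * np.array(
    [np.sum(K[: min(t + 1, nt)] * S[t::-1][:nt]) for t in range(len(S))]
)

# Nonlinear and Poisson stages: sigmoid firing rate, then Poisson spike counts.
b, rmax = -1.0, 100.0                          # threshold, peak rate (illustrative)
rate = rmax / (1.0 + np.exp(-(drive + b)))
spikes = rng.poisson(rate * dt)
print("mean firing rate (Hz):", spikes.mean() / dt)
```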
Level 2. Collective response to stimuli and spike statistics. RG cells do not interact directly, but amacrine connectivity induces an effective interaction between them. What is, therefore, the structure of the spatio-temporal correlations induced by the conjunction of the spatio-temporal stimulus and the response of the retinal network, in particular, the amacrine lateral connectivity? A classical paradigm in neural coding is to assume that the retina decorrelates RG cell outputs to maximize information transfer [6,7,8,9,10,11,13,14,15]. It is, in particular, believed that A cells play a central role in this decorrelation process (see [15] and references therein). What can be, at the mathematical level, the conditions on the stimulus and the dynamics that allow a network of neurons interacting with each other to produce vanishing, or at least weak, correlations? When does weak mean negligible? These questions are actually closely related to the second problem.
  • Problem 2. How do retina network and dynamics shape spike statistics in the response to stimuli?
More generally, considering the retina as a dynamical system forced by non-stationary, spatially inhomogeneous stimuli, what could be a general form for the (non-stationary) statistics of spike trains emitted by ganglion cells, taking into account that spike trains emitted by the retina are all that the LGN and cortex see? One can attempt to construct a canonical form of probability distributions of the retinal spike trains taking into account that:
(i)
Stimuli, thus statistics, are not stationary;
(ii)
The cortex (and, before it, the LGN) only receives spikes, and thus has no information about the biophysical processes which have generated those spikes and no information on the underlying dynamics of the retina (voltages, activation variables, conductances). All the information is contained in the spatio-temporal structure of the spikes;
(iii)
Spike train distributions may exhibit long time scale dependence (i.e., have a long memory).
In this paper, I address these problems with the help of two models. The first, presented in Section 2.1, grounded on biology and, e.g., the papers [42,43,44,45], mimics the Bipolar–Amacrine–Ganglion cells network and is used, in Section 3.1 and Section 3.2, to make progress in elucidating problem 1. I first show how one can obtain an explicit form for the kernel (1) featuring the A cells' lateral connectivity. This RF explicitly depends on the B cells–A cells network through the eigenvalues and eigenvectors of an operator I call the "transport operator". I discuss some consequences of this result, especially in terms of the response to a propagating stimulus. This result is valid when cells act as linear integrators. However, cells are in general rectified by nonlinearities. I propose piecewise linear rectifications (as used in several retina models) and I discuss how rectification acts on the RF of Equation (1). A striking conclusion is that, if the convolution form (1) is preserved, it is at the price of having an RF depending on the stimulus. A consequence of this analysis is that spike correlations may depend on the stimulus and are expected to be quite different when considering, e.g., objects moving along trajectories in comparison to static images.
The second model, introduced in Section 2.2 and analysed in Section 3.3, attempts to propose a canonical form of probability distributions of the retinal spike trains based on the constraints (i), (ii), (iii) above. These sections essentially present the conclusions of works published elsewhere [46,47,48,49,50,51]. As I argue, these constraints lead to a natural notion of spike probabilities, somewhat extending the statistical physics notion of Gibbs distribution to the non-stationary case. In this setting, one establishes a linear response formula for a network of interacting spiking cells that can mimic a set of RG cells coupled via effective interactions corresponding to the A cell network influence. This linear response theory not only gives the effect of a non-stationary stimulus on first order spike statistics (firing rates) but also its effect on higher order correlations. Indeed, spike correlations are modified by a spatio-temporal stimulus and can be computed thanks to the knowledge of the spontaneous correlations. The linear response formula is expressed as a convolution where the kernel can be explicitly computed for an Integrate and Fire conductance-based model [51]. Moreover, as I argue, these spike train distributions have close links with information geometry. In particular, they induce a natural metric in an abstract space of probabilities, with close potential links with the neuro-geometry introduced by Sarti, Citti, Petitot et al. [52,53,54,55]. This is discussed in the Conclusions section.
More generally, the application and discussion sections briefly propose possible extensions of this work to several domains: retinal prostheses, Section 4.1; convolutional networks, Section 4.2; implications for cortical models, Section 5.1; and neuro-geometry, Section 5.2.

2. Materials and Methods

2.1. Modelling the Retinal Network

2.1.1. Specifics of the Retina

Neurons in the retina have the same biophysics as their cortical counterparts. However, they operate under different modalities. Remarkably, with the exception of the RG cells, retinal neurons do not emit action potentials. Their activity and interactions therefore take place through graded (continuous) membrane potentials, as opposed to the sharp peak of an action potential. Furthermore, there is no long-term synaptic plasticity in the retina. Finally, the main "computational" elements in the retina are functional circuits [18] made of a few neurons and synapses, in sharp contrast with "computational" units in the visual cortex, such as cortical columns, involving thousands of neurons. A modelling consequence is that the mean-field or neural mass descriptions used for the cortex might not be relevant to study the retina.
The goal of this paper is to address mathematical questions about the dynamics and behaviour of the retina embedded in the visual system. To instantiate these questions on a firm mathematical ground, we are going to consider a model of the retinal network, based on a few fundamental facts briefly exposed in the previous section:
  • The retina is a high dimensional, non-autonomous and noisy dynamical system, layered and structured, with non-stationary and spatially inhomogeneous entries (visual scenes).
  • Most retinal neurons are not spiking, except RG cells. Thus, the retina performs analog computing.
  • Local retinal circuits efficiently process the local visual information. These local circuits are connected together, spanning the whole retina in a regular tiling. From this perspective, it is important to consider individual neurons and synapses, in contrast, e.g., to cortical modelling, where it is relevant to consider mean-field approaches averaging over populations.
Thus, the model presented below and in Figure 1 is non-stationary, with a layered, retina-like structure, where the dynamics ruling B cell, A cell, and RG cell voltages are piecewise linear. As we discuss, the model affords additional nonlinearities, like gain control. For RG cells, the spiking process is mimicked by a nonlinear firing rate, so that our model enters the class of LNP models.

2.1.2. Structure of the Retina Model

We assimilate the retina to a superimposition of three layers, each one being a flat, two-dimensional square of edge length $L$ mm, where spatial coordinates are noted $x, y$ (Figure 1). Each layer corresponds to a cell population (B cells, A cells, RG cells) where the density of cells is taken to be uniform. We note $\delta_p$ the lattice spacing in mm, and $N_p$ the total number of cells in the layer $p$. Without loss of generality, we assume that $L$, the retina's edge size, is a multiple of $\delta_p$. We note $L_p = \frac{L}{\delta_p}$ the number of cells $p$ per row or column, so that $N_p = L_p^2$. Each cell in the population $p$ thus has Cartesian coordinates $(x, y) = (i_x \delta_p, i_y \delta_p)$, $(i_x, i_y) \in \{1, \dots, L_p\}^2$. To avoid multiple indices, we associate to each pair $(i_x, i_y)$ a unique index $i = i_x + (i_y - 1) L_p$. The cell of population $p$, located at coordinates $(i_x \delta_p, i_y \delta_p)$, is then denoted by $p_i$.
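As a small illustration of this bookkeeping, here is a minimal Python sketch of the map between lattice coordinates and the flat index $i = i_x + (i_y - 1) L_p$; the layer size and spacing are arbitrary example values.

```python
# Minimal sketch of the indexing convention of Section 2.1.2: a cell of
# population p at lattice coordinates (i_x, i_y), with i_x, i_y in
# {1, ..., L_p}, gets the unique flat index i = i_x + (i_y - 1) * L_p.
def flat_index(ix: int, iy: int, Lp: int) -> int:
    assert 1 <= ix <= Lp and 1 <= iy <= Lp
    return ix + (iy - 1) * Lp

def lattice_coords(i: int, Lp: int, delta_p: float):
    """Inverse map: flat index -> (i_x, i_y) and Cartesian position in mm."""
    iy, ix = divmod(i - 1, Lp)
    ix, iy = ix + 1, iy + 1
    return (ix, iy), (ix * delta_p, iy * delta_p)

# Example: a 4 x 4 layer with 0.1 mm spacing.
print(flat_index(3, 2, 4))        # -> 7
print(lattice_coords(7, 4, 0.1))  # -> ((3, 2), (0.3, 0.2))
```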
One can roughly subdivide the real retina into two blocks (Figure 1). The first, which we name in short, for modelling purposes, OPL (Outer Plexiform Layer), includes the P and H cells, the B cells, and the related synapses. (Note that the terminology OPL and IPL actually refers to synaptic layers: "The outer plexiform layer has a wide external band composed of inner fibres of rods and cones and a narrower inner band consisting of synapses between photoreceptor cells and cells from the inner nuclear layer." "The inner plexiform layer consists of synaptic connections between the axons of bipolar cells and dendrites of ganglion cells" (ref. https://www.sciencedirect.com/topics/medicine-and-dentistry/, accessed on 22 November 2021). In our model, these names are shortcuts to distinguish the network input (OPL) from the network processing (IPL).) An "input" of this block is the flow of photons emitted by the outside world and picked up by the photoreceptors. In our model, this corresponds to a "stimulus", i.e., a function $S(x, y, t)$, where $x, y$ are (two-dimensional) space coordinates and $t$ is the time coordinate. As we do not consider colour sensitivity here, $S$ characterizes a black and white scene, with a control on the level of contrast in $[0, 1]$. The "output" of the OPL is sent to B cells in the form of a "drive" voltage, defined in Equation (2) below. In the real retina, the voltage of each B cell integrates, spatially and temporally, the local visual information of the photoreceptors connected to it, with a lateral modulation due to the H cells. Each B cell is thus sensitive to specific local characteristics of the visual scene, defining its Receptive Field (RF). Thus, B cells, like RG cells, have a receptive field. However, as they are earlier in the vertical pathway, they integrate fewer features. Note that the RFs of distinct B cells usually overlap, creating correlations between B cell voltages (see Section 3.2.1).
We label B cells (layer 1) with the index $i = 1, \dots, N_B$ and we model the RF of B cell $i$ by a convolution kernel, $K_{B_i}$, such that the voltage of B cell $i$ is stimulus-driven by the term:
$$V_i^{drive}(t) = \left[K_{B_i} \overset{x,y,t}{*} S\right](t). \tag{2}$$
The center of the RF, located at $x_i, y_i$, also corresponds to the coordinates of the B cell $i$. A typical shape for the RF of B cells is illustrated in Figure 2, although the explicit form does not play a role in the subsequent mathematical developments.
The second block, which we name in short IPL (Inner Plexiform Layer), comprises the A cells and RG cells and the afferent synapses. Its "input" is the output of the OPL, and its output is the trains of action potentials emitted by the RG cells. A cells are difficult to study experimentally because they are hardly accessible to electrophysiology measurements. There is also a large number of cell subtypes in the A cell class (around 40), of which only a small number have duly identified functions. It is, however, recognized that they play an essential role in the processing of motion [17,56,57]. Here, we address mathematically the question of the form of the RG cell receptive field, resulting from the pooling of B cells, as illustrated in Figure 1, each with a specific RF as exemplified in Figure 2, and modulated by the A cells' lateral connectivity.

2.1.3. B Cells–A Cells Interactions

We label A cells (second layer) with the index $j = 1, \dots, N_A$. We note $W_{B_iA_j}$ the synaptic weight from A cell $j$ to B cell $i$ and $W_{A_jB_i}$ the synaptic weight from B cell $i$ to A cell $j$. We set $W_{B_iA_j} \le 0$ (A cells are in general inhibitory, although some excitatory A cells exist, not considered here), whereas $W_{A_jB_i} \ge 0$. The synaptic weight matrices from B cells to A cells and from A cells to B cells are noted $W^{AB}$, $W^{BA}$. They are not square in general. Electric synapses (gap junctions) between B cells and A cells also exist (e.g., in the rod–cone pathway [17]; see also [58] and https://www.ncbi.nlm.nih.gov/books/NBK549947/ (accessed on 22 November 2021)), but we will not consider them at first, for simplicity. Note, however, as briefly discussed in Section 2.1.8, that adding gap junctions would simply result in adding linear terms to Equations (3), (6) and (9) (when considering passive gap junctions) and modify characteristic time scales, without changing the global analysis.
The voltage of B cell $i$, $V_{B_i}$, evolves according to:
$$\frac{dV_{B_i}}{dt} = -\frac{V_{B_i}}{\tau_{B_i}} + \sum_{j=1}^{N_A} W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) + F_{B_i}(t). \tag{3}$$
Here, $\tau_{B_i}$ is the characteristic time scale of B cell $i$'s response (in ms). The function:
$$\mathcal{N}_A(V) = \begin{cases} V - \theta_A, & \text{if } V > \theta_A; \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$
is a linear rectifier ensuring that the synapse $j \to i$ becomes silent when the voltage of the pre-synaptic A cell $j$, $V_{A_j}$, is lower than a threshold $\theta_A$. This corresponds to a biophysical fact: a synapse cannot change its sign. For simplicity, we consider $\theta_A$ to be the same for all A cells, although the present formalism can be extended, e.g., to several families of A cells having different thresholds. Note that linear rectifiers of type (4) rectify the cell's voltage "from below". Rectification "from above" also exists, ensuring that the cell's voltage does not increase without bounds. A typical mechanism is gain control, where an additional variable, called the activity, increasing as the voltage increases, triggers a gain function nonlinearly dropping down the voltage when it exceeds an upper threshold [42,44]. Under some mild assumptions, gain control can also be implemented as a linear function of the activity. This is discussed in Section 2.1.8 as an extension of the present model.
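For concreteness, a minimal vectorized implementation of the rectifier (4) could look as follows (NumPy, with an arbitrary threshold); it is a shifted ReLU.

```python
import numpy as np

# Minimal sketch of the linear rectifier (4): N(V) = V - theta if V > theta, else 0.
# Vectorized so it applies to a whole layer of voltages at once.
def rectify(V: np.ndarray, theta: float) -> np.ndarray:
    return np.maximum(V - theta, 0.0)

V = np.array([-1.0, 0.2, 0.7, 1.5])
print(rectify(V, theta=0.5))  # -> [0.  0.  0.2 1. ]
```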
Finally, F B i ( t ) is the OPL input term. To match classical retina models as developed e.g., in [42,44], it reads:
$$F_{B_i}(t) = \frac{V_i^{drive}}{\tau_B} + \frac{dV_i^{drive}}{dt} = \left[K_{B_i} \overset{x,y,t}{*} \left(\frac{S}{\tau_B} + \frac{dS}{dt}\right)\right](t), \tag{5}$$
(where $K_{B_i}(x, y, 0) = 0$). In short, $F_{B_i}(t)$ is chosen so that, in the absence of A cell interaction, $V_{B_i}(t) = V_i^{drive}(t)$. Note that $F_{B_i}(t)$ therefore implements a time derivative of the drive, which makes, e.g., a B cell's response to moving objects sensitive to changes in direction or speed.
A cells are connected to B cells with chemical synapses. The differential equation obeyed by the voltage of A cell j is:
$$\frac{dV_{A_j}}{dt} = -\frac{V_{A_j}}{\tau_{A_j}} + \sum_{i=1}^{N_B} W_{A_jB_i}\, \mathcal{N}_B\left(V_{B_i}\right), \tag{6}$$
where $\tau_{A_j}$ is the characteristic time scale of A cell $j$'s response, and $\mathcal{N}_B$ has the same form as (4), with a threshold $\theta_B$. Note that, in contrast to B cells, A cells do not receive an OPL input.

2.1.4. RG Cells

We label RG cells (third layer) with the index $k = 1, \dots, N_G$. They are connected to B cells with excitatory synaptic weights, $W_{G_kB_i} \ge 0$ (e.g., glutamatergic synapses), and to A cells with inhibitory synaptic weights, $W_{G_kA_j} \le 0$ (e.g., glycinergic or GABAergic synapses). Their voltage, $V_{G_k}$, evolves according to:
$$\frac{dV_{G_k}}{dt} = -\frac{V_{G_k}}{\tau_G} + \sum_{i=1}^{N_B} W_{G_kB_i}\, \mathcal{N}_B\left(V_{B_i}\right) + \sum_{j=1}^{N_A} W_{G_kA_j}\, \mathcal{N}_A\left(V_{A_j}\right). \tag{7}$$
RG cells are spiking. In the model, their spiking activity (firing rate) is defined by an LNP model [38,39,40]. It depends on the voltage via a nonlinear function $\mathcal{N}_G\left(V_{G_k}\right) \equiv f\left(\frac{V_{G_k}(t) - \theta_G}{\sigma_G}\right)$, where $f$ is typically a sigmoid. Although the detailed form of $f$ does not matter here, it will be convenient, in the sequel, to consider:
$$\mathcal{N}_G\left(V_G\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{V_G - \theta_G}{\sigma_G}} e^{-\frac{x^2}{2}}\, dx. \tag{8}$$
The parameters $\theta_G$ (spiking threshold) and $\sigma_G$ (controlling the slope of the sigmoid at $V_G = \theta_G$) correspond, in the case where $\mathcal{N}_G$ has the form (8), to the probability that a centred Gaussian Ornstein–Uhlenbeck process with mean-square deviation $\sigma_G$ crosses the threshold $\theta_G$.
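In practice, the Gaussian integral (8) is the normal cumulative distribution and can be evaluated with the error function. The following minimal sketch (parameter values are arbitrary) illustrates that $\mathcal{N}_G(\theta_G) = \frac{1}{2}$ and that $\sigma_G$ sets the slope at threshold.

```python
import math

# Minimal sketch of the spiking nonlinearity (8): the Gaussian cumulative
# distribution evaluated at (V_G - theta_G) / sigma_G, written with erf.
def N_G(V_G: float, theta_G: float, sigma_G: float) -> float:
    return 0.5 * (1.0 + math.erf((V_G - theta_G) / (sigma_G * math.sqrt(2.0))))

print(N_G(0.0, 0.0, 1.0))   # 0.5 at threshold
print(N_G(2.0, 0.0, 1.0))   # ~0.977, two sigma above threshold
```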

2.1.5. Joint Dynamics

The joint dynamics of all cell voltages are given by the dynamical system:
$$\begin{cases} \dfrac{dV_{B_i}}{dt} = -\dfrac{V_{B_i}}{\tau_B} + \displaystyle\sum_{j=1}^{N_A} W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) + F_{B_i}(t), & i = 1, \dots, N_B; \\[2mm] \dfrac{dV_{A_j}}{dt} = -\dfrac{V_{A_j}}{\tau_A} + \displaystyle\sum_{i=1}^{N_B} W_{A_jB_i}\, \mathcal{N}_B\left(V_{B_i}\right), & j = 1, \dots, N_A; \\[2mm] \dfrac{dV_{G_k}}{dt} = -\dfrac{V_{G_k}}{\tau_G} + \displaystyle\sum_{i=1}^{N_B} W_{G_kB_i}\, \mathcal{N}_B\left(V_{B_i}\right) + \displaystyle\sum_{j=1}^{N_A} W_{G_kA_j}\, \mathcal{N}_A\left(V_{A_j}\right), & k = 1, \dots, N_G; \end{cases} \tag{9}$$
whereas RG cell spikes are produced by the LNP mechanism described above.
The system of Equation (9) can be summarized as follows (Figure 1). B cells receive the visual input via the term $F_{B_i}(t)$, which depends on the stimulus and on the B cell's receptive field. They are inhibited by A cells via the synaptic weights $W_{B_iA_j} < 0$. A cells are excited by B cells via the synaptic weights $W_{A_jB_i} > 0$. B cells are connected to RG cells via the synaptic weights $W_{G_kB_i} > 0$. A cells are connected to RG cells via the synaptic weights $W_{G_kA_j} < 0$. Note that we do not impose any constraint on the connectivity here.
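To fix ideas, here is a minimal numerical sketch of (9), integrated with an explicit Euler scheme. The network sizes, weights (with the sign conventions above), time constants and the toy sinusoidal drive are placeholder assumptions, not fitted values.

```python
import numpy as np

# Minimal sketch: explicit Euler integration of the joint dynamics (9).
rng = np.random.default_rng(1)
NB, NA, NG, dt, T = 16, 8, 4, 0.1, 300           # cells, step (ms), n. of steps
tauB, tauA, tauG = 10.0, 15.0, 5.0               # time constants (ms)
W_BA = -np.abs(rng.normal(0, 0.1, (NB, NA)))     # A -> B, inhibitory (<= 0)
W_AB = np.abs(rng.normal(0, 0.1, (NA, NB)))      # B -> A, excitatory (>= 0)
W_GB = np.abs(rng.normal(0, 0.1, (NG, NB)))      # B -> G, excitatory
W_GA = -np.abs(rng.normal(0, 0.1, (NG, NA)))     # A -> G, inhibitory
thetaB = thetaA = 0.0

def N(V, theta):                                  # rectifier (4)
    return np.maximum(V - theta, 0.0)

VB, VA, VG = np.zeros(NB), np.zeros(NA), np.zeros(NG)
for n in range(T):
    F = np.sin(2 * np.pi * n * dt / 50.0) * np.ones(NB)   # toy OPL drive
    dVB = -VB / tauB + W_BA @ N(VA, thetaA) + F
    dVA = -VA / tauA + W_AB @ N(VB, thetaB)
    dVG = -VG / tauG + W_GB @ N(VB, thetaB) + W_GA @ N(VA, thetaA)
    VB, VA, VG = VB + dt * dVB, VA + dt * dVA, VG + dt * dVG
print("final RG cell voltages:", VG)
```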
To study mathematically the dynamical system (9), we write it in a more convenient form. We use Greek indices $\alpha, \beta, \gamma = 1, \dots, N \equiv N_A + N_B + N_G$, and define the state vector $\mathcal{X}$, with entries:
$$\mathcal{X}_\alpha = \begin{cases} V_{B_i}, & \alpha = i, \quad i = 1, \dots, N_B; \\ V_{A_j}, & \alpha = N_B + j, \quad j = 1, \dots, N_A; \\ V_{G_k}, & \alpha = N_B + N_A + k, \quad k = 1, \dots, N_G. \end{cases}$$
We introduce $\mathcal{F}(t)$, the non-stationary input, with entries:
$$\mathcal{F}_\alpha(t) = \begin{cases} F_{B_i}(t), & \alpha = i, \quad i = 1, \dots, N_B; \\ 0, & \alpha > N_B; \end{cases}$$
and $\mathcal{R}(\mathcal{X})$, the rectification term, with entries:
$$\mathcal{R}_\alpha(\mathcal{X}) = \begin{cases} \mathcal{N}_B\left(V_{B_i}\right), & \alpha = i, \quad i = 1, \dots, N_B; \\ \mathcal{N}_A\left(V_{A_j}\right), & \alpha = N_B + j, \quad j = 1, \dots, N_A; \\ 0, & \alpha = N_B + N_A + k, \quad k = 1, \dots, N_G. \end{cases}$$
We use the notation $0_{n_1 n_2}$ for the $n_1 \times n_2$ matrix with zero entries. We introduce the $N \times N$ matrices:
$$\mathcal{T} = \begin{pmatrix} \mathrm{diag}\left(\tau_{B_i}\right)_{i=1}^{N_B} & 0_{N_B N_A} & 0_{N_B N_G} \\ 0_{N_A N_B} & \mathrm{diag}\left(\tau_{A_j}\right)_{j=1}^{N_A} & 0_{N_A N_G} \\ 0_{N_G N_B} & 0_{N_G N_A} & \mathrm{diag}\left(\tau_{G_k}\right)_{k=1}^{N_G} \end{pmatrix},$$
characterizing the characteristic integration times of cells,
$$\mathcal{W} = \begin{pmatrix} 0_{N_B N_B} & W^{BA} & 0_{N_B N_G} \\ W^{AB} & 0_{N_A N_A} & 0_{N_A N_G} \\ W^{GB} & W^{GA} & 0_{N_G N_G} \end{pmatrix},$$
summarizing the chemical synaptic interactions. Note that, to the best of our knowledge, there are no synapses from RG cells to RG cells, but they could be added in this formalism.
Then, the dynamical system (9) reads, in vector form:
$$\frac{d\mathcal{X}}{dt} = -\mathcal{T}^{-1}.\mathcal{X} + \mathcal{W}.\mathcal{R}(\mathcal{X}) + \mathcal{F}(t). \tag{13}$$
We remark that (13) has a specific skew-product structure: the dynamics of RG cells are driven by B cells and A cells with no feedback. This means that one can first study the coupled dynamics of B cells and A cells and then the effect on RG cells. This corresponds to a biological reality as, to the best of our knowledge, there is no feedback from RG cells to B cells or to A cells.

2.1.6. Piecewise Linear Evolution

We assume here that $\mathcal{F}_\alpha(t)$ is bounded, as well as the synaptic weights. Thus, the phase space $\Omega$ of (13) can be taken to be compact. Indeed, trajectories cannot escape to infinity thanks to the rectification terms $\mathcal{N}_B, \mathcal{N}_A$ (Equation (4)) and thanks to the signs of the synaptic weights $W_{A_jB_i}, W_{B_iA_j}$. More precisely, $V_{B_i}$ cannot become arbitrarily large and positive because the input term $\mathcal{F}_\alpha(t) \equiv F_{B_i}(t)$ is bounded and because $\sum_{j=1}^{N_A} W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) \le 0$. Assume indeed that $V_{B_i}$ increases (due to a large enough $\mathcal{F}_\alpha > 0$ making the r.h.s. of Equation (3) positive). This leads to an increase of the connected A cell voltages $V_{A_j}$ (Equation (6)), thus to a decrease of the term $\sum_{j=1}^{N_A} W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) \le 0$, until the point where the r.h.s. of (3) becomes negative, thereby decreasing $V_{B_i}$ and preventing it from becoming arbitrarily large. This implies as well that the $V_{A_j}$s cannot become arbitrarily large. Conversely, if $V_{B_i}$ (resp. $V_{A_j}$) becomes smaller than $\theta_B$ (resp. $\theta_A$), it does not play any role in the dynamics anymore because of rectification.
Due to the specific form (4) of the rectification terms, the dynamical system (13) is piecewise linear. More precisely, we can partition the phase space $\Omega$ into sub-domains $\Omega^{(n)}$, $n = 0, \dots, 2^{N_B+N_A} - 1$, defined as follows. To each cell $\alpha = 1, \dots, N_B + N_A$ (B cell or A cell), we associate a "rectification label" $\eta_\alpha = 1$ if the cell $\alpha$ is rectified and $\eta_\alpha = 0$ otherwise. Because of the form (4) of the rectification, the label $\eta_\alpha$ corresponds to a partition of the voltage $\mathcal{X}_\alpha$'s domain of variation into two sub-domains (e.g., for a B cell, $\eta_\alpha = 1$ if $V_{B_i} < \theta_B$ and $\eta_\alpha = 0$ if $V_{B_i} \ge \theta_B$). Now, the set $\{0, 1\}^{N_B+N_A}$ is made of chains $\eta = \eta_1 \dots \eta_{N_B+N_A}$ composed of the rectification labels $\eta_\alpha$ of all B cells and A cells. To each such sequence is therefore associated a convex domain $\Gamma^{(n)}$ of $\mathbb{R}^{N_B+N_A}$ where all cells $\alpha$ such that $\eta_\alpha = 0$ have their voltage $\mathcal{X}_\alpha$ larger than the rectification threshold, and thus are not rectified, and all cells such that $\eta_\alpha = 1$ are rectified. To each such $\eta$ is associated a unique integer $n = \sum_{\alpha=1}^{N_B+N_A} \eta_\alpha\, 2^{\alpha-1}$; $\eta$ is then the binary coding of $n$. Finally, we set $\Omega^{(n)} = \Gamma^{(n)} \times \mathbb{R}^{N_G}$, where the product with the subspace $\mathbb{R}^{N_G}$ integrates the state space of the RG cell dynamics. RG cells are slaved by the B cell and A cell dynamics, but they are not rectified. In this setting, $\Omega^{(0)}$ is the subset of $\Omega$ where neither B cells nor A cells are rectified; $\Omega^{(1)}$ is the subset of the phase space where only B cell 1 is rectified; $\Omega^{(3)}$ the subset where only B cells 1, 2 are rectified; $\Omega^{(2^{N_B})}$ the subset where only A cell 1 is rectified, and so on.
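The binary coding of domains can be illustrated in a few lines of code; the following sketch (thresholds and voltages are arbitrary) computes the rectification labels $\eta$ and the corresponding domain index $n$.

```python
import numpy as np

# Minimal sketch of the phase-space partition of Section 2.1.6: the vector of
# rectification labels eta (1 = rectified) is read as the binary expansion of
# the domain index n = sum_alpha eta_alpha * 2**(alpha - 1).
def domain_index(VB, VA, thetaB, thetaA):
    eta = np.concatenate([VB < thetaB, VA < thetaA]).astype(int)
    return int(np.sum(eta * 2 ** np.arange(eta.size))), eta

VB = np.array([-0.1, 0.4])        # B cell 1 below threshold -> rectified
VA = np.array([0.2])              # A cell 1 above threshold -> not rectified
n, eta = domain_index(VB, VA, thetaB=0.0, thetaA=0.0)
print(n, eta)                     # -> 1 [1 0 0]  (only B cell 1 rectified)
```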
It is easy to check that the sets $\Omega^{(n)}$ are disjoint and cover $\mathbb{R}^N$, and thus make a partition of the phase space. The vector $\mathcal{R}(\mathcal{X})$ now has the form:
$$\mathcal{R}_\alpha(\mathcal{X}) = \begin{cases} (1 - \eta_\alpha)\left(\mathcal{X}_\alpha - \theta_B\right), & \alpha = 1, \dots, N_B; \\ (1 - \eta_\alpha)\left(\mathcal{X}_\alpha - \theta_A\right), & \alpha = N_B + 1, \dots, N_B + N_A; \\ 0, & \alpha = N_B + N_A + k, \quad k = 1, \dots, N_G, \end{cases}$$
and is piecewise linear in $\mathcal{X}$. For $\mathcal{X} \in \Omega^{(n)}$, the transformation $-\mathcal{T}^{-1}.\mathcal{X} + \mathcal{W}.\mathcal{R}(\mathcal{X})$ can therefore be written $\mathcal{L}^{(n)}.\mathcal{X} + C^{(n)}$, where $C^{(n)}$ is the vector with entries:
$$C^{(n)}_\alpha = \begin{cases} -\theta_B\, (1 - \eta_\alpha) \displaystyle\sum_j W_{B_\alpha A_j}, & \alpha = 1, \dots, N_B; \\ -\theta_A\, (1 - \eta_\alpha) \displaystyle\sum_i W_{A_\alpha B_i}, & \alpha = N_B + 1, \dots, N_B + N_A; \\ 0, & \alpha = N_B + N_A + 1, \dots, N. \end{cases} \tag{14}$$
This is a time-constant vector, coming from the presence of a threshold in the rectification (it is zero when $\theta_A = \theta_B = 0$), and depending on the rectification state of the cells, thus on the domain $\Omega^{(n)}$. Rectified cells have zero entries in $C^{(n)}$. The matrix:
$$\mathcal{L}^{(n)} = \begin{pmatrix} -\mathrm{diag}\left(\frac{1}{\tau_{B_i}}\right)_{i=1}^{N_B} & W^{BA}.D_A^{(n)} & 0_{N_B N_G} \\ W^{AB}.D_B^{(n)} & -\mathrm{diag}\left(\frac{1}{\tau_{A_j}}\right)_{j=1}^{N_A} & 0_{N_A N_G} \\ W^{GB}.D_B^{(n)} & W^{GA}.D_A^{(n)} & -\mathrm{diag}\left(\frac{1}{\tau_{G_k}}\right)_{k=1}^{N_G} \end{pmatrix}, \tag{15}$$
is called the transport operator in the domain $\Omega^{(n)}$. This terminology is further explained in Section 2.1.9, but, in short, $\mathcal{L}^{(n)}$ acts as a flow (or a propagator) characterizing the evolution of a trajectory within $\Omega^{(n)}$. In Equation (15), the matrices $D_B^{(n)} = \mathrm{diag}\left(1 - \eta_\alpha\right)_{\alpha=1}^{N_B}$ and $D_A^{(n)} = \mathrm{diag}\left(1 - \eta_\alpha\right)_{\alpha=N_B+1}^{N_B+N_A}$ project onto the subspace of non-rectified cells in the domain $\Omega^{(n)}$. In other words, when the state $\mathcal{X}$ is in $\Omega^{(n)}$, a rectified cell $\alpha$ gives a zero contribution to the dynamics of the other cells, which corresponds to having the row and column $\alpha$ made of zeros in $D_A^{(n)}, D_B^{(n)}$.
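As an illustration, the transport operator (15) can be assembled from the weight matrices and the rectification labels as in the following sketch; all sizes, weights and time constants are placeholder assumptions.

```python
import numpy as np

# Minimal sketch: assembling the transport operator L^(n) of Equation (15) from
# the weight matrices and the rectification labels eta (1 = rectified).
def transport_operator(W_BA, W_AB, W_GB, W_GA, tauB, tauA, tauG, eta):
    NB, NA = W_BA.shape
    NG = W_GB.shape[0]
    D_B = np.diag(1 - eta[:NB])          # projector on non-rectified B cells
    D_A = np.diag(1 - eta[NB:NB + NA])   # projector on non-rectified A cells
    Z = lambda r, c: np.zeros((r, c))
    return np.block([
        [-np.eye(NB) / tauB, W_BA @ D_A,         Z(NB, NG)],
        [W_AB @ D_B,         -np.eye(NA) / tauA, Z(NA, NG)],
        [W_GB @ D_B,         W_GA @ D_A,         -np.eye(NG) / tauG],
    ])

# Toy example: N_B = 2, N_A = 1, N_G = 1, with B cell 1 rectified (eta_1 = 1).
eta = np.array([1, 0, 0])
L1 = transport_operator(np.array([[-0.1], [-0.2]]), np.array([[0.1, 0.2]]),
                        np.array([[0.3, 0.1]]), np.array([[-0.2]]),
                        10.0, 15.0, 5.0, eta)
print(L1.shape)   # (4, 4); the rectified B cell no longer drives other cells
```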
The dynamical system (13) reads now:
$$\frac{d\mathcal{X}}{dt} = \mathcal{L}^{(n)}.\mathcal{X} + \mathcal{F}^{(n)}(t), \quad \mathcal{X} \in \Omega^{(n)}, \tag{16}$$
where we wrote $\mathcal{F}^{(n)}(t) = C^{(n)} + \mathcal{F}(t)$. Thanks to the decomposition of the phase space into convex sub-domains $\Omega^{(n)}$, (16) is now linear within each domain. This technique of phase space decomposition is classical and has been used in domains such as ergodic theory and billiards, self-organized criticality [59,60] or neuroscience [61,62,63,64]. See especially the recent paper by A. Rajakumar et al. [65], very much in the spirit of the present model.

2.1.7. Spectra and Fixed Points

It is important for further studies to consider in detail the spectrum of $\mathcal{L}^{(n)}$. (Another approach consists of considering the Schur decomposition instead of the diagonalisation [65,66,67].) We note $\lambda_\beta^{(n)}$, $\beta = 1, \dots, N$, the eigenvalues of $\mathcal{L}^{(n)}$, and its right eigenvectors are noted $P_\beta^{(n)}$. These vectors are the columns of the matrix $P^{(n)}$ transforming $\mathcal{L}^{(n)}$ into diagonal form (assuming it is diagonalizable). $\left(P^{(n)}\right)^{-1}$ is the inverse matrix. Its rows are the left eigenvectors of $\mathcal{L}^{(n)}$.
As $D_A^{(n)}, D_B^{(n)}$ are projection matrices, it is easy to see, from the form (15), that a rectified cell $\alpha$ generates an eigenvalue $-\frac{1}{\tau_\alpha}$ and an eigenvector $e_\alpha$, the canonical basis vector of $\mathbb{R}^N$ in the direction $\alpha$. The non-rectified cells span a subspace of $\mathbb{R}^N$, and the projection of $\mathcal{L}^{(n)}$ on this subspace has a spectrum depending on the connectivity matrices $W^{AB}, W^{BA}$ and on other parameters like the characteristic times.
The corresponding eigenvalues $\lambda_\beta^{(n)}$, $\beta = 1, \dots, N$, can be real or complex, with a positive or a negative real part. In the case where $W^{AB}$ and $W^{BA}$ commute, it is actually possible to explicitly compute the eigenvalues and the eigenvectors and to obtain conditions for stability (all eigenvalues have a negative real part) and for real/complex eigenvalues [45]. If we further assume that $W^{AB}$ and $W^{BA}$ have no zero eigenvalues, the sign constraints on these matrices imply that $\mathcal{L}^{(n)}$ is invertible for all $n$. This is what we are going to assume from now on.
It follows that, in the absence of an external stimulus ($\mathcal{F}(t) = 0$), Equation (16) has, for each $n$, a unique fixed point $\mathcal{X}^{(n)} = -\left(\mathcal{L}^{(n)}\right)^{-1}.C^{(n)}$. Note, however, that this point may not lie in $\Omega^{(n)}$. This is a typical situation for piecewise linear dynamical systems (like Iterated Function Systems [68,69,70]), where dynamics can have complex attractors even if the maps are linear (and contracting) in sub-domains of the phase space. The simplest non-trivial case is when dynamics generate a periodic orbit, but more complex attractors (fractal sets) can be obtained. Here, it is reasonable to assume at least that cells at rest are not rectified. Mathematically, this means that the fixed point of $\mathcal{L}^{(0)}$, $\mathcal{X}^* = -\left(\mathcal{L}^{(0)}\right)^{-1}.C^{(0)}$, belongs to $\Omega^{(0)}$, and this is what we are going to assume from now on. This imposes a set of constraints linking synaptic weights and thresholds. A simple assumption consists of having vanishing thresholds, $\theta_A = \theta_B = 0$, in which case the rest state is $\mathbf{0}$. We will also assume that $\mathcal{X}^*$ is stable (the eigenvalues of $\mathcal{L}^{(0)}$ have a negative real part), which imposes additional assumptions on synaptic weights and cell integration times. On biophysical grounds, it means that the rest state is stable to small perturbations, like noise. Because rectified cells produce stable eigenvalues, the following holds: taking an initial condition in any domain $\Omega^{(n)}$, the spontaneous dynamics (without stimulus) eventually drive the trajectory back to $\Omega^{(0)}$ and, then, to the rest state. This is further commented on below (Section 2.1.9, remark 2).

2.1.8. Extensions: Gain Control and Gap Junctions

Gap Junctions

Electric synapses, e.g., between B cells and A cells, play an important role in the retina, for example, in the rod–cone pathway [17]. We consider here passive gap junctions, corresponding to electric synapses with a constant conductance (in contrast to conductances depending on variables such as light illumination; see https://webvision.med.utah.edu/book/part-iii-retinal-circuits/myriad-roles-for-gap-junctions-in-retinal-circuits/ (accessed on 22 November 2021)). Let us consider, for example, a gap junction between B cell $i$ and A cell $j$. We note $g_{B_iA_j} \ge 0$ the electric conductance from $j$ to $i$ (with $g_{B_iA_j} = 0$ if there is no electric connection between the two cells). As gap junctions are symmetric, $g_{B_iA_j} = g_{A_jB_i}$. We also note $C_{B_i}$ the membrane capacitance of B cell $i$ and $C_{A_j}$ the membrane capacitance of A cell $j$, and introduce the notation $G_{B_iA_j} = \frac{g_{B_iA_j}}{C_{B_i}}$, $G_{A_jB_i} = \frac{g_{A_jB_i}}{C_{A_j}}$. Remark, therefore, that $G_{B_iA_j} = G_{A_jB_i}$ if and only if B cell $i$ and A cell $j$ have the same capacitance. The electric synapse generates a (signed) current $-G_{B_iA_j}\left(V_{B_i} - V_{A_j}\right)$ feeding B cell $i$ and a current $-G_{A_jB_i}\left(V_{A_j} - V_{B_i}\right)$ feeding A cell $j$. Note that, in contrast to chemical synapses, voltages are not rectified; ionic currents simply follow the gradients of electric potentials and can, therefore, go both ways.
The presence of electric synapses between B cells and A cells therefore modifies Equation (9) as:
$$\begin{cases} \dfrac{dV_{B_i}}{dt} = -\dfrac{V_{B_i}}{\tau_{B_i}} + \displaystyle\sum_{j=1}^{N_A} \left[W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) + G_{B_iA_j} V_{A_j}\right] + F_{B_i}(t), & i = 1, \dots, N_B; \\[2mm] \dfrac{dV_{A_j}}{dt} = -\dfrac{V_{A_j}}{\tau_{A_j}} + \displaystyle\sum_{i=1}^{N_B} \left[W_{A_jB_i}\, \mathcal{N}_B\left(V_{B_i}\right) + G_{A_jB_i} V_{B_i}\right], & j = 1, \dots, N_A; \end{cases} \tag{17}$$
where $\frac{1}{\tau_{B_i}} = \frac{1}{\tau_B} + \frac{1}{C_{B_i}} \sum_{j=1}^{N_A} g_{B_iA_j}$ and $\frac{1}{\tau_{A_j}} = \frac{1}{\tau_A} + \frac{1}{C_{A_j}} \sum_{i=1}^{N_B} g_{A_jB_i}$ are inverses of characteristic times. Thus, gap junctions have the effect of reducing the characteristic time of the cells' response (increasing their conductance). Gap junctions between A cells and RG cells, or between RG cells, would be implemented in the same way.

Gain Control

This mechanism plays a prominent role in the nervous system. In short, this is the property that neural systems have to adjust the nonlinear transfer function relating input to output so as to dynamically span the varying range of incoming stimuli [71]. It has been reported in the retina and invoked in several motion processing features: anticipation, alert response to motion onset and motion reversal [42,44]. In particular, B cells have gain control. Here, this is a desensitization when activated by a steady illumination [72], mediated by a rise in intracellular calcium, Ca²⁺, at the origin of a feedback inhibition preventing prolonged signalling of the ON B cell [44,73]. It can be modelled as follows [42,44,45]. Each B cell has a dimensionless activity variable $a_{B_i}$ obeying the differential equation:
$$\frac{da_{B_i}}{dt} = -\frac{a_{B_i}}{\tau_a} + h_B\, \mathcal{N}_B\left(V_{B_i}\right).$$
The gain function is a strongly nonlinear function, almost step-wise, of the form:
$$\mathcal{G}_B\left(a_{B_i}\right) = \begin{cases} 0, & \text{if } a_{B_i} \le 0; \\ \dfrac{1}{1 + a_{B_i}^6}, & \text{else}. \end{cases} \tag{18}$$
The effect of gain control acts at the level of the synaptic transmission from B cells to A cells, where the rectification term $\mathcal{N}_B\left(V_{B_i}\right)$ is replaced by $\mathcal{N}_B\left(V_{B_i}\right)\, \mathcal{G}_B\left(a_{B_i}\right)$. That is, the equation ruling A cell $j$'s voltage now reads:
$$\frac{dV_{A_j}}{dt} = -\frac{V_{A_j}}{\tau_A} + \sum_{i=1}^{N_B} W_{A_jB_i}\, \mathcal{N}_B\left(V_{B_i}\right)\, \mathcal{G}_B\left(a_{B_i}\right).$$
It has the following meaning. When the voltage of B cell $i$ increases, its activity $a_{B_i}$ increases as well, up to a range where gain control takes place. When $a_{B_i}$ becomes too large, $\mathcal{G}_B\left(a_{B_i}\right)$ drops down, thereby reducing the action of B cell $i$ on A cell $j$. As mentioned earlier, this is a way to rectify voltages from above.
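For illustration, here is a minimal sketch of the gain function (18), together with the step-wise surrogate introduced later in this section to keep the system piecewise linear; the threshold value $\theta_a = \frac{2}{3}$ is the one quoted in the text, the sample points are arbitrary.

```python
import numpy as np

# Minimal sketch of the gain function (18) and of its step-wise surrogate.
def G_B(a):
    a = np.asarray(a, dtype=float)
    return np.where(a <= 0, 0.0, 1.0 / (1.0 + a**6))

def G_B_step(a, theta_a=2.0 / 3.0):
    a = np.asarray(a, dtype=float)
    return np.where((a >= 0) & (a <= theta_a), 1.0, 0.0)

a = np.linspace(-0.5, 2.0, 6)
print(G_B(a))        # ~1 for small a > 0, drops sharply once a exceeds ~1
print(G_B_step(a))   # surrogate: 1 on [0, theta_a], 0 outside
```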
Gain control has also been reported for (OFF) RG cells [42,44] and shapes their firing rate. Gain control at the level of B cells and RG cells induces retinal anticipation. When combined with the A cells' lateral connectivity or gap junction connectivity, it results in a wave of activity ahead of a propagating stimulus (e.g., a moving bar) for specific ranges of parameters (characteristic times of cell responses, weight intensities), as studied in [45].

Piecewise Linear System with Gain Control and Gap Junctions

Here, we want to expose how the piecewise linear formalism developed above can be applied in the case of gain control and gap junctions. Note that gap junctions actually do not pose any problem from this perspective because they add linear contributions. In the presence of gain control and gap junctions, the dynamical system (9) becomes:
$$\begin{cases} \dfrac{dV_{B_i}}{dt} = -\dfrac{V_{B_i}}{\tau_{B_i}} + \displaystyle\sum_{j=1}^{N_A} \left[W_{B_iA_j}\, \mathcal{N}_A\left(V_{A_j}\right) + G_{B_iA_j} V_{A_j}\right] + F_{B_i}(t), & i = 1, \dots, N_B; \\[2mm] \dfrac{dV_{A_j}}{dt} = -\dfrac{V_{A_j}}{\tau_{A_j}} + \displaystyle\sum_{i=1}^{N_B} \left[W_{A_jB_i}\, \mathcal{N}_B\left(V_{B_i}\right)\, \mathcal{G}_B\left(a_{B_i}\right) + G_{A_jB_i} V_{B_i}\right], & j = 1, \dots, N_A; \\[2mm] \dfrac{dV_{G_k}}{dt} = -\dfrac{V_{G_k}}{\tau_G} + \displaystyle\sum_{i=1}^{N_B} W_{G_kB_i}\, \mathcal{N}_B\left(V_{B_i}\right)\, \mathcal{G}_B\left(a_{B_i}\right) + \displaystyle\sum_{j=1}^{N_A} W_{G_kA_j}\, \mathcal{N}_A\left(V_{A_j}\right), & k = 1, \dots, N_G; \\[2mm] \dfrac{da_{B_i}}{dt} = -\dfrac{a_{B_i}}{\tau_a} + h_B\, \mathcal{N}_B\left(V_{B_i}\right), & i = 1, \dots, N_B. \end{cases} \tag{19}$$
We do not know of experimental evidence of gain control in A cells. This is why A cells are not gain controlled in (19), but the extension is straightforward. RG cells are gain controlled at the level of their firing rate (see [44]).
To make (19) a piecewise linear dynamical system, the trick is to replace the function (18) with a step function where $\mathcal{G}_B\left(a_{B_i}\right) = 1$ if $a_{B_i} \in [0, \theta_a]$, where $\theta_a$ is a threshold (typically $\frac{2}{3}$, coming from a linear interpolation of (18); see [45]), and $\mathcal{G}_B\left(a_{B_i}\right) = 0$ otherwise. In addition to the rectification variables $\eta_\alpha$, we introduce gain control variables $g_\alpha = 1$ if $a_{B_i} \in [0, \theta_a]$ and $g_\alpha = 0$ otherwise, $\alpha = 1, \dots, N_B$. The definition of the domains $\Omega^{(n)}$ extends easily in this context by partitioning $\mathbb{R}^{N+N_B}$ into sub-domains, taking the product of the voltage partition $]-\infty, \theta_B]$, $]\theta_B, +\infty[$ with the activity partition $]-\infty, \theta_a]$, $]\theta_a, +\infty[$. The transport operator generalizes to:
$$\mathcal{L}^{(n)} = \begin{pmatrix} -\mathrm{diag}\left(\frac{1}{\tau_{B_i}}\right)_{i=1}^{N_B} & W^{BA}.D_A^{(n)} + \mathcal{G}^{BA} & 0_{N_B N_G} & 0_{N_B N_B} \\ W^{AB}.D_B^{(n)} + \mathcal{G}^{AB} & -\mathrm{diag}\left(\frac{1}{\tau_{A_j}}\right)_{j=1}^{N_A} & 0_{N_A N_G} & 0_{N_A N_B} \\ W^{GB}.D_B^{(n)} & W^{GA}.D_A^{(n)} & -\mathrm{diag}\left(\frac{1}{\tau_{G_k}}\right)_{k=1}^{N_G} & 0_{N_G N_B} \\ h_B\, I_{N_B N_B} & 0_{N_B N_A} & 0_{N_B N_G} & -\mathrm{diag}\left(\frac{1}{\tau_a}\right)_{i=1}^{N_B} \end{pmatrix}, \tag{20}$$
where $I_{N_B N_B}$ is the $N_B$-dimensional identity matrix and $D_B^{(n)} = \mathrm{diag}\left((1 - \eta_\alpha)\, g_\alpha\right)_{\alpha=1}^{N_B}$. Thus, $D_B^{(n)}$ has zero entries whenever a B cell is either rectified ($\eta_\alpha = 1$) or gain controlled ($g_\alpha = 0$), leading to a projection on the subspace of B cells which are neither rectified nor gain controlled. Extending the phase space with activity variables corresponds to adding $N_B$ eigenvalues $-\frac{1}{\tau_a}$ to the spectrum. The corresponding eigenvectors are generalized eigenvectors, though, because the activity variables add a Jordan block to the matrix [45].

2.1.9. Solutions

We now consider the general situation where dynamics are at the rest state for times $t < 0$ and, from time $t = 0$ on, the stimulus $S(x, y, t)$ is applied, resulting in a non-stationary drive $\mathcal{F}(t)$. In general, the stimulus is applied over a finite time. After this, the system eventually returns to rest. Under this stimulation, the trajectory $\left\{\mathcal{X}(t)\right\}_{t \ge 0}$ is going to cross a sequence of domains $\Omega^{(n_k)}$, $k = 1, \dots$, with $n_1 = 0$, entirely determined by the stimulus and the network characteristics. Call $t_-^{(n_{k+1})}$ the time where the trajectory enters the domain $\Omega^{(n_{k+1})}$ and $t_+^{(n_{k+1})}$ the time where it gets out. Note that $t_-^{(n_{k+1})} = t_+^{(n_k)}$. By direct integration of Equation (16), we have:
$$\mathcal{X}(t) = e^{\mathcal{L}^{(n_{k+1})}\left(t - t_-^{(n_{k+1})}\right)}.\mathcal{X}\left(t_-^{(n_{k+1})}\right) + \int_{t_-^{(n_{k+1})}}^{t} e^{\mathcal{L}^{(n_{k+1})}(t - s)}.\mathcal{F}^{(n_{k+1})}(s)\, ds, \quad t \in \left[t_-^{(n_{k+1})}, t_+^{(n_{k+1})}\right], \tag{21}$$
where $\mathcal{X}\left(t_-^{(n_{k+1})}\right)$, corresponding to the state of $\mathcal{X}$ when entering $\Omega^{(n_{k+1})}$, is given by the integration of the past trajectory and can be computed explicitly. This is:
$$\mathcal{X}\left(t_-^{(n_{k+1})}\right) = \mathcal{X}\left(t_+^{(n_k)}\right) = \sum_{m=0}^{k} \mathcal{H}_m^k\, \Phi_m, \tag{22}$$
where H m k is a sequence of matrices satisfying:
$$\mathcal{H}_k^k = I_N; \quad \mathcal{H}_m^k = \mathcal{H}_{k-1}^k\, \mathcal{H}_m^{k-1}; \quad \mathcal{H}_{k-1}^k = e^{\mathcal{L}^{(n_k)}\left(t_+^{(n_k)} - t_+^{(n_{k-1})}\right)}, \tag{23}$$
where $I_N$ is the identity matrix of dimension $N$. The matrix $\mathcal{H}_m^k$ transports the flow from the exit point of $\Omega^{(n_m)}$ to the exit point of $\Omega^{(n_k)}$. The vectors $\Phi_m$ are defined by:
$$\Phi_0 = \mathcal{X}(0); \quad \Phi_m = \int_{t_-^{(n_m)}}^{t_+^{(n_m)}} e^{\mathcal{L}^{(n_m)}\left(t_+^{(n_m)} - s\right)}.\mathcal{F}^{(n_m)}(s)\, ds. \tag{24}$$
The proof of (22) is easily done by induction.
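Numerically, the piecewise scheme above amounts to propagating the state with matrix exponentials inside each domain, switching the operator when the domain changes. A minimal sketch of one such linear step (with a toy stable operator and a constant drive, both placeholders) is:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of the piecewise integration of Section 2.1.9: inside a domain
# the flow is linear (Equation (21)), so a step of size dt with (approximately)
# constant input F is exact: X <- e^{L dt} X + L^{-1} (e^{L dt} - I) F.
# After each step, one would recompute the domain and, if it changed, switch L and C.
def step(L, F, X, dt):
    E = expm(L * dt)
    return E @ X + np.linalg.solve(L, (E - np.eye(L.shape[0])) @ F)

# Toy 2 x 2 example with a stable transport operator and constant drive.
L = np.array([[-1.0, 0.3], [-0.2, -0.5]])
F = np.array([1.0, 0.0])
X = np.zeros(2)
for _ in range(100):
    X = step(L, F, X, dt=0.05)
print(X, -np.linalg.solve(L, F))   # converges to the fixed point -L^{-1} F
```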

2.1.10. Remarks

Let us now make some remarks on the structure of these solutions.
  • The interpretation of (22) is the following. Starting from an initial condition $\mathcal{X}(0) \in \Omega^{(n_1)}$, the dynamics (19) are integrated up to the possible time $t = t_-^{(n_2)} = t_+^{(n_1)}$ when $\mathcal{X}(t)$ gets out of $\Omega^{(n_1)}$ and enters a new domain $\Omega^{(n_2)}$. This arises if, during the time evolution of the system, some cells get rectified (or gain controlled) at time $t$. Then, there is a drastic change in the time evolution because rectified cells do not participate in the dynamics anymore. The value of the state vector at this time is $\mathcal{X}\left(t_+^{(n_1)}\right) = e^{\mathcal{L}^{(n_1)}\left(t_+^{(n_1)} - t_-^{(n_1)}\right)}.\mathcal{X}(0) + \int_{t_-^{(n_1)}}^{t_+^{(n_1)}} e^{\mathcal{L}^{(n_1)}\left(t_+^{(n_1)} - s\right)}.\mathcal{F}^{(n_1)}(s)\, ds$, which can be written $\mathcal{X}\left(t_+^{(n_1)}\right) = \mathcal{H}_0^1.\Phi_0 + \mathcal{H}_1^1.\Phi_1 = \sum_{m=0}^{1} \mathcal{H}_m^1\, \Phi_m$, using $t_-^{(n_1)} = 0$. The system is now in the domain $\Omega^{(n_2)}$ and follows its evolution until the (possible) time $t_-^{(n_3)} = t_+^{(n_2)}$ when some new cells are rectified or some rectified cells become non-rectified. The system enters a new domain $\Omega^{(n_3)}$, and so on. In general, the state at the entrance of the domain $\Omega^{(n_{k+1})}$ is given by (22). This is a linear combination of terms $\mathcal{H}_m^k\, \Phi_m$, where $\Phi_m$ (Equation (24)) integrates the stimulus contribution from the entrance time into the domain $\Omega^{(n_m)}$ up to the exit time of this domain, and $\mathcal{H}_m^k$ transports the state from the exit point of $\Omega^{(n_m)}$ to the exit point of $\Omega^{(n_k)}$.
  • In the definition of $\mathcal{H}_m^k$, the operators $\mathcal{L}^{(n_k)}$ do not commute in general.
  • Eigenvalues of some $\mathcal{H}_m^k$ can have a positive real part, leading to an exponential increase along the corresponding eigendirection. This means that some cell's voltage increases exponentially in absolute value. However, when the voltage becomes too large, voltage rectification (or gain control) takes place, corresponding to the trajectory entering a new continuity domain. Here, unstable cells do not contribute to the dynamics anymore, which are projected on the subspace of non-rectified cells. This has the effect of transforming unstable eigenvalues into stable ones, preventing the trajectories $\mathcal{X}(t)$ from diverging. Actually, the spectrum of $\mathcal{H}_m^k$, controlling stability, resembles the Lyapunov spectrum in ergodic theory [74], with two main differences. First, we are simply considering products of matrices, without multiplying by the adjoint, so that eigenvalues can be complex. Second, we are not assuming stationarity and the existence of an invariant measure. Instead, the product $\mathcal{H}_m^k$ is constrained by the non-stationary stimulus and the dynamical system parameters, which fix the sequence of times $t_\pm^{(n_k)}$.
  • Rectification induces a weak form of nonlinearity where, e.g., the contraction/expansion in the phase space depends on the domain $\Omega^{(n_k)}$ (whereas, in a differentiable nonlinear system, it would depend on the point in the phase space). This has deep consequences on cells' responses, as mentioned in the Results section.

2.2. Spike Statistics

As pointed out in the Introduction, it might be helpful to propose a mathematical setting taking into account non-stationarity and the potentially long memory in spike train probabilities. Such a setting has existed for a long time but has not been applied to spike train statistics until recently. It is inherited from statistical physics on the one hand [75] and from extensions of Markov chains to unbounded memory on the other hand [76]. The material briefly sketched here has been published in [46,47,48,49,50,51].

2.2.1. Mathematical Setting for Spike Trains

Neuron variables such as membrane potential or ionic currents are described by continuous-time equations. In contrast, spikes resulting from experimental observation are discrete events, binned with a certain time resolution $\delta$, say of the order of a millisecond. We consider a network of $N$ spiking neurons, labelled with an index $k = 1, \dots, N$. We define a spike variable $\omega_k(n) = 1$ if neuron $k$ has emitted a spike in the time interval $[n\delta, (n+1)\delta[$, and $\omega_k(n) = 0$ otherwise. We denote by $\omega(n) = \left(\omega_k(n)\right)_{k=1}^N$ the spike-state of the entire network at time $n$, which we call a spiking pattern. A spike block, denoted by $\omega_m^n$, $n \ge m$, is the sequence of spike patterns $\omega(m), \omega(m+1), \dots, \omega(n)$. The range of a block $\omega_m^n$ is $n - m + 1$, the number of time steps from $m$ to $n$. We call a spike train an infinite sequence of spikes, both in the past and in the future, and, to simplify notations, we note a spike train $\omega$ (instead of $\omega_{-\infty}^{+\infty}$). Of course, on operational grounds, spike trains are finite, but it is mathematically more convenient to work on a space of bi-infinite spike sequences.
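As a concrete illustration of this encoding, the following sketch bins spike times into spiking patterns $\omega(n)$ and extracts a spike block; the spike times, the resolution $\delta$ and the duration are arbitrary example values.

```python
import numpy as np

# Minimal sketch of the spike-train encoding of Section 2.2.1: bin spike times
# (in seconds) of N neurons with resolution delta, producing patterns omega(n).
def bin_spikes(spike_times, N, delta, T):
    n_bins = int(np.ceil(T / delta))
    omega = np.zeros((n_bins, N), dtype=int)
    for k, times in enumerate(spike_times):          # neuron index k = 0..N-1
        bins = (np.asarray(times) / delta).astype(int)
        omega[bins[bins < n_bins], k] = 1            # omega_k(n) = 1: spike in bin n
    return omega

spike_times = [[0.0012, 0.0051], [0.0031]]           # two neurons, delta = 1 ms
omega = bin_spikes(spike_times, N=2, delta=0.001, T=0.006)
print(omega)          # rows = spiking patterns omega(n)
print(omega[1:4])     # the spike block omega_1^3, of range 3
```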

2.2.2. Mathematical Setting for Spiking Probabilities

We now consider a family of transition probabilities of the form $P_n\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right]$, which represents the probability that, at time $n$, one observes the spiking pattern $\omega(n)$ given the network spike history, extending to an infinite past. This is an extension of Markov chains, where probabilities have the form $P_n\left[\omega(n) \mid \omega_{n-D}^{n-1}\right]$, where $D$ is the memory depth of the Markov chain. Letting the memory be possibly infinite corresponds to a situation where one cannot precisely fix the memory depth necessary to characterize the probability of a spike pattern given the past spike history. An example of a model requiring this context is presented in Section 2.2.3 below. Having infinite memory imposes mathematical constraints on the memory decay, which has to be sufficiently fast (typically, exponential) so that the situation remains close to Markov chains. In addition to the model presented below, neural models with infinite memories have been considered by several authors, such as E. Loecherbach and A. Galves [77]. A few remarks about this form of probability:
  • We do not assume stationarity. P n may depend explicitly on time. This is actually the reason why we have an index n. A time translation invariant probability will simply be written P .
  • For such probabilities to be well defined and useful, one needs to make assumptions about their structure. Beyond technical assumptions such as measurability, summability, non-nullness and continuity [78,79], the most important assumption here is that the dependence on the past (memory) decays fast enough, typically exponentially, so that, even if the chain has infinite memory, it is very close to Markov.
  • As one can associate to Markov chains an equilibrium probability (under conditions actually quite more general than detailed balance), the system of transition probabilities $\{P_n\}_{n \in \mathbb{Z}}$ also admits, under the mathematical conditions sketched in item 2 above, an equivalent notion, called "chains with complete connections" or "chains with unbounded memory" [76].
  • These distributions are formally (left-sided) Gibbs distributions, where the Gibbs potential is $\Phi(n, \omega) \overset{\mathrm{def}}{=} \log P_n\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right]$ (the non-nullness assumption imposes that $P_n\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right] > 0$). This establishes a formal link to statistical physics. In particular, when the chain is stationary, expanding the potential in products of spike events up to second order, one recovers the maximum entropy models used in the literature of spike train analysis, including the so-called Ising model [22,48,80,81]. However, the chains we consider are not necessarily stationary.

2.2.3. A Model of Effective Interactions between RG Cells

The visual cortex has no clue as to which biophysical processes are taking place in the retina. All the visual information it receives is encoded in spike trains. This leads to the idea of proposing models of a spiking RG cell network where the dynamics of RG cell voltages are only constrained by the RG cells' spike history. Here, one assumes that RG cell dynamics are controlled by the interactions with hidden layers, for example, the B cells–A cells layers in the model (13), in a situation where an observer is just recording the spikes emitted by RG cells, while having no clue about the dynamics in the upper layers. These hidden layers result in providing effective interactions between RG cells that one can interpolate by fitting the statistics. The idea is then to construct a dynamical model where the spiking of an RG cell depends on the spike history emitted by the network, with virtual interactions that mimic hidden causal effects [82]. This strategy leads us to propose the model presented in the next paragraph. The advantage of this approach is that one can explicitly write the transition probabilities $P_n\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right]$ and infer, from this, a linear response formula telling us how statistical quantities, such as firing rates, but also spike correlations, are modified by a time-dependent stimulus. These results are presented in the Results Section 3.3.
The model is inspired by the generalized Integrate and Fire model (gIF) proposed by Rudolph and Destexhe [83] and generalizes the Leaky Integrate and Fire (LIF) model [84,85]. We have $N$ neurons (say, RG cells) characterized by their voltage $V_k$, $k = 1, \dots, N$. One fixes a voltage threshold $\theta$ such that, whenever $V_k(t) = \theta$, a spike is emitted by neuron $k$ at time $t$, and its voltage is reset to a reset value (typically, $V_{reset} = 0$). Below $\theta$, the dynamics of the voltage (sub-threshold dynamics) are governed by Equation (27) below.
In the LIF model, synaptic conductances are constant. In the gIF model, in contrast, the synaptic conductance g k j between the pre-synaptic neuron j and the post-synaptic neuron k depends on spike history as:
$$g_{kj}(t, \omega) = G_{kj}\, \alpha_{kj}(t, \omega), \tag{25}$$
where:
$$\alpha_{kj}(t, \omega) = \sum_{n = -\infty}^{t} \alpha_{kj}(t - n)\, \omega_j(n). \tag{26}$$
The notation $g_{kj}(t, \omega)$ means that the function $g_{kj}$ depends on the spikes occurring before time $t$. $G_{kj} \ge 0$ is the maximal conductance between $j$ and $k$. It is zero when there is no synaptic connection between neurons $j$ and $k$. In (26), the function $\alpha_{kj}(t)$, called the $\alpha$-kernel, summarizes the complex dynamical process underlying the generation of a post-synaptic potential after the emission of a pre-synaptic spike [86]. It has the typical form $\alpha_{kj}(t) = P(t)\, e^{-\frac{t}{\tau_{kj}}}\, H(t)$, where $P(t)$ is a polynomial in time and $H(t)$ is the Heaviside function. What matters on mathematical grounds is the exponential tail of $\alpha_{kj}(t)$ [46]. The function $\alpha_{kj}(t, \omega)$ depends on the spike history preceding $t$. It records the spikes emitted by the pre-synaptic neuron $j$ before $t$, corresponding to $\omega_j(n) = 1$, and adds up a contribution $\alpha_{kj}(t - n)$ to the post-synaptic conductance from pre-synaptic neuron $j$ to post-synaptic neuron $k$.
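To illustrate Equations (25) and (26), here is a minimal sketch accumulating $\alpha$-kernel contributions over a spike history; the particular kernel shape $\frac{t}{\tau} e^{-t/\tau}$ (a $P(t) = \frac{t}{\tau}$ instance of the form above), the time constant and $G_{kj}$ are illustrative choices.

```python
import numpy as np

# Minimal sketch of Equations (25)-(26): the synaptic conductance g_kj at
# discrete time t accumulates an alpha-kernel contribution for every past
# spike of the pre-synaptic neuron j.
def alpha_kernel(t, tau):
    return np.where(t >= 0, (t / tau) * np.exp(-t / tau), 0.0)

def conductance(t, omega_j, G_kj, tau, delta=1.0):
    """g_kj(t, omega) = G_kj * sum_{n <= t} alpha((t - n) * delta) * omega_j(n)."""
    n = np.arange(len(omega_j))
    past = n <= t
    return G_kj * np.sum(alpha_kernel((t - n[past]) * delta, tau) * omega_j[past])

omega_j = np.array([0, 1, 0, 0, 1, 0, 0, 0])   # spikes of neuron j at n = 1, 4
print([round(conductance(t, omega_j, G_kj=1.0, tau=2.0), 3) for t in range(8)])
```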
Now, the gIF dynamics reads [46,47,62]:
$$C_k\, \frac{dV_k}{dt} + g_L\, (V_k - E_L) + \sum_j g_{kj}(t,\omega)\, (V_k - E_j) = S_k(t) + \sigma_B\, \xi_k(t), \quad \text{if } V_k(t) < \theta, \tag{27}$$
where $g_L$, $E_L$ are, respectively, the leak conductance and the leak reversal potential, and $E_j$ is the reversal potential characterizing the synaptic transmission between $j$ and $k$. Finally, $\xi_k(t)$ is a white noise, introducing stochasticity into the dynamics; its intensity is $\sigma_B$.
Setting $W_{kj} = G_{kj}\, E_j$, $i_k(t,\omega) = g_L E_L + \sum_j W_{kj}\,\alpha_{kj}(t,\omega) + S_k(t) + \sigma_B\,\xi_k(t)$, and $g_k(t,\omega) = g_L + \sum_{j=1}^{N} g_{kj}(t,\omega)$, one can finally write the gIF dynamics in the form:
$$C_k\, \frac{dV_k}{dt} + g_k(t,\omega)\, V_k = i_k(t,\omega), \quad \text{if } V_k(t) < \theta,$$
where $i_k(t,\omega)$ depends on the network spike history via $\alpha_{kj}(t,\omega)$ and on the stimulus, and contains a stochastic term. As the reversal potential $E_j$ can be positive or negative, the synaptic weights $W_{kj}$ define an oriented and signed graph whose vertices are the neurons. These weights are what we call effective interactions.
What makes the gIF model very rich is that it proposes a biophysically grounded way to construct a dynamical system where the variables (here, voltages) are constrained by the sole information of the spike train history. The price to pay is that the dynamics actually depend on the whole spike history, which is potentially infinite: the gIF model has infinite memory. This is essentially because the conductance depends on the whole history and, unlike the voltage, is not reset when the neuron fires. Nevertheless, the exponential decay of the $\alpha$-kernel ensures the existence (and uniqueness) of transition probabilities of the form $P_n\!\left[\omega(n) \,\middle|\, \omega_{-\infty}^{n-1}\right]$ [47,48,49,50].
Note that the integration of (27) requires not only the knowledge of the voltages $V_k$, the stimulus and the noise at time $t$; it requires, in addition, the knowledge of the spike train $\omega$ emitted by the network before $t$. In this sense, this is not a classical dynamical system. Nevertheless, Equation (27) can be explicitly integrated [47,51].
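As an illustration, here is a minimal numerical sketch of these dynamics (a toy Euler discretization with a purely exponential $\alpha$-kernel; the network size, weights and parameter values are arbitrary placeholders, and this is not the integration scheme of [47,51]):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy parameters (placeholders, not fitted to any retina) ---
N, T, dt = 8, 2000, 0.1                  # neurons, time steps, step size
C, gL, EL, theta = 1.0, 0.1, 0.0, 1.0    # capacitance, leak, threshold
tau_alpha, sigma_B, S = 5.0, 0.05, 0.15  # kernel time, noise, constant drive
G = rng.uniform(0.0, 0.05, (N, N))       # maximal conductances G_kj >= 0
E = rng.choice([-2.0, 2.0], N)           # reversal potentials E_j (sign = excit./inhib.)

V = np.zeros(N)                          # voltages
a = np.zeros((N, N))                     # alpha_kj(t, omega): spike-history traces
spikes = np.zeros((T, N), dtype=int)
decay = np.exp(-dt / tau_alpha)

for t in range(T):
    g_syn = G * a                        # g_kj(t, omega) = G_kj alpha_kj(t, omega)
    g_k = gL + g_syn.sum(axis=1)         # total conductance
    i_k = (gL * EL + (g_syn * E[None, :]).sum(axis=1) + S
           + sigma_B * rng.standard_normal(N) / np.sqrt(dt))
    V += dt / C * (-g_k * V + i_k)       # sub-threshold step of C dV/dt + g_k V = i_k
    fired = V >= theta
    spikes[t] = fired
    V[fired] = 0.0                       # the voltage is reset...
    a = a * decay + spikes[t][None, :]   # ...but conductance traces keep the history

print("firing rates:", spikes.mean(axis=0))
```

Note how the conductance traces, unlike the voltage, are not reset at spiking: this is what gives the model its unbounded memory.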

3. Results

3.1. How Could Lateral A Cells Connectivity Shape the Receptive Field of a Ganglion Cell?

The response of an RG cell to visual stimuli is shaped by the retina structure depicted in Figure 1. Here, with the model introduced in Section 2.1, we would like to characterize the respective effects of the stimulus and of the network connectivity, especially A cells, and understand under which conditions the conjugated effect of network dynamics and stimulus can be represented by a convolution of the form (1), where the kernel $K_{G_\alpha}$ is intrinsic to the cell, i.e., does not depend on the stimulus.

3.1.1. Non-Rectified Case

The answer is relatively easy when no rectification takes place, i.e., when the trajectory of (13) stays in the domain $\Omega^{(0)}$ (see Section 2.1.6 for the definition). Indeed, in this case, the evolution is ruled by Equation (21), which holds from the initial time $t_0$, where the stimulus starts to be applied, to the current time $t$. Actually, we can consider that $t_0$ lies far in the past and let it tend to $-\infty$. This corresponds to considering that the stimulus is applied on a time scale much longer than the characteristic times in the problem (i.e., the inverse of the real part of the eigenvalues). Then, Equation (21) reads $X(t) = \int_{-\infty}^{t} e^{L^{(0)}(t-s)}\, F^{(0)}(s)\, ds$, which is $X(t) = \left[e^{L^{(0)}\,\cdot} \ast F^{(0)}\right](t)$. This equation actually makes sense only if all eigenvalues of $L^{(0)}$ are stable, as we assumed above. Note also that $F^{(0)} = C^{(0)} + F$, where $C^{(0)}$ is a constant, depending on thresholds (Equation (14)), whose integration in the convolution product gives $-{L^{(0)}}^{-1} C^{(0)} = X^*$, the baseline activity of $X(t)$ without stimulus. We may ignore this constant in the sequel and focus on the time-varying part of the response, $\left[e^{L^{(0)}\,\cdot} \ast F\right](t)$. As $F$ is itself defined in terms of a convolution (Equation (5)) with the stimulus and its derivative, $X(t)$ is a convolution with the stimulus and its derivative. Here, it is useful to express $X(t)$ in components.
One can then show that [45]:
$$X_\alpha(t) = V^{drive}_\alpha(t) + E^{net,(0)}_\alpha(t), \quad \alpha = 1 \dots N, \tag{28}$$
where:
$$E^{net,(0)}_\alpha(t) = \sum_{\beta=1}^{N} \sum_{\gamma=1}^{N_B} P^{(0)}_{\alpha\beta}\; {P^{(0)}}^{-1}_{\beta\gamma}\; \varpi^{(0)}_{\beta\gamma} \int_{-\infty}^{t} e^{\lambda^{(0)}_\beta (t-s)}\; V^{drive}_\gamma(s)\, ds, \tag{29}$$
where $\varpi^{(0)}_{\beta\gamma} = \lambda^{(0)}_\beta + \frac{1}{\tau_{B_\gamma}}$. The term $V^{drive}_\alpha(t)$ in Equation (28) is the stimulus drive and acts only on B cells (it vanishes for $\alpha > N_B$). The term (29) contains the network effects. The drive imposed on B cells impacts A cells via the connectivity and, thereby, has a feedback effect on B cells. In addition, the joint activity of B cells and A cells drives the RG cells response ($\alpha > N_B + N_A$). In particular, this equation allows for computing explicitly the RF of an RG cell.
For this, we introduce the function $e^{(0)}_\beta(t) \equiv e^{\lambda^{(0)}_\beta t}\, H(t)$, so that $\int_{-\infty}^{t} e^{\lambda^{(0)}_\beta (t-s)}\, V^{drive}_\gamma(s)\, ds \equiv \left[e^{(0)}_\beta \ast V^{drive}_\gamma\right](t)$, which, according to (2), is $\left[e^{(0)}_\beta \ast K_{B_\gamma} \overset{x,y,t}{\ast} S\right](t)$. Thus, by identification with (1), the kernel of RG cell $\alpha = N_B + N_A + 1, \dots, N_B + N_A + N_G$ is:
$$K_{G_\alpha}(x,y,t) = \sum_{\beta=1}^{N} \sum_{\gamma=1}^{N_B} P^{(0)}_{\alpha\beta}\; {P^{(0)}}^{-1}_{\beta\gamma}\; \varpi^{(0)}_{\beta\gamma}\; e^{(0)}_\beta \ast K_{B_\gamma}. \tag{30}$$
This provides an explicit equation for the kernel of an RG cell, embedded in a network of B cells, A cells and RG cells with dynamics (13), when no rectification takes place.
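A minimal numerical sketch of Equation (30), restricted to its temporal part (the network matrices and the B cell temporal kernel below are arbitrary stand-ins, chosen only to make the computation concrete):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy network: N cells in total, the first N_B being B cells (placeholder sizes)
N, N_B, tau_B, dt = 6, 3, 0.1, 1e-3
L0 = -np.eye(N) / tau_B + 0.5 * rng.standard_normal((N, N)) / N
lam, P = np.linalg.eig(L0)
assert np.all(lam.real < 0), "all modes must be stable for (30) to apply"
Pinv = np.linalg.inv(P)

t = np.arange(0.0, 1.0, dt)
K_BT = np.exp(-t / 0.05) - 0.5 * np.exp(-t / 0.15)  # biphasic B cell stand-in

alpha = N - 1                        # an "RG cell" index
K_G = np.zeros_like(t)
for beta in range(N):
    e_beta = np.exp(lam[beta] * t)                   # e^(0)_beta(t), t >= 0
    conv = np.convolve(e_beta, K_BT)[: t.size] * dt  # e^(0)_beta * K^T_B
    for gamma in range(N_B):
        varpi = lam[beta] + 1.0 / tau_B              # varpi^(0)_{beta gamma}
        K_G += (P[alpha, beta] * Pinv[beta, gamma] * varpi * conv).real

print("temporal RF of cell", alpha, "- first samples:", K_G[:4])
```

The sum over conjugate eigenvalue pairs is real, so accumulating real parts term by term gives the same result as taking the real part of the full sum.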

3.1.2. Interpretation

The kernel obtained in (30) is the response of the RG cell to a Dirac pulse, corresponding, in experiments, to a brief light (or dark) full-field flash. It can also be obtained from a white noise stimulus, corresponding, in experiments, to the so-called Spike Triggered Average (STA) [38,39,40]. It corresponds, therefore, to the functional definition of the receptive field of RG cells used in experiments. In addition, Equations (28) and (29) give us the voltage of all cells in the network at time $t$ under the influence of a stimulus. Interestingly, thus, these equations allow us to visualize the joint evolution of B cells and A cells as well as their action on RG cells. Note that B cells and A cells are difficult to access experimentally. Given a prescribed connectivity (matrices $W_{AB}$, $W_{BA}$, $W_{GB}$, $W_{GA}$), Equation (28) therefore provides a mathematical insight into the potential, hidden dynamics of B cells and A cells leading to the experimentally observed response of RG cells. Thus, this gives us possible scenarios characterizing the potential effects of A cells networks on the RG cells response. In addition, Equation (30) also provides the RF for B cells ($\alpha = 1 \dots N_B$) and A cells ($\alpha = N_B + 1 \dots N_B + N_A$). We observe, in particular, that, in a network, the RF of a B cell is not only what comes from the OPL—the term $V^{drive}_\alpha(t)$—it integrates lateral A cells connectivity as well. This is similar to the center-surround shaping of the OPL output due to H cells, but here, we might have different effects, due to the different physiology of A cells.

3.1.3. Space-Time Separability

The RG cell kernel, in general, does not factorise into a product of a function of space and a function of time (separability). Even in the case where the B cells RF is separable, i.e., $K_{B_\gamma}(x,y,t) = K^S_{B_\gamma}(x,y)\, K^T_{B_\gamma}(t)$, where $K^S_{B_\gamma}$ is the spatial part, centred at $x_\gamma, y_\gamma$, and $K^T_{B_\gamma}$ the temporal part, the RG cell kernel reads:
$$K_{G_\alpha}(x,y,t) = \sum_{\beta=1}^{N} P^{(0)}_{\alpha\beta} \sum_{\gamma=1}^{N_B} {P^{(0)}}^{-1}_{\beta\gamma}\; \varpi^{(0)}_{\beta\gamma} \left[e^{(0)}_\beta \ast K^T_{B_\gamma}\right] \times K^S_{B_\gamma}(x,y), \tag{31}$$
and is not separable either. Now, if B cells have the same temporal kernel $K^T_B$, independent of $\gamma$, and the same characteristic time $\tau_B$, such that $\varpi^{(0)}_{\beta\gamma} = \lambda^{(0)}_\beta + \frac{1}{\tau_B} \equiv \varpi^{(0)}_\beta$ is independent of $\gamma$, we can write:
$$K_{G_\alpha}(x,y,t) = \sum_{\beta=1}^{N} P^{(0)}_{\alpha\beta}\; \varpi^{(0)}_\beta \left[e^{(0)}_\beta \ast K^T_B\right] \sum_{\gamma=1}^{N_B} {P^{(0)}}^{-1}_{\beta\gamma}\; K^S_{B_\gamma}(x,y). \tag{32}$$
This kernel is not yet strictly separable, as the term $\sum_{\gamma=1}^{N_B} {P^{(0)}}^{-1}_{\beta\gamma}\, K^S_{B_\gamma}(x,y)$ still depends on $\beta$, the eigenmode index, via ${P^{(0)}}^{-1}_{\beta\gamma}$. Now, the eigenmodes depend on connectivity. In particular, the B cell to RG cell connectivity corresponds to a pooling of B cells located in the vicinity of RG cell $\alpha$. The simplest case is when there is no lateral connectivity and each RG cell $\alpha$ is contacted by only one B cell, with index $\gamma_\alpha$ (this implies $N_B = N_G$). In this case, $P^{(0)}_{\alpha\beta} = \delta_{\alpha\beta}$ and ${P^{(0)}}^{-1}_{\beta\gamma} = \delta_{\beta\gamma}$, so that $K_{G_\alpha}(x,y,t) = \varpi^{(0)}_\alpha \left[e^{(0)}_\alpha \ast K^T_B\right] K^S_{B_\alpha}(x,y)$ is separable. More generally, pooling implies that $P^{(0)}_{\alpha\beta}$ and ${P^{(0)}}^{-1}_{\beta\gamma}$ are locally spread around $\alpha$, resulting in a spatial part $\sum_{\gamma=1}^{N_B} {P^{(0)}}^{-1}_{\beta\gamma}\, K^S_{B_\gamma}(x,y)$ depending only on $\alpha$.
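Separability is easy to probe numerically: sampling a kernel on a space-time grid and computing its singular value decomposition, the kernel is (approximately) separable when a single singular value dominates. A minimal sketch of this standard test (the two kernels are illustrative stand-ins):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 64)
t = np.linspace(0.0, 0.5, 128)
X, T = np.meshgrid(x, t, indexing="ij")

# separable stand-in: Gaussian in space times a biphasic temporal profile
K_sep = np.exp(-X**2 / 0.1) * (np.exp(-T / 0.05) - 0.5 * np.exp(-T / 0.15))
# non-separable stand-in: spatial width drifting with time (mode mixing)
K_mix = np.exp(-X**2 / (0.1 + 0.5 * T)) * np.exp(-T / 0.1)

for name, K in [("separable", K_sep), ("mixed", K_mix)]:
    s = np.linalg.svd(K, compute_uv=False)
    print(name, "- energy in first SVD mode:", s[0]**2 / np.sum(s**2))
```

The first kernel puts all its energy in one mode; the second spreads it over several.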

3.1.4. Resonances

The eigenvalues of $L^{(0)}$ can be complex, going by conjugate pairs. It is actually quite easy to obtain such a situation mathematically, even considering nearest neighbours' interactions [45]. A straightforward consequence is the existence of preferred time frequencies (resonances) for an RG cell. In other words, applying periodic sequences of brief flashes with varying frequency, one might observe a peak in the amplitude of the RG cell response at specific frequencies. This remark could, e.g., explain the “bump” observed in experiments when the retina is submitted to the so-called “Chirp” stimulus [87], a stimulus composed of different phases of flash stimulation where one varies duration, frequency and amplitude. In the phase where the amplitude is constant but the frequency is varying, some RG cells exhibit a resonance-like peak (see, e.g., Figure 1b in [87]). Of course, such resonances could also be explained by intrinsic cell properties, like ion channel response. The potential effect of lateral A cells connectivity would have to be tested experimentally by, e.g., inhibiting A cells synaptic transmission for RG cells exhibiting resonance peaks.
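The mechanism can be read off a single complex eigenmode (a textbook computation, given here only for illustration). Forcing a stable mode, $\dot z = \lambda z + e^{i\omega t}$ with $\lambda = -\eta + i\omega_0$, $\eta > 0$, gives:
$$z(t) = \frac{e^{i\omega t}}{i\omega - \lambda}, \qquad |z(t)| = \frac{1}{\sqrt{\eta^2 + (\omega - \omega_0)^2}},$$
so the response amplitude peaks at the stimulus frequency $\omega = \omega_0$, the peak being sharper for weakly damped modes (small $\eta$).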

3.1.5. Stimulus Induced Waves

It is a general fact that networks of coupled units can produce waves. Spontaneous waves are actually reported in the developing retina, induced, in the so-called stage II and stage III, by A cells [88]. They are generated by nonlinear mechanisms and closeness to bifurcations [27]. This is not the type of wave we are dealing with here, though. Instead, we are referring to waves triggered by a moving stimulus, say a moving bar. The idea is that such a stimulus can induce, via A cells connectivity, a wave of activity which can be ahead of the stimulus, for a certain range of parameters (e.g., synaptic coupling intensity) compatible with physiology. Stimulus-induced waves, in advance with respect to the stimulus, have been reported in the visual cortex [89]. They are due to lateral cortical activity and induce cortical anticipation. The mathematical analysis made in [45] suggests that such anticipatory waves could also exist in the retina thanks to A cells' lateral connectivity, conjugated with the nonlinear gain control already known to induce a form of retinal anticipation [42,44].

3.1.6. Stimulus Adaptation

Short-term plasticity has been reported in the retina at the synapses between B cells–A cells and A cells–RG cells [43,90]. Note, actually, that although most models of plasticity, referring to cortical neurons, consider spiking neurons [91], the physiology of short-term synaptic adaptation does not necessarily require spikes and is compatible with inner retinal network dynamics. The effect of synaptic plasticity can be integrated in the model (13). It will result in variations of the eigenvalues and eigenvectors of the transport operator $L$, with potential changes in dynamics. Although potentially highly relevant phenomena, such as bifurcations induced by plasticity, would require considering a nonlinear version of (13) (at least, rectification, to avoid exponential instability), we can ask about the simple linear effect of plasticity on the RG cell response. A straightforward potential effect could be frequency adaptation to periodic flashes.

3.1.7. Rectification

Let us now investigate the role of rectification. In the general case, a trajectory crosses several domains and is characterized by Equation (21). Starting from the domain $\Omega^{(n_1)}$ (rest state), the state of the network submitted to a stimulus enters a new domain $\Omega^{(n_2)}$, where some cells are rectified, at time $t^{(n_2)}_-$, and so on. Can one still define a response formula of type (1)? This raises several technical difficulties, first because some eigenvalues can be unstable. As we have seen above, this does not lead to an exponential explosion though, precisely because rectification prevents the cell voltage from diverging. Mathematically, this is expressed by the exit of the trajectory from the domain with a positive eigenvalue and a projection on the subspace spanned by non-rectified cells. Another difficulty comes from the constants $C^{(n)}$, defined in (14), coming from the threshold in the rectification function. They can be removed by assuming that all thresholds are equal to $0$. This is what we are going to do now, for the sake of simplicity. One can then define domain-dependent flows $\Phi^{(n)}(X,t) \equiv e^{L^{(n)} t}\, \Theta\!\left[X(t) \in \Omega^{(n)}\right]$, where $\Theta$ is the indicator function, so that $\left[\Phi^{(n)}(X,\cdot) \ast F\right](t) = \sum_{n_m = n} \int_{t^{(n_m)}_-}^{t^{(n_m)}_+} e^{L^{(n_m)}\left(t^{(n_m)}_+ - s\right)}\, F(s)\, ds$, where the sum holds on the indices $n_m$ in the trajectory such that $n_m = n$. This allows us to express the recurrence Formula (22) in terms of a convolution and, thereby, to express the whole trajectory in terms of a convolution with a transport operator.
However, there are several important differences with the non-rectified case. First, the kernel defined this way depends on the trajectory. As the sequence of domains met by the trajectory (and the times where the trajectory enters these domains) depends on the stimulus, the RF of rectifiable cells now depends on the stimulus. Note that the situation would actually be even worse for nonlinear cells. Indeed, the question hidden behind these remarks is: “To what extent is the linear response assumption, defining a RF via a convolution equation such as (2), valid?” We will actually come back to a similar question in Section 3.3 for a network of spiking neurons. Linear response essentially requires the perturbation to be “weak enough”, which, in our case, means that cells are not rectified. The formulation in terms of a piecewise linear system allows for extending the notion of RF to rectified cells, but the price to pay is that the RF now depends on the stimulus. With respect to biology, this effect would, for example, mean that cells identified, e.g., to be ON with an STA approach, respond differently (e.g., ON-OFF) to a more sophisticated stimulus like the “Chirp” stimulus [87].
In the rectified cases, the eigenvalues $\lambda^{(n)}_\beta$, $\beta = 1 \dots N$, and eigenvectors $P^{(n)}_\beta$ depend on the domain, i.e., on the list of rectified cells, and are different from those of the domain $\Omega^{(0)}$ of the rest state. They actually differ in two ways. First, rectified cells provide eigenvalues $-\frac{1}{\tau_\beta}$ and canonical eigenvectors $e_\beta$, so that $P^{(n)}_{\alpha\beta} = \delta_{\alpha\beta}$ for these cells; they therefore no longer contribute to the network response. The second effect is more intricate. Indeed, the mere fact of rectifying one cell has, in general, the effect of modifying the whole spectrum and the eigenvectors, with strong effects on the cell response. This can be easily understood. Consider the (not really retinal-realistic) situation where a cell is a hub in a network. Silencing it has, in general, dramatic effects on the global dynamics of this network.

3.1.8. Conclusions of This Section

In this section, we have given a mathematical answer to problem 1, level 1, posed in the Introduction. On the basis of a simplified model of B cells–A cells–RG cells interactions, we have produced a formalism allowing us to compute the network response to spatio-temporal stimuli. We have been able to write explicitly the RF of individual RG cells appearing in Equation (1), where the kernel depends explicitly on lateral connectivity. As we showed, however, the linear response Formula (1), where the kernel is independent of the stimulus, holds when stimuli have a weak enough amplitude so that cells are not rectified. As soon as rectification takes place, the convolution form (1) implies, in general, that the kernel changes with the stimulus. This effect could be observed in experiments if the cell type, characterized via STA, provides a different type of response to other stimuli.

3.2. How Could Spatio-Temporal Stimuli Correlations and Retinal Network Dynamics Shape the Spike Train Correlations at the Output of the Retina?

In this section, we extrapolate the previous analysis of the model (13) to analyse how the spike trains emitted by RG cells can be correlated via the network, and especially via A cells connectivity. We especially want to make mathematical statements on how A cells could decorrelate RG cells, as claimed on the basis of experiments [15]. We consider first the non-rectified case and then analyse how rectification can modify correlations.

3.2.1. Voltage Correlations

We first compute the voltage correlations induced by a non-stationary spatio-temporal stimulus in the model (13). Note that correlations require some notion of probability and, thus, of randomness. Moreover, it is more convenient when such a probability is stationary, while we want, here, to consider a non-stationary problem. This is not contradictory, though. There are two simple (and not incompatible) ways to address this point. First, one may consider that the dynamical system (13) has random initial conditions, drawn with respect to a stationary probability measure. Second, one can add noise—always present in biological systems—to the dynamics (13). We can make the assumption that the noise is stationary and Brownian (which is a pure mathematical convenience). In biology, spike correlations are usually obtained by averaging over repeats of the same experiment where a stimulus is presented to the retinal network. This corresponds, therefore, to averaging over initial conditions in the presence of noise. Here, to make things simpler, we assume that initial conditions are deterministic (the network is in the rest state when the stimulus is applied) and that randomness is induced by a Brownian noise.

Stimulus Induced Correlations in the Non-Rectified Case

Let us therefore consider a stimulus of the form $S(x,y,t) = S_d(x,y,t) + \sigma_S\, \xi(x,y,t)$, where $S_d(x,y,t)$ is deterministic and $\xi(x,y,t)$ is a spatio-temporal white noise; $\sigma_S$ controls the intensity of this noise. The spatial integration of the B cells RF then induces an obvious correlation between B cell voltages. Consider, indeed, the term $V^{drive}_i(t)$ in Equation (2) in the presence of this stimulus. Denoting $\mathbb{E}$ the expectation with respect to the Wiener measure, we have $\mathbb{E}\left[\xi(x,y,t)\right] = 0$ and $\mathbb{E}\left[\xi(x,y,t)\,\xi(x',y',t')\right] = \delta(x-x')\,\delta(y-y')\,\delta(t-t')$. Then, $\mathbb{E}\left[V^{drive}_i(t)\right] = \left[K_{B_i} \overset{x,y,t}{\ast} S_d\right](t)$, and the correlation between drives is:
$$\mathbb{E}\!\left[\left(V^{drive}_i(t) - \mathbb{E}\!\left[V^{drive}_i(t)\right]\right)\left(V^{drive}_j(t') - \mathbb{E}\!\left[V^{drive}_j(t')\right]\right)\right] = \sigma_S^2 \int_{x=-\infty}^{+\infty}\!\int_{y=-\infty}^{+\infty}\!\int_{s=-\infty}^{t} K_{B_i}(x - x_i, y - y_i, t - s)\; K_{B_j}(x - x_j, y - y_j, t' - s)\, dx\, dy\, ds, \tag{33}$$
assuming $t \le t'$ without loss of generality. We recall that $x_i, y_i$ are the coordinates of the center of B cell $i$'s RF. Equation (33) expresses that drives are correlated due to the overlap of B cell RFs, a well known result. In particular, correlations decrease with the distance $d$ between the two RF centers (like $e^{-d^2}$, up to scaling, if RFs are Gaussian).
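The Gaussian fall-off can be checked by a standard overlap computation (written here in one spatial dimension, with both RFs of width $\sigma$, for illustration):
$$\int_{-\infty}^{+\infty} e^{-\frac{(x - x_i)^2}{2\sigma^2}}\; e^{-\frac{(x - x_j)^2}{2\sigma^2}}\, dx = \sigma\sqrt{\pi}\; e^{-\frac{d^2}{4\sigma^2}}, \qquad d = x_i - x_j,$$
so the drive correlation inherits a Gaussian decay in the distance between RF centers.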
More generally, the term $F_{B_i}(t)$ in Equation (5) has mean:
$$\mathbb{E}\left[F_{B_i}(t)\right] = \left[\left(\frac{1}{\tau_B}\, K_{B_i} + \frac{\partial K_{B_i}}{\partial t}\right) \overset{x,y,t}{\ast} S_d\right](t),$$
and correlation:
$$\begin{aligned} C_{F_{ij}}(t,t') = \sigma_S^2 \Bigg[\; & \frac{1}{\tau_B^2} \int_{x=-\infty}^{+\infty}\!\int_{y=-\infty}^{+\infty}\!\int_{s=-\infty}^{t} K_{B_i}(x - x_i, y - y_i, t - s)\; K_{B_j}(x - x_j, y - y_j, t' - s)\, dx\, dy\, ds \\ + & \frac{1}{\tau_B} \int_{x=-\infty}^{+\infty}\!\int_{y=-\infty}^{+\infty}\!\int_{s=-\infty}^{t} K_{B_i}(x - x_i, y - y_i, t - s)\; \frac{\partial K_{B_j}}{\partial t'}(x - x_j, y - y_j, t' - s)\, dx\, dy\, ds \\ + & \frac{1}{\tau_B} \int_{x=-\infty}^{+\infty}\!\int_{y=-\infty}^{+\infty}\!\int_{s=-\infty}^{t} \frac{\partial K_{B_i}}{\partial t}(x - x_i, y - y_i, t - s)\; K_{B_j}(x - x_j, y - y_j, t' - s)\, dx\, dy\, ds \\ + & \int_{x=-\infty}^{+\infty}\!\int_{y=-\infty}^{+\infty}\!\int_{s=-\infty}^{t} \frac{\partial K_{B_i}}{\partial t}(x - x_i, y - y_i, t - s)\; \frac{\partial K_{B_j}}{\partial t'}(x - x_j, y - y_j, t' - s)\, dx\, dy\, ds \;\Bigg], \end{aligned} \tag{34}$$
for $i, j = 1 \dots N_B$. This implies that the forcing term $F$ in (13) has an $N \times N$ time-dependent correlation matrix $C_F(t,t')$, with an $N_B \times N_B$ block corresponding to (34), while the rest of the matrix is zero (A cells and RG cells have no direct stimulus drive).
Let us now consider the full dynamics (16), in the non-rectified case: the trajectory stays in $\Omega^{(0)}$. Under the stimulus $S_d(x,y,t) + \sigma_S\, \xi(x,y,t)$, $X(t)$ is a stochastic process, with mean:
$$\mathbb{E}\left[X(t)\right] = \left[e^{L^{(0)}\,\cdot} \ast \mathbb{E}\, F^{(0)}\right](t), \tag{35}$$
and correlation matrix:
$$C_X(t,t') = \int_{s=-\infty}^{t} \int_{s'=-\infty}^{t'} e^{L^{(0)}(t-s)}\; C_F(s,s')\; e^{\tilde L^{(0)}(t'-s')}\, ds\, ds', \tag{36}$$
where $\tilde L^{(0)}$ is the transpose of $L^{(0)}$. This is the general form of the correlations induced by the network. Note that these correlations are stationary (they only depend on $t - t'$). This no longer holds in the rectified case, as discussed below.

Correlations Structure and Decorrelation

Equation (36) combines the B cells RF overlap (in the matrix $C_F(s,s')$) with network effects, A cells and/or gap junctions, via the transfer operator $L^{(0)}$. One can actually better see these combined effects by projecting on the eigenvectors basis of $L^{(0)}$, where $L^{(0)} = P^{(0)}\, \Lambda^{(0)}\, {P^{(0)}}^{-1}$ and $\tilde L^{(0)} = {\tilde P^{(0)}}^{-1}\, \Lambda^{(0)}\, \tilde P^{(0)}$. Denoting:
$$\Delta_F(s,s') = {P^{(0)}}^{-1}\; C_F(s,s')\; {\tilde P^{(0)}}^{-1}, \tag{37}$$
Equation (36) becomes:
$$C_X(t,t') = \int_{s=-\infty}^{t} \int_{s'=-\infty}^{t'} P^{(0)}\; e^{\Lambda^{(0)}(t-s)}\; \Delta_F(s,s')\; e^{\Lambda^{(0)}(t'-s')}\; \tilde P^{(0)}\, ds\, ds',$$
which can be interpreted as follows: whereas $C_F(s,s')$ is a rank-$N_B$ matrix containing the B cell drives' correlations, $\Delta_F(s,s')$ is a full-rank matrix which integrates B cell drives and network correlations (due to the product with the transfer matrices ${P^{(0)}}^{-1}$ and ${\tilde P^{(0)}}^{-1}$). These correlations are transported in time by the diagonal matrix $e^{\Lambda^{(0)}(t-s)}$. In general, there is no way to anticipate a priori what the combined effect of B cells RF overlaps and network will be on voltage correlations. Depending on the model parameters (characteristic times, synaptic weights), it can be anything. In particular, there is no general, mathematical reason to think that A cells would decorrelate the RG cell outputs.
This mathematical consequence is in apparent contrast with the claim, found in thorough experimental papers, stating that “the inhibition” (mediated by A cells) “decorrelates visual feature representations in the inner retina” [15]. What could be the origin of this discrepancy? The first reason is that correlations in the retina are often thought of in terms of the drive correlations (33). Reducing the overlap between B cell RFs, i.e., decreasing the magnitude of the product $K_{B_i}(x - x_i, y - y_i, t - s)\, K_{B_j}(x - x_j, y - y_j, t' - s)$ in the integral (33), lowers the drive correlations. The idea is then that A cells lateral inhibition reduces the center part of the RF and increases the surround, thereby reducing the RF overlap. Is there a way to mathematically validate this statement in (36)? Under which conditions on the model parameters does it hold true?
Let us investigate what “decorrelation” means in our setting. Strictly speaking, it means that $C_X(t,t')$ is diagonal, that is, that the variable change corresponding to the transfer matrix $P^{(0)}$ diagonalizes the stimulus correlation matrix $C_F(s,s')$. Now, $C_F(s,s')$, as a correlation matrix, is diagonalizable by an orthogonal basis change, with real eigenvalues, whereas $P^{(0)}$ has to do with the B cells–A cells network, and it is easy to find situations where it is complex, with complex eigenvalues. Thus, in general, the network effects do not diagonalize $C_X(t,t')$. Nevertheless, it is indeed possible to construct networks diagonalising $C_F(s,s')$ by using the spectral decomposition theorem. In addition, if one does not stick to strict decorrelation, one can also figure out conditions on networks reducing stimuli correlations. The question is whether real A cell networks match these conditions. This is an interesting question for further studies. We see below, however, that there are other potential sources of decorrelation, especially nonlinearities.

Non-Correlated Drives

The correlation structure, already complex in the non-rectified case, is actually even more intricate when considering rectification. In the rest of this section, we want to consider in more detail the effects of rectification on RG cell spike correlations. We want to show that rectification induces non-stationary, stimulus-dependent correlations which are not due to the drives' correlations (33).
For this, we are going to consider the situation where $C_F$ is $\delta$-correlated, that is, we discard drive correlations. This corresponds to setting:
$$F(t) = m(t) + \sigma_S\, \xi(t), \tag{38}$$
where $m(t)$ is deterministic. In this situation, Equation (36) greatly simplifies, giving the correlation matrix:
$$C_X(t,t') = \sigma_S^2\; e^{L^{(0)}(t'-t)} \int_{-\infty}^{t} e^{\tilde L^{(0)}(t-s)}\; e^{L^{(0)}(t-s)}\, ds, \tag{39}$$
for $t \le t'$.
In the general case, $L^{(0)}$ is not symmetric and does not commute with $\tilde L^{(0)}$. One can then compute $C_X(t,t')$ in terms of the (common) spectrum of $L^{(0)}$, $\tilde L^{(0)}$, using the spectral decomposition theorem $L^{(0)} = \sum_{\alpha=1}^{N} \lambda^{(0)}_\alpha\, v^{(0)}_\alpha\, \tilde w^{(0)}_\alpha$, where $v^{(0)}_\alpha$ is the right eigenvector $\alpha$ of $L^{(0)}$ (the $\alpha$-th column of $P^{(0)}$) and $\tilde w^{(0)}_\alpha$ is the left eigenvector $\alpha$ of $L^{(0)}$ (the $\alpha$-th row of ${P^{(0)}}^{-1}$). In general, right (left) eigenvectors are not mutually orthogonal, but $\tilde w^{(0)}_\alpha\, v^{(0)}_\beta = \delta_{\alpha\beta}$, so that $v^{(0)}_\alpha\, \tilde w^{(0)}_\alpha$ is the projector on eigendirection $\alpha$. From this, one obtains the correlation matrix:
$$C_X(t,t') = -\sigma_S^2 \sum_{\alpha=1}^{N} e^{\lambda^{(0)}_\alpha (t'-t)}\; v^{(0)}_\alpha\, \tilde w^{(0)}_\alpha \sum_{\beta=1}^{N} \frac{v^{(0)}_\beta\, \tilde w^{(0)}_\beta}{\lambda^{(0)}_\alpha + \lambda^{(0)}_\beta}, \tag{40}$$
where eigenvalues are real or complex conjugate and are assumed to be stable (negative real part). Note that eigenvalues and projectors combine so that, finally, the correlation matrix is real.
We will keep this general form for further discussions of the rectified case but, here, it is insightful to consider the case where $L^{(0)}$ is symmetric. It is then diagonalizable on an orthogonal basis, with ${P^{(0)}}^{-1} = \tilde P^{(0)}$ and real eigenvalues $\lambda_\beta \equiv -s_\beta$, $\beta = 1 \dots N$, where $s_\beta$ is real and positive. Then, (40) reduces, in component form, to:
$$C_{\alpha_2 \alpha_1}(t'-t) = \frac{\sigma_S^2}{2} \sum_{\beta=1}^{N} \frac{P_{\alpha_2 \beta}\, P_{\alpha_1 \beta}}{s_\beta}\; e^{-s_\beta (t'-t)}. \tag{41}$$
It is useful to express, from (41), the variance of cell $\alpha_1$'s voltage (independent of time, due to stationarity):
$$\sigma^2_{\alpha_1} = \frac{\sigma_S^2}{2} \sum_{\beta=1}^{N} \frac{P^2_{\alpha_1 \beta}}{s_\beta}. \tag{42}$$
These computations provide the network correlations between cell voltages in the absence of drive correlations.
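These stationary covariances can be cross-checked numerically: they solve a Lyapunov equation, and, in the symmetric case, the solution must match the spectral formula (42). A sketch (an arbitrary symmetric stable matrix stands in for $L^{(0)}$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
N, sigma_S = 5, 0.3

A = rng.standard_normal((N, N))
L0 = -(A @ A.T + N * np.eye(N))      # symmetric, negative definite => stable

# the stationary covariance solves L0 C + C L0^T = -sigma_S^2 I
C = solve_continuous_lyapunov(L0, -sigma_S**2 * np.eye(N))

# spectral formula (42): sigma_alpha^2 = (sigma_S^2 / 2) sum_beta P_{alpha beta}^2 / s_beta
s, P = np.linalg.eigh(-L0)           # L0 = -P diag(s) P^T, with s > 0
C_spec = (sigma_S**2 / 2) * (P / s) @ P.T

print("max deviation:", np.abs(C - C_spec).max())   # ~ machine precision
```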

3.2.2. Spike Correlations

We now compute the spike correlations of RG cells induced by the network correlations (40). We assume a spiking probability of the form (8). The probability that RG cell $\alpha_1$ ($\alpha_1 > N_B + N_A$) spikes at time $t_1$ is induced by the voltage probability $\mathbb{P}$ and is given by $\nu_{\alpha_1}(t_1) \equiv \mathbb{E}\left[f\!\left(\frac{V_{G_{\alpha_1}}(t_1) - \theta_G}{\sigma_G}\right)\right]$, where the expectation is taken with respect to $\mathbb{P}$. Taking the form (8) for $f$, this is:
$$\nu_{\alpha_1}(t_1) = f\!\left(\frac{m_{\alpha_1}(t_1) - \theta_G}{\sqrt{\sigma_G^2 + \sigma_{\alpha_1}^2}}\right), \tag{43}$$
where $m_{\alpha_1}$ is the entry $\alpha_1$ of the deterministic drive term in (38). As pointed out above, two sources of noise add up here: the implicit noise, with variance $\sigma_G^2$, appearing in the LNP formulation (8), which is intrinsic to the cell, and the network-induced noise, explicit in the term $\sigma_{\alpha_1}^2$.
Likewise, the probability that RG cell $\alpha_1$ ($\alpha_1 > N_B + N_A$) spikes at time $t_1$ and RG cell $\alpha_2$ ($\alpha_2 > N_B + N_A$) spikes at time $t_2$ is:
$$\nu_{\alpha_1 \alpha_2}(t_1, t_2) = \int_{\mathbb{R}^2} f\!\left(\frac{\mu_1 \cos(\phi)\, y_1 - \mu_2 \sin(\phi)\, y_2 + m_{\alpha_1}(t_1) - \theta_G}{\sigma_G}\right) f\!\left(\frac{\mu_1 \sin(\phi)\, y_1 + \mu_2 \cos(\phi)\, y_2 + m_{\alpha_2}(t_2) - \theta_G}{\sigma_G}\right) DY, \tag{44}$$
where the integral holds on $\mathbb{R}^2$ and where $DY = \frac{1}{2\pi}\, e^{-\frac{y_1^2 + y_2^2}{2}}\, dy_1\, dy_2$. Here, $\mu_1^2, \mu_2^2$ are the eigenvalues of the pairwise correlation matrix $\mathcal{C} = \begin{pmatrix} \sigma^2_{\alpha_1} & C_{\alpha_1 \alpha_2}(t_1 - t_2) \\ C_{\alpha_2 \alpha_1}(t_2 - t_1) & \sigma^2_{\alpha_2} \end{pmatrix}$, which is diagonalizable on an orthogonal basis by an orthogonal transformation, a rotation with angle $\phi$ determined by the coefficients of $\mathcal{C}$.
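Equation (44) is a two-dimensional Gaussian expectation and is cheap to evaluate numerically, e.g., by Gauss–Hermite quadrature. A sketch (the sigmoid $f$ and all parameter values are arbitrary placeholders):

```python
import numpy as np

def f(u):
    return 1.0 / (1.0 + np.exp(-u))   # placeholder LNP nonlinearity

def joint_rate(mu1, mu2, phi, m1, m2, theta_G, sigma_G, n=60):
    # Gauss-Hermite nodes/weights (weight e^{-x^2}); rescale to N(0, 1)
    x, w = np.polynomial.hermite.hermgauss(n)
    y1, y2 = np.meshgrid(np.sqrt(2.0) * x, np.sqrt(2.0) * x)
    W = np.outer(w, w) / np.pi
    g1 = f((mu1 * np.cos(phi) * y1 - mu2 * np.sin(phi) * y2 + m1 - theta_G) / sigma_G)
    g2 = f((mu1 * np.sin(phi) * y1 + mu2 * np.cos(phi) * y2 + m2 - theta_G) / sigma_G)
    return np.sum(W * g1 * g2)

# with phi = 0 the integral factorizes into nu_{alpha_1} nu_{alpha_2}
print(joint_rate(0.5, 0.5, 0.0, 0.2, 0.1, 1.0, 0.3))
```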

3.2.3. Decorrelation Induced by Nonlinearities

The double integral (44) factorizes only in the case where $\mathcal{C}$ is diagonal ($\phi = 0$, $\mu_1 = \sigma_{\alpha_1}$, $\mu_2 = \sigma_{\alpha_2}$), where it reduces to $\nu_{\alpha_1 \alpha_2}(t_1, t_2) = \nu_{\alpha_1}(t_1)\, \nu_{\alpha_2}(t_2)$. Thus, the spikes of RG cell $\alpha_1$ at time $t_1$ and of RG cell $\alpha_2$ at time $t_2$ are decorrelated if and only if the correlation matrix (41) is diagonal. This matrix is diagonal only when there are no A cells. Otherwise, A cells have the effect of correlating voltages and, thereby, spikes. We already discussed above the possible effect of A cells in decorrelating the B cell drive term. Here, as we have removed this effect, we are in a position to discuss other potential effects inducing RG cell spike decorrelation.
First, note that, even if the correlations we compute are non-vanishing, they can nevertheless be weak. The weakness of pairwise correlations in the retina has actually been reported by many authors [22,80]. It has been known since Ref. [92] that the passage of two correlated Gaussian variables through a subsequent nonlinearity always reduces the correlation of the two signals, regardless of the shape of the nonlinearity. Thus, in our case, the nonlinear function of the LNP model reduces the correlations.
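This correlation reduction is easy to observe numerically (a Monte Carlo sketch; the sigmoid and the input correlation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.6                                    # input (voltage) correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

V = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)  # Gaussian voltages
R = 1.0 / (1.0 + np.exp(-(V - 1.0)))         # static nonlinearity (LNP-like stage)

print("input correlation :", np.corrcoef(V.T)[0, 1])
print("output correlation:", np.corrcoef(R.T)[0, 1])   # strictly smaller
```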
Now, the LNP nonlinearity is not the only source of decorrelation. Rectification also plays a crucial role. What happens, indeed, in the rectified case? Mathematically, one can use Equation (21) to compute the correlation matrices (40) (or even (36)), but the main, quite intricate, problem is now that the entrance and exit times of the domains, $t^{(n_k)}_-$, $t^{(n_k)}_+$, appearing in (21), are themselves random. This is again a consequence of the stimulus dependence of these times. The computation of the voltage correlations in this case being, for the moment, out of reach, I am going to give some straightforward, although insightful, remarks.
The non-rectified case corresponds to a trajectory staying in the domain $\Omega^{(0)}$ (forgetting about the conditions on the noise ensuring that this holds for an infinite time). Now, the computation of the voltage correlations is essentially the same if the trajectory stays in a domain $\Omega^{(n)}$: the only difference is that the eigenvalues and projectors have a superscript $(n)$ instead of $(0)$. This difference is essential, though, because rectification induces a projection on the space of non-rectified cells. The contribution of rectified cells to the voltage correlations with other cells vanishes, thereby transforming the voltage correlation matrix. By permutation of rows and columns, one can convert this matrix into a form containing a diagonal block (correlations rectified cells ↔ rectified cells) and a block characterizing the correlations non-rectified cells ↔ all cells. This reduces the model dimensionality and the global correlations. This effect, composed with the LNP nonlinearity, can reduce correlations even more.
The last important remark here is that rectification implies that RG cell correlations are stimulus-dependent, even though we have removed the drive correlations, because the exit times of the continuity domains are stimulus-dependent. In addition, the obtained correlations are non-stationary. This effect might not be noticeable with full-field stimuli or white noise, which weakly solicit the lateral A cell connectivity, but it could be more prominent when studying spatio-temporal stimuli, in particular moving trajectories or non-stationary stimuli, which constitute most real visual scenes.

3.2.4. Conclusions of This Section

In this section, we have mathematically investigated the structure of the correlations induced by the model (13), Figure 1. Our conclusion is essentially that the stimulus generates RG cell spike correlations modulated, on one hand, by the drive correlations and, on the other hand, by the B cells–A cells networks—more precisely, by the eigenvalues and eigenmodes of the transport operator. In addition, rectification and nonlinearities further impact correlations. This fact was reported by Pitkow and Meister in their paper “Decorrelation and efficient coding by retinal ganglion cells” [12], where they insist on the prominent role of nonlinearities: “Most of the decorrelation was accomplished not by the receptive fields, but by nonlinear processing in the retina”. From these remarks, they draw conclusions about information transmission by the retinal network: “At very high thresholds, the information transmission is poor. Notably, transmission also drops at low thresholds. Thus, the choice of threshold involves a trade-off between rarely using reliable symbols, such as high spike counts, or frequently using unreliable symbols, such as low spike counts”. Thus, nonlinearities play a role in retinal coding, making the spike rate of RG cells as sparse as possible, so that these cells are silent most of the time and fire at a high rate only when salient features of the stimulus make it necessary. This effect should be even more prominent for moving objects, a clear example of stimuli with salient features and strong spatio-temporal correlations induced by the trajectory, especially if this trajectory shows sharp changes. This could be mathematically analysed in the present setting, although at the expense of consequent technical efforts.
Let us also remark that rectification makes the stochastic process of voltages non-Gaussian, because the times of entering and exiting domains are now random variables too. As a consequence, spike statistics involve higher-order correlations. Although this has to be further investigated on experimental grounds, it would lead to important consequences in terms of coding. As pointed out, again, by Pitkow and Meister [12], “for highly non-Gaussian signals, such as neural spike trains and natural images, correlation may be only weakly related to redundancy.”
Sticking to the model, we may ask the following questions. Assume that we submit the model to different types of stimuli: the “classical” ones, such as white noise, the “Chirp” stimulus, natural images; but also more elaborate ones, such as moving objects with different types of trajectories, or “natural movies” including motion and “surprise”—for example, a bird crossing the visual scene with, in the background, a forest of trees in the wind. It is known that the retina is able to filter out the “noisy” motion of tree leaves while signalling the bird, thanks to dedicated circuits involving A cells [1,56]. Such circuits can easily be implemented in the model (13) [45]. What will the structure of its spike trains be, depending on the different types of stimuli? How can one “efficiently” decode the stimulus from the mere knowledge of those spike trains? How efficient is a decoding scheme based on independent, decorrelated RG cells? In contrast, would cooperative network effects make the code more precise, affording faster responses to motion [14]?
Although we are not going to answer these questions here (there is still a long way to it), we give, in the next sections, several insightful mathematical results in this direction.

3.3. Computing the Mixed Effect of Network and Stimulus on Spike Correlations

3.3.1. Context

Let us now consider the retina from the point of view of its output. We sit on the optic nerve and measure the spikes sent to the LGN and the cortex. We have no access to the biophysical machinery taking place in the retina and generating those spikes, but we know that the spike trains contain information about the external world stimuli, which we want to extract. We can measure as many quantities as we want, such as firing rates or higher-order correlations. More generally, we seek the (time-dependent) joint probability of spikes, adopting the approach described in Section 2.2, Methods.
In this context, assume a retina “at rest”, i.e., receiving no stimulus, or stationary stimuli like noise. We can describe the spike trains emitted by this retina by a stationary transition probability $P$, associated with a stationary probability $\mu^{(sp)}$ (for “spontaneous”). In general, this probability features spike correlations of order 2 and higher. Assume now that, from time $t_0$, a stimulus (say, a moving object) gets through the visual field of this retina. As exposed in Section 3.2, one expects the spike correlations (at any order) to be modified by this stimulation. Typically, a moving object carries spatio-temporal correlations in its trajectory, which will superimpose upon the network correlations, resulting in a mixed effect where nonlinearities can also play a role. Can we predict, for a given stimulus, how correlations will be modified?
Let us give an example. Consider a linear chain of neurons, as depicted in Figure 3 (top). Each neuron (black point) is connected to its neighbours with an excitatory connection (red arrows) and to its second nearest neighbours with an inhibitory connection (blue arrows). The model here is a classical leaky integrate and fire model in the presence of noise, where parameters have been tuned to produce a spontaneous asynchronous activity, as depicted in Figure 3 (bottom, left). See [51] for more details. Consider a moving stimulus $S(x,t)$ propagating from left to right (cyan, bell-shaped curve in Figure 3 (top)). $S(x,t)$ acts as an input current of the form $S(x,t) = f(x - vt)$, where $v$ is the propagation speed and $f$ is, typically, a Gaussian. This stimulus is going to modify the spike patterns, as seen in Figure 3 (bottom, right), where one clearly sees nearest neighbours excitation and second nearest neighbours inhibition. The remarkable fact is that the stimulus modifies not only the firing rates of neurons, but also their correlations. The question is: can we compute this effect?
This question has been solved in the paper [51] for the gIF model (27). Here, we briefly state the main results (see the paper for technical details). Consider a function $f(t,\omega)$ (an observable) depending on time and on the spike history up to time $t$. Let $\mu^{(sp)}$ be the joint probability distribution of spikes in spontaneous activity (no stimulus), and $\mu$ the joint probability distribution of spikes in the presence of a spatio-temporal stimulus $S(x,t)$. We note $\delta\mu[f](t) = \mu[f](t) - \mu^{(sp)}[f]$, where $\mu[f](t)$ is the average of $f$, at time $t$, in the presence of the stimulus, and $\mu^{(sp)}[f]$ the average of $f$ in spontaneous activity (which does not depend on time, because spontaneous dynamics are stationary). $\delta\mu[f](t)$ characterizes how much the time-dependent mean of $f(t,\omega)$ under stimulation departs from the spontaneous mean at time $t$. In the simplest case, $\delta\mu[f](t)$ characterizes the variation in the firing rate of neuron $k$, if $f(t,\omega) = \omega_k(t)$, or the variation in the correlation between neuron $k_1$ at time $t_1$ and neuron $k_2$ at time $t_1 + t$, if $f(t,\omega) = \left[\omega_{k_1}(t_1) - \mu^{(sp)}[\omega_{k_1}]\right]\left[\omega_{k_2}(t_1 + t) - \mu^{(sp)}[\omega_{k_2}]\right]$, and so on.
One can show that, when the stimulus amplitude is weak enough, $\delta\mu[f](t)$ is given by a linear response formula of the form:
$$\delta\mu[f](t) = \left[K_f \ast S\right](t).$$
That is, by the convolution of the stimulus with a specific kernel, $K_f$, depending on the observable $f$ and on the spontaneous distribution $\mu^{(sp)}$. We do not give the expression of this kernel here, for simplicity, but the reader can refer to the paper [51].
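In discrete time, this formula is a plain causal convolution, so predicting the perturbation of any observable costs one kernel evaluation per time step. A minimal sketch (the kernel shape and the stimulus are invented placeholders; the actual $K_f$ is derived from spontaneous correlations in [51]):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)

# placeholder linear response kernel: causal, damped oscillation
K_f = np.exp(-t / 0.5) * np.cos(8.0 * t)

# stimulus: a Gaussian bump (e.g., a moving object crossing the receptive field)
S = np.exp(-((t - 4.0) ** 2) / 0.1)

# delta mu[f](t) = (K_f * S)(t): departure from spontaneous statistics
delta_mu_f = np.convolve(K_f, S)[: t.size] * dt

print("peak response at t =", t[np.argmax(np.abs(delta_mu_f))])
```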

3.3.2. Consequences

Convolution

Similarly to (1) (RG cells response to stimuli) or (2) (B cells response to stimuli), we again have a linear response where the effect of a stimulus on a system is expressed by a convolution. We are, however, in a completely different perspective. Indeed, while we were formerly considering the voltage response of individual cells (shaped by network effects), we are now working at a more abstract level, where we attempt to measure the effect of a stimulus on statistics. This is, of course, due to the difference in what is accessible by experiments, in what the observer is able to deal with in his observations—here, spikes. Thus, the mathematical machinery allowing for extracting the response requires defining spike statistics in a non-stationary setting, where the influence of the stimulus can be inferred.

Kernel

The kernel $K_f$ can be explicitly computed in the gIF model. It depends on several features: first, on the network characteristics (especially the effective interactions $W_{kj}$ and, more generally, the parameters shaping the model dynamics); it also depends on the observable $f$. However, the main content of this result is that the kernel $K_f$ is actually determined by the spike correlations in spontaneous activity. In other words, it is possible to anticipate the response to a non-stationary stimulus from the knowledge of the spontaneous activity alone. Although this result is expected from Kubo theory in non-equilibrium statistical physics [93,94] or from Volterra–Wiener expansions [21], it has interesting consequences when dealing with neural dynamics and, more specifically here, with retina outputs. First, it provides a consistent treatment of the expected perturbation of higher-order correlations, beyond the known linear perturbation of firing rates and instantaneous pairwise correlations; in particular, it extends to time-dependent correlations. In addition, it reveals how the stimulus response and the dynamics are entangled in a complex manner. For example, the response of a neuron $k$ to a stimulus applied on neuron $i$ does not only depend on the synaptic weight $W_{ki}$ but, in general, on all synaptic weights, because the dynamics create complex causality loops which build up the response of neuron $k$ [49,95,96]. The linear response formula is written in terms of the parameters of a spiking neuronal network model and of the spike history of the network. In the presence of stimuli, the whole architecture of synaptic connectivity, the history and the dynamical properties of the network play a role in the spatio-temporal correlation structure.

Linear Response and Higher Order Corrections

The derived formula shows good agreement with simulations of the gIF model under time-dependent stimuli (typically, a moving object). It requires, however, that the stimulus amplitude is weak enough, that is, that higher-order corrections are weaker than the leading order. For larger amplitude stimuli, one would need to compute higher-order corrections. This can be done using the same formalism [97], although it might not be the best approach. Indeed, this method requires measuring spontaneous correlations, which are difficult to obtain experimentally for orders higher than 2. This is actually one of the reasons why LNP-like models exist: the expected nonlinearity in the response is handled by a static nonlinear function. Exploring what could be the best nonlinear correction to the linear response in such models is definitely an interesting mathematical challenge.

3.4. Conclusions

3.4.1. Beyond Naive RF Description

This linear response theory actually shows how the neuronal network substrate and the stimulus response are entangled. Indeed, in contrast to the naive RF representation, where the convolution kernel is assumed to depend only on the cell, here, the mathematics show that it depends on the observable as well. The explicit form of the kernel is also tightly constrained by the neurons' connections. Finally, a convolution implies an integration over histories, thereby requiring spike probabilities with memory, instead of “instantaneous” spike probabilities (not, or only weakly, depending on the past). Of course, one may always argue that, on experimental grounds, long-tail memory is just impossible to measure, so that “instantaneous” [22] or first-order Markov [98] models are largely sufficient. However, what does “sufficient” mean? This is a difficult question, which requires sophisticated methods to determine the “best performing” memory depth from data [34,99,100]. Actually, numerical computations of the kernel use, of course, Markovian approximations [51], although with a memory depth that can be controlled.

3.4.2. Link with the Retina Model

Can we relate the formalism developed here to the retinal model presented in Section 2.1? As the RG cell voltage is Gaussian, it is, in principle, possible to compute transition probabilities using the transport operator formalism. However, even in the non-rectified case, the computation promises to be a formidable task, unless one adds additional constraints. For example, a big advantage of Integrate and Fire models is that a spiking neuron loses memory after spiking, a property which is not implemented in LNP-like models.

3.4.3. Information Geometry

There is a close link between Gibbs distributions and information geometry. This theory, developed by Shun'ichi Amari and his collaborators (see [101] and references therein) on the basis of early work by Rao [102], establishes a geometric theory of information where probabilities are considered as points on Riemannian manifolds. A prominent family of probability measures is the exponential family. It contains the Gibbs distributions in the standard statistical physics sense, i.e., probabilities having the form $\frac{e^{-\beta H}}{Z}$, where the energy $H$ does not depend on time. In this case, the metric is given by the Hessian of $\log(Z)$, the free energy, and is tightly linked to Fisher information on one hand and to linear response on the other hand. The linear response is actually a correlation function, from the fluctuation–dissipation theorem. Thus, correlation functions induce a natural geometry for Gibbs distributions, providing strong insights into how these distributions are modified by smooth, local transformations of their parameters (like learning [103]) or under a stimulation of weak amplitude. In this last case, the stimulus action corresponds to a perturbation in the tangent space of the manifold [104,105]. Although information geometry has not been extended, to our best knowledge, to the type of Gibbs distributions we study here (they are non-stationary), the mathematical formalism is similar. This essentially tells us that the structure of the spatio-temporal correlations observed in spike trains reveals a hidden geometrical structure which, somewhat, shapes the response of the retina and, henceforth, of the cortex to stimuli. We come back to this point in the Conclusions section.
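For a stationary exponential family, this link is a one-line identity (recalled here for illustration): writing $p_\theta(\omega) = \frac{1}{Z(\theta)}\, e^{\sum_i \theta_i X_i(\omega)}$, the Fisher metric is:
$$g_{ij}(\theta) = \frac{\partial^2 \log Z(\theta)}{\partial \theta_i\, \partial \theta_j} = \mathbb{E}_{p_\theta}\!\left[X_i X_j\right] - \mathbb{E}_{p_\theta}\!\left[X_i\right] \mathbb{E}_{p_\theta}\!\left[X_j\right],$$
so the correlation functions of the observables are, literally, the metric coefficients.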

4. Applications

The OPL–B cells–A cells processing is based on graded potentials, departing from the classical paradigm of binary spike processing. Mathematically, this has strong consequences in terms of the response to a spatio-temporal stimulus: the existence of eigenmodes, potentially modulated by nonlinear effects, inducing properties such as activity waves ahead of the stimulus (anticipation), resonances, and correlations modified by the stimulus. In this section, although this paper is essentially theoretical, I would like to briefly propose possible applications of these results, outside the field of neuroscience.

4.1. Retinal Prostheses

Retinal pathologies, such as Age-related Macular Degeneration or Retinitis Pigmentosa, are due to the degeneration of photo-receptors [106]. In addition, they induce morphological and structural changes in the retina, with significant pathogenic effects: inflammation, changes in connectivity, the appearance of large-scale spontaneous electrical oscillations and, of course, attenuation of the response to visual stimuli [107,108,109,110]. In this process of degeneration, however, the RG cells are the last to be deficient, maintaining, therefore, a link between the retina and the brain, provided they are suitably stimulated. The strategy of retinal prostheses is to stimulate the retina electrically by an array of electrodes. Stimulation of an electrode generates, in the visual cortex, a phosphene, the perception of a light spot. By stimulating the electrodes, one induces in the cortex an image “pixelised” by the phosphenes, with resolution limited, on the one hand, by the number of electrodes and, on the other hand, by the size of the phosphenes, which can be enlarged by diffusion and nonlinear effects [111]. Technological solutions, taking into account the physiological limitation on the electrical power that can be injected in an electrode, improve resolution [112]. However, there are still obstacles which cannot be resolved by purely technological solutions (hardware). In addition, a valid stimulation strategy at a given period of the pathology may not be valid later, because the retinal degeneration evolves over time.
Stimulation strategies use processor pre-processing to calculate, from a given image (captured by a camera), the pattern of stimulation of the prosthesis, mimicking the computation that a healthy retina would make, or incorporating corrections taking the pathology into account [113]. These algorithms might be improved using what we know about the retinal structure, especially A cells' lateral connectivity, where a model like (13) can easily be implemented with a relatively low energy consumption cost. The idea would be to improve electrode stimulation sequences in order to allow an implanted patient to perceive a moving object in real time. The model (13), with A cells' lateral connectivity and gain control, is known to produce a wave of activity ahead of a stimulus, performing a form of anticipation [45]. This could be used to compensate for the processing times imposed by the equipment, in the same way that the visual system knows how to compensate for the delays induced by photo-transduction [42]. The ideal would also be to have adaptive algorithms, i.e., depending on parameters adjustable according to the patient and the course of their pathology.

4.2. Convolutional Networks

Several recent studies attempt to understand how the retinal response to stimuli is related to circuit processes, using convolutional neural network models [114] to grasp the structure of retinal prediction [115]. Reciprocally, these networks can be used to design deep-learning models encoding dynamic visual scenes, with important potential outcomes in the domain of computer vision. In particular, a recent work by Zheng et al. [116] shows the important role played by recurrence in encoding complex natural scenes. To the best of my knowledge (which is quite scarce in this field), there is no mathematical analysis of the dynamics of these models, especially of the dependence on parameters and the robustness of the training schemes. The present study could bring some insights from this perspective. Even if the model (13) is different from what these researchers were using, the techniques of piecewise linear phase decomposition and eigenmode study could be insightful to better understand the dynamical evolution of these convolutional networks and the role played by rectification.

5. Discussion

In this paper, we have addressed mathematically the potential effect of A cells lateral connectivity on retinal response to spatio-temporal stimuli. We have seen how, mathematically, the retina structure and the collective dynamics of retinal cells organized in local circuits spanning the whole retina might constrain this response. In particular, the structure of correlations is expected to depend on the stimulus, as soon as nonlinear effects are involved. This goes beyond the expected effect of stimulus correlations induced by RF overlap.
These properties are established on the basis of theoretical results which rely on incomplete modelling of the retina and on specific assumptions. Their validation would require experiments, some of which may take time, while others are not yet accessible—for example, simultaneously measuring retina and cortex. As a matter of fact, one may argue that the models presented here are far too simplistic compared to the real retina(s), which have a large number of B cell, A cell and RG cell types making complex circuits [18], and whose characteristics depend, in addition, on species, age or pathology. However, the idea behind mathematical modelling is precisely to try and infer some generic mechanisms underlying the real object under study, here the retina. It is the simplicity of the structure which makes it generic. The question is: “Would the addition of more elaborate retinal features make the response to stimuli simpler?”
In the next section, I discuss some further implications of this work leading to some new questions.

5.1. Cortical Response

If a dynamical stimulus, combined with the retinal network and nonlinearities, produces non-negligible dynamical spatio-temporal correlations, what could be the consequences at the cortical level? (For simplicity, I am going to consider the LGN as a simple relay.) There is a physiological transformation, called retinotopy, which smoothly maps the retina topology to the cortical V1 topology. In models, it is usually considered to be the identity map, although it is not: it is a nonlinear transformation, depending, in addition, on the species [117,118,119]. Nevertheless, what matters here is that this mapping is smooth and invertible. Therefore, retinotopy transports, in a smooth and invertible way, the spatio-temporal retinal correlations to the visual cortex. This leads to a question: “How can a cortical model taking into account spatio-temporal spike correlations be defined?”
Cortical models are usually based on mean-field approximations featuring the evolution of firing rates, but not spike correlations. This is the case of the Wilson–Cowan model [120,121,122] and of neural field models [123,124,125]. I know of two mean-field approaches taking care of spike correlations.
The first approach is the one initiated by S. El Boustani and A. Destexhe [126], using a Markovian approach to write down mean-field equations of second order (i.e., including pairwise spatial correlations) and a non-static thalamic entry that can feature the retinal–LGN input. This model can be used to construct a retino-cortical model [127], although the mathematical consequences of having correlated retinal entries have not been explored yet.
The second approach is based on the so-called Ott–Antonsen Ansatz [128] and has been used by Montbrió, Pazó and Roxin to propose an exact mean-field approach with second-order statistics [129]. Since their paper, there has been a lot of activity in developing this model, especially in connection with cortical imaging, with impressive results [130,131,132,133]. It is a promising track.
All these approaches could certainly provide powerful numerical and mathematical tools to better understand how spatio-temporal retinal correlations could be processed. In particular, having a retino–(LGN)–cortical model allows for doing a task that is currently impossible experimentally: measuring the retina and the cortex simultaneously.

5.2. Retinal Correlations and Neurogeometry

We have also seen that retinal correlations and Gibbs distributions naturally define a metric on a Riemannian manifold, where probabilities are points on this manifold. In particular, the application of a weak-amplitude stimulus corresponds to a perturbation along the tangent space of this manifold. What is the image of this metric under the retinotopic transformation? Let us make this question a bit more precise.
The visual system has evolved to map as efficiently as possible retinal output to cortical structures. The shaping of the visual system during development is actually a highly dynamical process involving retinal waves and synaptic plasticity [88]. These processes provide the visual system a structure allowing it to respond in a fast and efficient way to the stimuli coming from the external world, via the retina. In particular, the capacity of the visual cortex to respond to spike trains with spatio-temporal correlations induced by natural stimuli should be somewhat imprinted in the cortical connectivity.
Visual perception is actually highly geometrically structured and shaped by the structure of cortical connectivity. This has led researchers to introduce a link between the geometry of the cortex and the geometry of vision via the concept of neurogeometry (or neuromathematics), where the functional architecture of V1 is considered as a Lie group of symmetries with a Riemannian geometry (see [52,53,54,55] and references therein). In this approach, cortical columns are point-like processors detecting visual features, and functional connectivity is represented in terms of geodesics. To the best of my knowledge, neurogeometry essentially deals with V1 and static percepts, although extensions to the motor cortex [134] and motion areas [135] have been made. Now, a natural question is: “Is there a relation between the cortical metric of neurogeometry and the metric induced by the spatio-temporal spike correlations observed by the retina?”
Let us address the problem the other way round: projecting the cortical metric back to the retina via the inverse retinotopic map, what do we find? Is there a physiological correspondence with the retinal structure, especially lateral connectivity? What could be the consequences for spike train statistics and for the way the retina processes visual stimuli?
What do cortical metrics tell us about retinal spike correlations? Dealing with the neural coding of vision, the simplest assumption is that RG cells are independent encoders and that the cortex does the job of restoring the spatio-temporal correlations existing in the visual scene (e.g., in the trajectory of a moving object). The alternative proposition, where the spatio-temporal correlations imprinted in the RGC spike trains are deciphered by the cortex, makes the question of stimulus decoding by the cortex more challenging, but opens up far more possibilities. Answering these questions could build, as a first step, on important results existing in the literature, in particular recent works asking to what extent retinal connectivity and dynamics affect higher-order features later extracted from its outputs (e.g., orientation, spatial frequency, speed, etc.) in V1 via the LGN [136,137,138].

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

I would like to warmly acknowledge all the neuroscientist collaborators from whom I learned about this beautiful object, the retina, and, more generally, about vision: Frédéric Chavane, Gerrit Hilgen, Olivier Marre, Adrian Palacios, Serge Picaud and Evelyne Sernagor. I thank the reviewers for their detailed review and helpful criticism, which helped to significantly improve the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Gollisch, T.; Meister, M. Eye smarter than scientists believed: Neural computations in circuits of the retina. Neuron 2010, 65, 150–164.
2. Attneave, F. Some informational aspects of visual perception. Psychol. Rev. 1954, 61, 183–193.
3. Selfridge, O.G. Pandaemonium: A paradigm for learning. In Proceedings of the Symposium on the Mechanisation of Thought Processes, Teddington, UK, 24–27 November 1958; pp. 511–529.
4. Lee, C. Representation of Switching Functions by Binary Decision Programs. Bell Syst. Tech. J. 1959, 38, 985–999.
5. Barlow, H. Possible principles underlying the transformation of sensory messages. Sens. Commun. 1961, 1, 217–234.
6. Atick, J.J.; Redlich, A.N. Towards a Theory of Early Visual Processing. Neural Comput. 1990, 2, 308–320.
7. Olshausen, B.A.; Field, D.J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vis. Res. 1997, 37, 3311–3325.
8. Vinje, W.E.; Gallant, J.L. Sparse Coding and Decorrelation in Primary Visual Cortex During Natural Vision. Science 2000, 287, 1273–1276.
9. Simoncelli, E.; Olshausen, B. Natural image statistics and neural representation. Annu. Rev. Neurosci. 2001, 24, 1193–1216.
10. Simoncelli, E.P. Vision and the Statistics of the Visual Environment. Curr. Opin. Neurobiol. 2003, 13, 144–149.
11. Zhaoping, L. Theoretical understanding of the early visual processes by data compression and data selection. Netw. Comput. Neural Syst. 2006, 17, 301–334.
12. Pitkow, X.; Meister, M. Decorrelation and efficient coding by retinal ganglion cells. Nat. Neurosci. 2012, 15, 628–635.
13. Zhaoping, L. The efficient coding principle. In Understanding Vision: Theory, Models, and Data; Oxford University Press: Oxford, UK, 2014.
14. Denève, S.; Machens, C.K. Efficient codes and balanced networks. Nat. Neurosci. 2016, 19, 375–382.
15. Franke, K.; Berens, P.; Schubert, T.; Bethge, M.; Euler, T.; Baden, T. Inhibition decorrelates visual feature representations in the inner retina. Nature 2017, 542, 439–444.
16. Besharse, J.; Bok, D. The Retina and Its Disorders; Elsevier Science: Amsterdam, The Netherlands, 2011.
17. Nelson, R.; Kolb, H. ON and OFF pathways in the vertebrate retina and visual system. Vis. Neurosci. 2004, 1, 260–278.
18. Azeredo da Silveira, R.; Roska, B. Cell Types, Circuits, Computation. Curr. Opin. Neurobiol. 2011, 21, 664–671.
19. Oesch, N.; Diamond, J. Ribbon synapses compute temporal contrast and encode luminance in retinal rod bipolar cells. Nat. Neurosci. 2011, 23, 1555–1561.
20. Meister, M.; Berry, M. The Neural Code of the Retina. Neuron 1999, 22, 435–450.
21. Rieke, F.; Warland, D.; de Ruyter van Steveninck, R.; Bialek, W. Spikes: Exploring the Neural Code; The MIT Press: Cambridge, MA, USA, 1997.
22. Schneidman, E.; Berry, M.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012.
23. Tkacik, G.; Marre, O.; Mora, T.; Amodei, D.; Berry, M.; Bialek, W. The simplest maximum entropy model for collective behavior in a neural network. J. Stat. Mech. 2013, 2013, P03011.
24. Palmer, S.E.; Marre, O.; Berry, M.J.; Bialek, W. Predictive information in a sensory population. Proc. Natl. Acad. Sci. USA 2015, 112, 6908–6913.
25. Ecker, A.; Berens, P.; Keliris, G.; Bethge, M.; Logothetis, N.; Tolias, A. Decorrelated neuronal firing in cortical microcircuits. Science 2010, 327, 584.
26. Baden, T.; Berens, P.; Bethge, M.; Euler, T. Spikes in Mammalian Bipolar Cells Support Temporal Layering of the Inner Retina. Curr. Biol. 2013, 23, 48–52.
27. Karvouniari, D.; Gil, L.; Marre, O.; Picaud, S.; Cessac, B. A biophysical model explains the oscillatory behaviour of immature starburst amacrine cells. Sci. Rep. 2019, 9, 1859.
28. Pillow, J.; Paninski, L.; Uzzell, V.; Simoncelli, E.; Chichilnisky, E. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J. Neurosci. 2005, 25, 11003–11013.
29. Ruda, K.; Zylberberg, J.; Field, G. Ignoring correlated activity causes a failure of retinal population codes. Nat. Commun. 2020, 11, 4605.
30. Sekhar, S.; Ramesh, P.; Bassetto, G.; Zrenner, E.; Macke, J.H.; Rathbun, D.L. Characterizing Retinal Ganglion Cell Responses to Electrical Stimulation Using Generalized Linear Models. Front. Neurosci. 2020, 14, 378.
31. Martínez-Cañada, P.; Morillas, C.; Pino, B.; Ros, E.; Pelayo, F. A Computational Framework for Realistic Retina Modeling. Int. J. Neural Syst. 2016, 26, 1650030.
32. Huth, J.; Masquelier, T.; Arleo, A. Convis: A Toolbox to Fit and Simulate Filter-Based Models of Early Visual Processing. Front. Neuroinform. 2018, 12, 9.
33. Wohrer, A.; Kornprobst, P. Virtual Retina: A biological retina model and simulator, with contrast gain control. J. Comput. Neurosci. 2009, 26, 219.
34. Cessac, B.; Kornprobst, P.; Kraria, S.; Nasser, H.; Pamplona, D.; Portelli, G.; Viéville, T. PRANAS: A New Platform for Retinal Analysis and Simulation. Front. Neuroinform. 2017, 11, 49.
35. Taylor, W.; Smith, R. The role of starburst amacrine cells in visual signal processing. Vis. Neurosci. 2012, 29, 73–81.
36. Demb, J.B.; Singer, J.H. Intrinsic properties and functional circuitry of the AII amacrine cell. Vis. Neurosci. 2012, 29, 51–60.
37. Pottackal, J.; Singer, J.H.; Demb, J.B. Computational and Molecular Properties of Starburst Amacrine Cell Synapses Differ With Postsynaptic Cell Type. Front. Cell. Neurosci. 2021, 15, 244.
38. Chichilnisky, E.J. A simple white noise analysis of neuronal light responses. Netw. Comput. Neural Syst. 2001, 12, 199–213.
39. Simoncelli, E.P.; Paninski, L.; Pillow, J.; Schwartz, O. Characterization of neural responses with stochastic stimuli. In The Cognitive Neurosciences, III; Gazzaniga, M., Ed.; MIT Press: Cambridge, MA, USA, 2004; Chapter 23; pp. 327–338.
40. Schwartz, O.; Pillow, J.W.; Rust, N.C.; Simoncelli, E.P. Spike-triggered neural characterization. J. Vis. 2006, 6, 13.
41. Motulsky, H.; Christopoulos, A. Fitting Models to Biological Data Using Linear and Nonlinear Regression: A Practical Guide to Curve Fitting; Oxford University Press: Oxford, UK, 2004.
42. Berry, M.; Brivanlou, I.; Jordan, T.; Meister, M. Anticipation of moving stimuli by the retina. Nature 1999, 398, 334–338.
43. Hosoya, T.; Baccus, S.A.; Meister, M. Dynamic predictive coding by the retina. Nature 2005, 436, 71–77.
44. Chen, E.Y.; Marre, O.; Fisher, C.; Schwartz, G.; Levy, J.; da Silviera, R.A.; Berry, M. Alert response to motion onset in the retina. J. Neurosci. 2013, 33, 120–132.
45. Souihel, S.; Cessac, B. On the potential role of lateral connectivity in retinal anticipation. J. Math. Neurosci. 2020, 11, 3.
46. Cessac, B. Statistics of spike trains in conductance-based neural networks: Rigorous results. J. Math. Neurosci. 2011, 1, 8.
47. Cofré, R.; Cessac, B. Dynamics and spike trains statistics in conductance-based integrate-and-fire neural networks with chemical and electric synapses. Chaos Solitons Fractals 2013, 50, 3.
48. Cessac, B.; Cofré, R. Spike train statistics and Gibbs distributions. J. Physiol.-Paris 2013, 107, 360–368.
49. Cofré, R.; Cessac, B. Exact computation of the maximum-entropy potential of spiking neural-network models. Phys. Rev. E 2014, 89, 052117.
50. Cofré, R.; Maldonado, C.; Cessac, B. Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. Entropy 2020, 22, 1330.
51. Cessac, B.; Ampuero, I.; Cofré, R. Linear Response of General Observables in Spiking Neuronal Network Models. Entropy 2021, 23, 155.
52. Sarti, A.; Citti, G. The constitution of visual perceptual units in the functional architecture of V1. J. Comput. Neurosci. 2014, 38, 285–300.
53. Citti, G.; Sarti, A. (Eds.) Neuromathematics of Vision; Springer: Berlin/Heidelberg, Germany, 2014.
54. Petitot, J. Elements of Neurogeometry; Springer: Berlin/Heidelberg, Germany, 2017.
55. Citti, G.; Sarti, A. Neurogeometry of Perception: Isotropic and Anisotropic Aspects. Axiomathes 2019.
56. Baccus, S.; Meister, M. Fast and Slow Contrast Adaptation in Retinal Circuitry. Neuron 2002, 36, 909–919.
57. Johnston, J.; Lagnado, L. General features of the retinal connectome determine the computation of motion anticipation. eLife 2015, 4, e06250.
58. Trenholm, S.; Awatramani, G. Chapter 9—Dynamic Properties of Electrically Coupled Retinal Networks. In Network Functions and Plasticity; Jing, J., Ed.; Academic Press: Cambridge, MA, USA, 2017; pp. 183–208.
59. Blanchard, P.; Cessac, B.; Krueger, T. A dynamical system approach to SOC models of Zhang's type. J. Stat. Phys. 1997, 88, 307–318.
60. Blanchard, P.; Cessac, B.; Krueger, T. What can one learn about Self-Organized Criticality from Dynamical System theory? J. Stat. Phys. 2000, 98, 375–404.
61. Cessac, B. A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics. J. Math. Biol. 2008, 56, 311–345.
62. Cessac, B.; Viéville, T. On Dynamics of Integrate-and-Fire Neural Networks with Adaptive Conductances. Front. Neurosci. 2008, 2, 2.
63. Cessac, B. A discrete time neural network model with spiking neurons II. Dynamics with noise. J. Math. Biol. 2011, 62, 863–900.
64. Coombes, S.; Lai, Y.M.; Şayli, M.; Thul, R. Networks of piecewise linear neural mass models. Eur. J. Appl. Math. 2018, 29, 869–890.
65. Rajakumar, A.; Rinzel, J.; Chen, Z.S. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput. 2021, 33, 2603–2645.
66. Goldman, M.S. Memory without Feedback in a Neural Network. Neuron 2009, 61, 621–634.
67. Murphy, B.; Miller, K. Balanced amplification: A new mechanism of selective amplification of neural activity patterns. Neuron 2009, 61, 635–648.
68. Falconer, K.J. The Geometry of Fractal Sets; Cambridge University Press: Cambridge, UK, 1985.
69. Barnsley, M.; Rising, H. Fractals Everywhere; Elsevier Science: Amsterdam, The Netherlands, 1993.
70. Falconer, K. Techniques in Fractal Geometry; John Wiley & Sons, Ltd.: Chichester, UK, 1997.
71. Mease, R.A.; Famulare, M.; Gjorgjieva, J.; Moody, W.J.; Fairhall, A.L. Emergence of Adaptive Computation by Single Neurons in the Developing Cortex. J. Neurosci. 2013, 33, 12154–12170.
72. Yu, Y.; Sing Lee, T. Adaptive contrast gain control and information maximization. Neurocomputing 2005, 65–66, 111–116.
73. Snellman, J.; Kaur, T.; Shen, Y.; Nawy, S. Regulation of ON bipolar cell activity. Prog. Retin. Eye Res. 2008, 27, 450–463.
74. Young, L.S. Mathematical theory of Lyapunov exponents. J. Phys. A Math. Theor. 2013, 46, 254001.
75. Georgii, H.O. Gibbs Measures and Phase Transitions; De Gruyter: Berlin, Germany; New York, NY, USA, 1988.
76. Onicescu, O.; Mihoc, G. Sur les chaînes statistiques. C. R. Acad. Sci. Paris 1935, 200, 511–512.
77. Galves, A.; Löcherbach, E. Infinite Systems of Interacting Chains with Memory of Variable Length—A Stochastic Model for Biological Neural Nets. J. Stat. Phys. 2013, 151, 896–921.
78. Fernandez, R.; Maillard, G. Chains with complete connections: General theory, uniqueness, loss of memory and mixing properties. J. Stat. Phys. 2005, 118, 555–588.
79. LeNy, A. Introduction to (Generalized) Gibbs Measures. Ensaios Mat. 2008, 15, 7.
80. Shlens, J.; Field, G.; Gauthier, J.; Grivich, M.; Petrusca, D.; Sher, A.; Litke, A.; Chichilnisky, E. The Structure of Multi-Neuron Firing Patterns in Primate Retina. J. Neurosci. 2006, 26, 8254.
81. Nghiem, T.A.; Telenczuk, B.; Marre, O.; Destexhe, A.; Ferrari, U. Maximum-entropy models reveal the excitatory and inhibitory correlation structures in cortical neuronal activity. Phys. Rev. E 2018, 98, 012402.
82. Cocco, S.; Leibler, S.; Monasson, R. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc. Natl. Acad. Sci. USA 2009, 106, 14058–14062.
83. Rudolph, M.; Destexhe, A. Analytical Integrate and Fire Neuron models with conductance-based dynamics for event driven simulation strategies. Neural Comput. 2006, 18, 2146–2210.
84. Gerstner, W.; Kistler, W.M. Mathematical formulations of Hebbian learning. Biol. Cybern. 2002, 87, 404–415.
85. Dayan, P.; Abbott, L. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems; MIT Press: Cambridge, MA, USA, 2001.
86. Destexhe, A.; Mainen, Z.F.; Sejnowski, T.J. Kinetic models of synaptic transmission. In Methods in Neuronal Modeling; The MIT Press: Cambridge, MA, USA, 1998; pp. 1–25.
87. Baden, T.; Berens, P.; Franke, K.; Rosón, M.R.; Bethge, M.; Euler, T. The functional diversity of retinal ganglion cells in the mouse. Nature 2016, 529, 345–350.
88. Sernagor, E.; Hennig, M. Chapter 49—Retinal Waves: Underlying Cellular Mechanisms and Theoretical Considerations. In Cellular Migration and Formation of Neuronal Connections; Rubenstein, J.L., Rakic, P., Eds.; Academic Press: Oxford, UK, 2013; pp. 909–920.
89. Benvenuti, G.; Chemla, S.; Arjan Boonman, G.M.; Chavane, F. Anticipation of an approaching bar by neuronal populations in awake monkey V1. J. Vis. 2015, 15, 479.
90. Kastner, D.; Ozuysal, Y.; Panagiotakos, G.; Baccus, S. Adaptation of Inhibition Mediates Retinal Sensitization. Curr. Biol. 2019, 29, 2640–2651.
91. Hennig, M. Theoretical models of synaptic short term plasticity. Front. Comput. Neurosci. 2013, 7, 154.
92. Lancaster, H.O. Some properties of the bivariate normal distribution considered in the form of a contingency table. Biometrika 1957, 44, 289–292.
93. Kubo, R. Statistical-Mechanical Theory of Irreversible Processes. I. General Theory and Simple Applications to Magnetic and Conduction Problems. J. Phys. Soc. Jpn. 1957, 12, 570–586.
94. Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 1966, 29, 255–284.
95. Cessac, B.; Sepulchre, J. Stable resonances and signal propagation in a chaotic network of coupled units. Phys. Rev. E 2004, 70, 056111.
96. Cessac, B.; Sepulchre, J. Transmitting a signal by amplitude modulation in a chaotic network. Chaos 2006, 16, 013104.
97. Ruelle, D. Nonequilibrium statistical mechanics near equilibrium: Computing higher-order terms. Nonlinearity 1998, 11, 5–18.
98. Marre, O.; Boustani, S.E.; Frégnac, Y.; Destexhe, A. Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations. Phys. Rev. Lett. 2009, 102, 138101.
99. Nasser, H.; Marre, O.; Berry, M.; Cessac, B. Spatio temporal Gibbs distribution analysis of spike trains using Monte Carlo method. In Proceedings of the AREADNE 2012 Research in Encoding and Decoding of Neural Ensembles, Santorini, Greece, 21–24 June 2012.
100. Nasser, H.; Cessac, B. Parameters estimation for spatio-temporal maximum entropy distributions: Application to neural spike trains. Entropy 2014, 16, 2244–2277.
101. Amari, S.I.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA, 2000; Volume 191.
102. Rao, C. Information and Accuracy Attainable in the Estimation of Statistical Parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91.
103. Amari, S.I. Information geometry in optimization, machine learning and statistical inference. Front. Electr. Electron. Eng. China 2010, 5, 241–260.
104. Ruelle, D. Smooth dynamics and new theoretical ideas in nonequilibrium statistical mechanics. J. Stat. Phys. 1999, 95, 393–468.
105. Cessac, B. The retina as a dynamical system. In Recent Trends in Chaotic, Nonlinear and Complex Dynamics; In Honour of Prof. Miguel A.F. Sanjuán on his 60th Birthday; World Scientific: Singapore, 2020.
106. Lok, C. Curing blindness: Vision quest. Nature 2014, 513, 160.
107. Jones, B.; Kondo, M.; Terasaki, H.; Lin, Y.; McCall, M.; Marc, R. Retinal remodeling. Jpn. J. Ophthalmol. 2012, 56, 289–306.
108. Marc, R.E.; Jones, B.W.; Watt, C.B.; Strettoi, E. Neural remodeling in retinal degeneration. Prog. Retin. Eye Res. 2003, 22, 607–655.
109. Barrett, J.; Degenaar, P.; Sernagor, E. Blockade of pathological retinal ganglion cell hyperactivity improves optogenetically evoked light responses in rd1 mice. Front. Cell. Neurosci. 2015, 9, 330.
110. Barrett, J.; Hilgen, G.; Sernagor, E. Dampening Spontaneous Activity Improves the Light Sensitivity and Spatial Acuity of Optogenetic Retinal Prosthetic Responses. Sci. Rep. 2016, 6, 33565.
111. Roux, S.; Matonti, F.; Dupont, F.; Hoffart, L.; Takerkart, S.; Picaud, S.; Pham, P.; Chavane, F. Probing the functional impact of sub-retinal prosthesis. eLife 2016, 5, e12687.
112. Pham, P.; Roux, S.; Matonti, F.; Dupont, F.; Agache, V.; Chavane, F. Post-implantation impedance spectroscopy of subretinal micro-electrode arrays, OCT imaging and numerical simulation: Towards a more precise neuroprosthesis monitoring tool. J. Neural Eng. 2013, 10, 046002.
113. Al-Atabany, W.; McGovern, B.; Mehran, K.; Berlinguer-Palmini, R.; Degenaar, P. A Processing Platform for Optoelectronic/Optogenetic Retinal Prosthesis. IEEE Trans. Biomed. Eng. 2013, 60, 781–791.
114. Maheswaranathan, N.; McIntosh, L.; Kastner, D.B.; Melander, J.B.; Brezovec, L.; Nayebi, A.; Wang, J.; Ganguli, S.; Baccus, S.A. Deep learning models reveal internal structure and diverse computations in the retina under natural scenes. bioRxiv 2018, 340943.
115. Tanaka, H.; Nayebi, A.; Maheswaranathan, N.; McIntosh, L.; Baccus, S.; Ganguli, S. From deep learning to mechanistic understanding in neuroscience: The structure of retinal prediction. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 8535–8545.
116. Zheng, Y.; Jia, S.; Yu, Z.; Liu, J.K.; Huang, T. Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks. Patterns 2021, 2, 100350.
117. Gias, C.; Hewson-Stoate, N.; Jones, M.; Johnston, D.; Mayhew, J.; Coffey, P. Retinotopy within rat primary visual cortex using optical imaging. NeuroImage 2005, 24, 200–206.
118. Schira, M.M.; Tyler, C.W.; Spehar, B.; Breakspear, M. Modeling Magnification and Anisotropy in the Primate Foveal Confluence. PLoS Comput. Biol. 2010, 6, e1000651.
119. Ayzenshtat, I.; Gilad, A.; Zurawel, G.; Slovin, H. Population Response to Natural Images in the Primary Visual Cortex Encodes Local Stimulus Attributes and Perceptual Processing. J. Neurosci. 2012, 32, 13971–13986.
120. Amari, S. Characteristics of randomly connected threshold element networks and neural systems. Proc. IEEE 1971, 59, 35–47.
121. Wilson, H.; Cowan, J. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 1972, 12, 1–24.
122. Wilson, H.; Cowan, J. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol. Cybern. 1973, 13, 55–80.
123. Bressloff, P. Traveling fronts and wave propagation failure in an inhomogeneous neural network. Phys. D Nonlinear Phenom. 2001, 155, 83–100.
124. Bressloff, P.; Cowan, J.; Golubitsky, M.; Thomas, P.; Wiener, M. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 2001, 306, 299–330.
125. Bressloff, P. Stochastic neural field theory and the system-size expansion. SIAM J. Appl. Math. 2009, 70, 1488–1521.
126. El Boustani, S.; Destexhe, A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput. 2009, 21, 46–100.
127. Cessac, B.; Souihel, S.; Di Volo, M.; Chavane, F.; Destexhe, A.; Chemla, S.; Marre, O. Anticipation in the retina and the primary visual cortex: Towards an integrated retino-cortical model for motion processing. In Proceedings of the Workshop on Visuo Motor Integration, Paris, France, 6–7 June 2019.
128. Ott, E.; Antonsen, T.M. Low dimensional behavior of large systems of globally coupled oscillators. Chaos Interdiscip. J. Nonlinear Sci. 2008, 18, 037113.
129. Montbrió, E.; Pazó, D.; Roxin, A. Macroscopic Description for Networks of Spiking Neurons. Phys. Rev. X 2015, 5, 021028.
130. Bick, C.; Goodfellow, M.; Laing, C.; Martens, E.A. Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: A review. J. Math. Neurosci. 2020, 10, 9.
131. Bi, H.; Segneri, M.; di Volo, M.; Torcini, A. Coexistence of fast and slow gamma oscillations in one population of inhibitory spiking neurons. Phys. Rev. Res. 2020, 2, 013042.
132. Buendía, V.; Villegas, P.; Burioni, R.; Muñoz, M.A. Hybrid-type synchronization transitions: Where incipient oscillations, scale-free avalanches, and bistability live together. Phys. Rev. Res. 2021, 3, 023224.
133. di Volo, M.; Segneri, M.; Goldobin, D.; Politi, A.; Torcini, A. Coherent oscillations in balanced neural networks driven by endogenous fluctuations. arXiv 2021, arXiv:2110.09439.
134. Mazzetti, C. A Mathematical Model of the Motor Cortex. Ph.D. Thesis, University of Bologna, Bologna, Italy, 2017.
135. Barbieri, D.; Citti, G.; Cocci, G.; Sarti, A. A Cortical-Inspired Geometry for Contour Perception and Motion Integration. J. Math. Imaging Vis. 2014, 49, 511–529.
136. Romagnoni, A.; Ribot, J.; Bennequin, D.; Touboul, J. Parsimony, Exhaustivity and Balanced Detection in Neocortex. PLoS Comput. Biol. 2015, 11, e1004623.
137. Rankin, J.; Chavane, F. Neural field model to reconcile structure with function in primary visual cortex. PLoS Comput. Biol. 2017, 13, e1005821.
138. Nicks, R.; Cocks, A.; Avitabile, D.; Johnston, A.; Coombes, S. Understanding Sensory Induced Hallucinations: From Neural Fields to Amplitude Equations. SIAM J. Appl. Dyn. Syst. 2021, 20, 1683–1714.
Figure 1. Structure of the retina model introduced in Section 2.1. A moving object (here, presumably, a car) moves along a trajectory (dashed black line). Its image is projected by the eye optics onto the upper retinal layers (photoreceptors and H cells) and stimulates them. In the model, this corresponds to the convolution of the stimulus with the receptive field (RF) of B cells, providing what we call the "OPL" input to B cells. B cells (blue points) are connected to A cells (red points) via excitatory synapses (pink arrows, denoted W_{AB}) and to RG cells (green points) via excitatory synapses (brown arrows, denoted W_{GB}). A cells are connected to B cells via inhibitory synapses (green arrows, denoted W_{BA}) and to RG cells via inhibitory synapses (cyan arrows, denoted W_{GA}). The voltage of RG cells is passed through a nonlinearity (pink curve in the black circle) so as to produce the spike trains conveyed to the LGN.
Figure 2. Receptive field of an ON B cell. (Left) Example of the spatio-temporal RF of a B cell (ON-center cell) represented in 3D (one dimension of space, x, and time t). There is inhibition in the surround, physiologically due to H cells. (Right) The same spatio-temporal RF represented with a color map.
Figure 3. (Top) Network of spiking neurons sensing a stimulus (redrawn from ref. [51]). Each neuron, represented as a black point, is connected to its neighbours with an excitatory connection (red arrows) and to its second-nearest neighbours with an inhibitory connection (blue arrows). In addition, each neuron is able to sense the external stimulus S(x, t) (cyan, bell-shaped curve). (Bottom Left) Spontaneous spiking activity. (Bottom Right) Spiking activity in the presence of the moving stimulus.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

