
A framework for macroscopic phase-resetting curves for generalised spiking neural networks

  • Grégory Dumont ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    gregory.dumont@ens.fr

Affiliation Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure - PSL University, Paris, France

  • Alberto Pérez-Cervera,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain

  • Boris Gutkin

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

Affiliations Group for Neural Theory, LNC INSERM U960, DEC, Ecole Normale Supérieure - PSL University, Paris, France; Center for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow

Abstract

Brain rhythms emerge from synchronization among interconnected spiking neurons. Key properties of such rhythms can be gleaned from the phase-resetting curve (PRC). Inferring the PRC and developing a systematic phase reduction theory for large-scale brain rhythms remains an outstanding challenge. Here we present a theoretical framework and methodology to compute the PRC of generic spiking networks with emergent collective oscillations. We adopt a renewal approach where neurons are described by the time since their last action potential, a description that can reproduce the dynamical features of many cell types. For a sufficiently large number of neurons, the network dynamics are well captured by a continuity equation known as the refractory density equation. We develop an adjoint method for this equation giving a semi-analytical expression of the infinitesimal PRC. We confirm the validity of our framework for specific examples of neural networks. Our theoretical framework can link key biological properties at the individual neuron scale and the macroscopic oscillatory network properties. Beyond spiking networks, the approach is applicable to a broad class of systems that can be described by renewal processes.

Author summary

The formation of oscillatory neuronal assemblies at the network level has been hypothesized to be fundamental to many cognitive and motor functions. One prominent tool to understand how oscillatory activity responds to stimuli, and hence the neural code for which it is a substrate, is a nonlinear measure called the phase-resetting curve (PRC). At the network scale, the PRC quantifies how a given synaptic input perturbs the timing of the next volley of spikes: either advancing or delaying it. As a further application, one can use PRCs to make unambiguous predictions about whether communicating networks of neurons will phase-lock, as is often observed across cortical areas, and what the stable phase configuration would be: synchronous, asynchronous, or with asymmetric phase shifts. The latter configuration also implies a preferential flow of information from the leading network to the follower, thereby giving causal signatures of directed functional connectivity. Because of the key position of the PRC in studying synchrony, information flow and entrainment to external forcing, it is crucial to move toward a theory that allows one to compute the PRCs of network-wide oscillations not only for a restricted class of models, as has been done in the past, but for generalized network descriptions that can flexibly reflect single-cell properties. In this manuscript, we tackle this issue by showing how the PRC for network oscillations can be computed using the adjoint systems of the partial differential equations that define the dynamics of the neural activity density.

This is a PLOS Computational Biology Methods paper.

Introduction

The phase-resetting curve (PRC), popularized by Arthur T. Winfree in 1980 [1], is one of the central tools to study the properties and mechanisms of biological rhythms. The PRC measures the phase shift of a rhythm when a transient perturbation is delivered at a given phase of the oscillatory cycle. The PRC is particularly well adapted to clarify essential dynamical features across a variety of biological contexts [2, 3]. For instance, it has proven especially effective in predicting the phase-locking behavior of coupled neural oscillators [4] and rhythms emergent in neural populations [5], in studying information flow in networks of bio-chemical oscillators [6], in illustrating the impact of neuromodulation in single neurons experimentally [7], and it has been a key classical technique in chronobiology [8].

For oscillatory systems described by ordinary differential equations, the adjoint method provides an accurate procedure to compute the so-called infinitesimal PRC (iPRC) [9]. In the case of vanishingly small perturbation amplitudes, PRC and iPRC become proportional to each other, and therefore any oscillating dynamical system can be reduced to a single phase equation:

$$\frac{d\theta}{dt} = \omega + Z(\theta)\, p(t).$$

Here θ is the oscillation phase, ω is the natural frequency of the oscillator, p(t) represents the time-dependent perturbation, and the function Z is the iPRC.
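As a minimal numerical illustration of this reduction (not taken from the paper; the iPRC Z and the pulse p below are illustrative choices), one can integrate the phase equation with and without a brief pulse and read off the resulting phase shift:

```python
import math

def integrate_phase(omega, Z, p, t_end, dt=1e-3):
    """Euler integration of the phase equation  dtheta/dt = omega + Z(theta) * p(t)."""
    theta, t = 0.0, 0.0
    while t < t_end:
        theta += dt * (omega + Z(theta) * p(t))
        t += dt
    return theta

# Unperturbed oscillator: the phase grows linearly, theta(T) = omega * T.
omega = 2 * math.pi / 50.0          # period T = 50 (arbitrary units)
Z = lambda th: 1 - math.cos(th)     # type-I-like iPRC (illustrative choice)
no_input = lambda t: 0.0
theta_free = integrate_phase(omega, Z, no_input, t_end=50.0)

# A brief depolarizing pulse advances the phase wherever Z is positive.
pulse = lambda t: 0.05 if 10.0 <= t < 11.0 else 0.0
theta_kicked = integrate_phase(omega, Z, pulse, t_end=50.0)
phase_shift = theta_kicked - theta_free   # > 0: the pulse advanced the rhythm
```

Since the chosen Z is non-negative, the pulse can only advance the phase here; a Z with a negative lobe would delay perturbations arriving in that part of the cycle.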

Data suggest that most cortical rhythms emerge from the interactions of irregular spiking cells [10, 11]. Thus the brain's oscillatory activity results from synchronization among the firing events of large neuronal populations. So far, deriving elementary dynamical systems for such macroscopic oscillations could not be done without drastic simplifications of the individual neurons. From now on, to avoid confusion, we term the PRC extracted for an oscillation emerging at the network scale the macroscopic PRC (mPRC). Initial attempts to derive mPRCs for emergent oscillations [12, 13] required quadratic integrate-and-fire models to describe the neurons. As a consequence, extracting the PRCs of realistic oscillating spiking networks has remained elusive despite its relevance to the study of brain rhythms [14].

In this paper, we tackle this issue by adopting a mean-field description of networks where a given cell is characterized by the amount of time elapsed since its last action potential, i.e. the age of the cell.

Originating at the beginning of the 20th century with the paper of Sharpe and Lotka in 1911 [15] and the work of McKendrick in 1926 [16], the study of population dynamics with an age-structured modeling approach has never lost interest within the scientific community. Such models track the time evolution of the ages of single individuals and are very well adapted to capture the essential dynamical features of actual data in a wide variety of biological contexts. They have proven especially effective in epidemiology [17–19], cellular proliferation [20–22] and population dynamics [23, 24].

Among the different ways of formulating the problem, the von Foerster equation stands out: a continuity equation named after the Austrian-American physicist Heinz von Foerster [25]. Written in the form of a partial differential equation, the von Foerster formalism has the tremendous advantage of subsuming the other age-structured formulations, so its use is now widespread and favored by theoreticians. The interested reader may find several textbooks in mathematical biology that dedicate a chapter to it [26–28].

Applied to neural systems, this continuity equation is known as the refractory density equation. It was first implemented by Wulfram Gerstner and Leo van Hemmen in 1992 [29]. The refractory density equation can be rigorously derived from the underlying stochastic process [30], and is amenable to mathematical analysis [31]. Moreover, this continuity equation has been a major tool for studying emergent synchronized assemblies [32], transient dynamics [33], low-dimensional reduction [34], and finite-size network activity fluctuations [35–38]. We refer the reader to the textbook [39] for an intuitive introduction to the refractory density equation.

The construction of the refractory density equation relies on a mean-field description of spiking networks where a given cell is characterized by the amount of time elapsed since its last action potential. There are undoubtedly alternative ways to describe neurons; however, this formalism is general in that it can effectively reflect many spiking formulations. For instance, renewal processes such as the noisy integrate-and-fire [40–42], or spike response models [32], can be expressed within this framework. Furthermore, this approach provides approximation schemes for complex biophysically realistic models [43, 44], for correlated noise [45], for generalized linear models [46], and for neural adaptation [35, 47]; see [48] for a recent review. As a consequence, the refractory density equation can be seen as a general description of spiking neural networks.

This paper is organized as follows. First, we present the network and neuron model that will be used throughout. Then, we obtain the adjoint system which gives access to the PRC. We finish the paper by illustrating a possible application of our framework by studying macroscopic phase locking.

Results

Spiking and mean-field description

To describe spiking neurons as renewal processes we need to keep track of h(t), the total input a neuron receives, and r, the time since its last action potential. Denoting S(h(t), r) the escape rate, the probability that a firing event occurs during a time interval dt is given by S(h(t), r)dt. Note that the escape rate reflects the individual properties of neurons; as an example, we take an escape rate that captures the dynamics of pyramidal cells [39]. As soon as an action potential is triggered, the neuron's age r is reset to zero. The population activity can be extracted and is given by the sum of all occurring spikes:

$$A_N(t) = \frac{1}{N}\sum_{k=1}^{N}\sum_{j} \delta\big(t - t_j^k\big), \tag{1}$$

where δ is the Dirac mass, N the number of neurons and \(t_j^k\) the j-th firing time of the cell numbered k. The total input current is given by

$$h(t) = I_{ext}(t) + I_s(t),$$

where Iext(t) is an external current and the synaptic current Is(t), which defines the current feedback of the network, is given by

$$I_s(t) = J_s \int_0^{\infty} \kappa(s)\, A_N(t-s)\, ds, \qquad \kappa(s) = \frac{1}{\tau_s} e^{-s/\tau_s},$$

where Js is the synaptic efficiency, κ the normalized synaptic filter and τs the synaptic decay.
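The renewal picture can be sketched in a few lines of code. This is a minimal simulation, not the paper's implementation: the escape rate is the one given in the paper's figure captions, while the exponential synaptic filter and all parameter values are assumptions for illustration.

```python
import math, random

def simulate_network(N=1000, T=200.0, dt=0.05, Js=15.0, tau_s=10.0,
                     Tref=10.0, tau=5.0, I_ext=lambda t: 2.0, seed=0):
    """Direct simulation of N renewal neurons.

    Each neuron carries only its age r (time since its last spike) and fires
    in [t, t+dt) with probability S(h, r)*dt, after which r is reset to zero.
    The escape rate is a soft threshold opening after an absolute refractory
    period Tref (the form used in the paper's figure captions)."""
    rng = random.Random(seed)

    def S(h, r):
        if r < Tref:
            return 0.0
        return math.exp(h) * (1.0 - math.exp(-(r - Tref) / tau))

    ages = [rng.uniform(0.0, 20.0) for _ in range(N)]
    I_s = 0.0                       # synaptic current, exponentially filtered
    activity, t = [], 0.0
    while t < T:
        h = I_ext(t) + I_s
        n_spikes = 0
        for k in range(N):
            if rng.random() < min(1.0, S(h, ages[k]) * dt):
                ages[k] = 0.0       # reset age at the spike time
                n_spikes += 1
            else:
                ages[k] += dt
        A = n_spikes / (N * dt)     # population activity, cf. Eq (1)
        # exponential synaptic filter: tau_s * dI_s/dt = -I_s + Js * A
        I_s += dt * (-I_s + Js * A) / tau_s
        activity.append(A)
        t += dt
    return activity

activity = simulate_network(N=200, T=50.0)
```

A raster plot of the spike times produced by such a loop is what Fig 1B summarizes for the full network.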

In the limit of an infinitely large number of neurons N (the thermodynamic limit), the full network description reduces to a single partial differential equation. Denoting q(t, r) the probability density for a neuron to have age r at time t, the density profile evolves according to the continuity equation:

$$\frac{\partial}{\partial t} q(t,r) + \frac{\partial}{\partial r} q(t,r) = -S(h(t), r)\, q(t,r). \tag{2}$$

Because a cell's age is reset to zero once it emits an action potential, the natural boundary condition is

$$q(t, 0) = A(t),$$

where A(t) is the neural network activity, defined as

$$A(t) = \int_0^{\infty} S(h(t), r)\, q(t,r)\, dr. \tag{3}$$

We recall that in the thermodynamic limit the total input current is given by

$$h(t) = I_{ext}(t) + I_s(t), \qquad I_s(t) = J_s \int_0^{\infty} \kappa(s)\, A(t-s)\, ds.$$
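A simple way to integrate the mean-field equation is to discretize age with a bin equal to the time step, so that the drift term is handled exactly along characteristics and only the escape-rate decay and the reset boundary need explicit treatment. The sketch below is our own discretization (with an assumed exponential synaptic filter and illustrative parameters), not the paper's solver:

```python
import math

def simulate_meanfield(T=200.0, dt=0.05, r_max=100.0, Js=15.0, tau_s=10.0,
                       Tref=10.0, tau=5.0, I_ext=lambda t: 2.0):
    """Characteristics-based integration of the refractory density equation.

    q[i] approximates the density at age i*dt; with the age bin equal to dt,
    advancing age is a one-bin shift per step, which conserves probability
    together with the reset boundary q(t, 0) = A(t)."""
    def S(h, r):
        if r < Tref:
            return 0.0
        return math.exp(h) * (1.0 - math.exp(-(r - Tref) / tau))

    n = int(r_max / dt)
    # initial density: a Gaussian bump in age (cells loosely synchronized)
    q = [math.exp(-0.5 * ((i * dt - 15.0) / 2.0) ** 2) for i in range(n)]
    norm = sum(q) * dt
    q = [qi / norm for qi in q]

    I_s, t, activity = 0.0, 0.0, []
    while t < T:
        h = I_ext(t) + I_s
        surv = [math.exp(-S(h, i * dt) * dt) for i in range(n)]
        # activity = probability mass leaving per unit time (age bin = dt)
        A = sum(q[i] * (1.0 - surv[i]) for i in range(n))
        # decay along characteristics, shift ages one bin, reset at r = 0
        q = [A] + [q[i] * surv[i] for i in range(n - 1)]
        I_s += dt * (-I_s + Js * A) / tau_s
        activity.append(A)
        t += dt
    return activity

activity = simulate_meanfield(T=50.0, dt=0.1, r_max=60.0)
```

Defining the activity from the mass actually removed in each step (rather than from the instantaneous rate) keeps the scheme exactly conservative up to the age truncation at r_max.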

The mean-field Eq (2), also termed the von Foerster equation in mathematical biology [25], defines a conservation law and expresses three processes taking place at the cellular level: a drift due to the passage of time between action potentials; a loss term given by the escape rate, which captures both the randomness of firing events and the individual cell properties; and a non-local boundary condition describing the reset of neurons that have just fired. As we illustrate in Fig 1, the essential shape of the full network activity is well captured by the mean-field Eq (3).

Fig 1. Dynamics for a recurrent excitatory network.

Comparison of firing activity. A) Time evolution of the stimulus Iext(t). B) Raster plot of 100 neurons; the blue line displays the resulting firing activity Eq (1) of the full network. C) Firing activity obtained from a simulation of the mean-field Eq (3). The simulation was initiated with a similar Gaussian profile for the full network and the mean-field equation. Parameters: S(h, r) = exp(h)H(r − Tref)(1 − exp(−(r − Tref)/τ)), Tref = 10 ms, τs = 10 ms, τ = 5 ms, Js = 15 mV.ms, N = 5000 and Δt = 0.05 ms.

https://doi.org/10.1371/journal.pcbi.1010363.g001

Emergent macroscopic oscillatory dynamics

To investigate the emergence of macroscopic oscillations we analyze the refractory density Eq (2). After algebraic manipulations (see Methods for details) we find that the mean activity in the asynchronous regime, A0, and the mean input, h0, are given by

$$A_0 = \left[\int_0^{\infty} \sigma_0(r)\, dr\right]^{-1}, \qquad h_0 = I_{ext} + J_s A_0, \tag{4}$$

where we have used the notation

$$\sigma_0(r) = \exp\!\left(-\int_0^r S(h_0, u)\, du\right)$$

for the steady-state survivor function.
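Numerically, the asynchronous steady state can be obtained by fixed-point iteration: for a renewal neuron, the stationary activity is the inverse of the mean ISI computed from the survivor function, solved self-consistently with the stationary input h0 = Iext + Js A0. A sketch using the escape rate and parameter values of Fig 2 (the grid sizes and iteration count are our own choices):

```python
import math

def steady_state(Js=1.0, I_ext=2.0, Tref=8.0, tau=0.0,
                 dr=0.01, r_max=50.0, n_iter=30):
    """Fixed-point iteration for the asynchronous steady state:
    A0 = 1 / mean ISI, with the mean ISI given by the integral of the
    survivor function sigma0(r) = exp(-int_0^r S(h0,u) du), and
    h0 = I_ext + Js * A0 closed self-consistently."""
    def S(h, r):
        if r < Tref:
            return 0.0
        if tau <= 0.0:                       # hard-threshold limit tau -> 0
            return math.exp(h)
        return math.exp(h) * (1.0 - math.exp(-(r - Tref) / tau))

    A = 1.0 / Tref                           # initial guess
    for _ in range(n_iter):
        h = I_ext + Js * A
        cum, mean_isi, r = 0.0, 0.0, 0.0
        while r < r_max:                     # mean ISI = int sigma0(r) dr
            mean_isi += math.exp(-cum) * dr
            cum += S(h, r) * dr
            r += dr
        A = 1.0 / mean_isi
    return A, I_ext + Js * A
```

For τ = 0 the iteration converges in a handful of steps, since the mean ISI reduces to Tref + exp(−h0) and the self-consistency map is strongly contracting.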

Linearizing around the steady state, we extract the characteristic equation whose solutions give the eigenvalues of the linearized operator, see also [32]. The time-independent solution loses stability, and an oscillatory limit cycle gains stability, as soon as an eigenvalue acquires a positive real part. The characteristic equation involves \(\hat\kappa\), the Laplace transform of the synaptic filter κ, and the steady density profile q0; see Methods for its explicit form.

The bifurcation line, which separates the oscillatory regime from the asynchronous steady-state regime, can be obtained numerically by solving the characteristic equation for purely imaginary eigenvalues.

As we can see from Fig 2C, for a sufficiently large synaptic strength Js and external current Iext, the asynchronous state undergoes a bifurcation toward oscillations. The simulated spiking activity of the full network in Fig 2D and 2E confirms the emergence of a transition from an asynchronous to a synchronized activity regime when parameters are taken below or above the bifurcation line.

Fig 2. Emergent oscillations.

A) Illustration of the escape rate S(h, r) for different values of the parameter τ (h = 2 mV). B) Comparison between the steady-state firing activities with Js = 1 mV.ms; blue dots for the full network, and the black line for the theoretical prediction given by (4). C) Bifurcation line in the parameter space (blue curve). The grey shaded region corresponds to an oscillatory regime of the neural network; the white region corresponds to a stable asynchronous mode of the network. D) and E) Raster plots of the spiking activity of 100 neurons. Panel D corresponds to the black asterisk lying in the asynchronous (white) region of panel C, whereas panel E depicts the activity corresponding to the black asterisk lying in the oscillatory (grey) region of panel C. Parameters: S(h, r) = exp(h)H(r − Tref)(1 − exp(−(r − Tref)/τ)), Tref = 8 ms, τs = 10 ms, τ = 0 ms, N = 5000 and Δt = 0.1 ms.

https://doi.org/10.1371/journal.pcbi.1010363.g002

Note that in Fig 2C, the stability line is shown only for τ = 0. Indeed, in this case the characteristic equation reduces to a simpler equation which can be solved numerically. We find that taking τ to be non-zero shifts the position of the stability line. The parameter τ plays the role of an effective noise level: the larger τ is, the more external current Iext and/or synaptic strength Js is required to induce oscillations.

Finally, let us emphasize that the observed oscillation is an emergent feature of the network. Because individual cells are described by stochastic processes, they cannot produce regular, i.e. periodic, firing on their own. However, at the network level, a self-sustained oscillation emerges; see [49] for another approach to neural synchronization. The oscillation properties can be characterized by the PRC. Such a measure relies on the assumption that the additional perturbations are sufficiently weak.

Phase resetting curve and adjoint method

When a brief depolarizing current is applied to the oscillatory network, the global firing activity shifts in time (see Fig 3A–3C). With the network in an oscillatory regime, that is, with a periodic solution of (2), we find (see Methods for details) the mPRC as the solution of the mean-field adjoint system, Eqs (5) and (6), satisfying the normalisation condition (7), where T is the oscillation period; the notation is detailed in Methods.

Although we obtain two functions from the adjoint method, Zq and the component associated with the synaptic current, it is the latter that should be interpreted as the mPRC of the macroscopic oscillation, since incoming perturbations arrive through the synapses. In Fig 3E–3H, we show an example of a periodic solution and its associated periodic adjoint. The adjoint solution is normalized according to (7), see Fig 3I. We note that the analytically determined mPRC agrees with a PRC obtained from direct perturbations of the spiking network (see Fig 3J); both are type I. Note that the PRC depends on cell properties: for instance, changing parameters of S, e.g. the strength of intrinsic noise (the "softness" τ of the threshold), gives a higher mPRC amplitude as illustrated in Fig 3K, which in turn can impact the locking behavior of multi-network rhythms [5].
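The direct-perturbation procedure used for validation, namely kicking the system at a known phase, letting it relax back to the cycle, and reading off the asymptotic phase shift, can be illustrated on a toy planar oscillator whose isochrons are known exactly. This is a stand-in for the network, not the paper's model; the procedure is the same in both cases.

```python
import math

# Direct-perturbation estimate of a PRC, demonstrated on a planar oscillator
# whose isochrons are radial, so the asymptotic phase is simply atan2(y, x).

def flow(x, y, dt):
    # radial isochron clock: r' = r(1 - r), theta' = 1 (Euler step in polar form)
    r = math.hypot(x, y)
    th = math.atan2(y, x)
    r += dt * r * (1.0 - r)
    th += dt
    return r * math.cos(th), r * math.sin(th)

def prc_direct(phase, eps=1e-4, relax=20.0, dt=1e-3):
    """Kick the oscillator in the y-direction at the given phase and return
    the asymptotic phase shift divided by the kick size."""
    x, y = math.cos(phase), math.sin(phase)      # start on the limit cycle
    xp, yp = x, y + eps                          # perturbed copy
    t = 0.0
    while t < relax:                             # let both relax to the cycle
        x, y = flow(x, y, dt)
        xp, yp = flow(xp, yp, dt)
        t += dt
    d = math.atan2(yp, xp) - math.atan2(y, x)
    d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return d / eps

# For this system the exact iPRC to a y-kick is Z(phase) = cos(phase).
estimate = prc_direct(0.7)
exact = math.cos(0.7)
```

Because this toy system's isochrons are radial, the estimate can be checked against the closed-form answer; for the spiking network the same estimate is instead checked against the adjoint solution, as in Fig 3J.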

Fig 3. Macroscopic phase-resetting curve.

A-B) Raster plots of 100 neurons from simulations of a non-perturbed/perturbed network. C) Resulting firing activity of the networks obtained from Eq (1); the dashed black line is the non-perturbed network and the full blue line the perturbed one. D) Illustration of the stimulus. E) Periodic solution of the synaptic current Is(t) extracted from the mean-field Eq (2). F) Periodic solution of the density function q(t, r) obtained by solving the mean-field Eq (2). G) Periodic solution of the first component of the adjoint system (5). H) Periodic solution of the adjoint density function Zq(t, r) obtained via (5). I) Illustration of the normalizing condition (7). J) The network PRC; the black line is the solution of Eq (5), while blue dots indicate the PRC obtained via direct perturbations. K) Solution of the mPRC (6) for different values of the parameter τ. Parameters: S(h, r) = exp(h)H(r − Tref)(1 − exp(−(r − Tref)/τ)), Iext = 2 mV, Tref = 10 ms, τs = 10 ms, τ = 5 ms, Js = 15 mV.ms, N = 5000 and Δt = 0.05 ms. Direct perturbations in panel D were made with a square-wave current pulse (amplitude 3 mV, duration 5 ms) on the full network, and in panel J with a square wave (amplitude 0.8 mV, duration 8 ms) on the mean-field system (2).

https://doi.org/10.1371/journal.pcbi.1010363.g003

Note that, as stated in the introduction, the PRC is a general measure that can be applied to any oscillating dynamical system. For instance, it has been defined for regular spiking cells producing periodic firing and has been argued to reflect single neurons' intrinsic excitability properties [50]. In this contribution we computed a macroscopic PRC for the oscillations emerging at the network level. In this setting, individual cells within the network need not all be oscillators (a significant proportion may be excitable), and so the PRC of an individual neuron may not even be defined. Indeed, in our networks we have considered stochastic cells driven to fire by random noise, which do not fire periodically. However, at the network scale, an oscillation emerges from the interaction of their irregular spiking activity. We term the PRC computed for the network the macroscopic PRC (mPRC).

ISI density and hazard rate

In this section, we briefly recall how to construct the hazard rate function S(h, r) for neurons modeled as time-dependent renewal processes. Renewal processes encompass a wide class of neuron models. Interestingly, the hazard rate can also be constructed from the interspike interval (ISI) density. We also show how the hazard rate function relates to the PRC and shapes its characteristics.

The estimation of the ISI density from experimental data is very common. The interval distribution can be interpreted as a conditional probability density: it is the probability that the next spike occurs in the interval (t, t + dt) given that the last spike occurred at time zero. The hazard rate, also called the age-dependent death rate, has the following interpretation: in order to emit a spike at age r, the neuron has to "survive" without firing during the time interval (0, r) and then fire at age r. The hazard rate can be determined from the ISI density P, and its expression has been known for decades, see for instance [39]:

$$S(r) = \frac{P(r)}{1 - \int_0^r P(s)\, ds},$$

which can also be written as

$$S(r) = -\frac{d}{dr} \ln\!\left(1 - \int_0^r P(s)\, ds\right).$$

Given the interdependence of the hazard rate and the ISI density, either function suffices to apply the theoretical findings exposed in the previous sections and fully determine the PRC. One only needs the numerical solutions of q(t, r) and Is(t) along the oscillatory cycle, together with the derivative of the hazard rate with respect to the input h. The only assumption underlying our methodology for computing the phase-resetting curve of collective rhythms is that the spiking neurons be of renewal type.
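A numerical sketch of the ISI-to-hazard conversion (our own discretization, with an illustrative rate constant), checked on the exponential ISI density, for which the hazard must come out flat:

```python
import math

def hazard_from_isi(P, r_grid):
    """Numerically recover the hazard rate S(r) = P(r) / (1 - int_0^r P),
    i.e. the firing intensity conditioned on survival up to age r."""
    dr = r_grid[1] - r_grid[0]
    cum = 0.0
    S = []
    for r in r_grid:
        survivor = 1.0 - cum
        S.append(P(r) / survivor if survivor > 1e-12 else float("nan"))
        cum += P(r) * dr
    return S

# Sanity check: an exponential ISI density P(r) = lam * exp(-lam * r)
# corresponds to a constant hazard S(r) = lam (a Poisson neuron).
lam = 0.5
grid = [0.01 * i for i in range(500)]          # ages 0 .. 5
S_vals = hazard_from_isi(lambda r: lam * math.exp(-lam * r), grid)
```

The guard on the survivor function foreshadows the numerical issue discussed next: deep in the tail, both numerator and denominator vanish and the quotient degrades.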

The difficulty resides in the expressions of S and its derivative, which are written as quotients. In practice, it can be hard to evaluate the hazard rate and its derivative numerically. This is for instance the case for a model widely used in theoretical neuroscience: the leaky integrate-and-fire (LIF) neuron. Although the ISI density of the noisy LIF is known, it is expressed as a Volterra integral or as an inverse Laplace transform of hypergeometric functions, so implementing S and its derivative numerically becomes difficult. Note that the difficulty is numerical, not theoretical.

Another difficult example is the gamma distribution. Often used in the literature, the gamma distribution is known to provide a good fit to the ISI distributions of actual data:

$$P(r) = \frac{r^{\gamma - 1}\, e^{-r/\beta}}{\beta^{\gamma}\, \Gamma(\gamma)},$$

with shape parameter γ and scale β. Once again, having the expression of the ISI distribution is sufficient to determine the hazard rate function and from there apply our findings to extract the PRC. However, the numerics become tricky, and the computation of the derivative of the hazard rate is very unstable. Once again, the difficulty is numerical, not theoretical.
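The following sketch (illustrative parameter values) computes the hazard of a gamma ISI density numerically: shape γ = 1 recovers the flat exponential hazard, while γ > 1 gives a relative-refractory-like hazard rising from zero.

```python
import math

def gamma_isi(k, beta):
    """Gamma ISI density P(r) = r^(k-1) exp(-r/beta) / (beta^k Gamma(k))."""
    c = beta ** k * math.gamma(k)
    return lambda r: (r ** (k - 1)) * math.exp(-r / beta) / c if r > 0 else 0.0

def hazard(P, r, dr=1e-3):
    """Hazard S(r) = P(r) / (1 - int_0^r P), integral on a fine grid.
    This quotient of two vanishing quantities is what becomes unstable
    in the tail, where the survivor function underflows."""
    cum = sum(P(i * dr) for i in range(int(r / dr))) * dr
    return P(r) / (1.0 - cum)

# Shape k = 1 is the exponential case: the hazard is flat at 1/beta.
P1 = gamma_isi(1.0, beta=2.0)
flat = [hazard(P1, r) for r in (1.0, 2.0, 4.0)]

# Shape k = 3 gives a relative-refractory-like hazard rising from zero.
P3 = gamma_isi(3.0, beta=1.0)
rising = [hazard(P3, r) for r in (0.5, 1.0, 2.0, 4.0)]
```

Differentiating such a hazard with respect to its arguments compounds the quotient's sensitivity, which is the instability referred to above.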

In Fig 4 we present several examples used in textbooks to model the ISIs or hazard rates of actual neurons [39]. The hazard rate is shown together with the mPRC extracted from our adjoint theory. To compare with the results of the previous section, the network is chosen to be purely excitatory. As the extracted mPRCs show, the hazard rate function, or equivalently the ISI distribution, of the neuron shapes the mPRC differently. This makes it possible to link the dynamics of individual neurons, the cell type, and the network connectivity to the properties of the oscillations emerging at the network scale. Indeed, single-cell dynamics clearly impact the macroscopic synchronization properties of networks, as shown in numerous studies. For example, seminal results in [51] showed, using weakly coupled oscillator analysis, how the intrinsic properties of neurons shape their synchronization properties. It was also shown that the macroscopic oscillation can be shaped by the ISI distribution of single cells [32]. These single-cell properties are well documented to be reflected in the hazard function or the ISI distribution [34, 46], and hence a link can be drawn between these and the global synchronization of such diverse neurons; see for instance [35] for the role of adaptation.

Fig 4. Macroscopic phase-resetting curve and hazard rate.

The figure gives the hazard rate function (top panels) together with the resulting mPRC for an excitatory network (bottom panels). A-C) Hazard rate functions. D-F) ISI densities. G-I) Solution of the mPRC (6). Parameters: A-D-G) S(h, r) = exp(h)H(r − Tref)ε(r − Tref), Iext = 2.5 mV, Tref = 6 ms, τs = 10 ms, Js = 4 mV.ms, ε = 3 and Δt = 0.05 ms. B-E-H) S(h, r) = exp(h)H(r − Tref) tanh(exp(h)(r − Tref)), Iext = 2.5 mV, Tref = 5 ms, τs = 10 ms, Js = 3 mV.ms and Δt = 0.05 ms. C-F-I) S(h, r) = exp(h)H(r − Tref) tanh(exp(h)(r − Tref))(1 + ε cos(ωr)), Iext = 2.5 mV, Tref = 10 ms, τs = 3 ms, τ = 5 ms, Js = 15 mV.ms, ε = 3, ω = 1 and Δt = 0.05 ms. On panels A-B-C and D-E-F the hazard rate functions and corresponding ISI densities are plotted for h = Iext.

https://doi.org/10.1371/journal.pcbi.1010363.g004

The phase equation and emerging locking modes

We now illustrate how the PRC can be used to investigate the dynamical emergence of phase-locked states between oscillatory spiking circuits. The analysis relies on the assumptions that synaptic interactions across networks remain sufficiently weak and that the connection between circuits is fully symmetric. These assumptions, which guarantee that the perturbed macroscopic oscillations remain close to the unperturbed oscillation, place our study within the framework of weakly coupled oscillators [2, 3]. We emphasize that within each circuit, neurons are not weakly coupled; the weak-coupling assumption applies only to the projections across circuits. Within this framework, see [2, 3] for instance, the bidirectionally delay-coupled neural circuits reduce to a single phase equation (see Methods for details):

$$\frac{d\theta}{dt} = \varepsilon\, G(\theta),$$

where θ(t) is the phase lag, or phase difference, between the circuits and the G-function is the odd part of the delay-shifted interaction function (see [2, 3]):

$$G(\theta) = H(-\theta - d) - H(\theta - d).$$

Here d is the conduction delay between circuits and the interaction function H is given by

$$H(\theta) = \frac{G_s}{T} \int_0^T Z(t)\, A(t + \theta)\, dt,$$

where T is the oscillation period, Z the mPRC, εGs denotes the connectivity strength between circuits (see Fig 5), and the activity A(t) is that of one isolated circuit along the oscillatory cycle. The coefficient ε emphasizes the weak coupling across circuits.
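On a discrete grid, the interaction function can be approximated by averaging the mPRC against the shifted activity waveform. The sketch below uses illustrative sinusoidal profiles for Z and A and one common sign convention for the odd part; the paper's exact expressions are given in Methods.

```python
import math

# One period of the activity and the mPRC, sampled on a common grid (here
# illustrative sinusoidal profiles; in practice these come from the
# mean-field simulation and the adjoint solution).
n = 400
T = 40.0
dt = T / n
A = [1.0 + math.cos(2 * math.pi * i / n) for i in range(n)]      # activity
Z = [math.sin(2 * math.pi * i / n) for i in range(n)]            # mPRC

def H(shift_bins, Gs=0.2):
    """Interaction function: average of the mPRC against the activity the
    other circuit delivers, shifted by the phase lag (in grid bins)."""
    return Gs / T * sum(Z[i] * A[(i + shift_bins) % n] for i in range(n)) * dt

def G(theta_bins, delay_bins):
    """Odd part of the delay-shifted interaction function; its zeros are the
    candidate locking modes (negative slope => stable)."""
    return H(-theta_bins - delay_bins) - H(theta_bins - delay_bins)

# With zero delay, the in-phase state theta = 0 is a zero of G by oddness;
# its stability is read off from the slope at the crossing.
g0 = G(0, 0)
slope = (G(1, 0) - G(-1, 0)) / (2 * dt)
```

Whether the slope at θ = 0 comes out negative (stable in-phase locking) or positive depends on the shapes of Z and A; for the synthetic profiles above it is positive, so in-phase locking would be unstable in this toy case.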

Fig 5. Locking modes of interacting circuits.

Top panel: illustration of the two circuits in interaction; εGs represents the coupling strength across networks, Js the internal coupling strength, and d the delay across circuits. A) One period of the activity as well as the mPRC. B) The G-function for different values of the delay; dark/light colors correspond to small/large delays. C) Zeros of the G-function for different values of the delay. The circles are filled for stable fixed points and empty for unstable points. D-E-F) Raster plots of the spiking activity of the two neural networks; black dots indicate the spike timing of the first network, coloured dots the spike timing of the second network. Parameters: S(h, r) = exp(h)H(r − Tref)(1 − exp(−(r − Tref)/τ)), Iext = 2 mV, Tref = 10 ms, τs = 10 ms, τ = 5 ms, Js = 15 mV.ms, N = 5000, Gs = 0.2 mV.ms, Δt = 0.005 ms for all the panels, and D) d = 0.5 ms, E) d = 2.5 ms, F) d = 4.5 ms.

https://doi.org/10.1371/journal.pcbi.1010363.g005

Studying the emergence of a particular locking mode amounts to looking at the zeros of the G-function. Each zero corresponds to a steady-state phase lag, and its stability can be assessed from the sign of the derivative: zero crossings with a negative slope give stable phase lags.
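This recipe, locating the zeros and classifying them by slope, is easy to automate. Below is a sketch with two illustrative odd G-functions whose stable zero moves from near in-phase to near anti-phase, mimicking the delay-induced switch; the G-functions are synthetic, not derived from the network.

```python
import math

def locking_modes(G, n=2000):
    """Scan one period of an (assumed 2*pi-periodic) G-function for sign
    changes, bisect each bracket, and classify the root by the local slope:
    negative slope => stable phase lag, positive => unstable."""
    xs = [2 * math.pi * i / n for i in range(n + 1)]
    modes = []
    for a, b in zip(xs, xs[1:]):
        if G(a) == 0.0 or G(a) * G(b) >= 0.0:
            continue                           # no sign change in this bracket
        for _ in range(60):                    # bisection refinement
            m = 0.5 * (a + b)
            if G(a) * G(m) <= 0.0:
                b = m
            else:
                a = m
        root = 0.5 * (a + b)
        slope = (G(root + 1e-6) - G(root - 1e-6)) / 2e-6
        modes.append((root, "stable" if slope < 0 else "unstable"))
    return modes

# Illustrative G-functions: the stable zero sits near in-phase in the first
# case and near anti-phase in the second, as a delay-induced switch would do.
modes_a = locking_modes(lambda th: -math.sin(th - 0.3))
modes_b = locking_modes(lambda th: math.sin(th - 0.3))
```

In practice G would be tabulated from the interaction function rather than given in closed form, but the sign-change scan and slope test are unchanged.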

In Fig 5A, we display the two quantities needed to compute the interaction function: one period of the activity and the mPRC obtained via the adjoint method. In Fig 5B we plot the resulting G-functions for different values of the delay. To get a better understanding, we construct the corresponding bifurcation diagram (Fig 5C), which shows the positions of the phase modes as a function of the delay across circuits. While the in-phase mode remains stable for small delays, for larger transmission delays a switch of stability takes place, allowing a whole range of phase lags to emerge; for large enough delays, anti-phase solutions become stable. In Fig 5D–5F we validate this theoretical prediction by showing rasters of the spiking circuits that reflect the modulation of the emerging phase lag by the delay.

In Fig 6, we illustrate how the locking transition is modulated when changing the ISI density of the single cells, that is, their individual dynamical features. It shows how single-neuron dynamics (hazard rates/ISI densities) influence the macroscopic synchronization properties of connected networks.

Fig 6. Locking modes of interacting circuits for different ISI densities.

A-C) The G-function for different values of the delay. D-F) Zeros of the G-function for different values of the delay. The circles are filled for stable fixed points and empty for unstable points. Parameters: A-D) S(h, r) = exp(h)H(r − Tref)ε(r − Tref), Iext = 2.5 mV, Tref = 6 ms, τs = 10 ms, Js = 4 mV.ms, ε = 3 and Δt = 0.05 ms. B-E) S(h, r) = exp(h)H(r − Tref) tanh(exp(h)(r − Tref)), Iext = 2.5 mV, Tref = 5 ms, τs = 10 ms, Js = 3 mV.ms and Δt = 0.05 ms. C-F) S(h, r) = exp(h)H(r − Tref) tanh(exp(h)(r − Tref))(1 + ε cos(ωr)), Iext = 2.5 mV, Tref = 10 ms, τs = 3 ms, τ = 5 ms, Js = 15 mV.ms, ε = 3, ω = 1 and Δt = 0.05 ms.

https://doi.org/10.1371/journal.pcbi.1010363.g006

Complementary approach for conductance-based models

In this section, we remind the reader of another use of the renewal framework in computational neuroscience. In the seminal work [44], the authors constructed a particularly relevant mapping between voltage-based models and the renewal equation at the core of this paper; see also [48] for a recent review. Consider for instance the leaky integrate-and-fire model, see [52]:

$$C\, dV = \big(-G\,(V - V_L) + h(t)\big)\, dt + \sigma\, d\eta,$$

together with a threshold VT and a reset Vr to account for the emission of an action potential. Here h(t) is the total stimulus, C the capacitance, G the conductance, VL the reversal potential, and σ the scaling of the white noise η. The authors have shown that this model is equivalent to a refractory density formulation coupled to an equation for u(t, r), the mean membrane voltage of the neurons of age r (see [52]). The boundary condition of the density equation is again q(t, 0) = A(t), while that of the voltage equation is the reset, u(t, 0) = Vr. The hazard rate function S has been computed for different models; see [48] for a review. It would therefore be extremely interesting to extract the mPRC in this setting, that is, to compute the corresponding adjoint equation, and to compare the theoretical result with simulations. While a full treatment of the two coupled partial differential equations is beyond the scope of this paper, our initial computations seem to proceed smoothly: in the appendix we lay out a pathway to compute the adjoint for this description.
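As a small companion sketch (our own parameter choices, not from [44]), the noisy LIF model of this section can be simulated by Euler-Maruyama and its ISIs collected; the empirical ISI density then determines a hazard rate via the renewal relation recalled earlier.

```python
import math, random

def lif_isis(n_spikes=2000, dt=0.05, C=1.0, G=0.1, VL=0.0, VT=10.0, Vr=0.0,
             h=1.2, sigma=1.0, seed=1):
    """Euler-Maruyama simulation of the noisy leaky integrate-and-fire model
    C dV = (-G (V - VL) + h) dt + sigma dW, with threshold VT and reset Vr.
    Returns the interspike intervals, from which an empirical ISI density
    (and hence a hazard rate) can be estimated."""
    rng = random.Random(seed)
    V, age, isis = Vr, 0.0, []
    sq = sigma * math.sqrt(dt)
    while len(isis) < n_spikes:
        V += dt * (-G * (V - VL) + h) / C + sq * rng.gauss(0.0, 1.0) / C
        age += dt
        if V >= VT:
            isis.append(age)        # record the age at threshold crossing
            V, age = Vr, 0.0        # reset
    return isis

isis = lif_isis()
mean_isi = sum(isis) / len(isis)    # stationary activity A0 = 1 / mean ISI
```

With the illustrative values above the drive is suprathreshold (VL + h/G exceeds VT), so the deterministic crossing time sets the scale of the mean ISI and the noise spreads the distribution around it.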

Discussion

Rhythms are ubiquitous in the nervous system e.g., across the cortex as well as within the spinal cord [10]. They reflect synchronized spiking activity of neurons and are classified in frequency bands: delta (0.5–4 Hz), theta (4–10 Hz), alpha (8–12 Hz), beta (10–30 Hz) and gamma (30–100 Hz). Brain oscillations are known to be involved in numerous functions such as perception, motor coordination and cognition. Excess or deficit in oscillations or synchrony may lead to neurological disorders. To better understand the informational properties of neural oscillations, recent experimental studies have made use of numerically compiled neural population PRCs [53] to show how the brain rhythms react to inputs.

Previous efforts to go beyond such numerical compilation of population PRCs required restrictions on the neuronal models used [13]. Notably, in our previous work we computed macroscopic PRCs for exactly reduced networks [5, 13] semi-analytically, but this required the single cells to be modelled by quadratic integrate-and-fire neurons with Lorentzian heterogeneity. Adjoint methods for wider classes of networks have remained an outstanding question to be resolved.

Another recently developed approach also deals with computing phase response curves for infinite-dimensional equations, in particular for drift-diffusion systems [12]; see also [3] for a review covering drift-diffusion and reaction-diffusion settings. However, in the neural context this method applies only to diffusion systems with periodic boundary conditions, and therefore again treats only the quadratic integrate-and-fire neuron model with threshold and reset at infinity. Our approach has the advantage of being more general: in theory, it can treat any neuron model belonging to the class of renewal processes.

In this computational methods paper, we develop a theoretical framework to compute the PRC of emergent macroscopic network-wide oscillations in population models described by refractory density equations. Our methodology, here applied to spiking networks of excitatory cells, provides a path to study the links between microscopic cellular excitability properties, the network coupling and the informational properties of the emerging brain rhythms.

Interestingly, recent studies have shown that the refractory density framework, known as the von Foerster framework in Mathematical Biology [25], is powerful enough to capture the dynamics of many cell types. This generality relies on the quasi-renewal approximation [47], which was first introduced to include adaptation due to calcium entry after spikes and to neurotransmitter release acting over longer time scales. A recent study has shown that the quasi-renewal approximation permits the transition from the Generalized Linear Model (GLM) to the escape rate function [54]. Since GLM point processes can encapsulate the dynamical repertoire of most single cell types [46], the refractory density equation can serve as a paradigmatic model of general network activity. The methodology presented here can therefore be applied to a wide variety of network models and architectures (see Methods for the generalization of our results to excitatory-inhibitory networks).

Importantly, as just mentioned, our method is general enough to cover many cell types. Indeed, knowing the ISI distribution is sufficient to determine the hazard rate function and, from there, to apply our theoretical findings to extract the mPRC. Numerical simulations, however, can be delicate and unstable. Let us emphasize that this difficulty is numerical and not theoretical; in practice our approach is currently limited to neural dynamics admitting a closed-form expression of the hazard rate function. Another current limitation is the simple architecture of the network. Although it is possible to compute the mPRC for an E-I network (see Methods), it results in a system of two coupled partial differential equations which may be hard to solve numerically.

To illustrate our theoretical findings, we have studied the macroscopic phase-locking behaviour between two oscillatory circuits. Within the weakly coupled oscillator framework [3], we have illustrated how the mPRC allows us to construct a bifurcation diagram predicting locking modes between circuits as a function of relevant parameters such as synaptic delay and connectivity. Further applications could initiate new studies and benefit our understanding of brain oscillations. For instance, the PRC can serve to study entrainment to periodic inputs, coding and information transfer [6, 55, 56]; or, expanding on our previous work [5], to study the impact of cellular properties on the different phase-locking patterns underlying directed signaling and functional connectivity in single and inter-coupled oscillatory networks [57–61].

We believe that our approach can be applied widely to inter-coupled networks whose individual elements' complexity can be incorporated into mean-field continuity equations (e.g. cell proliferation [20–22], population dynamics [23, 24], epidemiological models [17–19]). The von Foerster equation at the core of this paper is indeed a paradigmatic approach employed to study population dynamics in many different contexts in Computational and Mathematical Biology [26–28].

Methods

In the subsequent sections, we present the detailed derivations of the framework introduced in the main text. The Methods section is structured as follows: we first recall the refractory density equation. We then give the expression of the steady state and discuss its stability properties. Next, we derive the main result: the adjoint equation giving access to the infinitesimal phase-resetting curve of the network. The normalisation condition is then presented, and numerical details about the procedure used to solve the adjoint equations are given. We finish by describing the extension of our results to excitatory-inhibitory networks.

Mean-field description

Denoting q(t, r) the probability density for a neuron to have at time t an age r, the refractory density profile evolves according to the continuity equation: (8) ∂q/∂t + ∂q/∂r = −S(h(t), r) q(t, r). The function S(h(t), r) is the escape rate, which reflects the individual properties of the neurons. The total input current h(t) is given by h(t) = Iext + Is(t), where Iext is the external current and Is the synaptic current: Is(t) = Js ∫₀^∞ κ(s) A(t − s) ds. Here Js is the synaptic efficiency, A(t) the firing activity defined as A(t) = ∫₀^∞ S(h(t), r) q(t, r) dr, and κ(t) = exp(−t/τs)/τs the normalized synaptic filter with τs the synaptic decay.

The mean-field Eq (8) is endowed with a boundary condition at age zero: q(t, 0) = A(t), expressing that neurons re-enter the age structure at the moment they spike.

Steady state

The asynchronous state can be computed as the time-independent solution of the refractory density equation. Let us denote q(r) the steady state and A the mean firing rate. We have the equation dq/dr = −S(h, r) q(r), where we have noted h = Iext + Js A the stationary input. The equation can be integrated and gives q(r) = A exp(−∫₀^r S(h, s) ds), where we have used the natural boundary condition q(0) = A. Finally, the asynchronous mean firing rate can be computed using the conservation property of the neural network, ∫₀^∞ q(r) dr = 1, and we get A = [∫₀^∞ exp(−∫₀^r S(h, s) ds) dr]⁻¹. Note that the mean firing rate is only given implicitly, since h depends on A.

With our choice of functions we can push the computation further and, after algebraic manipulations, we find that the mean firing activity A is the solution of the nonlinear equation (9) which can be solved numerically.
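To illustrate how a self-consistency equation of this type can be solved in practice, the following sketch assumes a simple hazard with an absolute refractory period Tref followed by a constant escape rate ρ(h) = c·exp(h), and a steady input h = Iext + Js·A. The hazard choice and all parameter values are illustrative assumptions, not necessarily those behind Eq (9); with them, conservation gives the fixed-point problem A = 1/(Tref + 1/ρ(Iext + Js·A)), solved here by bisection.

```python
import math

# Hypothetical parameters: refractory period, hazard scale, drive, coupling.
Tref, c, Iext, Js = 0.005, 10.0, 1.0, 0.01

def rho(h):
    """Constant escape rate beyond the refractory period (assumed form)."""
    return c * math.exp(h)

def residual(A):
    """Self-consistency residual: zero when A = 1/(Tref + 1/rho(h))."""
    h = Iext + Js * A
    return A - 1.0 / (Tref + 1.0 / rho(h))

# Bisection on [0, 1/Tref]: the rate cannot exceed 1/Tref, and the residual
# changes sign on this interval (negative at 0, positive at 1/Tref).
lo, hi = 0.0, 1.0 / Tref
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
A = 0.5 * (lo + hi)   # asynchronous mean firing rate
```

The same bisection (or any standard root-finder) applies to Eq (9) once the paper's actual hazard is substituted; only the residual function changes.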

Stability analysis

To study the stability of the asynchronous state, one needs the eigenvalues of the differential operator obtained by linearizing around the steady state. We therefore consider a small perturbation and write the solution in the form Plugging these expressions into Eq (8)—keeping the first order terms only—yields the partial differential equation and for the activity Since we are interested in the long-term behavior of the perturbation, we express the perturbation in eigenvalue modes After algebraic manipulations, we get that the perturbation obeys where we have introduced the Laplace transform of κ: and for the activity Integrating this solution with the variation of constants method, we get which implies and we finally arrive at the equation We therefore write down the characteristic equation for the eigenvalues as With the special choice we can push the computation further and, after algebraic manipulations, we find:

The bifurcation line, which separates the oscillatory regime from the asynchronous regime, can be obtained numerically by solving

The adjoint equation

To compute the PRC, we first rewrite the synaptic filtering as a differential equation. Since the filter κ is exponential, the convolution form of Is is equivalent to τs dIs/dt = −Is + Js A(t). We then assume that there is a stable oscillatory solution of period T of the mean-field equation. Considering a small perturbation around the stable solution, we write Plugging these expressions in and keeping only the first-order terms, we get that the perturbation obeys the following set of equations where and for the activity the boundary condition follows as with Now, we can define a bilinear form as

Different approaches exist to compute the PRC. These have been previously reviewed, and the interested reader may consult the textbook [56] as well as the review on experimental approaches to PRC measurement [62]. The PRC can be computed using a singular perturbation approach or a more geometrical approach relying on isochrons, see [56]. While each approach has its own advantages, both are difficult to generalize to partial differential equations. Interestingly, a very simple method has been proposed relying only on dot products and algebraic computations [9]; see also [56] for a review of the three approaches. Namely, we use the fact that the asymptotic phase shift caused by a small perturbation qp, Ip is independent of time. We refer the reader to [9, 56] for a mathematical justification. Therefore the PRC (Zq, ZI) is given by the following property Developing the first term, we get and plugging the expression of inside the equation, we obtain developing the terms leads to Applying an integration by parts, we get Therefore we have which is equivalent to We now develop the second term and, recalling the fact that we obtain Now, putting everything together which gives We now use the fact that we obtain Since this holds for every perturbation, the PRC must solve (10) and (11)
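For readers less familiar with the adjoint method of [9], the following minimal sketch applies it to a planar ODE oscillator rather than to the refractory density equation: the infinitesimal PRC Z(t) solves dZ/dt = −J(t)ᵀZ along the limit cycle (J the Jacobian of the vector field), is obtained by backward-in-time integration because its stability is opposite to that of the cycle, and is fixed by the normalisation Z·F = 1. The Stuart–Landau-type model and all parameters below are illustrative assumptions, not the network model of this paper.

```python
import math

w = 1.0                       # angular frequency (assumed example parameter)
T = 2 * math.pi / w           # oscillation period
dt = 1e-4

# Illustrative oscillator: x' = x(1-r^2) - w*y, y' = y(1-r^2) + w*x,
# with limit cycle x = cos(w t), y = sin(w t).
def on_cycle(t):
    return math.cos(w * t), math.sin(w * t)

def field(x, y):
    r2 = x * x + y * y
    return x * (1 - r2) - w * y, y * (1 - r2) + w * x

def jac_T(x, y):
    """Transpose of the Jacobian of the vector field at (x, y)."""
    return ((1 - 3*x*x - y*y, -2*x*y + w),
            (-2*x*y - w,      1 - x*x - 3*y*y))

# Backward Euler step for dZ/dt = -J^T Z:  Z(t - dt) = Z(t) + dt * J^T Z(t).
# The periodic adjoint solution attracts in backward time, so a few periods
# of backward integration wash out the transient.
Z = [1.0, 1.0]
t = 0.0
n_steps = int(round(T / dt))
for _ in range(5 * n_steps):
    x, y = on_cycle(t)
    (a11, a12), (a21, a22) = jac_T(x, y)
    Z = [Z[0] + dt * (a11 * Z[0] + a12 * Z[1]),
         Z[1] + dt * (a21 * Z[0] + a22 * Z[1])]
    t -= dt

# Normalise so that Z . F = 1 (phase measured in units of time).
x, y = on_cycle(t)
fx, fy = field(x, y)
c = Z[0] * fx + Z[1] * fy
Z = [Z[0] / c, Z[1] / c]      # iPRC at phase 0; analytically (0, 1/w) here
```

The same three ingredients—adjoint equation, backward integration, normalisation—carry over to the infinite-dimensional setting of (10) and (11), with the dot product replaced by the bilinear form defined above.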

Normalization condition

The adjoint equation being linear, its solution is unique under a normalization condition. In what follows we check that The computations that follow give rise to long mathematical expressions; we thus drop the function variables. After algebraic manipulations, we find that the above condition is equivalent to where we have introduced the new notations: Now, developing, we get We now use the fact that Using this expression, we get that Putting everything together, we arrive at We now recall that the adjoint system is given by and we therefore arrive at The mPRC is then the unique solution satisfying the normalization condition: where T is the period of the oscillation.

Numerical procedure

The mean-field Eq (8) can be readily integrated. We denote the discretization space/time variables, and the corresponding solution at the discretized points. Although theoretically r ∈ [0, ∞), for numerical purposes we need to truncate r. We have observed that the numerical methodology works as long as we truncate r at a value rmax large enough that the whole population has produced a spike, so that q(t, r) → 0 for r > rmax. We have observed rmax ≈ 1.25Tref to be a good estimate.

Considering the initial state to be given, the mean-field Eq (8) can be numerically solved along the characteristic curves. On the characteristics, the dynamics reduce to a nonlinear differential equation that can be integrated with the following first order numerical scheme: (12) The proposed numerical scheme (12) is thus well defined and produces results in excellent agreement with simulations of the full network.
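As a concrete illustration of integration along the characteristics (taking dr = Δt so the age grid moves with the flow), the following sketch evolves the density under an assumed hazard—an absolute refractory period followed by a constant escape rate—with constant drive and no recurrent coupling. It is a simplified variant in the spirit of scheme (12), not the paper's scheme verbatim; parameters are illustrative.

```python
import math

Tref, c, Iext = 0.005, 10.0, 1.0     # hypothetical single-population setup
dt = 5e-4                            # time step = age step (dr = dt)
rmax = 0.5                           # truncation of the age axis
M = int(round(rmax / dt))
r = [j * dt for j in range(M)]

def S(h, age):
    """Escape rate: absolute refractory period, then constant hazard."""
    return 0.0 if age < Tref else c * math.exp(h)

h = Iext                             # constant drive (no recurrent coupling)
q = [0.0] * M
q[0] = 1.0 / dt                      # all mass at age 0 (synchronised start)

for _ in range(2000):
    # decay by the hazard while moving one step along the characteristics
    survive = [qj * math.exp(-dt * S(h, rj)) for qj, rj in zip(q, r)]
    A = sum(qj - sj for qj, sj in zip(q, survive))   # firing activity
    q = [A] + survive[:-1]           # boundary condition re-injects at r = 0

mass = sum(q) * dt                   # total probability, conserved up to
                                     # the negligible flux past r = rmax
```

Because the escaped mass is re-injected exactly at the boundary, the scheme conserves probability by construction; with recurrent coupling one would additionally update h from A through the synaptic filter at each step.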

Using procedure (12) we find solutions of period T = MΔt of the mean-field Eq (8), which we denote as and . Next, we use the solutions and to solve the adjoint system (10) and (11).

Since the solution of the adjoint equation has stability opposite to that of the mean-field equation, we must integrate it backwards in time. We denote Considering the end state to be given, the adjoint system (10) and (11) can once again be numerically solved along the characteristic curves. On the characteristics, the dynamics of the adjoint system (10) and (11) reduce to a linear differential equation that can be integrated with the following backward first-order numerical scheme: (13)

The proposed numerical scheme (13) is once again well defined and produces T-periodic solutions and matching the PRC obtained by the direct perturbation method (see the main text). Next, we note some numerical recipes which enhance the stability (and thus the convergence) of the procedure in (13). First, we iterate the scheme (13) over the periodic solutions and (recall ). We also recommend computing the integral in (11) (that is, the sum for in (13)) with accurate integration routines such as the trapezoidal rule or Simpson's method. Finally, since the procedure in (13) is based on backwards integration, it does not provide the value of Zq(tn, rj) at r = max(rj). This value can be obtained by simple extrapolation (as we propose in (13)) or by using accurate extrapolation routines taking into account a larger set of values of Zq(tn, rj). We remark that, although smaller Δt values yield higher accuracy, the recipes above already generate very precise results for time steps Δt ≈ 0.005. We also remark that the procedure in (13) relies on the periodic solutions obtained from (12). To ensure the stability of (13), it is necessary to consider integration times long enough that the periodic solutions γ(t, r) are sufficiently accurate (that is, ‖γ(t, r) − γ(t + T, r)‖ ≈ 0, with ‖⋅‖ the Euclidean norm).

Coupled networks and the phase equation

Considering two bidirectionally delay-coupled networks, where the coupling is made via long projections from one network to the other, the whole system reduces to a set of coupled partial differential equations. For the first network, we have and The boundary conditions are given by and The total input current is still given by and the synaptic current Is(t) is computed as and Here Gs denotes the connectivity strength across circuits, the parameter ε encodes the weak coupling assumption, and the parameter d is the conduction delay between the two networks. Assuming that the two networks are oscillating, and placing our study within the framework of weakly coupled oscillators, we can reduce the description of the bidirectionally delay-coupled neural circuits to a single phase equation: Here θ(t) is the phase difference (or phase lag) between the circuits, and the G-function is the odd part of the shifted interaction function (the H-function), see [56] for instance: with d the time delay between the two circuits. In our case, the interaction function is mathematically described as where T is the oscillation period.
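The phase-locking analysis above can be sketched numerically. Given one period of the macroscopic PRC Z(t) and of the perturbing waveform s(t), the sketch below computes H(θ) = (1/T)∫₀ᵀ Z(t) s(t + θ) dt and the G-function in one common symmetric-coupling convention, G(θ) = H(−θ − d) − H(θ − d) (see [56]); zeros of G crossed with negative slope are the stable phase lags. The sinusoidal Z and s are synthetic placeholders, not the network's actual mPRC and activity.

```python
import math

T, N, d = 1.0, 400, 0.0              # period, grid size, conduction delay
ts = [T * k / N for k in range(N)]
Z = [math.sin(2 * math.pi * t / T) for t in ts]   # synthetic mPRC
s = [math.cos(2 * math.pi * t / T) for t in ts]   # synthetic input waveform

def H(theta):
    """Interaction function: period-average of Z(t) * s(t + theta)."""
    acc = 0.0
    for k, t in enumerate(ts):
        acc += Z[k] * s[int(round((t + theta) / T * N)) % N]
    return acc / N

def G(theta):
    """Odd part of the delay-shifted H-function."""
    return H(-theta - d) - H(theta - d)

# Stable locked states: sign changes of G from + to - over one period.
Gvals = [G(t) for t in ts]
stable = []
for k in range(N):
    if Gvals[k] > 0 >= Gvals[(k + 1) % N]:
        stable.append(0.5 * (ts[k] + T * (k + 1) / N))
```

For these placeholder waveforms the only stable lag is anti-phase locking (θ ≈ T/2); varying the delay d shifts the zeros of G and hence the predicted locking mode, which is how the bifurcation diagrams of the main text are assembled.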

Excitatory-inhibitory network

In the thermodynamic limit, the network description of a pair of excitatory-inhibitory populations reduces to a set of coupled partial differential equations. Denoting qe(t, r) the probability density for an excitatory neuron to have at time t an age r, and qi(t, r) the corresponding density for the inhibitory population, the density profiles evolve according to the continuity equations: and The boundary conditions are given by and The total input current is still given by and the synaptic current Is(t) is computed as We can now define the corresponding bilinear form: Assuming the periodic solution, and , to be known, and performing computations similar to those presented in the adjoint section, we find that the PRC must solve: and similarly and Since an incoming perturbation must pass through the synapse, the corresponding component should be interpreted as the mPRC of the macroscopic oscillation. Two PRCs can therefore be defined at the same time: the PRC defined by corresponds to excitatory input arriving at the E-cells, while corresponds to excitatory input arriving at the I-cells.

The normalisation condition is now given by: with again T the oscillation period.

Complementary approach for conductance-based models

In this section, we recall another framework in use in Computational Neuroscience. In [44], the authors constructed a mapping between voltage-based models and the von Foerster equation. For instance, consider the integrate-and-fire model, see [52], C dV/dt = −G(V − VL) + h(t) + ση(t), together with a threshold VT and a reset Vr to account for an action potential. Here h(t) is the stimulus, C the capacitance, G the conductance, VL the reversal potential, and σ the scaling of the white noise η. It has been shown that this is equivalent to the following equations [52]: where u(t, r) is given by The boundary conditions of the two partial differential equations are given by and for u, it is given by Defining the corresponding bilinear form: and assuming the periodic solution, , to be known, computations similar to those presented in the adjoint section show that the mPRC must solve: and and where we have used the notation: Since an incoming perturbation must pass through the synapse, the corresponding component should be interpreted as the mPRC of the macroscopic oscillation. The normalisation condition is now given by: with again T the oscillation period.

Supporting information

S1 Python script. Python script to compute the solution of the mean-field equation and its associated adjoint.

https://doi.org/10.1371/journal.pcbi.1010363.s001

(ZIP)

References

  1. 1. Winfree A. The Geometry of Biological Time. Springer-Verlag, London, 1980.
  2. 2. Ashwin P., Coombes S., and Nicks R. Mathematical frameworks for oscillatory network dynamics in neuroscience. The Journal of Mathematical Neuroscience, 6(1):2, 2016. pmid:26739133
  3. 3. Nakao H. Phase reduction approach to synchronisation of nonlinear oscillators. Contemporary Physics, 57(2):188–214, 2016.
  4. 4. Achuthan S. and Canavier C. C. Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. Journal of Neuroscience, 29(16):5218–5233, 2009. pmid:19386918
  5. 5. Dumont G. and Gutkin B. Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits. PLOS Computational Biology, 15(5):1–34, 05 2019. pmid:31071085
  6. 6. Kirst C., Timme M., and Battaglia D. Dynamic information routing in complex networks. Nature Communications, 7:11061 EP–, 04 2016. pmid:27067257
  7. 7. Stiefel K. M. and Ermentrout G. B. Neurons as oscillators. Journal of Neurophysiology, 2016. pmid:27683887
  8. 8. Brown S. A., Kunz D., Dumas A., Westermark P. O., Vanselow K., Tilmann-Wahnschaffe A., Herzel H., and Kramer A. Molecular insights into human daily behavior. Proceedings of the National Academy of Sciences, 105(5):1602–1607, 2008. pmid:18227513
  9. 9. Brown E., Moehlis J., and Holmes P. On the phase reduction and response dynamics of neural oscillator populations. Neural Computation, 16(4):673–715, 2004. pmid:15025826
  10. 10. Buzsaki G. Rhythms of the Brain. Oxford University Press, 2006.
  11. 11. Kopell N., Kramer M., Malerba P., and Whittington M. Are different rhythms good for different functions? Frontiers in Human Neuroscience, 4:187, 2010. pmid:21103019
  12. 12. Akao A., Ogawa Y., Jimbo Y., Ermentrout G. B., and Kotani K. Relationship between the mechanisms of gamma rhythm generation and the magnitude of the macroscopic phase response function in a population of excitatory and inhibitory modified quadratic integrate-and-fire neurons. Phys. Rev. E, 97:012209, Jan 2018. pmid:29448391
  13. 13. Dumont G., Ermentrout G. B., and Gutkin B. Macroscopic phase-resetting curves for spiking neural networks. Phys. Rev. E, 96:042311, Oct 2017. pmid:29347566
  14. 14. Canavier C. C. Phase-resetting as a tool of information transmission. Current Opinion in Neurobiology, 31:206–213, 2015. SI: Brain rhythms and dynamic coordination. pmid:25529003
  15. 15. Sharpe F. and Lotka A. A problem in age-distribution. Philosophical Magazine, 6:435–438, 1911.
  16. 16. McKendrick A. Applications of mathematics to medical problems. Proc. Edinburgh Math. Soc., 44:98–130, 1926.
  17. 17. Castillo-Chavez C., Hethcote H. W., Andreasen V., Levin S. A., and Liu W. M. Epidemiological models with age structure, proportionate mixing, and cross-immunity. Journal of Mathematical Biology, 27(3):233–258, 1989. pmid:2746140
  18. 18. Franceschetti A. and Pugliese A. Threshold behaviour of an SIR epidemic model with age structure and immigration. Journal of Mathematical Biology, 57(1):1–27, 2008. pmid:17985131
  19. 19. Keyfitz B. and Keyfitz N. The McKendrick partial differential equation and its uses in epidemiology and population study. Mathematical and Computer Modelling, 26(6):1–9, 1997.
  20. 20. Billy F., Clairambault J., Delaunay F., Feillet C., and Robert N. Age-structured cell population model to study the influence of growth factors on cell cycle dynamics. Math Biosci Eng., 10(1):1–17, 2013. pmid:23311359
  21. 21. Billy F., Clairambaultt J., Fercoq O., Gaubertt S., Lepoutre T., and Ouillon T. Synchronisation and control of proliferation in cycling cell population models with age structure. Mathematics and Computers in Simulation, 96:66–94, 2014. Differential and Integral Equations with Applications in Biology and Medicine.
  22. 22. Gabriel P., Garbett S. P., Quaranta V., Tyson D. R., and Webb G. F. The contribution of age structure to cell population responses to targeted therapeutics. Journal of Theoretical Biology, 311:19–27, 2012. pmid:22796330
  23. 23. Betti M. I., Wahl L. M., and Zamir M. Age structure is critical to the population dynamics and survival of honeybee colonies. Royal Society Open Science, 3(11), 2016. pmid:28018627
  24. 24. Betti M. I., Wahl L. M., and Zamir M. Reproduction number and asymptotic stability for the dynamics of a honey bee colony with continuous age structure. Bulletin of Mathematical Biology, Jun 2017. pmid:28631108
  25. 25. Foerster H. V. Some remarks on changing populations. Kinetics of Cellular Proliferation, pages 382–399, 1959.
  26. 26. Britton N. Essential Mathematical Biology. Springer-Verlag, London, 2003.
  27. 27. Murray J. D. Mathematical Biology: an introduction. Interdisciplinary Applied Mathematics. Mathematical Biology, 2002.
  28. 28. Perthame B. Transport equation in biology. Birkhauser Verlag, Basel, 2007.
  29. 29. Gerstner W. and van Hemmen J. L. Associative memory in a network of spiking neurons. Network: Computation in Neural Systems, 3(2):139–164, 1992.
  30. 30. Chevallier J., Caceres M. J., Doumic M., and Reynaud-Bouret P. Microscopic approach of a time elapsed neural model. Mathematical Models and Methods in Applied Sciences, 25(14):2669–2719, 2015.
  31. 31. Pakdaman K., Perthame B., and Salort D. Dynamics of a structured neuron population. Nonlinearity, 23(1):55, 2010.
  32. 32. Gerstner W. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, 12(1):43–89, 2000. pmid:10636933
  33. 33. Deger M., Helias M., Cardanobile S., Atay F. M., and Rotter S. Nonequilibrium dynamics of stochastic point processes with refractoriness. Phys. Rev. E, 82:021129, Aug 2010. pmid:20866797
  34. 34. Pietras B., Gallice N., and Schwalger T. Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys. Rev. E, 102:022407, Aug 2020. pmid:32942450
  35. 35. Deger M., Schwalger T., Naud R., and Gerstner W. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation. Phys. Rev. E, 90:062704, Dec 2014. pmid:25615126
  36. 36. Dumont G., Payeur A., and Longtin A. A stochastic-field description of finite-size spiking neural networks. PLOS Computational Biology, 13(8):1–34, 08 2017. pmid:28787447
  37. 37. Meyer C. and Vreeswijk C. v. Temporal correlations in stochastic networks of spiking neurons. Neural Computation, 14(2):369–404, 2002. pmid:11802917
  38. 38. Schwalger T., Deger M., and Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLOS Computational Biology, 13(4):1–63, 04 2017. pmid:28422957
  39. 39. Gerstner W., Kistler W. M., Naud R., and Paninski L. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, Cambridge, 2014.
  40. 40. Dumont G., Henry J., and Tarniceriu C. O. Noisy threshold in neuronal models: connections with the noisy leaky integrate-and-fire model. Journal of mathematical biology, 73(6-7):1413–1436, 2016. pmid:27040970
  41. 41. Dumont G., Henry J., and Tarniceriu C. O. Theoretical connections between mathematical neuronal models corresponding to different expressions of noise. Journal of Theoretical Biology, 406:31–41, 2016. pmid:27334547
  42. 42. Dumont G., Henry J., and Tarniceriu C. O. A theoretical connection between the noisy leaky integrate-and-fire and the escape rate models: The non-autonomous case. Math. Model. Nat. Phenom., 15:59, 2020.
  43. 43. Chizhov A. V. Conductance-based refractory density model of primary visual cortex. Journal of Computational Neuroscience, 36(2):297–319, Apr 2014. pmid:23888313
  44. 44. Chizhov A. V. and Graham L. J. Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons. Phys. Rev. E, 75:011924, Jan 2007. pmid:17358201
  45. 45. Chizhov A. V. and Graham L. J. Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Phys. Rev. E, 77:011910, Jan 2008. pmid:18351879
  46. 46. Weber A. I. and Pillow J. W. Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models. Neural Computation, 29(12):3260–3289, 12 2017. pmid:28957020
  47. 47. Naud R. and Gerstner W. Coding and decoding with adapting neurons: A population approach to the peri-stimulus time histogram. PLOS Computational Biology, 8(10):1–14, 10 2012.
  48. 48. Schwalger T. and Chizhov A. V. Mind the last spike — firing rate models for mesoscopic populations of spiking neurons. Current Opinion in Neurobiology, 58:155–166, 2019. Computational Neuroscience. pmid:31590003
  49. 49. Dumont G. and Henry J. Synchronization of an Excitatory Integrate-and-Fire Neural Network. Bulletin of Mathematical Biology, 75: 629–648, 2013. pmid:23435645
  50. 50. Gutkin B., Ermentrout G. B., and Reyes A. D. Phase response curves give the responses of neurons to transient inputs. Journal of Neurophysiology, 94:1623–1635, 2005. pmid:15829595
  51. 51. Vreeswijk C. v., Abbott L., and Ermentrout G. B. When inhibition not excitation synchronizes neural firing. Journal of Computational Neuroscience, 1:313–321, 1994. pmid:8792237
  52. 52. Chizhov A. V. Conductance-based refractory density approach: comparison with experimental data and generalization to lognormal distribution of input current. Biological Cybernetics, 111:353–364, Aug 2017. pmid:28819690
  53. 53. Akam T., Oren I., Mantoan L., Ferenczi E., and Kullmann D. M. Oscillatory dynamics in the hippocampus support dentate gyrus-CA3 coupling. Nat Neurosci, 15(5):763–768, 05 2012. pmid:22466505
  54. 54. Gerhard F., Deger M., and Truccolo W. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs. PLOS Computational Biology, 13(2):1–31, 02 2017. pmid:28234899
  55. 55. Ermentrout G. B., Galan R. F., and Urban N. N. Relating neural dynamics to neural coding. Phys. Rev. Lett., 99:248103, Dec 2007. pmid:18233494
  56. 56. Ermentrout G. B. and Terman D. Mathematical foundations of neuroscience. Springer, 2010.
  57. 57. Pérez-Cervera A., Ashwin P., Huguet G., Seara T. M., and Rankin J. The uncoupling limit of identical Hopf bifurcations with an application to perceptual bistability. The Journal of Mathematical Neuroscience, 9(1):1–33, 2019. pmid:31385150
  58. 58. Pérez-Cervera A., Seara T. M., and Huguet G. Phase-locked states in oscillating neural networks and their role in neural communication. Communications in Nonlinear Science and Numerical Simulation, 80:104992, 2020.
  59. 59. Pérez-Cervera A., and Hlinka J. Perturbations both trigger and delay seizures due to generic properties of slow-fast relaxation oscillators. PLoS Computational Biology, 17.3 (2021): e1008521. pmid:33780437
  60. 60. Pariz A., Fischer I., Valizadeh A. and Mirasso C. Transmission delays and frequency detuning can regulate information flow between brain regions. PLOS Computational Biology, 17.4 (2021): e1008129. pmid:33857135
  61. 61. Reyner-Parra D. and Huguet G. Phase-locking patterns underlying effective communication in exact firing rate models of neural networks. PLOS Computational Biology, 18.5 (2022): e1009342. pmid:35584147
  62. 62. Torben-Nielsen B., Uusisaari M. and Stiefel K. M. A comparison of methods to determine neuronal phase-response curves. Frontiers in Neuroinformatics, 2010. pmid:20431724