
Network mechanisms underlying the role of oscillations in cognitive tasks

  • Helmut Schmidt ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft

    Current address: Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany

    Affiliations Centre de Recerca Matemàtica, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain, Barcelona Graduate School of Mathematics, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain

  • Daniele Avitabile,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Writing – original draft

    Affiliations School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2QL, United Kingdom, Inria Sophia Antipolis Méditerranée Research Centre, MathNeuro Team, 2004 route des Lucioles - Boîte Postale 93 06902 Sophia Antipolis, Cedex, France

  • Ernest Montbrió,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – original draft

    Affiliation Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, C. Ramon Trias Fargas 25 - 27, 08005 Barcelona, Spain

  • Alex Roxin

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – original draft

    Affiliations Centre de Recerca Matemàtica, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain, Barcelona Graduate School of Mathematics, Campus de Bellaterra Edifici C, 08193 Bellaterra, Barcelona, Spain


Oscillatory activity robustly correlates with task demands during many cognitive tasks. However, not only are the network mechanisms underlying the generation of these rhythms poorly understood, but it is also still unknown to what extent they may play a functional role, as opposed to being a mere epiphenomenon. Here we study the mechanisms underlying the influence of oscillatory drive on network dynamics related to cognitive processing in simple working memory (WM), and memory recall tasks. Specifically, we investigate how the frequency of oscillatory input interacts with the intrinsic dynamics in networks of recurrently coupled spiking neurons to cause changes of state: the neuronal correlates of the corresponding cognitive process. We find that slow oscillations, in the delta and theta band, are effective in activating network states associated with memory recall. On the other hand, faster oscillations, in the beta range, can serve to clear memory states by resonantly driving transient bouts of spike synchrony which destabilize the activity. We leverage a recently derived set of exact mean-field equations for networks of quadratic integrate-and-fire neurons to systematically study the bifurcation structure in the periodically forced spiking network. Interestingly, we find that the oscillatory signals which are most effective in allowing flexible switching between network states are not smooth, pure sinusoids, but rather burst-like, with a sharp onset. We show that such periodic bursts themselves readily arise spontaneously in networks of excitatory and inhibitory neurons, and that the burst frequency can be tuned via changes in tonic drive. Finally, we show that oscillations in the gamma range can actually stabilize WM states which otherwise would not persist.

Author summary

Oscillations are ubiquitous in the brain and often correlate with distinct cognitive tasks. Nonetheless their role in shaping network dynamics, and hence in driving behavior during such tasks is poorly understood. Here we provide a comprehensive study of the effect of periodic drive on neuronal networks exhibiting multistability, which has been invoked as a possible circuit mechanism underlying the storage of memory states. We find that oscillatory drive in low frequency bands leads to robust switching between stored patterns in a Hopfield-like model, while oscillations in the beta band suppress sustained activity altogether. Furthermore, inputs in the gamma band can lead to the creation of working-memory states, which otherwise do not exist in the absence of oscillatory drive.


Introduction

Oscillations are ubiquitous in neuronal systems and span temporal scales over several orders of magnitude [1]. Some prominent rhythms, such as occipital alpha waves during eye-closure [2] or slow oscillations during non-REM sleep [3] are indicative of a particular behavioral state. Other rhythms have been specifically shown to correlate with memory demands during working memory tasks, including theta (4–8Hz) [4–7], alpha/beta (8–30Hz) [8–10] and gamma (20–100Hz) [11–13]. Understanding the physiological origin and functional role of such oscillations is an area of active research.

Here we study how oscillatory signals in distinct frequency bands can serve to robustly and flexibly switch between different dynamical states in cortical circuit models of working memory and memory storage and recall. In doing so we characterize the dynamical mechanisms responsible for some of the computational findings in an earlier study [14]; we go beyond that work to include new results on oscillatory control of network states. Specifically, we consider the response of multistable networks of recurrently coupled spiking neurons to external oscillatory drive. We make use of recent theoretical advances in mean-field theory to reduce the spiking networks to a low-dimensional macroscopic description in terms of mean firing rate and membrane potential, which is exact in the limit of large networks [15]. This allows us to perform a systematic and detailed exploration of network states analytically or with numerical bifurcation analysis, which informs us about suitable parameter sets for numerical simulations. The latter serve to give representative examples of the dynamical phenomena investigated here. As a result, we can completely characterize the dynamics of the forced system.

Specifically, we consider networks which exhibit multistability in the absence of forcing. Such attracting network states have been proposed as the neural correlate of memory recall [16, 17], and as a possible mechanism for sustaining neuronal activity during working memory tasks [18–20]. We find that an external oscillatory drive interacts with such multistable networks in highly nontrivial ways. Low-frequency oscillations are effective in switching on states of elevated activity in simple bistable networks, while in higher dimensional multistable networks they allow for robust switching between stored memory states. Higher frequencies, in the beta range, destabilize WM states through a resonant interaction which recruits spike synchrony. Such oscillatory signals can therefore be used to clear memory buffers. Finally, when networks operate outside the region of multistability, e.g. due to reduced excitability, an oscillatory signal in the gamma range can be used to recover robust memory recall.


Results

Oscillatory drive can selectively turn on or off WM states

Networks of recurrently coupled excitatory neurons can exhibit bistability given sufficiently strong synaptic weights. Such networks act as binary switches: a transient input can cause a transition from a baseline state to a state of elevated activity, or vice-versa. We asked to what extent an oscillatory signal alone could also drive transitions between states in such a network. In particular we were interested in knowing if the directionality of the transition, and hence the final state of the system, could be controlled via the frequency of the oscillatory drive.

To investigate this we simulated a network of recurrently coupled excitatory quadratic integrate-and-fire neurons, see Methods for details. Fig 1 shows an illustration of the network dynamics as a function of the stimulus frequency and initial state of the network. In particular, at low frequencies, the oscillations push the system from the state of low activity into the state of high activity, which persists under such forcing, see Fig 1A. As the frequency is increased past a critical value, it is no longer effective in driving a transition, and the network remains in its initial state, see Fig 1B. A further increase then shows the opposite effect: The state of high activity becomes unstable under the forcing, whereas the state of low activity persists, Fig 1C. At large enough frequencies we then observe again that no transitions occur and the initial network state persists, Fig 1D.
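Simulations of this kind can be reproduced with a compact script. Below is a minimal sketch of the spiking network, assuming the QIF form of [15] with Lorentzian-distributed input currents; note that the synaptic weight J, the peak/reset value, and the short synaptic filter are assumptions on our part (the extracted parameter lists omit J), with J chosen so that the bistable rates land near the reported values.

```python
import numpy as np

def simulate_qif_network(N=500, T=0.5, dt=1e-4, tau=0.02, J=21.0,
                         eta_bar=-10.0, delta=2.0, v_peak=100.0,
                         forcing=None):
    """Forward-Euler simulation of N all-to-all coupled QIF neurons,
        tau * dV_i/dt = V_i**2 + eta_i + J*tau*r(t) + I(t),
    with spike/reset at +/- v_peak.  The eta_i follow deterministic
    Lorentzian quantiles (center eta_bar, half-width delta).
    Returns (t, r), where r(t) is a causally filtered population rate in Hz."""
    j = np.arange(1, N + 1)
    eta = eta_bar + delta * np.tan(0.5 * np.pi * (2 * j - N - 1) / (N + 1))
    V = -np.sqrt(np.clip(-eta, 1.0, None))   # start subthreshold, near rest
    tau_s = 1e-3          # short filter standing in for instantaneous synapses
    n_steps = int(T / dt)
    t = np.arange(n_steps) * dt
    r, rates = 0.0, np.empty(n_steps)
    for k in range(n_steps):
        I_ext = forcing(t[k]) if forcing is not None else 0.0
        V += dt / tau * (V * V + eta + J * tau * r + I_ext)
        spiked = V >= v_peak
        V[spiked] = -v_peak
        r += -r * dt / tau_s + spiked.sum() / (N * tau_s)
        rates[k] = r
    return t, rates
```

Started from rest and left unforced, the network settles into the low-activity branch; passing a callable as `forcing` reproduces the protocol of Fig 1.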

Fig 1. Frequency response of a network of QIF neurons.

Here we show the response of an excitatory network of 10⁴ all-to-all coupled QIF neurons with distributed input currents to periodic forcing. The model parameters are chosen such that the network is bistable, see also Fig 2A. Each panel shows the network-averaged firing rate (black: network of QIF neurons; orange: result of mean-field equations Eq 1) and raster plot of the response for an initial condition in the low-activity state (top, r ≈ 6Hz) and high-activity state (bottom, r ≈ 73Hz), as well as the oscillatory forcing I(t). A At low enough frequencies, the system is pushed from the low- to the high-activity state. B At slightly higher frequencies, both states persist under the forcing. C Driven with forcing from an intermediate range of frequencies, the state with high firing activity destabilizes in favor of the state with low firing activity. D At high frequencies, both states persist under the forcing. Parameters: τ = 20ms, η = −10, Δ = 2, , A = 1.

The results from Fig 1 show that the frequency of an external oscillatory drive can be used to selectively destabilize a given network state. For the parameter values used here, oscillation frequencies in the delta range result in a WM state while frequencies in the beta range force the system to the “ground” state, essentially clearing the WM state, a result seen also in [14]. Oscillations outside these ranges are ineffective in driving transitions. We seek to understand the mechanisms underlying these transitions, and additionally to determine to what extent the precise frequency ranges are influenced by the network parameters. To do this we will take advantage of recent work in which the authors derived a set of simple equations for the mean firing rate and mean membrane potential in a network of recurrently coupled quadratic integrate-and-fire (QIF) neurons [15]. In the large-system limit these equations are exact and fluctuations can be neglected. The exact correspondence between the low-dimensional mean-field equations and the original network allows us to use standard dynamical systems techniques to fully characterize the range of dynamical states in the network.

Model equations and network analysis

The dynamics in networks of recurrently coupled QIF neurons can be described exactly under the assumptions of all-to-all coupling and quenched neuronal variability, i.e. static distributions in cellular or network properties. For the case of a single network of excitatory cells in which the input currents to individual neurons are distributed, the resulting mean-field equations are [15]:

τ ṙ = Δ/(πτ) + 2rv,    τ v̇ = v² + η + Jτr + I(t) − (πτr)²    (1)

Here, r is the network average of the firing rate and v is the network average of the membrane potential; J is the strength of the synaptic weights. In the derivation of the mean-field equations each synaptic weight is scaled as 1/N, where N is the system size, leading to an order-one contribution to the mean input in the thermodynamic limit, whereas fluctuations vanish. η and Δ are, respectively, the center and width of the static distribution of inputs, which is considered to be Lorentzian. External, time-varying forcing is represented here by I(t). The time constant τ is the membrane time constant of the individual neurons and is set to 20ms throughout.
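The mean-field description lends itself to a few lines of code. A minimal sketch follows; the value of J is an assumption (it is omitted from the extracted parameter lists), chosen so that the two stable rates land near the reported ~6 Hz and ~73 Hz of Fig 1.

```python
import numpy as np

TAU, ETA, DELTA = 0.02, -10.0, 2.0
J = 21.0   # synaptic weight; assumed value, omitted in the extracted text

def phi(x):
    """Steady-state f-I curve of the mean-field model (cf. Eq 3)."""
    return np.sqrt((x + np.sqrt(x**2 + DELTA**2)) / 2.0) / (np.pi * TAU)

def fixed_point(r_init, n_iter=500):
    """Iterate the self-consistency condition r = Phi(eta + J*tau*r) (cf. Eq 2)."""
    r = r_init
    for _ in range(n_iter):
        r = phi(ETA + J * TAU * r)
    return r

def integrate_mf(r, v, forcing=lambda t: 0.0, T=1.0, dt=1e-4):
    """Forward-Euler integration of the mean-field equations (Eq 1)."""
    for k in range(int(T / dt)):
        dr = (DELTA / (np.pi * TAU) + 2.0 * r * v) / TAU
        dv = (v**2 + ETA + J * TAU * r + forcing(k * dt)
              - (np.pi * TAU * r)**2) / TAU
        r, v = r + dt * dr, v + dt * dv
    return r, v
```

Iterating from a small or a large initial rate returns the low- and high-activity fixed points respectively, and integrating Eq 1 from a perturbed initial condition confirms that both are attracting.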

This macroscopic model permits a straightforward investigation of the stationary states in the full network. For sufficiently strong synaptic coupling two stable fixed points co-exist over a range of mean external inputs, see Fig 2A. Linear stability analysis further reveals that the stable high-activity fixed point is a focus for sufficiently high rates, whereas the stable low-activity fixed point is a node [15]. The network therefore shows a damped oscillatory response to external perturbations in the high-activity state. This response reflects transient spike synchrony which decays over time due to the heterogeneity; the characteristic rate of the desynchronization is in fact proportional to the width of the distribution of input currents Δ [21]. This type of spike synchrony is seen ubiquitously in networks of both heterogeneous and noise-driven spiking neurons operating in the mean-driven regime, in which neurons fire as oscillators [15, 22, 23], and is captured in Eq 1 by the interplay between the mean sub-threshold membrane potential and mean firing rate [24].

Fig 2. Switching behavior at the macroscopic scale.

A Bifurcation analysis of the stationary states identifies a bistable regime for large enough J where a stable focus (red) and a stable node (blue) coexist, separated by a saddle (dotted green). The color-coded curve represents the bifurcation diagram for the value of J used here, and the grey curves represent the bifurcation diagrams at different values (left to right: 4J/3, 2J/3, J/3). B The different dynamic regimes of the forced system are shown here as a function of the amplitude A and the frequency f of the forcing. Green: Recall; Red: Clearance; Grey: no switching. Orange: only one globally stable periodic orbit exists due to the system being entrained to the forcing, hysteretically switching between the node and the focus. C Example time traces from B, with initial conditions chosen to be the focus (red) or the node (blue). D The heuristic firing-rate equation Eq 4 shows an equivalent fixed-point structure, with the exception that the focus is a node here. E Clearance does not occur in the firing-rate equations, as the node cannot be destabilized by nonlinear resonance. Parameters: τ = 20ms, η = −10, Δ = 2, , A = 1.

We use this macroscopic description to systematically investigate the network response to periodic forcing with amplitude A and frequency f, see Eq 8. Fig 2B shows a phase diagram of the network dynamics as a function of these two parameters. As in Fig 1 we keep track of the final state of the network as a function of the initial state. For sufficiently slow frequencies and over a range of amplitudes the network is always driven to the high-activity state (green). This region therefore corresponds to recall of the memory state, see Fig 2C (left). For an intermediate range of frequencies a sufficiently strong forcing always drives the network to the low-activity state (red), which corresponds to clearance, Fig 2C (right). The frequency band for clearance is essentially set by the frequency of intrinsic oscillations of the high-activity state, i.e. it is a resonant effect, see Fig 3. Weak forcing and forcing at very high frequencies fail to drive any transitions, while strong forcing at low enough frequencies can enslave the network dynamics entirely (orange). For the parameter values used here, recall occurs for frequencies below about 2Hz and clearance in the range 10–30Hz.

Fig 3. Linear and nonlinear response.

A Linear response of focus, saddle and node to sinusoidal and non-sinusoidal inputs, with the focus showing a characteristic resonant response at approximately 40Hz. The response of the focus to non-sinusoidal input shows additional sub-harmonic resonances. B Nonlinear response of the fixed points by means of bifurcation analysis in the forcing frequency for different amplitudes. Non-sinusoidal forcing leads to a richer bifurcation structure. C Bifurcation diagram in f for non-sinusoidal forcing with A = 1, and comparison with numerical results (bottom). The bifurcation structure is governed by saddle-node bifurcations (SN) and period-doubling bifurcations (PD). Branches of period-doubled solutions are omitted here. D A two-parameter bifurcation analysis of the focus reveals the loci of saddle-node bifurcations (red) and period-doubling bifurcations (blue) in the (f, A)-plane. We compare these with the logarithmic mean squared deviation (log MSD) from the fixed point (color scale), obtained by time simulations. Grey areas indicate regions where the system leaves the basin of attraction of the focus. Parameters: τ = 20ms, η = −10, , Δ = 2.

In order to characterize the role of spike synchrony in determining the network response, we derive a reduced firing rate equation with the identical fixed-point structure as in the original, exact mean-field equations Eq 1, but without the subthreshold dynamics. Specifically, the fixed-point value of the firing rate in Eq 1 can be written as

r = Φ(η + Jτr + I),    (2)

where Φ is the steady-state f-I curve, which in the case of Eq (1) is

Φ(x) = (1/(πτ)) √[(x + √(x² + Δ²))/2].    (3)

We use the steady-state f-I curve to construct a heuristic firing rate model given by

τ ṙ = −r + Φ(η + Jτr + I(t)),    (4)

and investigate its response to periodic forcing I(t). Eq 4 is similar in form to the classic Wilson-Cowan firing rate model for a single population [25]. In this case the high-activity branch of the firing rate is a node, i.e. it no longer shows damped oscillations in response to perturbations, see Fig 2D. Furthermore, the region of “clearance” has completely vanished in the phase diagram in Fig 2E, confirming that in the original network it was due to a resonance reflecting an underlying spike synchrony mechanism.
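The heuristic rate model Eq 4 takes only a few lines; the sketch below (with J assumed, since its value is omitted from the extracted parameters) makes the node-like behavior concrete: relaxation to the high-activity branch is monotone, with none of the ringing that underlies clearance.

```python
import numpy as np

TAU, ETA, DELTA = 0.02, -10.0, 2.0
J = 21.0   # assumed value; not given in the extracted text

def phi(x):
    """Steady-state f-I curve (cf. Eq 3)."""
    return np.sqrt((x + np.sqrt(x**2 + DELTA**2)) / 2.0) / (np.pi * TAU)

def rate_model(r0, forcing=lambda t: 0.0, T=0.5, dt=1e-4):
    """Heuristic rate model (Eq 4): tau*dr/dt = -r + Phi(eta + J*tau*r + I(t)).
    Being one-dimensional, it cannot show damped oscillations, hence no
    spike-synchrony resonance."""
    r, traj = r0, np.empty(int(T / dt))
    for k in range(traj.size):
        r += dt / TAU * (-r + phi(ETA + J * TAU * r + forcing(k * dt)))
        traj[k] = r
    return traj
```

Starting below the high-activity fixed point, the trajectory climbs monotonically toward it, in contrast to the spiral approach of the focus in Eq 1.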

Recall and clearance occur due to forcing-induced bifurcations

Given the simplicity of the mean-field equations Eq 1 we can calculate the linear response of the system analytically, without the need for extensive numerical simulations. The response of the focus to weak sinusoidal inputs (linear response) already shows a clear resonance for the high-activity state (Fig 3A), where the resonant frequency is

f_res = |Im λ|/(2π) = √[r₀² − Jr₀/(2π²τ)],    (5)

with r₀ the firing rate at the focus and λ the complex eigenvalues of the Jacobian of Eq 1, see Methods. Furthermore, additional, sub-harmonic resonance peaks occur when the forcing is sharply peaked, leading to a broadening of the resonance spectrum (Fig 3A, right); this effect is due to the presence of many sub-harmonics of the linear resonance in the forcing term itself. Conversely, the node does not show such a resonance, indicating a qualitative difference in the response of the two stable fixed points.
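The linear resonance can be computed directly from the Jacobian of the mean-field equations. A sketch follows; here J is an assumed value (omitted in the extracted parameter lists), so the resulting frequency near 35 Hz is indicative rather than the paper's reported ~40 Hz.

```python
import numpy as np

TAU, ETA, DELTA = 0.02, -10.0, 2.0
J = 21.0   # assumed value (omitted in the extracted parameter lists)

r0 = 70.79                             # high-activity rate from r = Phi(eta + J*tau*r)
v0 = -DELTA / (2 * np.pi * TAU * r0)   # r-nullcline of Eq 1: dr/dt = 0

# Jacobian of Eq 1 evaluated at the fixed point (r0, v0)
Jac = np.array([[2 * v0 / TAU,                 2 * r0 / TAU],
                [J - 2 * np.pi**2 * TAU * r0,  2 * v0 / TAU]])
lam = np.linalg.eigvals(Jac)

f_res = np.abs(lam.imag).max() / (2 * np.pi)               # linear resonance
f_closed = np.sqrt(r0**2 - J * r0 / (2 * np.pi**2 * TAU))  # closed-form equivalent
```

The eigenvalues have a negative real part (the focus is stable) and a nonzero imaginary part whose magnitude sets the resonance; the closed-form expression agrees with the numerical eigenvalues to machine precision.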

However, the switching behaviors seen in Figs 1 and 2 and the corresponding destabilization of network states cannot be attributed to this linear resonance alone—nonlinear effects have to be taken into account. This can be seen by plotting the bifurcation diagram for the response of the network to the forcing for several values of the forcing amplitude. For relatively weak but finite forcing, the network response consists of a periodic orbit in the vicinity of the corresponding unforced fixed-point, Fig 3B (top). As the forcing amplitude is increased, the resonance peak of the focus moves towards slower frequencies, akin to a softening spring. Then, a pair of saddle-node bifurcations leads to a range of frequencies in which three periodic orbits coexist, see Fig 3B middle-right.

At large enough amplitudes for the sharply-peaked, non-sinusoidal forcing two additional bifurcations occur which are responsible for the “recall” and “clearance” behaviors respectively, see Fig 3B (bottom-right) and Fig 3C. Specifically, for sufficiently large frequencies, the stable periodic orbit due to the low-activity node (blue line) coexists with the unstable one due to the saddle-point (green line), and with a third state, emanating from the focus (red line). When the forcing frequency is sufficiently small, only the latter solution persists. This can be understood as quasi-stationary response of the system due to the slow forcing, see the Methods section for details. In other words, the forcing here is slow and large enough to push the system beyond the bistable regime into the basin of attraction of the focus. Therefore, at low frequencies the only solution is the periodic orbit in the vicinity of the high-activity focus, which explains why low frequencies are effective in switching on the high-activity state, i.e. for “recall”.

On the other hand, in the range of frequencies over which the network response is resonant, period-doubling bifurcations of the focus lead to a frequency band in which all periodic orbits around the focus are unstable. This is due to the rapid occurrence of further period-doubling bifurcations, leading to the emergence of chaotic responses to the forcing. As we show in the Methods section, there exist narrow frequency bands in which these chaotic responses are stable, but a numerical investigation of these shows that they quickly become unstable as the frequency of the forcing is changed. Therefore, the periodic orbit in the vicinity of the low-activity node is the only stable solution. Frequencies in this range are therefore effective in switching off the high-activity state, i.e. for “clearance”. As we show in Fig 3D, the loci of the bifurcations undergone by periodic orbits around the focus explain well the parameter range in the (A, f)-plane in which clearance is observed, i.e. the red area in Fig 2B.

Higher-dimensional memory circuits

A single bistable network of neurons serves as a canonical illustration of a memory circuit. However, such a network can only store a single bit of information; actual memory circuits must be capable of storing more information. In terms of neuronal architecture this can be achieved by having a network which is comprised of several or many neuronal clusters [16, 17, 26]. We asked to what extent the frequency-selective switching behavior seen in a single bistable network could also be found in a clustered network. We look first at a simple, two-cluster network and then the more general case of a higher-dimensional multi-clustered network.

Two competing neuronal populations.

We set up a network of two identical populations with recurrent excitation and mutual inhibition, see Fig 4A. This network of two competing neuronal populations may be regarded as the substrate of a number of cognitive tasks, such as perceptual bistability (visual [27, 28], auditory [29], or olfactory [30]), or forced two-choice decision making [31, 32].

Fig 4. Switching in a network of two competing populations of neurons.

A We consider two identical populations with recurrent excitatory connections and mutual inhibition. B Bifurcation diagram of the fixed points of the system. The system can be in a symmetric state (black) or asymmetric state (grey). We choose a point in the tri-stable regime (η = −6, vertical line), where either both populations are quiescent, or one population is active and the other quiescent. The insets show the stable states (two asymmetric, one symmetric). C Applying global forcing with slow frequency (2Hz) does not lead to the activation of either of the asymmetric patterns, due to the lack of symmetry breaking mechanisms. D Driving the system with independent noise sources (zero-mean Ornstein-Uhlenbeck process) with small noise amplitude σ does not lead to reliable switching due to long residence times. E Combining noise with a protocol that generates oscillations of different frequencies over different time intervals leads to the reliable (but random) activation of one of the two asymmetric patterns and switching between these at 2Hz, and the clearing of a sustained pattern at 40Hz. Parameters: η = −6, Δ = 2, , τ = 20ms, A = 2, σ = 0.05.

We choose parameters such that there is one stable fixed point at which both populations are in the low firing regime, and two stable fixed points in which one population is in the low firing regime and the other is in the high firing regime. The latter two are symmetric with respect to a swap of population indices, i.e. reflection symmetric, see the bifurcation diagram in Fig 4B. The system is placed near a sub-critical pitchfork bifurcation, where the former fixed point would become unstable.

With both populations being in the low firing regime, global oscillatory forcing does not generate switching behavior at any frequency, see Fig 4C. If the two populations are driven by weak, independent noise sources, we also fail to observe any switching on relevant time scales, see Fig 4D. However, the noise sources serve to break the symmetry of the system, and combining them with global oscillatory drive now allows for frequency-selective recall and clearance, as in the single-population network, see Fig 4E. Specifically, low frequency drive switches the network from a symmetric state to one in which one of the populations is active (2Hz stimulation in Fig 4E); continued low-frequency forcing generates ongoing stochastic switching between the two activated states. When this drive is released the currently active configuration is stabilized (between 40 and 60 seconds in Fig 4E). Finally, an intermediate range of frequencies is effective in clearing the currently held active state (40Hz stimulation in Fig 4E) and stabilizing the symmetric, low-activity state. An analysis of the bifurcation structure in this network as a function of forcing amplitude and frequency reveals that bifurcations analogous to those responsible for recall and clearance in the single-population model, i.e. Fig 3, also occur here.
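The independent noise sources here are zero-mean Ornstein-Uhlenbeck processes. A minimal generator is sketched below, using the exact one-step OU transition; the correlation time tau_n is an assumption, since the source does not state it.

```python
import numpy as np

def ou_noise(n_steps, dt=1e-4, sigma=0.05, tau_n=0.01, seed=1):
    """Zero-mean OU process with stationary standard deviation sigma.
    Uses the exact one-step transition density, so any dt is statistically
    valid (no Euler-Maruyama bias)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_n)
    b = sigma * np.sqrt(1.0 - a * a)
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()   # draw from the stationary law
    for k in range(1, n_steps):
        x[k] = a * x[k - 1] + b * rng.standard_normal()
    return x
```

Giving each population its own seed yields independent realizations; as in Fig 4, a small sigma breaks the reflection symmetry without driving transitions on its own.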

A many-cluster network.

Here we consider a network of 100 neuronal populations which interact via effective couplings that may be excitatory or inhibitory. The connectivity is chosen so that 10 distinct, random activity patterns are encoded; in each pattern five neuronal populations are active, i.e. the coding sparseness is 5%. The patterns and connectivity matrix are shown in Fig 5A and 5B respectively. Simulations again reveal a frequency-selective response of the network similar to the two-population model. Namely, low frequency inputs in the presence of weak noise switch on the activated state and allow for robust switching, while over a range of intermediate frequencies all activated states are cleared, see Fig 5C. The relative contribution of the neuronal populations that participate in a specific pattern to the activity of the entire network is shown in Fig 5D.
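Such a connectivity matrix can be generated with a standard covariance rule. The paper does not print its exact prescription, so the normalization below, and the overall scale J0 (matching the J = 10 in the Fig 5 caption), should be read as one plausible choice; the resulting couplings take both signs, and active units of every stored pattern receive a larger recurrent input than inactive ones.

```python
import numpy as np

def hopfield_weights(n_pop=100, n_pat=10, n_active=5, J0=10.0, seed=2):
    """Effective couplings storing sparse binary patterns (covariance rule).
    With coding level a = n_active/n_pop, entries of W take both signs,
    i.e. the effective interactions may be excitatory or inhibitory."""
    rng = np.random.default_rng(seed)
    a = n_active / n_pop                      # coding sparseness (here 5%)
    xi = np.zeros((n_pat, n_pop))
    for mu in range(n_pat):
        xi[mu, rng.choice(n_pop, size=n_active, replace=False)] = 1.0
    W = (J0 / (n_pop * a * (1.0 - a))) * (xi - a).T @ (xi - a)
    np.fill_diagonal(W, 0.0)                  # no self-coupling
    return W, xi
```

The patterns may overlap, as in Fig 5A; the crosstalk they induce stays small relative to the retrieval signal at this loading.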

Fig 5. Hopfield network with random patterns.

A A network of 100 neural populations is chosen to encode ten patterns with five populations each. The patterns can overlap. B The corresponding connectivity matrix of the network. C We apply an activation/deactivation protocol. The encoded patterns are randomly activated in the presence of slow oscillations (2Hz), sustained in the absence of oscillations (grey, from 60s to 80s), and deactivated in the presence of fast oscillations (40Hz). All populations have independent noise sources (Ornstein-Uhlenbeck) with amplitude σ. D Relative contribution of each pattern to the total activity of the network. Parameters: η = −8, Δ = 2, J = 10, τ = 20ms, A = 5, σ = 0.2.

Generating burst-like oscillations

Thus far we have treated oscillations as an extrinsic effect, i.e. we are agnostic as to their origin. To be effective for flexible control of memory states, the oscillatory forcing we have considered here must fulfill two requirements: First, it must have a broad range of possible frequencies, and secondly, it must have a burst-like shape. Here we show that a simple circuit comprised of interacting excitatory and inhibitory populations can satisfy both these requirements.
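The burst-like requirement can be made concrete with a simple waveform family. The paper's own forcing (its Eq 8) is not reproduced in this extract, so the von-Mises-shaped pulse train below is a stand-in: kappa sets the sharpness (kappa → 0 approaches a sinusoid), and subtracting the period average makes the drive zero-mean. Its spectrum carries strong harmonics of the fundamental, which is what produces the sub-harmonic resonances of Fig 3A.

```python
import numpy as np

def burst_forcing(t, A=1.0, f=20.0, kappa=10.0):
    """Zero-mean periodic pulse train with sharp onsets.
    exp(kappa*(cos - 1)) has period average exp(-kappa)*I0(kappa)
    (I0: modified Bessel function), which is subtracted so the
    forcing carries no DC component."""
    pulses = np.exp(kappa * (np.cos(2.0 * np.pi * f * t) - 1.0))
    return A * (pulses - np.exp(-kappa) * np.i0(kappa))
```

Feeding this waveform into the mean-field model in place of a sinusoid broadens the resonant response, since each harmonic of f can hit the intrinsic resonance of the focus.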

Specifically, we construct a network of QIF neurons consisting of an E-I circuit which spontaneously oscillates, and drives a downstream population of E cells, which itself is bistable, see Fig 6. Using the corresponding mean-field equations for the E-I circuit, we found a broad region of oscillatory states of the E-I network as a function of the mean external drive to the E and I populations, ηe and ηi respectively, see the phase diagram Fig 6B. By adjusting the external drive to the E and I populations alone we can tune the output frequency over an order of magnitude. This allows us to selectively switch the downstream network on and off, as shown in Fig 6C.

Fig 6. Forced switching with oscillations from an E-I network.

A The network is built such that an excitatory population (E1) and an inhibitory population (I) form a circuit that can generate oscillatory output via the excitatory population (E1), which is fed into another excitatory population (E2). The latter is in the bistable regime. B Bifurcation diagram of the E1-I-circuit in the parameters ηe and ηi. The organizing bifurcations are a pair of saddle-node bifurcations (SN) of the fixed points, and a Hopf branch (H) that connects to one of the saddle-node branches via a Bogdanov-Takens codimension-two point (BT). (Limit cycles are found below the Hopf branch). C Firing rates and raster plots of the population outputs as a result of the parameter tuning. Time traces of population E2 are portrayed for both stable initial conditions (node and focus). By choosing ηe and ηi accordingly, recall (ηe = −4.4, ηi = −18), clearance (ηe = −1, ηi = −5.5) and bistable response (ηe = 0, ηi = −2) can be observed. Other parameters: , , Δ = 2, τ = 20ms. Mean current of E2: η = −10.

Gamma oscillations can generate memory states

Outside the region of bistability (or multistability in the case of clustered networks), neuronal networks will relax to a single stationary state in response to a transient input. Here we show that this need not be the case if the network activity is subjected to ongoing oscillatory modulation.

As an illustration we take a single population of excitatory neurons with strong recurrent excitation, but insufficient tonic drive to place it in the region of bistability. As a result, the response of the network to a transient excitatory stimulus decays to baseline, as seen in Fig 7A (top). However, in the presence of an oscillatory input in the gamma range, which itself only very weakly modulates the network activity (Fig 7A, middle), the transient input now switches the network to an activated state with prominent gamma modulation, Fig 7A (bottom). Once the oscillations cease (green arrow) the activated state vanishes.

Fig 7. Forcing-induced bistability.

A Here we illustrate the interplay between high frequency forcing and a transient stimulus outside the bistable regime. In the absence of oscillations, a 40ms long stimulus with amplitude 6.8 (bar) does not produce sustained activity in the model system, neither do oscillations on their own. However, the combination of oscillations with the transient stimulus leads to sustained high activity, until the oscillations are turned off (arrow). B Loci of the saddle-node bifurcation representing the lower limit of the bistable area, as functions of η and the frequency of the forcing, for different amplitudes of forcing. The choice of parameters in A is indicated by a triangle. Parameters: η = −11.5, Δ = 2, , τ = 20ms, A = 2, f = 80Hz.

This phenomenon depends crucially on the presence of the spike-synchrony mechanism underlying the damped oscillatory response of the high-activity focus discussed earlier. Specifically, for the parameter values used in Fig 7 the only fixed-point solution which exists is the low-activity node. Nonetheless, oscillatory forcing at sufficiently high firing rates can still recruit and resonate with the damped oscillatory interaction between the mean firing rate and mean membrane potential in the network. The resulting resonant frequency can no longer be associated with the linear response of the focus as it is a fully nonlinear network property.

The phase diagram Fig 7B shows the regions of bistability given an oscillatory forcing, for different forcing amplitudes. For zero amplitude the curve corresponds to the saddle-node (SN) bifurcation of the unforced system (horizontal black line). Note that only sufficiently high frequencies allow for bistability given tonic inputs which place the network below the SN. Furthermore, there is a clear resonance in the range of 60–90Hz for these parameter values. As the forcing frequency f → ∞ the curves converge to the SN line of the unforced system. This is because the forcing we use has zero-mean and hence, given the low-pass filter property of neuronal networks, has no effect on the network dynamics at high frequencies.

Discussion
In this article we have studied the role of oscillations in switching or maintaining specific brain states. Specifically, we identified distinct frequency bands: delta, beta, and gamma with specific functional roles. This finding is especially intriguing given that the networks we study are relatively simple. Connectivity is all-to-all and neurons are exclusively excitatory. For the multi-population networks, interactions between populations are assumed to be mediated by fast inhibition, leading to a winner-take-all behavior. Furthermore, synaptic transmission is considered to be instantaneous, with the only relevant time scale being the membrane time constant (τ = 20ms). The susceptibility of the networks to forcing of distinct frequencies therefore does not depend on the presence of multiple time scales associated with intrinsic currents, synaptic kinetics or sub-classes of inhibitory cells. Rather, the key dynamic factors are: bistability or multistability due to recurrent excitatory reverberation, and transient spike synchrony in response to external drive. Given this, we expect to see the same phenomenology in more biophysically realistic networks as long as there is bistability and external noise sources are not too strong. Additionally, none of the mechanisms we study depend crucially on the specific choice of neuron model, at least for type I spiking models. The “switching-on” at low frequencies depends only on the presence of a saddle-node bifurcation, which is ubiquitous in networks of spiking neurons in the bistable regime. Similarly, the “switching-off” or “clearance” depends only on recruiting spike synchrony, which occurs readily in both integrate-and-fire models as well as conductance-based spiking models [24]. In fact, in the mean-driven regime spiking networks in general robustly exhibit a resonance to oscillatory inputs, which reflects the underlying synchrony mechanism [20, 23, 33].

In the region of bistability, low frequencies are effective in pushing the network into a high-activity state; for not too large amplitudes the network remains in the activated state on the downswing of the input. The cut-off frequency for this “recall” signal is determined by the escape time of the network from the vicinity of the saddle-node bifurcation in the low-activity state, and here is a few Hertz, see Fig 2A. In multi-stable networks, this same mechanism allows for robust switching between distinct memory states. On the other hand, frequencies in the beta range are effective in switching off the high-activity state by resonantly driving bouts of spike synchrony. The precise frequency range depends on network parameters, see Fig 8. In both cases the relevant frequency ranges scale with the membrane time constant of the neurons. Therefore, e.g. choosing a time constant τ = 10ms will simply stretch the x-axis of the phase diagram in Fig 2B by a factor of two. Finally, we showed that forcing in the gamma range can allow for robust working memory states which otherwise do not exist, i.e. the system sits outside the region of bistability with oscillatory forcing. This mechanism once again depends on resonantly recruiting spike synchrony.

Fig 8. Change of resonant frequency with model parameters.

We compute the (linear) resonant frequency of the saddle for parameter values in the bistable regime (delimited by black lines), specifically where “recall” occurs using non-sinusoidal forcing (delimited by red line). As “clearance” is caused by nonlinear resonance of the focus, the corresponding frequency band is found near the linear resonant frequency. At fixed values of J the resonant frequency varies approximately by a factor of two across the range of values of η. Other parameters: Δ = 2, τ = 20ms.

We find that non-sinusoidal, burst-like drive is most effective in switching the network state, see Fig 3A and 3B. In fact, this is precisely the type of oscillation which readily emerges in a simple E-I network. Furthermore, the oscillation frequency can be modulated over a wide range through changes in the tonic drive to the E-I circuit alone, see Fig 6. This means that the state of downstream memory networks can be flexibly controlled via an E-I circuit through global changes in excitability alone.

While here we have considered networks in which intrinsic oscillatory activity is due to transient spike synchronization, spiking networks can also generate oscillatory activity due to E-I and I-I loops, which can occur in the absence of strong spike synchrony. For example, networks of coupled excitatory (E) and inhibitory (I) spiking neurons readily generate oscillations via a Hopf bifurcation when excitation is sufficiently strong and fast [34, 35]. The E-I loop, and in particular the ratio of E to I time constants, largely sets the frequency of these oscillations, which tend to lie in the gamma range (30–100Hz). On the other hand, the I-I loop itself can underlie the generation of fast oscillations (>100Hz), the frequency of which is set by the inhibitory synaptic delay [35, 36]. Both the E-I and I-I loops contribute to the population frequency in E-I networks, with the E-I loop dominating when recurrent excitation is strong. Resonant responses to periodic stimuli due to the E-I loop in neuronal circuits have been studied in firing rate models [37–39] as well as in networks of LIF neurons [33].

Damped oscillatory activity due to the E-I loop can also arise in the high-activity state of the bistable regime of E-I networks [40]. In this scenario external periodic drive could also be used for “clearance” of the activated state by resonating with the E-I loop. While the phenomenology of this resonance would be similar to the resonance we have considered in this manuscript, the mechanism is nonetheless distinct as it does not involve spike synchrony. On the other hand, spike synchrony does robustly lead to resonances in E-I networks, as measured for example by the linear response [23]. In principle both resonances could be present in the bistable regime of E-I networks, allowing for an even more complex response to oscillatory input than we have studied here.

Current non-invasive brain-stimulation techniques, such as repetitive transcranial magnetic stimulation [41], or transcranial alternating current stimulation [42], apply transient oscillatory signals to large parts of the brain. Our study may be useful to investigate the impact of such signals on the dynamics of neuronal mass models and the psychological and behavioral effects of neuromodulation. Our results could also be of relevance for investigating the use of deep-brain stimulation to treat Parkinson’s disease [43] and (pharmacologically) treatment-resistant depression [44]. Although the model used here describes networks of spiking neurons with instantaneous synapses, future studies could also incorporate synaptic dynamics with appropriate time scales for excitatory and inhibitory transmission [24]. These time scales can be influenced by drugs, or (pathological) changes in neurotransmitters. The framework developed here may therefore serve as a tool to study the cause of functional deficiencies in synapse-related conditions, so-called “synaptopathies” [45, 46].

Materials and methods
Mathematical model

Neural mass and neural field models are an important tool for understanding macroscopic neuronal dynamics. Classical models include the Wilson-Cowan model [25, 47] and the Amari model [48, 49]. However, such macroscopic models of brain activity often represent a stark simplification of the actual dynamics, and miss important features of the spiking dynamics, such as spike synchronization. Recently, there have been advances in linking the microscopic and macroscopic dynamics of networks of spiking neurons [15, 23, 50–57].

We consider a neural mass model that was recently derived from networks of all-to-all coupled quadratic integrate-and-fire neurons in the thermodynamic limit [15], see Eq (1). To simplify the mathematical treatment, we rescale time by τ, i.e. time is measured in units of τ, thus eliminating τ from the equations:

ṙ = Δ/π + 2rv,

v̇ = v² + η − π²r² + Jr + I(t). (6)

Here, r represents the ensemble average of the firing rate of neurons, and v represents the ensemble average of the membrane potential. The parameters η and Δ represent the center and width of the Lorentzian distribution of time-invariant input currents into the neuronal ensemble, and J is the coupling constant between neurons. Time-varying external inputs are given by I(t). The original model (1) can then be recovered by the rescaling t → τt, r → r/τ. As we set τ = 20ms, r = 1 here corresponds to a firing rate of r = 50Hz in the full model.
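As a concrete illustration, the dimensionless system (6) can be integrated with a simple forward-Euler scheme. The parameter values below (η = −5, J = 15, Δ = 1) are illustrative choices in the bistable regime, not the parameter sets used in the figures; two initial conditions then settle onto the low- and high-activity branches, respectively:

```python
import math

def simulate(r0, v0, eta=-5.0, J=15.0, delta=1.0, T=50.0, dt=1e-3):
    """Forward-Euler integration of the dimensionless mean-field
    equations (Eq 6) with I(t) = 0."""
    r, v = r0, v0
    for _ in range(int(T / dt)):
        dr = delta / math.pi + 2.0 * r * v
        dv = v * v + eta - (math.pi * r) ** 2 + J * r
        r += dt * dr
        v += dt * dv
    return r, v

# Two initial conditions in the bistable regime end up on different branches.
r_low, _ = simulate(0.05, -2.0)    # relaxes to the low-activity node
r_high, _ = simulate(1.3, -0.15)   # spirals into the high-activity focus
print(r_low, r_high)
```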

Here, we consider I(t) to be T-periodic, i.e. I(t + T) = I(t). We distinguish between two types of input: sinusoidal input,

I(t) = A sin(2πt/T), (7)

and non-sinusoidal input,

I(t) = A [sinⁿ(πt/T) − γ], (8)

where we take n = 20 for the simulations presented in this paper. The parameter A represents the amplitude of the forcing. The constant γ is chosen such that ∫₀ᵀ I(t) dt = 0. We choose this type of zero-mean forcing to avoid any changes in network excitability which a tonic DC-offset might cause. In other words, the input models a reorganization of afferent spikes into periodic volleys without adding any additional spikes. In the non-sinusoidal case the spikes are more synchronized than in the sinusoidal case.
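The two input types can be sketched as follows, assuming the burst-like drive has the form A(sinⁿ(πt/T) − γ); the zero-mean offset γ is then the period-average of sinⁿ, which for even n equals the central binomial coefficient C(n, n/2)/2ⁿ. A numerical check confirms the zero-mean condition:

```python
import math

def forcing(t, T, A=1.0, n=20, sinusoidal=False):
    """Zero-mean T-periodic drive: sinusoidal (Eq 7) or burst-like (Eq 8)."""
    if sinusoidal:
        return A * math.sin(2.0 * math.pi * t / T)
    # gamma = period-average of sin^n(pi t / T) = C(n, n/2) / 2^n for even n.
    gamma = math.comb(n, n // 2) / 2.0 ** n
    return A * (math.sin(math.pi * t / T) ** n - gamma)

# Midpoint-rule average of the burst-like input over one period.
T = 1.0
steps = 100000
mean_burst = sum(forcing((k + 0.5) * T / steps, T) for k in range(steps)) / steps
print(mean_burst)   # close to zero
```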

We compare the full model equations with an equivalent heuristic firing rate equation, which preserves the fixed point structure but reduces the dynamical behavior. This is done by considering the stationary solutions, given by

v0 = −Δ/(2πr0), 0 = v0² + η − π²r0² + Jr0. (9)

Solving these equations for r is equivalent to solving Eq 2. Thus, the reduced heuristic firing rate equation can be expressed as

ṙ = −r + Φ(Jr + η), (10)

where the f-I function Φ(Jr + η) is given by Eq 3.
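A minimal sketch of locating the three solution branches by bisection on the fixed-point condition r = Φ(Jr + η), with Φ(x) = √(x + √(x² + Δ²))/(√2 π); the parameters η = −5, J = 15, Δ = 1 are again illustrative bistable values:

```python
import math

def phi(x, delta=1.0):
    """f-I curve of the QIF population: sqrt(x + sqrt(x^2 + Delta^2)) / (sqrt(2) pi)."""
    return math.sqrt(x + math.sqrt(x * x + delta * delta)) / (math.sqrt(2.0) * math.pi)

def fixed_point(lo, hi, eta=-5.0, J=15.0, tol=1e-12):
    """Bisection on F(r) = phi(J r + eta) - r over a bracketing interval [lo, hi]."""
    F = lambda r: phi(J * r + eta) - r
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Three branches of the bistable regime: low node, unstable middle (saddle), high focus.
r_low  = fixed_point(0.01, 0.2)
r_mid  = fixed_point(0.3, 0.5)
r_high = fixed_point(0.5, 1.5)
print(r_low, r_mid, r_high)
```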

Linear response

Ignoring transient dynamics, the response of the model equations to the external input I(t) is T-periodic as well, at least in the limit of small amplitudes A (an exception are period-doubled solutions, which are a nonlinear phenomenon only relevant at larger A). In this case the corresponding Fourier spectra of the firing rate r(t) and of the membrane potential v(t) are discrete:

r(t) = Σn rn e^(inωt), v(t) = Σn vn e^(inωt). (11)

For brevity of exposition we use here the angular frequency ω = 2πf instead of the ordinary frequency f. This approach describes the projection of solutions of r and v from a continuous space onto a discrete function space V, with orthogonal basis functions e^(inωt), n ∈ ℤ. The same Fourier decomposition applies to the input current I(t):

I(t) = Σn In e^(inωt). (12)

To determine the linear response of the model equations [58], we first carry out a Fourier decomposition of the system linearized around the fixed points given by r0 and v0:

inω rn = 2v0 rn + 2r0 vn, inω vn = (J − 2π²r0) rn + 2v0 vn + In. (13)

Solving this set of linear equations, we obtain

rn = 2r0 In / Dn, vn = (inω − 2v0) In / Dn, (14)

with

Dn = (inω − 2v0)² + ω0², (15)

where ω0 is the (angular) resonant frequency:

ω0 = √(4π²r0² − 2Jr0). (16)

The resonant frequency is state-dependent and changes with model parameters. Reintroducing the time scale τ, perturbations of the upper branch solution resonate at a frequency

f0 = √(4π²r0² − 2Jr0) / (2πτ), (17)

where r0 is the value of the (dimensionless) steady-state firing rate. This is true as long as the argument of the square root is positive. Therefore, as the firing rate decreases along the upper branch, for decreasing external input, the frequency decreases to zero, at which point the focus becomes a node. This point occurs before the saddle-node is reached unless Δ = 0, in which case it exactly coincides with the saddle-node.
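The resonance can be checked numerically: scanning the modulus of the linear response from Eqs (14)-(15) over frequency, the gain peaks close to ω0 of Eq (16). The steady-state value r0 below is an approximate upper-branch rate for the illustrative parameters η = −5, J = 15, Δ = 1 (an assumption, not a figure parameter set):

```python
import math

# Approximate upper-branch steady state for illustrative bistable parameters.
J, delta = 15.0, 1.0
r0 = 1.03
v0 = -delta / (2.0 * math.pi * r0)
w0 = math.sqrt(4.0 * math.pi ** 2 * r0 ** 2 - 2.0 * J * r0)   # Eq (16)

def rate_response(w):
    """Gain |r_1 / I_1| from Eqs (14)-(15): |2 r0 / ((i w - 2 v0)^2 + w0^2)|."""
    return abs(2.0 * r0 / ((1j * w - 2.0 * v0) ** 2 + w0 ** 2))

# Scan angular frequencies; the gain peaks near the resonant frequency w0.
ws = [0.01 * k for k in range(1, 1000)]
w_peak = max(ws, key=rate_response)
print(w0, w_peak)
```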

Fig 8 shows how the linear resonant frequency of the stable focus in the bistable regime of a network of excitatory QIF neurons varies as a function of the mean external input η and the strength of synaptic coupling J. Recall is not possible to the left of the red curve given the nonlinear forcing used here. This line is determined by setting Amin = Amax, see Eqs (26) and (27) further below.

The time-dependent linear response of the firing rate and the membrane potential is now given by

δr(t) = Σn rn e^(inωt), δv(t) = Σn vn e^(inωt). (18)

From this, we can derive the amplitude of the linear response of the firing rate, which for sinusoidal forcing reduces to

2|r1|, (19)

and analogously of the membrane potential. Alternatively, one can derive the time-averaged linear response (“power”) of the system:

⟨δr²⟩ = (1/T) ∫₀ᵀ δr(t)² dt = Σn |rn|², (20)

⟨δv²⟩ = (1/T) ∫₀ᵀ δv(t)² dt = Σn |vn|². (21)

Here we have made use of the orthogonality of the basis functions, and the fact that T = 2π/ω.

Numerical continuation

In order to exhaustively and accurately trace the bifurcations that occur in the model equations, we make use of AUTO 07p [59]. Since this software is designed to deal with autonomous systems, we recast the (non-autonomous) model Eq (1) into a set of autonomous equations:

ṙ = Δ/π + 2rv,

v̇ = v² + η − π²r² + Jr + I,

ẋ = x + ωy − (x² + y²)x,

ẏ = y − ωx − (x² + y²)y. (22)

The last two equations create the periodic stimulus x(t) = sin(ωt) in the model equations, via a stable limit cycle with x² + y² = 1. We distinguish the sinusoidal case,

I = Ax, (23)

and the non-sinusoidal case,

I = A [((1 − y)/2)^(n/2) − γ], (24)

which coincides with Eq (8) since sin²(πt/T) = (1 − cos(ωt))/2 = (1 − y)/2.

Continuation of the forced system is performed by starting from a known fixed point (r0, v0) at A = 0, and continuing solutions by increasing A up to the desired value. We use the L2-norm as a scalar measure to represent periodic solutions:

‖(r, v)‖₂ = [ (1/T) ∫₀ᵀ (r(t)² + v(t)²) dt ]^(1/2). (25)

When we perform such a one-parameter continuation, we represent solution branches by plotting the L2-norm against the parameter that is being varied. When we perform a two-parameter continuation, we plot the loci of bifurcations against the two parameters being varied.

Mechanisms underlying switching

Here we illustrate in greater detail the mechanisms underlying the “switching on” of activated network states (or simple switching between attractors in the case of a multi-stable network) at low frequencies, and the “switching off” of activated states at frequencies in the beta range.


The effect of low frequencies can be understood by considering the quasi-stationary response, namely how changes in the external drive alter the steady-state network solution. Fig 9A–9C show the steady-state bifurcation diagram for a single excitatory population of QIF neurons as a function of the mean external drive η.

Fig 9. Mechanisms of switching: Quasi-stationary response.

A At amplitudes below the critical range no switching occurs (A = 0.7). B Amplitude values within the critical range lead to switching (A = 1). C At amplitudes above the critical range the system undergoes periodic hysteretic switching (A = 1.3). D Bifurcation diagram of stationary states with critical values for saddle-node bifurcations (ηc1, ηc2) and the choice of model parameter (η0). E Normalized non-sinusoidal forcing over one period (A = 1), with minimum and maximum values indicated. Parameters: η = −10, , Δ = 2, τ = 20ms, f = 0.1Hz.

On top of these bifurcation diagrams we plot the firing rate of the forced system against the x-axis, which is η + I(t), as the forcing can be understood to be a time-varying mean input current into the system. This is to illustrate that at low frequencies the system remains close to the fixed points (except when it switches between them), hence the term “quasi-stationary”. Given a mean input which places the system within the region of bistability, a small-amplitude, low-frequency forcing fails to push the system past the low-activity saddle-node, see Fig 9A. In a range of forcing amplitudes the network switches to the high-activity state and remains on the upper solution branch, see Fig 9B, while for larger amplitudes the network activity becomes slaved to the forcing, Fig 9C. The range of suitable amplitudes depends on the model parameter η, which is situated in the bistable regime. For clarity, we denote the chosen parameter by η0. The bistable regime is delimited by two saddle-node bifurcations, which occur at ηc1 and ηc2, respectively. Thus we have ηc1 < η0 < ηc2. The range of amplitudes also depends on the shape of the forcing, shown in Fig 9E for the case A = 1. In this case, the forcing is characterized by its minimum value Imin and its maximum value Imax. We assume Imin < 0 < Imax. The minimal amplitude required to push the system from the node to the saddle is then given by

Amin = (ηc2 − η0) / Imax. (26)

The maximum amplitude, up to which the system stays on the upper branch, is given by

Amax = (ηc1 − η0) / Imin. (27)

If the system parameters are such that Amax ≤ Amin, then there is no amplitude range in which recall occurs, and increasing the amplitude leads directly to hysteresis.
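This amplitude window can be sketched numerically, assuming Amin = (ηc2 − η0)/Imax and Amax = (ηc1 − η0)/Imin as in the quasi-stationary argument above (Eqs 26-27). The saddle-node positions below are hypothetical values chosen for illustration, and the forcing extremes are those of the normalized burst-like input with n = 20:

```python
import math

# Extremes of the normalized burst-like forcing (A = 1, n = 20):
n = 20
gamma = math.comb(n, n // 2) / 2.0 ** n   # zero-mean offset
I_max = 1.0 - gamma                       # peak of the burst
I_min = -gamma                            # baseline between bursts

def amplitude_window(eta0, eta_c1, eta_c2):
    """Recall window: A_min pushes eta0 past the upper saddle-node eta_c2;
    A_max keeps the downswing above the lower saddle-node eta_c1."""
    A_min = (eta_c2 - eta0) / I_max
    A_max = (eta_c1 - eta0) / I_min
    return A_min, A_max

# Hypothetical saddle-node positions bracketing a bistable operating point.
A_min, A_max = amplitude_window(eta0=-10.0, eta_c1=-12.0, eta_c2=-9.3)
print(A_min, A_max)   # recall is possible only if A_min < A_max
```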


Fig 10 shows the details of the bifurcation structure of the periodically forced network which leads to the “clearance” behavior.

Fig 10. Mechanisms of switching: Nonlinear resonance.

A Bifurcation diagram of the focus with f as bifurcation parameter at A = 1. B Inset of A, with period-doubling bifurcations (orange dots) and emerging branches of period-doubled solutions shown. The period-doubling cascade gives rise to stable chaos (grey area), which becomes unstable at lower frequencies. C Example time series from B around the area where the chaotic attractor becomes unstable. Parameters: η = −10, , Δ = 2, τ = 20ms.

Specifically, a series of period-doubling bifurcations, a so-called period-doubling cascade, leads to the emergence of a chaotic orbit. This orbit is initially stable, but a further decrease of the frequency leads to global instability of the chaotic orbit, and the destabilization of the periodic orbit around the focus, see Fig 10A and 10B. The latter occurs just below a forcing frequency of f = 32.5Hz, see Fig 10B. In Fig 10C we show representative time traces for forcing frequencies of f = 32.45Hz and f = 32.5Hz. In the former case, the system leaves the forced focus in less than three seconds, whereas in the latter case the chaotic orbit persists for the whole simulation period of 10³ seconds. We infer from this that the critical frequency at which the chaotic orbit loses stability globally lies within this frequency range.

Memory networks

A natural extension of the single-population model is to consider a network of neural masses:

ṙn = Δ/π + 2rnvn,

v̇n = vn² + η − π²rn² + Σm Anmrm + I(t) + σξn(t), (28)

where the adjacency matrix A with entries Anm determines the connectivity structure between neural masses. The term σξn(t) describes an additional noise input, where σ is the noise amplitude, and ξn(t) is a random process specified below in Eq (31).

In this paper we consider two scenarios, the first of which is two neural populations with recurrent excitation and mutual inhibition. The adjacency matrix of such a network is given by

A = ( Je  Ji ; Ji  Je ), (29)

where Ji < 0 < Je.

In the second scenario, we examine the dynamics within a Hopfield network [16]. Rather than creating the network through learning algorithms, we build the network as follows. First, we choose the patterns that the network should encode and write them into an array U. Each column of this array represents one pattern, where we put 1 for populations that are active in this pattern, and 0 otherwise. As a result, the array U has the size N × Npat, where Npat is the number of patterns encoded, and N is the network size. Each pattern consists of Np active populations. The adjacency matrix of a network that encodes these patterns can then be constructed as follows [17],

Anm = min[ Σμ (Unμ − p)(Umμ − p), (1 − p)² ] − Q, (30)

with p = Np/N. The entries of A are capped at a maximum value of (1 − p)² to account for saturation effects in synaptic plasticity. Otherwise, the strength of connections in the network would steadily increase as patterns are added. The offset Q introduces global inhibition that stabilizes the encoded patterns. We set Q = 0.2.
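The construction can be sketched as follows; this is a sketch of the covariance rule described above (cap at (1 − p)², global offset Q), and the pattern array U below is a toy example, not one of the networks simulated in the paper:

```python
def build_adjacency(U, Q=0.2):
    """Covariance-rule weights from binary patterns: each entry is the
    pattern covariance sum, capped at (1 - p)^2, minus the offset Q."""
    N = len(U)             # network size (rows: populations)
    Npat = len(U[0])       # number of stored patterns (columns)
    Np = sum(U[n][0] for n in range(N))   # active populations per pattern
    p = Np / N
    cap = (1.0 - p) ** 2
    A = [[0.0] * N for _ in range(N)]
    for n in range(N):
        for m in range(N):
            s = sum((U[n][mu] - p) * (U[m][mu] - p) for mu in range(Npat))
            A[n][m] = min(s, cap) - Q
    return A

# Two non-overlapping patterns in a network of 10 populations, 2 active each.
U = [[1, 0], [1, 0], [0, 1], [0, 1]] + [[0, 0]] * 6
A = build_adjacency(U)
print(A[0][1], A[0][2])   # within-pattern weight exceeds the cross-pattern one
```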

Each population is subjected to an independent Ornstein-Uhlenbeck process ξn(t) to break the symmetry of the networks. The Ornstein-Uhlenbeck process is implemented as a Langevin equation:

τ dξn/dt = −ξn + √(2τ) ζn(t), (31)

where ζn(t) are independent Gaussian white noise sources, 〈ζn(t)ζm(t − s)〉 = δ(s)δmn, and τ is the characteristic time scale, which we set to τ = 20ms.
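An Euler-Maruyama sketch of this Langevin equation, under the assumption that the noise term is normalized so that ξn has unit stationary variance (the overall amplitude σ is then applied separately in Eq (28)):

```python
import math, random

def ou_path(tau=0.02, dt=1e-4, T=20.0, seed=1):
    """Euler-Maruyama for tau * dxi/dt = -xi + sqrt(2 tau) * zeta(t),
    normalized to unit stationary variance."""
    random.seed(seed)
    xi, out = 0.0, []
    for _ in range(int(T / dt)):
        xi += dt * (-xi / tau) + math.sqrt(2.0 * dt / tau) * random.gauss(0.0, 1.0)
        out.append(xi)
    return out

path = ou_path()
m = sum(path) / len(path)
var = sum((x - m) ** 2 for x in path) / len(path)
print(m, var)   # mean near 0, variance near 1
```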

E-I circuit generating oscillations

To create a network that generates oscillations, we consider a network of an excitatory population interacting with an inhibitory one:

ṙe = Δ/π + 2reve,

v̇e = ve² + ηe − π²re² + Jere + Jiri,

ṙi = Δ/π + 2rivi,

v̇i = vi² + ηi − π²ri² + Jere + Jiri. (32)

For simplicity, we choose Je = −Ji = J. The two populations differ in terms of the means of their tonic input currents, ηe and ηi. We vary these two parameters to identify the regime where stable oscillations exist, and to change the frequency of these oscillations.

A canonical model for nonlinear resonance in the bistable regime

In the network model, the high-activity branch of solutions in the bistable regime exhibits damped oscillations. Periodic external drive can resonate with these intrinsic oscillations, leading to destabilizing period-doubling bifurcations as seen in the previous section. Here we show that this mechanism is present in the simplest possible model exhibiting a saddle-node bifurcation and for which the upper branch becomes a focus:

ẋ = y, (33)

ẏ = μ − x² − axy + I(t). (34)

This model is a particular unfolding of the so-called Takens-Bogdanov normal form [60], for which there is no Hopf bifurcation, which is the relevant case for our network model. It is easily shown that a saddle-node bifurcation occurs in these equations at μ = 0 and that the fixed point solutions are x0 = ±√μ and y0 = 0 for μ > 0, see Fig 11A.

Fig 11. The nonlinear resonance is captured in a canonical normal form for a saddle-“focus”.

A Bifurcation diagram of fixed points of this system, giving rise to a stable focus (red) and an unstable saddle (dotted green). B Top: The bifurcation structure of solutions resulting from non-sinusoidal forcing with f as bifurcation parameter (A = 1). The area between vertical bars contains unstable period-doubled solutions, which is evidence of the existence of a chaotic attractor. Bottom: Inverse of the time T that x needs to reach an absolute value of 106, which is evidence that solutions diverge due to the instability of the chaotic orbit. C Comparison of the nonlinear response of the reduced system (left) with the full system (right). Parameters of normal form: μ = 2, a = 0.4. Parameters of full model: η = −10, , Δ = 2, τ = 20ms.

Furthermore, the solution x0 = −√μ is a saddle, and x0 = +√μ is a stable focus for which the frequency goes to zero smoothly as μ → 0. Fig 11B shows that in the forced system there is a range of frequencies for which there is no stable solution; in the normal form equation the solution diverges, while in the network model the system settles to a periodic orbit in the vicinity of the low-activity state. The instability is due to a series of period-doubling bifurcations, as in the full system. Furthermore, comparison of the phase diagram of the normal form equation with that of the full system shows that they are qualitatively similar, see Fig 11C. This indicates that the nonlinear resonance seen in the network of QIF neurons is a generic feature of any system with a stable focus in the vicinity of a saddle-node bifurcation.
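The linear stability of the two branches can be sketched as follows, assuming the unforced unfolding takes the form ẋ = y, ẏ = μ − x² − axy (one particular choice of damping term with no Hopf bifurcation); the eigenvalues then confirm a saddle at x0 = −√μ and a damped focus at x0 = +√μ:

```python
import math, cmath

def jacobian_eigs(x0, a=0.4):
    """Eigenvalues of the Jacobian [[0, 1], [-2*x0, -a*x0]] of the unforced
    normal form x' = y, y' = mu - x^2 - a*x*y at the fixed point (x0, 0)."""
    tr = -a * x0
    det = 2.0 * x0
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

mu = 2.0
l1, l2 = jacobian_eigs(math.sqrt(mu))     # upper branch: damped focus
s1, s2 = jacobian_eigs(-math.sqrt(mu))    # lower branch: saddle
print(l1, s1)
```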


  1. Buzsáki G. Rhythms of the brain. Oxford University Press; 2006.
  2. Berger H. Ueber das Elektroenkephalogramm des Menschen. Arch Psychiat Nerven. 1929;87:527–570.
  3. Steriade M, Nunez A, Amzica F. A novel slow (<1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. J Neurosci. 1993;13:3252–3265. pmid:8340806
  4. Gevins A, Smith ME, McEvoy L, Yu D. High-resolution EEG mapping of cortical activation related to working memory: effects of task difficulty, type of processing and practice. Cereb Cortex. 1997;7:374–385. pmid:9177767
  5. Tesche CD, Karhu J. Theta oscillations index human hippocampal activation during a working memory task. P Natl Acad Sci USA. 2000;97:919–924.
  6. Raghavachari S, Kahana MJ, Rizzuto DS, Caplan JB, Kirschen MP, Bourgeois B, et al. Gating of human theta oscillations by a working memory task. J Neurosci. 2001;21:3175–3183. pmid:11312302
  7. Lee H, Simpson GV, Logothetis NK, Rainer G. Phase locking of single neuron activity to theta oscillations during working memory in monkey extrastriate visual cortex. Neuron. 2005;45:147–156. pmid:15629709
  8. Jokisch D, Jensen O. Modulation of gamma and alpha activity during a working memory task engaging the dorsal or ventral stream. J Neurosci. 2007;27:3244–3251. pmid:17376984
  9. Spitzer B, Wacker E, Blankenburg F. Oscillatory correlates of vibrotactile frequency processing in human working memory. J Neurosci. 2010;30:4496–4502. pmid:20335486
  10. Wimmer K, Ramon M, Pasternak T, Compte A. Transitions between multiband oscillatory patterns characterize memory-guided perceptual decisions in prefrontal cortex. J Neurosci. 2015;36:489–505.
  11. Tallon-Baudry C, Bertrand O, Peronnet F, Pernier J. Induced gamma-band activity during the delay of visual short-term memory task in humans. J Neurosci. 1998;18:4244–4254. pmid:9592102
  12. Pesaran B, Pezaris JS, Sahani M, Mitra PP, Andersen RA. Temporal structure in neuronal activity during working memory in macaque parietal cortex. Nat Neurosci. 2002;5:805–811. pmid:12134152
  13. Howard MW, Rizzuto DS, Caplan JB, Madsen JR, Lisman J, Aschenbrenner-Scheibe R, et al. Gamma oscillations correlate with working memory load in humans. Cereb Cortex. 2003;13:1369–1374. pmid:14615302
  14. Dipoppa M, Gutkin BS. Flexible frequency control of cortical oscillations enables computations required for working memory. P Natl Acad Sci USA. 2013;110:12828–12833.
  15. Montbrió E, Pazó D, Roxin A. Macroscopic description for networks of spiking neurons. Phys Rev X. 2015;5:021028.
  16. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. P Natl Acad Sci USA. 1982;79:2554–2558.
  17. Tsodyks M, Feigel’man MV. The enhanced storage capacity in neural networks with low activity level. Europhys Lett. 1989;46:101–105.
  18. Amit DJ, Brunel N. Dynamics of a recurrent network of spiking neurons before and following learning. Network. 1997;8:373–404.
  19. Compte A, Brunel N, Goldman-Rakic PS, Wang XJ. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex. 2000;10:910–923. pmid:10982751
  20. Brunel N, Chance F, Fourcaud N, Abbott L. Effects of synaptic noise and filtering on the frequency response of spiking neurons. Phys Rev Lett. 2001;86:2186–2189. pmid:11289886
  21. Esnaola-Acebes JM, Roxin A, Avitabile D, Montbrió E. Synchrony-induced modes of oscillation of a neural field model. Phys Rev E. 2017;96:052407. pmid:29347806
  22. Neltner L, Hansel D, Mato G, Meunier C. Synchrony in heterogeneous neural networks. Neural Comp. 2000;12:1607–1641.
  23. Schaffer ES, Ostojic S, Abbott LF. A complex-valued firing-rate model that approximates the dynamics of spiking networks. PLOS Comput Biol. 2013;9:e1003301. pmid:24204236
  24. Devalle F, Roxin A, Montbrió E. Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks. PLOS Comput Biol. 2017;13:e1005881. pmid:29287081
  25. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24. pmid:4332108
  26. Dubreuil AM, Amit Y, Brunel N. Memory capacity of networks with stochastic binary synapses. PLOS Comput Biol. 2014;10:e1003727. pmid:25101662
  27. Tong F, Meng M, Blake R. Neural bases of binocular rivalry. Trends Cogn Sci. 2006;10:502–511.
  28. Bressloff PC, Webber MA. Neural field model of binocular rivalry waves. J Comput Neurosci. 2012;32:233–252. pmid:21748526
  29. Rankin J, Sussman E, Rinzel J. Neuromechanistic model of auditory bistability. PLOS Comput Biol. 2015;11:e1004555. pmid:26562507
  30. Zhou W, Chen D. Binaral rivalry between the nostrils and in the cortex. Curr Biol. 2009;19:1561–1565. pmid:19699095
  31. Bogacz R, Brown E, Moehlis J, Holmes P, Cohen JD. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol Rev. 2006;113:700–765. pmid:17014301
  32. Roxin A, Ledberg A. Neurobiological models of two-choice decision making can be reduced to a one-dimensional nonlinear diffusion equation. PLOS Comput Biol. 2008;4:e1000046. pmid:18369436
  33. Ledoux E, Brunel N. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Front Comput Neurosci. 2011;5:25.
  34. Ermentrout GB, Cowan JD. Temporal oscillations in neuronal nets. J Math Biol. 1979;7:265–280. pmid:224126
  35. Brunel N, Wang XJ. What determines the frequency of fast network oscillations with irregular neural discharges? J Neurophysiol. 2003;90:415–430. pmid:12611969
  36. Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comp. 1999;11:1621–1671.
  37. Verduzco-Flores S, Ermentrout B, Bodner M. From working memory to epilepsy: Dynamics of facilitation and inhibition in a cortical network. Chaos. 2009;19:015115. pmid:19335019
  38. Rule M, Stoffregen M, Ermentrout B. A Model for the Origin and Properties of Flicker-Induced Geometric Phosphenes. PLOS Comput Biol. 2011;7:e1002158. pmid:21980269
  39. Veltz R, Sejnowski TJ. Periodic Forcing of Inhibition-Stabilized Networks: Nonlinear Resonances and Phase-Amplitude Coupling. Neural Comput. 2015;27:2477–2509. pmid:26496044
  40. Roxin A, Compte A. Oscillations in the bistable regime of neuronal networks. Phys Rev E. 2016;94:012410. pmid:27575167
  41. Kobayashi M, Pascual-Leone A. Transcranial magnetic stimulation in neurology. Lancet Neurol. 2003;2:145–156. pmid:12849236
  42. Herrmann CS, Rach S, Neuling T, Strüber D. Transcranial alternating current stimulation: a review of the underlying mechanisms and modulation of cognitive processes. Front Hum Neurosci. 2013;7:279. pmid:23785325
  43. Benabid AL, Chabardes S, Mitrofanis J, Pollak P. Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson’s disease. Lancet Neurol. 2009;8:67–81. pmid:19081516
  44. Mayberg HS, Lozano AM, Voon V, McNeely HE, Seminowicz D, Hamani C, et al. Deep Brain Stimulation for Treatment-Resistant Depression. Neuron. 2005;45:651–660. pmid:15748841
  45. Brose N, O’Connor V, Skehel P. Synaptopathy: dysfunction of synaptic function? Biochem Soc Trans. 2010;38:443–444. pmid:20298199
  46. Lepeta K, Lourenco MV, Schweitzer BC, Martino Adami PV, Banerjee P, Catuara-Solarz S, et al. Synaptopathies: synaptic dysfunction in neurological disorders—A review from students to students. J Neurochem. 2016;138:785–805. pmid:27333343
  47. Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80. pmid:4767470
  48. Amari S. Homogeneous nets of neuron-like elements. Biol Cybern. 1975;17:211–220. pmid:1125349
  49. Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87. pmid:911931
  50. Ostojic S, Brunel N. From spiking neuron models to linear-nonlinear models. PLOS Comput Biol. 2011;7:e1001056. pmid:21283777
  51. Buice MA, Chow CC. Dynamic Finite Size Effects in Spiking Neural Networks. PLOS Comput Biol. 2013;9:e1002872. pmid:23359258
  52. Luke TB, Barreto E, So P. Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons. Neural Comput. 2013;25:3207–3234. pmid:24047318
  53. Laing CR. Derivation of a neural field model from a network of theta neurons. Phys Rev E. 2014;90:010901.
  54. Visser S, Van Gils SA. Lumping Izhikevich neurons. EPJ Nonlinear Biomed Phys. 2014;2:6.
  55. Mattia M. Low-dimensional firing rate dynamics of spiking neuron networks. arXiv preprint. 2016;arXiv:1609.08855.
  56. Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: from spiking neurons to interacting neural populations of finite size. PLOS Comput Biol. 2017;13:e1005507. pmid:28422957
  57. Augustin M, Ladenbauer J, Baumann F, Obermayer K. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLOS Comput Biol. 2017;13:e1005545. pmid:28644841
  58. Robinson PA, Rennie CJ, Rowe DL. Dynamics of large-scale brain activity in normal arousal states and epileptic seizures. Phys Rev E. 2002;65:041924.
  59. Doedel EJ, Champneys AR, Dercole F, et al. AUTO-07P: Continuation and Bifurcation Software for Ordinary Differential Equations; 2007.
  60. Wiggins S. Introduction to Applied Nonlinear Dynamical Systems and Chaos. 2nd ed. Springer; 2003.