Network mechanisms underlying the role of oscillations in cognitive tasks

Oscillatory activity robustly correlates with task demands during many cognitive tasks. However, not only are the network mechanisms underlying the generation of these rhythms poorly understood, but it is also still unknown to what extent they may play a functional role, as opposed to being a mere epiphenomenon. Here we study the mechanisms underlying the influence of oscillatory drive on network dynamics related to cognitive processing in simple working memory (WM) and memory recall tasks. Specifically, we investigate how the frequency of oscillatory input interacts with the intrinsic dynamics in networks of recurrently coupled spiking neurons to cause changes of state: the neuronal correlates of the corresponding cognitive process. We find that slow oscillations, in the delta and theta band, are effective in activating network states associated with memory recall. On the other hand, faster oscillations, in the beta range, can serve to clear memory states by resonantly driving transient bouts of spike synchrony which destabilize the activity. We leverage a recently derived set of exact mean-field equations for networks of quadratic integrate-and-fire neurons to systematically study the bifurcation structure in the periodically forced spiking network. Interestingly, we find that the oscillatory signals which are most effective in allowing flexible switching between network states are not smooth, pure sinusoids, but rather burst-like, with a sharp onset. We show that such periodic bursts themselves readily arise spontaneously in networks of excitatory and inhibitory neurons, and that the burst frequency can be tuned via changes in tonic drive. Finally, we show that oscillations in the gamma range can actually stabilize WM states which otherwise would not persist.


Introduction
Oscillations are ubiquitous in neuronal systems and span temporal scales over several orders of magnitude [1]. Some prominent rhythms, such as occipital alpha waves during eye closure [2] or slow oscillations during non-REM sleep [3], are indicative of a particular behavioral state. Other rhythms have been shown to correlate specifically with memory demands during working memory tasks, including theta (4-8Hz) [4][5][6][7], alpha/beta [8][9][10] and gamma [11][12][13]. Understanding the physiological origin and functional role of such oscillations is an area of active research.
Here we study how oscillatory signals in distinct frequency bands can serve to robustly and flexibly switch between different dynamical states in cortical circuit models of working memory and memory storage and recall. In doing so we characterize the dynamical mechanisms responsible for some of the computational findings in an earlier study [14]; we go beyond that work to include new results on oscillatory control of network states. Specifically, we consider the response of multistable networks of recurrently coupled spiking neurons to external oscillatory drive. We make use of recent theoretical advances in mean-field theory to reduce the spiking networks to a low-dimensional macroscopic description in terms of mean firing rate and membrane potential, which is exact in the limit of large networks [15]. This allows us to perform a systematic and detailed exploration of network states analytically or with numerical bifurcation analysis, which informs us about suitable parameter sets for numerical simulations. The latter serve to give representative examples of the dynamical phenomena investigated here. As a result, we can completely characterize the dynamics of the forced system. Specifically, we consider networks which exhibit multistability in the absence of forcing. Such attracting network states have been proposed as the neural correlate of memory recall [16,17], and as a possible mechanism for sustaining neuronal activity during working memory tasks [18][19][20]. We find that an external oscillatory drive interacts with such multistable networks in highly nontrivial ways. Low-frequency oscillations are effective in switching on states of elevated activity in simple bistable networks, while in higher dimensional multistable networks they allow for robust switching between stored memory states. Higher frequencies, in the beta range, destabilize WM states through a resonant interaction which recruits spike synchrony. 
Such oscillatory signals can therefore be used to clear memory buffers. Finally, when networks operate outside the region of multistability, e.g. due to reduced excitability, an oscillatory signal in the gamma range can be used to recover robust memory recall.
Results

Networks with sufficiently strong recurrent excitation can be bistable: a transient input can drive a transition from a baseline state to a state of elevated activity, or vice-versa. We asked to what extent an oscillatory signal alone could also drive transitions between states in such a network. In particular we were interested in knowing whether the directionality of the transition, and hence the final state of the system, could be controlled via the frequency of the oscillatory drive.
To investigate this we simulated a network of recurrently coupled excitatory quadratic integrate-and-fire (QIF) neurons, see Methods for details. Fig 1 shows an illustration of the network dynamics as a function of the stimulus frequency and the initial state of the network. At low frequencies, the oscillations push the system from the state of low activity into the state of high activity, which persists under such forcing, see Fig 1A. As the frequency is increased past a critical value, the forcing is no longer effective in driving a transition, and the network remains in its initial state, see Fig 1B. A further increase then shows the opposite effect: the state of high activity becomes unstable under the forcing, whereas the state of low activity persists, Fig 1C. At large enough frequencies we again observe that no transitions occur and the initial network state persists, Fig 1D. The results in Fig 1 show that the frequency of an external oscillatory drive can be used to selectively destabilize a given network state.

[Fig 1 caption] Each panel shows the network-averaged firing rate (black: network of QIF neurons; orange: result of the mean-field equations Eq 1) and a raster plot of the response, for an initial condition in the low-activity state (top, r ≈ 6Hz) and in the high-activity state (bottom, r ≈ 73Hz), as well as the oscillatory forcing I(t). A: At low enough frequencies, the system is pushed from the low- to the high-activity state. B: At slightly higher frequencies, both states persist under the forcing. C: Driven with forcing from an intermediate range of frequencies, the state with high firing activity destabilizes in favor of the state with low firing activity. D: At high frequencies, both states persist under the forcing. Parameters: τ = 20ms, η = −10, Δ = 2, J = 15√Δ.
For the parameter values used here, oscillation frequencies in the delta range result in a WM state while frequencies in the beta range force the system to the "ground" state, essentially clearing the WM state, a result seen also in [14]. Oscillations outside these ranges are ineffective in driving transitions. We seek to understand the mechanisms underlying these transitions, and additionally to determine to what extent the precise frequency ranges are influenced by the network parameters. To do this we will take advantage of recent work in which the authors derived a set of simple equations for the mean firing rate and mean membrane potential in a network of recurrently coupled quadratic integrate-and-fire (QIF) neurons [15]. In the large-system limit these equations are exact and fluctuations can be neglected. The exact correspondence between the low-dimensional mean-field equations and the original network allows us to use standard dynamical systems techniques to fully characterize the range of dynamical states in the network.

Model equations and network analysis
The dynamics in networks of recurrently coupled QIF neurons can be described exactly under the assumptions of all-to-all coupling and quenched neuronal variability, i.e. static distributions of cellular or network properties. For the case of a single network of excitatory cells in which the input currents to individual neurons are distributed, the resulting mean-field equations are [15]:

τ dr/dt = Δ/(πτ) + 2 r v,
τ dv/dt = v² + η + J τ r + I(t) − (π τ r)².    (Eq 1)

Here, r is the network average of the firing rate, v is the network average of the membrane potential, and J is the strength of the synaptic weights. In the derivation of the mean-field equations each synaptic weight is scaled as 1/N, where N is the system size, leading to an order-one contribution to the mean input in the thermodynamic limit, whereas fluctuations vanish. η and Δ are, respectively, the center and width of the static distribution of inputs, which is taken to be Lorentzian. External, time-varying forcing is represented by I(t). The time constant τ is the membrane time constant of the individual neurons and is set to 20ms throughout.
This macroscopic model permits a straightforward investigation of the stationary states of the full network. For sufficiently strong synaptic coupling two stable fixed points co-exist over a range of mean external inputs, see Fig 2A. Linear stability analysis further reveals that the stable high-activity fixed point is a focus for sufficiently high rates, whereas the stable low-activity fixed point is a node [15]. The network therefore shows a damped oscillatory response to external perturbations in the high-activity state. This response reflects transient spike synchrony which decays over time due to the heterogeneity; the characteristic rate of desynchronization is in fact proportional to the width of the distribution of input currents Δ [21]. This type of spike synchrony is seen ubiquitously in networks of both heterogeneous and noise-driven spiking neurons operating in the mean-driven regime, in which neurons fire as oscillators [15,22,23], and is captured in Eq 1 by the interplay between the mean subthreshold membrane potential and the mean firing rate [24].
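The fixed-point structure described above can be checked directly. The following minimal, stdlib-only sketch works in the rescaled, dimensionless form of the equations (time in units of τ, see Methods) with η = −10, Δ = 2 and J = 15√Δ, the values quoted in the figure captions; it locates the stationary rates by bisection and classifies each fixed point via the eigenvalues of the Jacobian, λ = 2v₀ ± √(2r₀(J − 2π²r₀)). The recovered rates, roughly 6Hz and 73Hz after rescaling by τ = 20ms, match the two stable states shown in Fig 1.

```python
import math

# Parameters as in Figs 1 and 2 (dimensionless units, time in units of tau)
eta, Delta = -10.0, 2.0
J = 15.0 * math.sqrt(Delta)

def G(r):
    """Fixed-point condition: v0 = -Delta/(2*pi*r) inserted into dv/dt = 0."""
    return math.pi**2 * r**2 - (Delta / (2 * math.pi * r))**2 - eta - J * r

def bisect(f, a, b):
    """Simple bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(m) * fa <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Three fixed points: low-activity node, saddle, high-activity focus
r_low  = bisect(G, 0.01, 0.3)
r_sad  = bisect(G, 0.3, 1.0)
r_high = bisect(G, 1.0, 3.0)

def classify(r0):
    """Classify a fixed point from the eigenvalues 2*v0 +/- sqrt(disc)."""
    v0 = -Delta / (2 * math.pi * r0)
    disc = 2 * r0 * (J - 2 * math.pi**2 * r0)
    if disc < 0:
        return "stable focus" if 2 * v0 < 0 else "unstable focus"
    lam1, lam2 = 2 * v0 + math.sqrt(disc), 2 * v0 - math.sqrt(disc)
    if lam1 < 0 and lam2 < 0:
        return "stable node"
    return "saddle" if lam1 * lam2 < 0 else "unstable node"

tau = 0.02  # 20 ms; r/tau converts the dimensionless rate to Hz
for name, r0 in [("low", r_low), ("saddle", r_sad), ("high", r_high)]:
    print(f"{name}: r = {r0 / tau:.1f} Hz, {classify(r0)}")
```

With these parameters the low state is a stable node near 6Hz and the high state a stable focus near 73Hz, with an unstable saddle in between.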
We use this macroscopic description to systematically investigate the network response to periodic forcing with amplitude A and frequency f, see Eq 8. Fig 2B shows a phase diagram of the network dynamics as a function of these two parameters. As in Fig 1 we keep track of the final state of the network as a function of the initial state. For sufficiently slow frequencies and over a range of amplitudes the network is always driven to the high-activity state (green). This region therefore corresponds to recall of the memory state, see Fig 2C (left). For an intermediate range of frequencies a sufficiently strong forcing always drives the network to the low-activity state (red), which corresponds to clearance, Fig 2C (right). The frequency band for clearance is essentially set by the frequency of intrinsic oscillations of the high-activity state, i.e. it is a resonant effect, see Fig 3. Weak forcing and forcing at very high frequencies fail to drive any transitions, while strong forcing at low enough frequencies can enslave the network dynamics entirely (orange). For the parameter values used here recall occurs for frequencies below about 2Hz and clearance in the range between 10-30Hz.

[Fig 2 caption] Orange: only one globally stable periodic orbit exists due to the system being entrained to the forcing, hysteretically switching between the node and the focus. C: Example time traces from B, with initial conditions chosen to be the focus (red) or the node (blue). D: The heuristic firing-rate equations Eq 4 show an equivalent fixed-point structure, with the exception that the focus is a node here. E: Clearance does not occur in the firing-rate equations, as the node cannot be destabilized by nonlinear resonance.
Parameters: τ = 20ms, η = −10, Δ = 2, J = 15√Δ.

In order to characterize the role of spike synchrony in determining the network response, we derive a reduced firing-rate equation with the identical fixed-point structure as in the original, exact mean-field equations Eq 1, but without the subthreshold dynamics. Specifically, the fixed-point value of the firing rate in Eq 1 can be written as

r₀ = F(J r₀ + η + I),    (Eq 2)

where F is the steady-state f-I curve, which in the case of Eq 1 is (with time measured in units of τ, see Methods)

F(x) = (1/π) √[ (x + √(x² + Δ²)) / 2 ].    (Eq 3)

We use the steady-state f-I curve to construct a heuristic firing rate model given by

dr/dt = −r + F(J r + η + I(t)),    (Eq 4)

and investigate its response to periodic forcing I(t). Eq 4 is similar in form to the classic Wilson-Cowan firing rate model for a single population [25]. In this case the high-activity branch of the firing rate is a node, i.e. it no longer shows damped oscillations in response to perturbations, see Fig 2D. Furthermore, the region of "clearance" has completely vanished in the phase diagram in Fig 2E, confirming that in the original network it was due to a resonance reflecting an underlying spike-synchrony mechanism.
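The steady-state f-I curve F must reproduce the same stationary rates as the fixed-point condition of the full mean-field equations. A small stdlib-only consistency check (dimensionless units, rates in units of 1/τ; the closed form of F is reconstructed from the fixed-point condition of Eq 1) solves the self-consistency relation r = F(Jr + η) by bisection:

```python
import math

eta, Delta = -10.0, 2.0
J = 15.0 * math.sqrt(Delta)

def F(x):
    """Steady-state f-I curve of the QIF mean-field (rates in units of 1/tau)."""
    return math.sqrt((x + math.sqrt(x * x + Delta * Delta)) / 2.0) / math.pi

def H(r):
    """Self-consistency condition r = F(J r + eta) for a stationary state."""
    return r - F(J * r + eta)

def bisect(f, a, b):
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(m) * fa <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The three branches of the bistable f-I structure (brackets found by inspection)
r_low, r_sad, r_high = bisect(H, 0.01, 0.3), bisect(H, 0.3, 1.0), bisect(H, 1.0, 3.0)
print(r_low, r_sad, r_high)
```

The roots coincide with the fixed points of the full two-dimensional system, confirming that the heuristic rate model preserves the fixed-point structure while discarding the subthreshold dynamics.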

Recall and clearance occur due to forcing-induced bifurcations
Given the simplicity of the mean-field equations Eq 1 we can calculate the linear response of the system analytically, without the need for extensive numerical simulations. The response of the focus to weak sinusoidal inputs (linear response) already shows a clear resonance for the high-activity state (Fig 3A), where the (angular) resonant frequency, in units of 1/τ, is ω₀ = √[2r₀(2π²r₀ − J)], see Methods. Furthermore, additional, sub-harmonic resonance peaks occur when the forcing is sharply peaked, leading to a broadening of the resonance spectrum (Fig 3A, right); this effect is due to the presence of many sub-harmonics of the linear resonance in the forcing term itself. Conversely, the node does not show such a resonance, indicating a qualitative difference in the response of the two stable fixed points. However, the switching behaviors seen in Figs 1 and 2 and the corresponding destabilization of network states cannot be attributed to this linear resonance alone; nonlinear effects have to be taken into account. This can be seen by plotting the bifurcation diagram for the response of the network to the forcing for several values of the forcing amplitude. For relatively weak, but finite forcing, the network response consists of a periodic orbit in the vicinity of the corresponding unforced fixed point, Fig 3B (top). As the forcing amplitude is increased, the resonance peak of the focus moves towards slower frequencies, akin to a softening spring. Then, a pair of saddle-node bifurcations leads to a range of frequencies in which three periodic orbits coexist, see Fig 3B (middle-right).
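The qualitative difference between the two fixed points can be made concrete by evaluating the linear susceptibility |r₁/I₁| = |2r₀ / ((iω − 2v₀)² + ω₀²)| obtained from linearizing around a fixed point (see Methods). The sketch below does this for the two stable states of Fig 1 (r₀τ ≈ 1.456 and r₀τ ≈ 0.115, i.e. the ≈73Hz and ≈6Hz states); only the focus shows a resonance peak:

```python
import cmath, math

Delta = 2.0
J = 15.0 * math.sqrt(2.0)

def chi(omega, r0):
    """Magnitude of the linear susceptibility around a fixed point (dimensionless)."""
    v0 = -Delta / (2 * math.pi * r0)
    w0_sq = 2 * r0 * (2 * math.pi**2 * r0 - J)  # (angular resonant frequency)^2
    return abs(2 * r0 / ((1j * omega - 2 * v0)**2 + w0_sq))

omegas = [0.05 * k for k in range(1, 301)]       # omega in units of 1/tau
focus = [chi(w, 1.456) for w in omegas]          # high-activity state (focus)
node  = [chi(w, 0.115) for w in omegas]          # low-activity state (node)

w_peak = omegas[focus.index(max(focus))]
print("focus resonance at omega ~", w_peak)
print("node response decreases monotonically:",
      all(a >= b for a, b in zip(node, node[1:])))
```

The focus resonates near ω₀, while the node's response falls off monotonically with frequency, consistent with Fig 3A.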
At large enough amplitudes of the sharply peaked, non-sinusoidal forcing, two additional bifurcations occur which are responsible for the "recall" and "clearance" behaviors respectively, see Fig 3B (bottom-right) and Fig 3C. Specifically, for sufficiently large frequencies, the stable periodic orbit due to the low-activity node (blue line) coexists with the unstable one due to the saddle point (green line), and with a third state, emanating from the focus (red line). When the forcing frequency is sufficiently small, only the latter solution persists. This can be understood as a quasi-stationary response of the system to the slow forcing, see the Methods section for details. In other words, the forcing here is slow and large enough to push the system beyond the bistable regime into the basin of attraction of the focus. Therefore, at low frequencies the only solution is the periodic orbit in the vicinity of the high-activity focus, which explains why low frequencies are effective in switching on the high-activity state, i.e. for "recall".
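The quasi-stationary argument can be quantified by treating the forcing as a slowly varying DC input I and asking where each branch of fixed points disappears. A stdlib-only sketch (dimensionless units, η = −10, Δ = 2, J = 15√2 as above) locates the two saddle-node inputs; slow forcing whose peak exceeds the upper threshold necessarily deposits the system in the basin of the high-activity focus:

```python
import math

eta, Delta = -10.0, 2.0
J = 15.0 * math.sqrt(Delta)

def I_of_r(r):
    """DC input for which r is a fixed point (from dr/dt = dv/dt = 0)."""
    return math.pi**2 * r**2 - (Delta / (2 * math.pi * r))**2 - eta - J * r

def dI(r):
    """Derivative of I_of_r; its zeros are the saddle-node (fold) points."""
    return 2 * math.pi**2 * r + Delta**2 / (2 * math.pi**2 * r**3) - J

def bisect(f, a, b):
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(m) * fa <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

r_sn_low  = bisect(dI, 0.05, 0.5)  # fold where the low branch disappears
r_sn_high = bisect(dI, 0.5, 2.0)   # fold where the high branch disappears
I_on, I_off = I_of_r(r_sn_low), I_of_r(r_sn_high)
print(f"bistable for quasi-static inputs I in ({I_off:.2f}, {I_on:.2f})")
```

Slow, burst-like forcing with peak amplitude above the upper threshold transiently removes the low-activity state, implementing "recall"; an input below the lower threshold would likewise remove the high-activity state.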
On the other hand, in the range of frequencies over which the network response is resonant, period-doubling bifurcations of the focus lead to a frequency band in which all periodic orbits around the focus are unstable. This is due to the rapid occurrence of further period-doubling bifurcations, leading to the emergence of chaotic responses to the forcing. As we show in the Methods section, there exist narrow frequency bands in which these chaotic responses are stable, but a numerical investigation shows that they quickly become unstable as the frequency of the forcing is changed. Therefore, the periodic orbit in the vicinity of the low-activity node is the only stable solution. Frequencies in this range are therefore effective in switching off the high-activity state, i.e. for "clearance". As we show in Fig 3D, the loci of the bifurcations that periodic orbits around the focus undergo explain well the parameter range in the (A, f)-plane in which clearance is observed, i.e. the red area in Fig 2B.

Higher-dimensional memory circuits
A single bistable network of neurons serves as a canonical illustration of a memory circuit. However, such a network can only store a single bit of information; actual memory circuits must be capable of storing more information. In terms of neuronal architecture this can be achieved by having a network which is comprised of several or many neuronal clusters [16,17,26]. We asked to what extent the frequency-selective switching behavior seen in a single bistable network could also be found in a clustered network. We look first at a simple, two-cluster network and then the more general case of a higher-dimensional multi-clustered network.
Two competing neuronal populations. We set up a network of two identical populations with recurrent excitation and mutual inhibition, see Fig 4A. This network of two competing neuronal populations may be regarded as the substrate of a number of cognitive tasks, such as perceptual bistability (visual [27,28], auditory [29], or olfactory [30]), or forced two-choice decision making [31,32].
We choose parameters such that there is one stable fixed point at which both populations are in the low firing regime, and two stable fixed points in which one population is in the low firing regime and the other is in the high firing regime. The latter two are symmetric with respect to a swap of population indices, i.e. reflection symmetric, see the bifurcation diagram in Fig 4B. The system is placed near a sub-critical pitchfork bifurcation, where the former fixed point would become unstable.
With both populations in the low firing regime, global oscillatory forcing does not generate switching behavior at any frequency, see Fig 4C. If the two populations are driven by weak, independent noise sources, we also fail to observe any switching on relevant time scales, see Fig 4D. However, the noise sources serve to break the symmetry of the system, and combining them with global oscillatory drive now allows for frequency-selective recall and clearance, as in the single-population network, see Fig 4E. Specifically, low-frequency drive switches the network from a symmetric state to one in which one of the populations is active (2Hz stimulation in Fig 4E); continued low-frequency forcing generates ongoing stochastic switching between the two activated states. When this drive is released the currently active configuration is stabilized (between 40 and 60 seconds in Fig 4E). Finally, an intermediate range of frequencies is effective in clearing the currently held active state (40Hz stimulation in Fig 4E) and stabilizing the symmetric, low-activity state.

[Fig 4 caption] The insets show the stable states (two asymmetric, one symmetric). C: Applying global forcing with slow frequency (2Hz) does not lead to the activation of either of the asymmetric patterns, due to the lack of symmetry-breaking mechanisms. D: Driving the system with independent noise sources (zero-mean Ornstein-Uhlenbeck process) with small noise amplitude σ does not lead to reliable switching due to long residence times. E: Combining noise with a protocol that generates oscillations of different frequencies over different time intervals leads to the reliable (but random) activation of one of the two asymmetric patterns and switching between these at 2Hz, and the clearing of a sustained pattern at 40Hz. Parameters:
An analysis of the bifurcation structure in this network as a function of forcing amplitude and frequency reveals that bifurcations analogous to those responsible for recall and clearance in the single-population model, i.e. Fig 3, also occur here.
A many-cluster network. Here we consider a network of 100 neuronal populations which interact via effective interactions which may be excitatory or inhibitory. The connectivity is chosen so that 10 distinct, random activity patterns are encoded; in each pattern five neuronal populations are active, i.e. the coding sparseness is 5%. The patterns and connectivity matrix are shown in Fig 5A and 5B respectively. Simulations again reveal a frequency-selective response of the network similar to the two-population model. Namely, low frequency inputs in the presence of weak noise switch on the activated state and allow for robust switching, while over a range of intermediate frequencies all activated states are cleared, see Fig 5C. The relative contribution of the neuronal populations that participate in a specific pattern to the activity of the entire network is shown in Fig 5D.
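The text does not spell out the connectivity rule used to encode the patterns, but a standard Hebbian-type construction for sparse binary patterns illustrates the setup. The sketch below is a hypothetical, minimal version (the actual matrix of Fig 5B may well differ): it stores P = 10 random patterns, each with 5 of N = 100 populations active, via effective couplings J_ab ∝ Σ_μ (ξ_a^μ − a₀)(ξ_b^μ − a₀) with coding level a₀ = 0.05, which naturally produces both excitatory and inhibitory entries.

```python
import random

random.seed(7)
N, P, K = 100, 10, 5      # populations, patterns, active populations per pattern
a0 = K / N                # coding level (sparseness), here 5%

# Random binary patterns: exactly K active populations out of N in each pattern
patterns = []
for _ in range(P):
    active = set(random.sample(range(N), K))
    patterns.append([1.0 if i in active else 0.0 for i in range(N)])

# Hebbian-type effective couplings between populations (hypothetical rule,
# not necessarily the one used in Fig 5B); no self-coupling on the diagonal
Jmat = [[0.0] * N for _ in range(N)]
for xi in patterns:
    for a in range(N):
        for b in range(N):
            if a != b:
                Jmat[a][b] += (xi[a] - a0) * (xi[b] - a0)

n_inh = sum(1 for row in Jmat for x in row if x < 0)
n_exc = sum(1 for row in Jmat for x in row if x > 0)
print("patterns stored:", P, "| inhibitory entries:", n_inh,
      "| excitatory entries:", n_exc)
```

Populations that are co-active in a stored pattern end up effectively excitatory to one another, while active/inactive pairs are mutually inhibitory, which is the ingredient needed for pattern-selective multistability.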

Generating burst-like oscillations
Thus far we have treated oscillations as an extrinsic effect, i.e. we are agnostic as to their origin. To be effective for flexible control of memory states, the oscillatory forcing we have considered here must fulfill two requirements: first, it must be available over a broad range of frequencies, and second, it must have a burst-like shape. Here we show that a simple circuit comprised of interacting excitatory and inhibitory populations can satisfy both requirements. Specifically, we construct a network of QIF neurons consisting of an E-I circuit which spontaneously oscillates and drives a downstream population of E cells, which itself is bistable, see Fig 6. Using the corresponding mean-field equations for the E-I circuit, we found a broad region of oscillatory states of the E-I network as a function of the mean external drive to the E and I populations, η_e and η_i respectively, see the phase diagram in Fig 6B. By adjusting the external drive to the E and I populations alone we can tune the output frequency over an order of magnitude. This allows us to selectively switch the downstream network on and off, as shown in Fig 6C.

Gamma oscillations can generate memory states
Outside the region of bistability (or multistability in the case of clustered networks), neuronal networks will relax to a single stationary state in the response to a transient input. Here we show that this need not be the case if the network activity is subjected to ongoing oscillatory modulation.
As an illustration we take a single population of excitatory neurons with strong recurrent excitation, but insufficient tonic drive to place it in the region of bistability. As a result, the response of the network to a transient excitatory stimulus decays to baseline, as seen in Fig 7A (top). However, in the presence of an oscillatory input in the gamma range, which by itself only very weakly modulates the network activity (Fig 7A, middle), the transient input now switches the network to an activated state with prominent gamma modulation, Fig 7A (bottom). Once the oscillations cease (green arrow) the activated state vanishes.
This phenomenon depends crucially on the presence of the spike-synchrony mechanism underlying the damped oscillatory response of the high-activity focus discussed earlier. Specifically, for the parameter values used in Fig 7 the only fixed-point solution which exists is the low-activity node. Nonetheless, oscillatory forcing at sufficiently high firing rates can still recruit and resonate with the damped oscillatory interaction between the mean firing rate and mean membrane potential in the network. The resulting resonant frequency can no longer be associated with the linear response of the focus as it is a fully nonlinear network property.
The phase diagram in Fig 7B shows the regions of bistability given an oscillatory forcing, for different forcing amplitudes. For zero amplitude the curve corresponds to the saddle-node (SN) bifurcation of the unforced system (horizontal black line). Note that only sufficiently high frequencies allow for bistability given tonic inputs which place the network below the SN. Furthermore, there is a clear resonance in the range of 60-90Hz for these parameter values. As the forcing frequency f → ∞ the curves converge to the SN line of the unforced system. This is because the forcing we use has zero mean and hence, given the low-pass filter property of neuronal networks, has no effect on the network dynamics at high frequencies.

Discussion
In this article we have studied the role of oscillations in switching or maintaining specific brain states. Specifically, we identified distinct frequency bands, delta, beta, and gamma, with specific functional roles. This finding is especially intriguing given that the networks we study are relatively simple. Connectivity is all-to-all and neurons are exclusively excitatory. For the multi-population networks, interactions between populations are assumed to be mediated by fast inhibition, leading to winner-take-all behavior. Furthermore, synaptic transmission is considered to be instantaneous, with the only relevant time scale being the membrane time constant (τ = 20ms). The susceptibility of the networks to forcing of distinct frequencies therefore does not depend on the presence of multiple time scales associated with intrinsic currents, synaptic kinetics or sub-classes of inhibitory cells. Rather, the key dynamical factors are: bistability or multistability due to recurrent excitatory reverberation, and transient spike synchrony in response to external drive. Given this, we expect to see the same phenomenology in more biophysically realistic networks as long as there is bistability and external noise sources are not too strong. Additionally, none of the mechanisms we study depend crucially on the specific choice of neuron model, at least for type I spiking models. The "switching-on" at low frequencies depends only on the presence of a saddle-node bifurcation, which is ubiquitous in networks of spiking neurons in the bistable regime. Similarly, the "switching-off" or "clearance" depends only on recruiting spike synchrony, which occurs readily in both integrate-and-fire models as well as conductance-based spiking models [24]. In fact, in the mean-driven regime spiking networks in general robustly exhibit a resonance to oscillatory inputs, which reflects the underlying synchrony mechanism [20,23,33].
In the region of bistability, low frequencies are effective in pushing the network into a high-activity state; for not too large amplitudes the network remains in the activated state on the downswing of the input. The cut-off frequency for this "recall" signal is determined by the escape time of the network from the vicinity of the saddle-node bifurcation in the low-activity state, and here is a few Hertz, see Fig 2A. In multistable networks, this same mechanism allows for robust switching between distinct memory states. On the other hand, frequencies in the beta range are effective in switching off the high-activity state by resonantly driving bouts of spike synchrony. The precise frequency range depends on network parameters, see Fig 8. In both cases the relevant frequency ranges scale with the membrane time constant of the neurons. Therefore, e.g. choosing a time constant τ = 10ms will simply stretch the x-axis of the phase diagram in Fig 2B by a factor of two. Finally, we showed that forcing in the gamma range can allow for robust working memory states which otherwise do not exist, i.e. when the system sits outside the region of bistability in the absence of oscillatory forcing. This mechanism once again depends on resonantly recruiting spike synchrony. We find that non-sinusoidal, burst-like drive is most effective in switching the network state, see Fig 3A and 3B. In fact, this is precisely the type of oscillation which readily emerges in a simple E-I network. Furthermore, the oscillation frequency can be modulated over a wide range through changes in the tonic drive to the E-I circuit alone, see Fig 6. This means that the state of downstream memory networks can be flexibly controlled via an E-I circuit through global changes in excitability alone.
While here we have considered networks in which intrinsic oscillatory activity is due to transient spike synchronization, spiking networks can also generate oscillatory activity through E-I and I-I loops, which can occur in the absence of strong spike synchrony. For example, networks of coupled excitatory (E) and inhibitory (I) spiking neurons readily generate oscillations via a Hopf bifurcation when excitation is sufficiently strong and fast [34,35]. The E-I loop, and in particular the ratio of E to I time constants, largely sets the frequency of these oscillations, which tend to lie in the gamma range. On the other hand, the I-I loop itself can underlie the generation of fast oscillations (>100Hz), the frequency of which is set by the inhibitory synaptic delay [35,36]. Both the E-I and I-I loops contribute to the population frequency in E-I networks, with the E-I loop dominating when recurrent excitation is strong. Resonant responses to periodic stimuli due to the E-I loop in neuronal circuits have been studied in firing-rate models [37][38][39] as well as in networks of LIF neurons [33].
Damped oscillatory activity due to the E-I loop can also arise in the high-activity state of the bistable regime of E-I networks [40]. In this scenario external periodic drive could also be used for "clearance" of the activated state by resonating with the E-I loop. While the phenomenology of this resonance would be similar to the resonance we have considered in this manuscript, the mechanism is nonetheless distinct as it does not involve spike synchrony. On the other hand, spike synchrony does robustly lead to resonances in E-I networks, as measured for example by the linear response [23]. In principle both resonances could be present in the bistable regime of E-I networks, allowing for an even more complex response to oscillatory input than we have studied here.
Current non-invasive brain-stimulation techniques, such as repetitive transcranial magnetic stimulation [41], or transcranial alternating current stimulation [42], apply transient oscillatory signals to large parts of the brain. Our study may be useful to investigate the impact of such signals on the dynamics of neuronal mass models and the psychological and behavioral effects of neuromodulation. Our results could also be of relevance for investigating the use of deep-brain stimulation to treat Parkinson's disease [43] and (pharmacologically) treatmentresistant depression [44]. Although the model used here describes networks of spiking neurons with instantaneous synapses, future studies could also incorporate synaptic dynamics with appropriate time scales for excitatory and inhibitory transmission [24]. These time scales can be influenced by drugs, or (pathological) changes in neurotransmitters. The framework developed here may therefore serve as a tool to study the cause of functional deficiencies in synapserelated conditions, so-called "synaptopathies" [45,46].

Mathematical model
Neural mass and neural field models are an important tool for understanding macroscopic neuronal dynamics. Classical models include the Wilson-Cowan model [25,47] and the Amari model [48,49]. However, such macroscopic models of brain activity often represent a stark simplification of the actual dynamics, and often miss important features of the spiking dynamics, such as spike synchronization. Recently, there have been advances in linking the microscopic and macroscopic dynamics of networks of spiking neurons [15,23,[50][51][52][53][54][55][56][57].
We consider a neural mass model that was recently derived from networks of all-to-all coupled quadratic integrate-and-fire neurons in the thermodynamic limit [15], see Eq 1. To simplify the mathematical treatment, we divide t by τ, which corresponds to measuring time in units of τ and eliminates τ from the equations:

dr/dt = Δ/π + 2 r v,
dv/dt = v² + η + J r + I(t) − π² r².

Here, r represents the ensemble average of the firing rate of the neurons, and v represents the ensemble average of the membrane potential. The parameters η and Δ represent the center and width of the Lorentzian distribution of time-invariant input currents into the neuronal ensemble, and J is the coupling constant between neurons. Time-varying external inputs are given by I(t). The original model Eq 1 can then be recovered by t → τt, r → r/τ. As we set τ = 20ms, r = 1 here corresponds to a firing rate of 50Hz in the full model.
Here, we consider I(t) to be T-periodic, i.e. I(t + T) = I(t). We distinguish between two types of input: a sinusoidal input, and a non-sinusoidal, burst-like input characterized by an exponent n, for which we take n = 20 in the simulations presented in this paper. The parameter A represents the amplitude of the forcing. The constant γ is chosen such that ∫₀ᵀ I(t)dt = 0. We choose this type of zero-mean forcing to avoid any changes in network excitability which a tonic DC offset might cause. In other words, the input models a reorganization of afferent spikes into periodic volleys without adding any additional spikes. In the non-sinusoidal case the spikes are more synchronized than in the sinusoidal case.
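The precise burst waveform is elided above; as a hedged illustration, a high power of a sinusoid is one simple way to obtain sharp periodic volleys of the kind described, with γ fixed numerically by the zero-mean condition. The waveform sin(πt/T)ⁿ is a hypothetical stand-in, not necessarily the paper's exact choice.

```python
import math

def make_forcing(A, T, n=20, sinusoidal=False, k=2000):
    """Return a T-periodic, zero-mean forcing I(t).

    The burst-like waveform sin(pi t / T)**n is a hypothetical stand-in
    for the paper's non-sinusoidal input: for large n it concentrates the
    drive into sharp periodic volleys. gamma is the time average of the
    raw waveform, so that the integral of I over one period vanishes.
    """
    if sinusoidal:
        return lambda t: A * math.sin(2.0 * math.pi * t / T)
    raw = lambda t: math.sin(math.pi * t / T) ** n
    # midpoint rule: exact here, since sin**n has finitely many harmonics
    gamma = sum(raw((i + 0.5) * T / k) for i in range(k)) / k
    return lambda t: A * (raw(t) - gamma)

I = make_forcing(A=1.0, T=0.25)
# zero-mean check on a finer grid
mean = sum(I((i + 0.5) * 0.25 / 4000) for i in range(4000)) / 4000
```

Between bursts the input sits slightly below zero, so the periodic volleys add no net drive.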
We compare the full model equations with an equivalent heuristic firing rate equation, which preserves the fixed-point structure but reduces the dynamical behavior. This is done by considering the stationary solutions, given by

0 = Δ/π + 2rv,
0 = v² + η + Jr − π²r².

Solving these equations for r is equivalent to solving Eq 2. Thus, the reduced heuristic firing rate equation can be expressed as

dr/dt = −r + F(Jr + η + I(t)),

where the f-I function F(Jr + η) is given by Eq 3.
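A sketch of this reduction, assuming the f-I curve F(x) = √(x + √(x² + Δ²))/(√2 π) that follows from the stationary solutions: iterating r ↦ F(Jr + η) converges to a rate that also satisfies the stationarity conditions of the full model (η = −2, J = 15 are illustrative values).

```python
import math

J, Delta, eta = 15.0, 1.0, -2.0

def F(x):
    """Steady-state f-I curve of the QIF ensemble, derived from the
    stationary solutions of the mean-field equations."""
    return math.sqrt(x + math.sqrt(x * x + Delta * Delta)) / (math.pi * math.sqrt(2.0))

r = 1.0
for _ in range(200):          # fixed-point iteration of r = F(J r + eta)
    r = F(J * r + eta)

v = -Delta / (2.0 * math.pi * r)                 # from 0 = Delta/pi + 2 r v
residual = v * v + eta + J * r - (math.pi * r) ** 2   # second stationarity condition
```

The iteration contracts for this parameter choice, and the residual vanishes at convergence, confirming that the heuristic rate equation shares its fixed points with the full model.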

Linear response
Ignoring transient dynamics, the response of the model equations to the external input I(t) is T-periodic as well, at least in the limit of small amplitudes A (an exception are period-doubled solutions, which are a nonlinear phenomenon relevant only at larger A). In this case the corresponding Fourier spectra of the firing rate r(t) and of the membrane potential v(t) are discrete:

r(t) = Σₙ rₙ e^{inωt},   v(t) = Σₙ vₙ e^{inωt},   n ∈ ℤ.

For brevity of exposition we use here the angular frequency ω = 2πf instead of the ordinary frequency f. This approach describes the projection of the solutions r and v from a continuous space onto a discrete function space with orthogonal basis functions e^{inωt}, n ∈ ℤ. The same Fourier decomposition applies to the input current,

I(t) = Σₙ Iₙ e^{inωt}.

To determine the linear response of the model equations [58], we first carry out a Fourier decomposition of the system linearized around the fixed point given by r₀ and v₀:

inω rₙ = 2v₀rₙ + 2r₀vₙ,
inω vₙ = Jrₙ + Iₙ + 2v₀vₙ − 2π²r₀rₙ.

Solving this set of linear equations, we obtain

rₙ = 2r₀Iₙ / [(inω − 2v₀)² + ω₀²],   vₙ = (inω − 2v₀)Iₙ / [(inω − 2v₀)² + ω₀²],

where ω₀ is the (angular) resonant frequency,

ω₀² = 4π²r₀² − 2Jr₀.

The resonant frequency is state-dependent and changes with model parameters. Reintroducing the time scale τ, perturbations of the upper-branch solution resonate at a frequency

f₀ = √(4π²(τr₀)² − 2Jτr₀) / (2πτ),

where r₀ is the value of the steady-state firing rate. This holds as long as the argument of the square root is positive. Therefore, as the firing rate decreases along the upper branch for decreasing external input, the resonant frequency decreases to zero, at which point the focus becomes a node. This point occurs before the saddle-node is reached, unless Δ = 0, in which case it exactly coincides with the saddle-node.
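Since the linearization has equal diagonal entries, the imaginary part of the Jacobian eigenvalues at the upper fixed point equals ω₀ exactly, with ω₀² = 4π²r₀² − 2Jr₀ in dimensionless units. This identity can be checked numerically (η = −2, J = 15 are illustrative choices):

```python
import cmath, math

J, Delta = 15.0, 1.0

def upper_state(eta, r=1.0):
    """High-activity fixed point via iteration of r = F(J r + eta)."""
    for _ in range(400):
        x = J * r + eta
        r = math.sqrt(x + math.sqrt(x * x + Delta * Delta)) / (math.pi * math.sqrt(2.0))
    return r, -Delta / (2.0 * math.pi * r)

r0, v0 = upper_state(eta=-2.0)

# Jacobian of the linearized dynamics at (r0, v0):
#   [ 2 v0             2 r0 ]
#   [ J - 2 pi^2 r0    2 v0 ]
tr = 4.0 * v0
det = 4.0 * v0 * v0 - 2.0 * r0 * (J - 2.0 * math.pi ** 2 * r0)
lam = (tr + cmath.sqrt(tr * tr - 4.0 * det)) / 2.0

omega_eig = abs(lam.imag)                                     # from the eigenvalues
omega_formula = math.sqrt(4.0 * math.pi ** 2 * r0 ** 2 - 2.0 * J * r0)
```

With τ = 20ms this dimensionless ω₀ ≈ 5.8 corresponds to a resonance in the gamma range, around 46Hz.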
From this, we can derive the amplitude of the linear response of the firing rate, |rₙ|, and analogously that of the membrane potential, |vₙ|. Alternatively, one can derive the time-averaged linear response ("power") of the system:

P_r = (1/T) ∫₀ᵀ |r(t) − r₀|² dt = Σ_{n≠0} |rₙ|².

Here we have made use of the orthogonality of the basis functions, and of the fact that T = 2π/ω.

Numerical continuation
In order to exhaustively and accurately trace the bifurcations that occur in the model equations, we make use of AUTO-07p [59]. Since this software is designed to deal with autonomous systems, we recast the (non-autonomous) model Eq (1) into a set of autonomous equations by appending the auxiliary oscillator

dx/dt = x + ωy − x(x² + y²),
dy/dt = −ωx + y − y(x² + y²).

These last two equations create the periodic stimulus x(t) = sin(ωt) in the model equations. We distinguish the sinusoidal case and the non-sinusoidal case. Continuation of the forced system is performed by starting from a known fixed point (r₀, v₀) at A = 0, and continuing solutions by increasing A up to the desired value. We use the L²-norm as a scalar measure to represent periodic solutions:

‖(r, v)‖₂ = √( (1/T) ∫₀ᵀ [r(t)² + v(t)²] dt ).

Where we perform this one-parameter continuation, we represent solution branches by plotting the L²-norm against the parameter that is being varied. Where we perform two-parameter continuation, we plot the loci of bifurcations against the two parameters being varied.
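This recasting can be illustrated numerically: an auxiliary two-variable oscillator with an attracting unit-radius limit cycle generates the sinusoidal stimulus, and the L²-norm of a periodic solution is its root-mean-square over one period. The sketch below assumes the standard oscillator form used with AUTO; the forcing frequency is illustrative.

```python
import math

def step(x, y, om, dt):
    """One Euler step of the auxiliary oscillator appended for AUTO:
    its attracting limit cycle is x(t) = sin(om t), y(t) = cos(om t)."""
    r2 = x * x + y * y
    return x + dt * (x + om * y - x * r2), y + dt * (-om * x + y - y * r2)

om, dt = 2.0 * math.pi * 5.0, 1e-4   # 5 Hz forcing (illustrative)
x, y = 0.5, 0.5
for _ in range(200000):              # let transients decay onto the cycle
    x, y = step(x, y, om, dt)
radius = math.sqrt(x * x + y * y)

# L2-norm of the periodic signal (x, y) over one period T = 2 pi / om:
T = 2.0 * math.pi / om
n = int(T / dt)
acc = 0.0
for _ in range(n):
    x, y = step(x, y, om, dt)
    acc += x * x + y * y
l2 = math.sqrt(acc / n)
```

Any initial condition off the origin relaxes onto the unit circle, so the stimulus amplitude is parameter-independent and only the phase variable x enters the model equations.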

Mechanisms underlying switching
Here we illustrate in greater detail the mechanisms underlying the "switching on" of activated network states (or simple switching between attractors in the case of a multi-stable network) at low frequencies, and the "switching off" of activated states at frequencies in the beta range.
Recall. The effect of low frequencies can be understood by considering the quasi-stationary response, namely how changes in the external drive alter the steady-state network solution. Fig 9A-9C show the steady-state bifurcation diagram for a single excitatory population of QIF neurons as a function of the mean external drive η.
On top of these bifurcation diagrams we plot the firing rate of the forced system against η + I(t), as the forcing can be understood as a time-varying mean input current into the system. This illustrates that at low frequencies the system remains close to the fixed points (except when it switches between them), hence the term "quasi-stationary". Given a mean input which places the system within the region of bistability, a small-amplitude, low-frequency forcing fails to push the system past the low-activity saddle-node, see Fig 9A. In a range of forcing amplitudes the network switches to the high-activity state and remains on the upper solution branch, see Fig 9B, while for larger amplitudes the network activity becomes slaved to the forcing, Fig 9C. The range of suitable amplitudes depends on the model parameter η, which is situated in the bistable regime; for clarity, we denote the chosen value by η₀. The bistable regime is delimited by two saddle-node bifurcations, which occur at η_c1 and η_c2, respectively. Thus we have η_c1 < η₀ < η_c2. The range of amplitudes also depends on the shape of the forcing, which we characterize for the representative case A = 1 by its minimum value I_min and its maximum value I_max. We assume I_min < 0 < I_max. The minimal amplitude required to push the system from the node to the saddle is then given by

A_min = (η_c2 − η₀) / I_max.

The maximum amplitude, up to which the system stays on the upper branch, is given by

A_max = (η_c1 − η₀) / I_min.

If the system parameters are such that A_max ≤ A_min, then there is no amplitude regime at which recall occurs, and increasing the amplitude leads directly to hysteresis.
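The amplitude window can be evaluated numerically: the saddle-node locations follow from the stationary relation η(r) of the single population. A sketch, in dimensionless units; J = 15 and the values chosen for η₀, I_min, and I_max are illustrative, not taken from the paper.

```python
import math

J, Delta = 15.0, 1.0

def eta_of_r(r):
    """Tonic input needed for a stationary rate r, from 0 = Delta/pi + 2 r v
    (so v = -Delta/(2 pi r)) and 0 = v^2 + eta + J r - pi^2 r^2."""
    return (math.pi * r) ** 2 - J * r - (Delta / (2.0 * math.pi * r)) ** 2

# Scan the stationary branch: the local maximum of eta(r) at low rates is
# eta_c2 (where the low-activity state disappears); the local minimum at
# higher rates is eta_c1 (where the high-activity state disappears).
rs = [0.02 + 1e-4 * i for i in range(20000)]
etas = [eta_of_r(r) for r in rs]
eta_c2 = max(e for r, e in zip(rs, etas) if r < 0.4)
eta_c1 = min(e for r, e in zip(rs, etas) if r > 0.4)

# Illustrative forcing extrema at A = 1 and mean drive in the bistable range:
I_min, I_max = -0.5, 1.0
eta0 = -4.5
A_min = (eta_c2 - eta0) / I_max   # just reaches the low-activity saddle-node
A_max = (eta_c1 - eta0) / I_min   # beyond this the upper branch is lost
```

For these values A_min < A_max, so a window of forcing amplitudes exists in which recall succeeds without hysteresis.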
Clearance. Fig 10 shows the details of the bifurcation structure of the periodically forced network which leads to the "clearance" behavior.
Specifically, a series of period-doubling bifurcations, a so-called period-doubling cascade, leads to the emergence of a chaotic orbit. This orbit is initially stable, but a further decrease of the frequency leads to global instability of the chaotic orbit, and to the destabilization of the periodic orbit around the focus, see Fig 10A and 10B. The latter occurs just below a forcing frequency of f = 32.5Hz, see Fig 10B. In Fig 10C we show representative time traces for forcing frequencies of f = 32.45Hz and f = 32.5Hz. In the former case, the system leaves the forced focus in less than three seconds, whereas in the latter case the chaotic orbit persists for the whole simulation period of 10^3 seconds. We infer from this that the critical frequency at which the chaotic orbit loses stability globally lies within this frequency range.

Memory networks
A natural extension of the single-population model is to consider a network of neural masses:

drₙ/dt = Δ/π + 2rₙvₙ,
dvₙ/dt = vₙ² + ηₙ + Σₘ Aₙₘ rₘ + Iₙ(t) + σξₙ(t) − π²rₙ²,

where the adjacency matrix A with entries Aₙₘ determines the connectivity structure between neural masses. The term σξₙ(t) describes an additional noise input, where σ is the noise amplitude, and ξₙ(t) is the random variable. In this paper we consider two scenarios, the first of which is two neural populations with recurrent excitation and mutual inhibition. The adjacency matrix of such a network is given by

A = ( J_e  J_i
      J_i  J_e ),

where J_i < 0 < J_e. In the second scenario, we examine the dynamics within a Hopfield network [16]. Rather than creating the network through learning algorithms, we build the network as follows. First, we choose the patterns that the network should encode and write them into an array U. Each column of this array represents one pattern, where we put 1 for populations that are active in this pattern, and 0 otherwise. As a result, the array U has the size N × N_pat, where N_pat is the number of patterns encoded, and N is the network size. Each pattern consists of N_p active populations. The adjacency matrix of a network that encodes these patterns can then be constructed following [17] as

Aₙₘ = Σ_μ (Uₙμ − p)(Uₘμ − p) − Q,   with p = N_p/N.

The entries of A are capped at a maximum value of (1 − p)² − Q to account for saturation effects in synaptic plasticity. Otherwise, the strength of connections in the network would steadily increase as patterns are added. The offset Q introduces global inhibition that stabilizes the encoded patterns. We set Q = 0.2. Each population is subjected to an independent Ornstein-Uhlenbeck process ξₙ(t) to break the symmetry of the networks. The Ornstein-Uhlenbeck process is implemented as a Langevin equation,

τ dξₙ/dt = −ξₙ + √(2τ) zₙ(t),

where zₙ(t) are independent Gaussian white noise sources, ⟨zₙ(t)zₘ(t − s)⟩ = δ(s)δₙₘ, and τ is the characteristic time scale, which we set to τ = 20ms.
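A sketch of this construction. The covariance-style rule with the cap applied before subtracting Q, and the Euler-Maruyama discretization of the Ornstein-Uhlenbeck process, are our reading of the description; pattern placement is random here and the sizes are illustrative.

```python
import math, random

random.seed(1)

N, N_pat, N_p, Q = 100, 3, 10, 0.2
p = N_p / N
cap = (1.0 - p) ** 2

# Columns of U are the stored binary patterns (1 = active population).
U = [[0] * N_pat for _ in range(N)]
for mu in range(N_pat):
    for n in random.sample(range(N), N_p):
        U[n][mu] = 1

# Covariance-style rule: weights are capped before the global-inhibition
# offset Q is subtracted, so no entry exceeds (1 - p)^2 - Q.
A = [[min(sum((U[n][mu] - p) * (U[m][mu] - p) for mu in range(N_pat)), cap) - Q
      for m in range(N)] for n in range(N)]

# Ornstein-Uhlenbeck noise (unit stationary variance), Euler-Maruyama:
#   tau dxi = -xi dt + sqrt(2 tau) dW
def ou_step(xi, dt, tau=0.02):
    return xi + dt * (-xi / tau) + math.sqrt(2.0 * dt / tau) * random.gauss(0.0, 1.0)

xi, dt, vals = 0.0, 1e-3, []
for _ in range(100000):
    xi = ou_step(xi, dt)
    vals.append(xi)
var = sum(v * v for v in vals) / len(vals) - (sum(vals) / len(vals)) ** 2
```

The resulting matrix is symmetric, populations co-active in a pattern are coupled at the saturated maximum, and unrelated pairs feel net inhibition through Q.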

E-I circuit generating oscillations
To create a network that generates oscillations, we consider an excitatory population interacting with an inhibitory one:

dr_e/dt = Δ/π + 2r_e v_e,
dv_e/dt = v_e² + η_e + J_e r_e + J_i r_i − π²r_e²,
dr_i/dt = Δ/π + 2r_i v_i,
dv_i/dt = v_i² + η_i + J_e r_e + J_i r_i − π²r_i².

For simplicity, we choose J_e = −J_i = J. The two populations differ in terms of the means of their tonic input currents, η_e and η_i. We vary these two parameters to identify the regime where stable oscillations exist, and to change the frequency of these oscillations.
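A minimal simulation sketch of this E-I pair (Euler, dimensionless time). The drives η_e = 1, η_i = −1 are illustrative and need not lie in the oscillatory regime; the run only checks that the coupled dynamics remain bounded and the rates positive.

```python
import math

J, Delta = 15.0, 1.0

def simulate_ei(eta_e, eta_i, dt=1e-4, t_end=20.0):
    """Euler integration of the coupled E-I mean-field equations with
    J_e = -J_i = J: both populations receive the net input J*(r_e - r_i)."""
    re, ve, ri, vi = 0.1, -1.0, 0.1, -1.0
    trace = []
    for k in range(int(t_end / dt)):
        syn = J * (re - ri)
        dre = Delta / math.pi + 2.0 * re * ve
        dve = ve * ve + eta_e + syn - (math.pi * re) ** 2
        dri = Delta / math.pi + 2.0 * ri * vi
        dvi = vi * vi + eta_i + syn - (math.pi * ri) ** 2
        re, ve = re + dt * dre, ve + dt * dve
        ri, vi = ri + dt * dri, vi + dt * dvi
        if k % 100 == 0:
            trace.append(re)
    return trace

trace = simulate_ei(eta_e=1.0, eta_i=-1.0)
amp = max(trace[1000:]) - min(trace[1000:])   # excursion after transients
```

Sweeping η_e and η_i in this sketch and monitoring the late-time excursion of r_e is one way to map out where the damped dynamics give way to sustained oscillations.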

A canonical model for nonlinear resonance in the bistable regime
In the network model, the high-activity branch of solutions in the bistable regime exhibits damped oscillations. Periodic external drive can resonate with these intrinsic oscillations, leading to destabilizing period-doubling bifurcations, as seen in the previous section. Here we show that this mechanism is present in the simplest possible model exhibiting a saddle-node bifurcation and for which the upper branch becomes a focus. This model is a particular unfolding of the so-called Takens-Bogdanov normal form [60], for which there is no Hopf bifurcation, which is the relevant case for our network model. It is easily shown that a saddle-node bifurcation occurs in these equations at μ = 0, and that the fixed-point solutions are x₀ = ±√μ and y₀ = 0 for μ > 0, see Fig 11A. Furthermore, the solution x₀ = −√μ is a saddle, and x₀ = +√μ is a stable focus for which the frequency goes to zero smoothly as μ → 0. Fig 11B shows that in the forced system there is a range of frequencies for which there is no stable solution; in the normal-form equations the solution diverges, while in the network model the system settles to a periodic orbit in the vicinity of the low-activity state. The instability is due to a series of period-doubling bifurcations, as in the full system. Furthermore, comparison of the phase diagram of the normal-form equations with that of the full system shows that they are qualitatively similar, see Fig 11C. This indicates that the nonlinear resonance seen in the network of QIF neurons is a generic feature of any system with a stable focus in the vicinity of a saddle-node bifurcation.
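Assuming an unfolding of the form dx/dt = y, dy/dt = μ − x² − xy (our reconstruction; it reproduces the stated fixed points, the saddle at −√μ, and the focus at +√μ whose frequency vanishes as μ → 0), the classification can be verified directly from the Jacobian:

```python
import cmath, math

mu = 0.1

def jac_eigs(x0):
    """Eigenvalues of the Jacobian of x' = y, y' = mu - x^2 - x*y
    (assumed unfolding) at the fixed point (x0, 0):
    trace = -x0, determinant = 2*x0."""
    tr, det = -x0, 2.0 * x0
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

f1, f2 = jac_eigs(math.sqrt(mu))     # upper state: complex pair, Re < 0
s1, s2 = jac_eigs(-math.sqrt(mu))    # lower state: real eigenvalues of mixed sign
```

On the upper branch the trace −√μ is strictly negative for all μ > 0, so the pair of complex eigenvalues can never cross the imaginary axis: the unfolding indeed admits no Hopf bifurcation, matching the network model.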