Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits

  • Xiaoyu Yang,

    Roles Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America, Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America, Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America

  • Giancarlo La Camera

    Roles Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Resources, Software, Supervision, Writing – original draft, Writing – review & editing

    giancarlo.lacamera@stonybrook.edu

    Affiliations Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America, Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America

Abstract

Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse has only access to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule which co-exists with ongoing neural dynamics.

Author summary

Neural activity in cortex is often characterized by sequences of transient patterns called metastable states. These metastable states have been found to code for important task variables such as stimuli, expectations, and decisions; their switching dynamics can be thought of as reactivations of the corresponding neural representations. This dynamics can be explained by a neural network model wherein neurons are organized in clusters, with neurons inside the same cluster connected by stronger synapses than neurons belonging to different clusters. But what is the origin of this structure? In this work, we demonstrate how such a structure may emerge via synaptic plasticity. In our model, the synapses continuously adapt their values based on the activities of the neurons they connect. This leads to the structuring of the network and metastable dynamics in the presence of repeated stimuli, but also to robustness to the network’s self-generated fluctuations in the absence of consistent stimuli. Most importantly, our model uncovers a self-organizing mechanism that keeps the network in a regime where metastability is the only steady dynamics: weaker or stronger synapses would both lead to the absence of metastable states, but the synaptic plasticity rule interacts with the neural dynamics to keep the values of the synaptic weights within the necessary bounds.

Introduction

Cortical circuits express ongoing neural dynamics that is often found to be metastable, i.e., to unfold as a sequence of neural activity patterns in which ensembles of neurons keep approximately constant firing rates for transient periods of time. A stable vector of firing rates across simultaneously recorded neurons may be thought of as a ‘hidden state’, of which spike trains emitted by the neurons are noisy observations. In the cortex of rodents and non-human primates, metastable states last between a few hundred ms and a few seconds, and their identity and dynamics have been linked to sensory processes, attention, expectation, navigation, decision making and behavioral accuracy (see [1, 2] for reviews). One way to model this type of metastable activity is to organize a neural network in clusters, or cell assemblies [3, 4]. The neurons in each cluster are connected by synaptic weights whose average value is larger than the average weight between neurons of different clusters. A key question is then to understand how such a structure subserving metastable dynamics can emerge from, for example, experience-dependent plasticity. This problem of structuring a neural circuit in clusters via synaptic plasticity is essentially the same as the problem of the stable formation of Hebbian assemblies leading to persistent activity reflecting the memory of learned stimuli [5–8].

The problem of the formation of stable cell assemblies has been of interest for a long time (see [9, 10] for textbook reviews). The most recent efforts in this direction have included the combination of spike-timing-dependent plasticity (STDP) [11, 12] and a number of homeostatic mechanisms [13, 14] to keep the neural activity bounded during learning. However, while most efforts so far have focused on the formation of stable neural clusters with the purpose of representing retrievable memories or the development of receptive fields, here we focus on metastable dynamics. In other words, instead of focusing on stable neural dynamics following the presentation and removal of a stimulus, the aim of this study is to obtain neural dynamics that continuously switches among a set of hidden states which have been stored in the network structure by training. The ensuing switching dynamics can be interpreted as a continuous reactivation of internal representations. Aside from potential computational consequences, our effort is motivated by the observation of switching dynamics in many neuroscience studies (reviewed in [1, 2]).

In pursuing this effort, we require the synaptic plasticity rule to be biologically plausible and to lead to the formation of neural clusters that are stable against random perturbations. Moreover, we require that the neural clusters generate metastable dynamics that coexists with ongoing plasticity (i.e., in the absence of external stimuli). We further aim to obtain a model where new information can be accommodated, so that training with a new set of stimuli will lead to cluster rearrangement producing metastable dynamics among the new states. The requirement of biological plausibility mostly means that the plasticity rule must be local and must depend only on presynaptic and postsynaptic activity in a way that is accessible to the synapse; in particular, it must depend only on presynaptic spikes and postsynaptic variables related, e.g., to membrane voltage or calcium transients [15–18].

To our knowledge, one previous study has provided a model of synaptic plasticity that can produce metastable dynamics, seen as a signature of slow fluctuations in neural circuits [7]. This model uses a combination of STDP and inhibitory plasticity, plus a non-local mechanism of synaptic renormalization. In contrast, here we present a plasticity rule that relies only on presynaptic spikes, the postsynaptic membrane potential and postsynaptic spikes. Long-term potentiation (LTP) and long-term depression (LTD) are obtained by comparing a voltage-sensitive internal variable to an adapting threshold, reminiscent of the BCM rule [19]. The threshold adapts so as to produce transient LTP among co-active neurons, followed by LTD after prolonged activation. This leads to the stable formation of neural clusters while promoting a dynamics that is automatically metastable over a large range of network sizes. The learning rule keeps the synaptic weights near a critical line where metastable dynamics is the only equilibrium dynamics of the network, as the result of a self-tuning mechanism that keeps the network activity near the threshold for memory reactivation. This is accomplished without additional homeostatic mechanisms, such as inhibitory plasticity or synaptic scaling [20–22], streamlining the plasticity rule to require only a handful of basic ingredients.

In summary, our plasticity rule provides a possible explanation for the emergence of metastable dynamics observed in many brain areas during both ongoing and evoked neural activity, together with a self-tuning mechanism that keeps the network activity near a critical line characterized by slow fluctuations and spontaneous memory reactivations.

Results

The synaptic plasticity rule

We endowed a recurrent network of excitatory and inhibitory exponential integrate-and-fire (EIF) neurons (see Materials and methods) with the following plasticity rule for synapses connecting excitatory neurons. Given a synapse with efficacy wij from excitatory neuron j to excitatory neuron i, a change in synaptic efficacy was triggered by the arrival of a presynaptic spike, while its polarity and strength depended on the activity of the postsynaptic neuron:

dwij/dt = η sj(t) ( e^(−β wij) [ϕi(t) − θi(t)]+ − [θi(t) − ϕi(t)]+ ).    (1)

Here, sj(t) = Σ_k δ(t − t_j^k) is the presynaptic spike train and δ(t) is the Dirac delta function. [x]+ ≐ max(x, 0) is the rectified linear function and ϕi(t) is a local postsynaptic variable which dictates the polarity of plasticity: the synapse undergoes LTP when ϕi > θi, and it undergoes LTD when ϕi < θi. To respect Dale’s law, the weights are constrained to be nonnegative (wij ≥ 0).

The variable ϕi could represent a calcium variable or the running average of the membrane potential. In our case, ϕi was the low-pass filter of the exponential voltage term ΔT e^((V − VT)/ΔT), where V is the membrane potential of the EIF neuron (see Materials and methods), and therefore it was driven most substantially during the emission of a spike. The strength of LTP was further modulated by an attenuation factor e^(−β wij) that constrains the ability of strong synapses to grow excessively.

Like similar learning rules of this kind [9], this rule is unstable if θi is a constant threshold and the attenuation factor is missing. In such a case, stimulating a subset of neurons would result in higher firing rates and larger ϕi, which in turn would lead to higher firing rates, and so on. One way to prevent this problem is to use activity-dependent thresholds as in the BCM rule [19]. This idea requires the LTP and LTD thresholds to be a non-linear function of the postsynaptic activity, with faster dynamics than the dynamics of the synaptic weights. While the BCM rule uses a supralinear function of the postsynaptic firing rate, we chose a hyperbolic tangent function of both ϕi and the postsynaptic spiking activity s̄i:

τθ dθi/dt = tanh( g (ϕi(t) + γ s̄i(t)) + θa − θi(t) ).    (2)

In this equation, s̄i is the low-pass filtered postsynaptic spike train, τs ds̄i/dt = −s̄i + si(t), θa is a constant in units of θi, g is a gain factor and γ is a constant. We use the hyperbolic tangent function here for convenience; the exact form of the sigmoidal function is not essential.

The hyperbolic tangent in Eq (2) automatically adjusts the dynamics of the threshold θi(t) for different postsynaptic activities. The motivation behind this specific model is that it can lead naturally to switching dynamics of neural clusters. This can be understood from the dynamics of Eq (2), which adapts to the size of its argument (Fig 1A; see Materials and methods for details). For small arguments, the dynamics is fastest and θi closely follows ϕi, resulting in no average change in the synaptic weights (Fig 1B). This occurs when the postsynaptic membrane potential is characterized by subthreshold fluctuations. When the postsynaptic neuron fires a spike, its membrane potential rises significantly and θi is attracted to a new value with slower dynamics. As a result, θi will temporarily lag behind ϕi, producing a short temporal window for LTP, followed by a longer window for LTD (Fig 1C). An occasional spike during ongoing activity will not produce a meaningful synaptic change; repeated activation of the same postsynaptic neuron, however, will produce a longer window for LTP (Fig 1D, blue shaded area). This will occur when the postsynaptic neuron engages in recurrent excitation, and will promote the formation of clusters of co-active neurons. Prolonged activation of the same cluster, however, will turn LTP into LTD. This is due to the term γ s̄i, which builds up during sustained firing, so that the threshold θi will eventually exceed ϕi, causing LTD (Fig 1D, red shaded area). In summary, the θi dynamics can help to form neural clusters via transient LTP, while capping the growth of synaptic efficacies via LTP to LTD transitions. Later we show that, upon repeated presentation of external inputs, our learning rule builds a synaptic structure that supports an equilibrium switching dynamics among co-active groups of neurons.
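To make the interplay between Eqs (1) and (2) concrete, the following minimal sketch integrates the rule for a single synapse driven by prescribed pre- and postsynaptic spike times. The parameter values, the Euler scheme, and the use of the postsynaptic spike train as a stand-in drive for ϕi are illustrative assumptions; this is not the simulation code used for the figures.

import numpy as np

# Illustrative parameters (assumed for this sketch, not taken from Table 1)
dt      = 1e-3          # integration step (s)
tau_v   = 0.050         # time constant of phi (s)
tau_s   = 1.0           # time constant of the filtered postsynaptic spike train (s)
tau_th  = 0.100         # time constant of the adaptive threshold (s)
eta     = 0.001         # learning rate
beta    = 100.0         # LTP attenuation
g, gamma, theta_a = 1.0, 0.5, 0.1

def run_synapse(pre_times, post_times, T=3.0, w0=0.005):
    """Euler integration of Eqs (1)-(2) for a single synapse (spike times in s)."""
    n = int(T / dt)
    pre  = np.zeros(n); pre[(np.asarray(pre_times)  / dt).astype(int)]  = 1.0
    post = np.zeros(n); post[(np.asarray(post_times) / dt).astype(int)] = 1.0
    w, phi, sbar, theta = w0, 0.0, 0.0, 0.0
    w_trace = np.empty(n)
    for t in range(n):
        # phi and sbar: low-pass filters driven by postsynaptic spikes
        # (phi stands in for the low-passed exponential voltage term,
        #  which peaks whenever the neuron emits a spike)
        phi   += dt / tau_v  * (-phi  + post[t] / dt)
        sbar  += dt / tau_s  * (-sbar + post[t] / dt)
        # Eq (2): adaptive threshold with additive spiking term gamma*sbar
        theta += dt / tau_th * np.tanh(g * (phi + gamma * sbar) + theta_a - theta)
        # Eq (1): weight changes only when a presynaptic spike arrives;
        # LTP is attenuated by exp(-beta*w), and weights stay nonnegative
        if pre[t] > 0:
            w += eta * (np.exp(-beta * w) * max(phi - theta, 0.0)
                        - max(theta - phi, 0.0))
            w = max(w, 0.0)
        w_trace[t] = w
    return w_trace

# Example: the postsynaptic neuron fires repeatedly for 2 s while presynaptic
# spikes arrive shortly after each postsynaptic spike and sample the LTP/LTD windows.
post_times = np.arange(0.5, 2.5, 0.02)
pre_times  = post_times + 0.005
trace = run_synapse(pre_times, post_times)
print(trace[0], trace.max(), trace[-1])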

Fig 1. Illustration of the plasticity rule.

A: Time scale of the threshold dynamics as a function of Δ, the change in the fixed point of Eq (2). Thalf is defined as the time it takes θ, starting from θ(0) = 0, to reach half of Δ (in units of τθ); for Δ ≳ 3 it increases linearly with Δ (see Materials and methods). B: The learning rule ignores the subthreshold fluctuations of the membrane potential when the neuron does not fire spikes. C: After the postsynaptic neuron fires a single spike, the synapse undergoes a short window for LTP (blue shaded area) followed by a longer window for LTD (red shaded area). D: Repeated activation of the postsynaptic neuron will cause a transition from LTP to LTD. Panels C and D are illustrative cartoons.

https://doi.org/10.1371/journal.pcbi.1012220.g001

Note that the additive form ϕi + γ s̄i makes it easy to control plasticity in two different cases: one when the postsynaptic neuron is inactive (in which case s̄i ≈ 0, θi quickly follows ϕi, and no net plasticity ensues), and one during repetitive firing (in which case first LTP, and then LTD, ensues).

Formation of stable clusters with metastable dynamics

We initially tested our plasticity rule in a recurrent spiking network model comprising NE = 800 excitatory and NI = 200 inhibitory EIF neurons (see Materials and methods). During training, a set of Q = 10 sensory stimuli was presented to the network in random order, each stimulus targeting a fixed but randomly chosen subpopulation of excitatory neurons, which we call a cluster. Although each cluster was associated with a separate stimulus, each cluster contained a randomly selected fraction f = 10% of the excitatory neurons, so that the same neuron could respond to more than one stimulus, as typically observed in experiments [23–27]. Thus, the mean number of neurons in each cluster was 80; each neuron had a probability of about 0.26 of responding to at least two stimuli, and a neuron belonging to a given cluster had a probability of about 0.61 of also belonging to another cluster (see Materials and methods for details).

Stimulus presentations occurred every 2000 ms and each lasted 500 ms (however, using random inter-stimulus intervals did not alter the results). This training procedure lasted for 10 minutes and on average each sensory stimulus was presented 30 times.

As expected, after training the network exhibited metastable ongoing dynamics which was still present after 2 and 4 hours (Fig 2A). In this figure, spikes from neurons in the same cluster have the same color, and neurons belonging to multiple clusters are duplicated. The black curves superimposed on each cluster’s spikes measure the ‘overlap’ of the whole network’s activity with the stimulus associated with that cluster (see Materials and methods). For a given stimulus, the overlap varies between zero and one and approaches 1 when all active excitatory neurons of the network belong to the associated cluster, while the remaining neurons have negligible firing rate. The raster plots in Fig 2 show that the neural activity after training switches between network states characterized by specific cluster activations. These states can therefore be interpreted as memories of the stimuli, with the ongoing dynamics being akin to a random walk among these memory states. The distribution of state durations was approximately exponential with mean around 250–300 ms (Fig 2C), reminiscent of the discrete Markov processes with fast state transitions found to describe ensembles of cortical spike trains [4, 28–32]. The metastable dynamics observed after training is the consequence of potentiated synapses inside clusters and the emergence of a block structure in the synaptic matrix (Fig 2B), a structure known to potentially produce metastable dynamics in spiking networks [3, 4, 33]. We present a more detailed analysis of this behavior in Sec. Mechanistic origin of neural metastability.

Fig 2. Cluster formation and metastable dynamics via excitatory plasticity in a network with 1,000 spiking neurons (see the text).

A: Rasterplot of excitatory neurons taken immediately after training (left), 2 hours after training (middle), and 4 hours after training (right). Neurons were ordered according to cluster membership (different colors for different clusters), and if necessary also duplicated to multiple clusters according to the sensory stimuli they respond to. Black lines: overlap between the network’s neural activity and the stimulus associated with each cluster (see the text). B: Synaptic matrix of the network at the same times as in A, showing the formation of clusters (evident from the block structure of the matrix). C: Distribution of durations of cluster activations (see the text) for each corresponding epoch shown in A. Samples were taken over a period of 10 minutes. D: Averaged post-training excitatory synaptic weights as a function of time. w1: mean weights across synapses connecting neurons sharing at least one stimulus; w0: mean weights across synapses connecting neurons sharing no stimuli.

https://doi.org/10.1371/journal.pcbi.1012220.g002

We next illustrate the role of the attenuation factor e^(−β wij) in Eq (1) and of the spiking term γ s̄i in Eq (2). Without attenuation, training is successful; however, clusters are unstable and disappear within one hour after training (S1 Fig). Without the term γ s̄i in Eq (2), training produces uneven clusters that are unable to sustain cluster activations within one hour after training (S2 Fig).

To quantify the amount of learning, we measured the average synaptic weight among synapses connecting neurons sharing at least one sensory stimulus (dubbed ‘w1’), and the average weight among neurons that did not share any sensory stimuli (‘w0’):

wx = (1/Nx) Σ_{(i,j)∈Sx} wij,  x ∈ {0, 1},    (3)

where Nx is the number of synapses of type x and Sx is the set of (i, j) indices of synapses of type x. Training significantly increased w1 compared to w0, thus successfully mapping the association between sensory inputs and the corresponding neural clusters activated by the stimuli. After training, the excitatory synapses were continuously modified by the plasticity rule during ongoing metastable activity, leading to a decrease of w1 and an increase of w0 (Fig 2D). These synaptic changes seem to stabilize after 4 hours; after this time, plasticity coexists with metastable dynamics, the latter unfolding as a random walk among states associated with the training stimuli. This picture is confirmed in longer simulations of larger networks presented in later sections.

Robustness to perturbations and remapping

In the previous section we have shown that our learning rule generates metastable dynamics that co-exists with synaptic plasticity. We show next that this is also true in the presence of random stimuli. After training, we probed the network with external sensory inputs with physical features (i.e., magnitude and coding level) similar to those of the training stimuli. The occurrence schedule of these external stimuli was modeled as a Poisson process with a mean inter-event interval of 10 s. We considered three different scenarios. In the first scenario, the external stimuli were randomly sampled and used only once (i.e., the subsets of target neurons were uniformly and independently sampled at each occurrence; Fig 3A). This scenario mimics a purely noisy environment where there is no temporal correlation in the sensory inputs. In the second scenario, only half the sensory inputs were random inputs, while the other half were sampled from a finite set of 10 sensory stimuli targeting always the same neurons (Fig 3B). This scenario mimics a combination of meaningful stimuli occurring amid some random sensory background. Finally, in the third scenario the external stimuli were all sampled from a pre-defined set of 10 sensory stimuli, mimicking a situation where the network is being retrained with new stimuli (Fig 3C). In all cases, the network kept adjusting its synaptic weights according to our plasticity rule.

Fig 3. Sensory perturbations and remapping of the network after training (same network as Fig 2).

The network was trained with the stimuli of Set 1 and then probed after training with stimuli drawn from a different set (Set 2), delivered at random times. A: Perturbing stimuli were randomly sampled to mimic a noisy environment. B: 50% of perturbing stimuli were sampled randomly while the other 50% were sampled from a finite pre-defined set of reoccurring stimuli. C: All stimuli were sampled from a finite pre-defined set of reoccurring stimuli. The top panels show w1 (Eq (3)) after training for Set 1 (full) and Set 2 (dashed). The middle panels show raster plots of the neural activity 4 hours after training, together with the overlaps (black lines) with the stimuli of Set 1 (those used for training). The bottom panels show raster plots of the neural activity 4 hours after training, together with the overlaps with the stimuli of Set 2 (note that this set was never used in A). Vertical arrows indicate an external stimulation with either a random stimulus (grey arrows) or a stimulus from Set 2 (blue arrows).

https://doi.org/10.1371/journal.pcbi.1012220.g003

In the presence of random stimulation, the network maintained a significant memory of the learned stimuli for several hours, similar to what we found in the absence of sensory stimulation (Fig 3A, top and middle panels). In the third scenario, the network quickly learned the new sensory inputs (Fig 3C, bottom) while gradually forgetting the previously learned ones (Fig 3C, middle). Finally, in the ‘mixed’ stimulation scenario, the network could still learn the new sensory stimuli, but at a much slower rate (compare the dashed lines in Fig 3B-top and Fig 3C-top). A raster plot of the neural activity shows the metastable activation of clusters of neurons representing stimuli in both the first and the second set (Fig 3B, middle and bottom panels, respectively), showing that the network could accommodate the learning of new stimuli (Fig 3B-bottom) while maintaining a trace of the previously learned ones. Note how, in between stimulus presentations, the network dynamics was metastable in all cases.

Learning stability vs. network size

Although synaptic weights decay after training (Fig 2D), here we show that such decay slows down as a function of network size. We show this by estimating the rate of change of w0 and w1 as a function of N, the number of neurons in the network. Scaling up the network size can be done in several ways [34, 35]; we used two different scaling procedures, one in which Q ∝ N and one in which Q ∝ √N.

In the first scenario, we scale up the number of clusters Q proportionally to NE = 0.8N while keeping constant the mean cluster size NE/Q = 80 (this corresponds to a coding level f = 1/Q ∝ 1/N, where f is the probability that a neuron is targeted by a stimulus; see Materials and methods). As shown in Fig 4, synaptic decay is slower in larger networks (Fig 4A) while ongoing metastable dynamics co-exists with synaptic plasticity (Fig 4B). In particular, nearly constant values of the mean synaptic weights are observed in networks of 20,000 neurons (rightmost panel of Fig 4A).

Fig 4. Effect of network size on synaptic dynamics after training.

A: Time dependence of average synaptic weights w1 and w0 after training for networks of different size N (same keys as Fig 2D). In each case, the network had NE = 0.8N excitatory neurons and Q = NE/80 = N/100 clusters (i.e., Q = 50, 100, 150, and 200 from left to right). B: Raster plots of the network’s activity 4 hours after training for the corresponding networks in A. C: Plots of NΔw0 vs. time after training for the different networks in A. Observations were taken 0 to 4 hours post training (left panel) and 3 to 4 hours post training (right panel). In large networks, NΔw0 does not depend on N, as predicted by Eq (4). D: Same as C for NΔw1 vs. time.

https://doi.org/10.1371/journal.pcbi.1012220.g004

Analytical arguments imply a synaptic decay rate ∝ N^(−1), i.e.,

Δwx(t; Δt) ≈ Cx(t)/N,  x ∈ {0, 1},    (4)

where Δwx(t; Δt) is the average change in strength over the interval Δt for synapses of type x (defined analogously to Eq (3); see Materials and methods) and Cx(t) is a function of time but not of N. To confirm this prediction, we plotted NΔw0 (Fig 4C) and NΔw1 (Fig 4D) as a function of time and for increasing network size (from N = 5,000 to N = 20,000). The plots show that, after entering the ongoing metastable regime, both curves NΔw0 and NΔw1 tend to overlap for large enough networks, as predicted by Eq (4). Empirically, Cx(t) is an increasing function of time for w0 synapses and a decreasing function for w1 synapses. The initial transients visible in the left panels of Fig 4C and 4D are due to either a small N or not having yet reached the stable regime of synaptic decay (which takes about 4 hours in small networks, see Fig 2D). For large N and starting 3 hours post training, the rate of change is independent of N (Fig 4C and 4D, right panels).

These results confirm the synaptic decay rate ∝ N^(−1) for w1 synapses, implying slower memory decay and more stable synaptic efficacies in larger networks, despite ongoing plasticity. Moreover, the dynamics produced by the networks of Fig 4 shows near-exponential distributions of the state durations (S3 Fig), with mean durations approaching stability 4 hours post-training and for N > 15,000.

The slowdown of the synaptic decay rate with N was found also in an alternative scaling scenario in which f = 1/Q with Q ∝ √N, giving Q = √(N/10) clusters and NQ = fNE = 0.8√(10N) neurons in each cluster (with NE = 0.8N). In this case we have a smaller number of clusters (Q ∝ √N rather than Q ∝ N), but each cluster grows in size with N. We trained several networks under this scaling and found results analogous to those of Fig 4: training is successful, the synapses become more stable in larger networks, and metastable dynamics is reliable across network sizes (see S4 Fig). We could not estimate the synaptic rate of change analytically in this case (see Materials and methods). Empirically, the 1/N law was not observed in this case (S4C and S4D Fig).

Mechanistic origin of neural metastability

What is the mechanism behind the coexistence of synaptic plasticity and metastable dynamics? Due to the adaptive threshold in the learning rule, prolonged cluster activations are eventually terminated by LTD; however, it is not clear what would cause the clusters’ re-activations ensuring an ongoing metastable regime. If, however, the synaptic structure reached by training satisfies some known criteria [4], metastable dynamics would emerge due to endogenously generated fluctuations in the spiking network, aided by quenched random connectivity, sufficient synaptic potentiation inside clusters, and recurrent inhibition. In such a case, metastable dynamics would be present at the end of training also in the absence of plasticity. To show that this is indeed the case, we switched off synaptic plasticity at various times post-training and observed the neural dynamics. Specifically, we ran the network for 24 hours after training in the presence of synaptic plasticity and stored the synaptic matrix at 0, 1, 8 and 24 hours post-training. For each time point, we performed network simulations using the corresponding stored synaptic matrix, both with and without plasticity, for 10 minutes. The results are shown in Fig 5 for a network of N = 5,000 neurons and Q = 50 stimuli targeting non-overlapping clusters of neurons. Each row of the figure shows the synaptic weight distributions at a specific time point (left-most column), a snapshot of neural activity with (second column) and without (third column) ongoing plasticity, and normalized histograms of state durations with and without plasticity (right-most column). As shown in the figure, ongoing plasticity is not required for metastable dynamics, confirming that the latter is the consequence of endogenous fluctuations of the neural activity. The only appreciable difference between the two cases (plasticity ON vs. plasticity OFF) is seen 0 hours post-training: although metastable dynamics is present with or without plasticity, the histograms of state durations are different. This shows that, right after training, synaptic plasticity is not necessary for metastability, but it affects its statistics. One hour later (and in the following time points) the neural activities (and the associated histograms) in the presence or absence of plasticity are indistinguishable (differences in mean state durations are the result of random fluctuations). This suggests that the synaptic weights were still converging, at time 0h post-training, towards a more stable region. In this latter region, metastable neural dynamics and synaptic dynamics coexist and generate the same neural activity that would be observed in the absence of plasticity. In such a phase, we expect the fluctuations in the synaptic weights caused by plasticity not to play a role in metastable dynamics.

Fig 5. Comparison of network dynamics with and without synaptic plasticity in a network with N = 5000 neurons and Q = 50 stimuli.

After training, the ongoing plasticity continued for 24 hours and we recorded the synaptic matrix at 0 (A), 1 (B), 8 (C) and 24 (D) hours post-training. Then we ran the network dynamics, starting from the corresponding stored synaptic matrix, for 10 minutes, with and without plasticity. Each row of the figure shows the synaptic weight distributions at each specific time point (left-most column), a snapshot of neural activity with (second column) and without (third column) ongoing plasticity, and histograms of state durations with and without plasticity (right-most column). The mean state durations are reported above the histograms.

https://doi.org/10.1371/journal.pcbi.1012220.g005

To confirm this prediction, we performed a mean field analysis of the neural activity (see Materials and methods). The analysis assumes, for simplicity, that only one cluster can be active at any given time. Above a critical value of the average synaptic weight inside a cluster, metastable dynamics is possible and is revealed by a difference in firing rate between the active and non-active clusters [4].

Fig 6 (left panel) shows the mean field landscape of the neural activity. The landscape shows the mean-field predictions of the firing rate of the active cluster as a function of the mean and standard deviation of the synaptic weights inside the active cluster. The actual firing rates observed in the network are shown in the right panel. The contour lines shown in the figure are lines of equal firing rate. Below the lowest contour (firing rate ∼ 5 spikes/s), there is no predicted difference between the firing rates of active and inactive clusters, and the network is characterized by a spontaneous, low firing rate solution. Higher contour lines correspond to robust structuring of the synaptic weights, where the network is able to sustain persistent activity of its clusters [36]. With our learning rule, these higher lines are not reached due to LTP→LTD transitions.

Fig 6. Mean-field solutions to a simplified network model of N = 5000 neurons and Q = 50 non-overlapping clusters.

Left: mean field landscape of the network, showing the mean field solutions for the firing rates of active clusters as a function of the mean and standard deviation of the synaptic weights inside clusters. The contour lines mark the mean field firing rates of the active clusters, assuming that at most one cluster can be active. In the white region, all clusters are inactive. The black line shows the trajectory of the average synaptic weights (taken from the simulations shown in Fig 5) superimposed onto the mean-field landscape, with the circles marked a-d indicating 0, 1, 8 and 24 hours post-training, respectively. Right: Normalized histograms (pdf) of single clusters’ firing rates recorded in simulations at time points a-d. The firing rates are in good agreement with the mean field solutions in the left panel except for time point a, presumably due to faster dynamics of the synaptic weights compared to the other time points (mean field assumes fixed synapses). The red-shaded region (‘forbidden region’) is not accessible (see Materials and methods for details).

https://doi.org/10.1371/journal.pcbi.1012220.g006

The lowest contour line divides the phases with and without active clusters: this is the only region of the landscape where metastable dynamics is possible; we call it the ‘instability line’ because it separates two regions where neural activity is stable (but note that metastability is possible in a region of finite width around the instability line). Near the instability line, all memories can be quickly reactivated—in fact, they are spontaneously reactivated during metastable dynamics.

The solid black curve in Fig 6 shows the trajectory of the synaptic weights from a 24-hour network simulation (same simulation as Fig 5). During training, both the mean and the variance of the synapses increase proportionally, leading to a dynamical regime (time point ‘a’) where prolonged (but still transient) activations of single clusters are predicted under mean field (Fig 5A, ‘plasticity OFF’ raster). However, such prolonged activations are not observed in simulations with plastic synapses (Fig 5A, ‘plasticity ON’ raster) because they would be terminated by the LTP→LTD transitions. The same mechanism keeps adjusting the synaptic weights after training (Fig 6 left, a→b) until the network activity and the synaptic dynamics are, effectively, in equilibrium. In this regime, metastable dynamics is not the consequence of synaptic fluctuations, but is driven by the network’s own generated fluctuations in neural activity. At this point, switching the plasticity off has no effect on the network dynamics (Fig 5B and 5D, left panels).

The equilibrium dynamics reached post-training is characterized by very slow changes in the mean synaptic weights, much slower than the neural metastable dynamics (Fig 7A). The equilibrium dynamics is also robust to occasional macroscopic changes that can occur in single synapses, as shown in Fig 7B, suggesting that memories are supported by the collective behavior of synapses inside clusters. The macroscopic changes observed in some synapses are reminiscent of synaptic volatility [37], and are presumably due to neural fluctuations in the metastable regime. Our model shows that memories are robust to some degree of synaptic volatility, and that such volatility is a consequence of the interplay between synaptic plasticity and neural dynamics. This interplay keeps the synaptic weights close to the instability line where memories can be quickly reactivated.

Fig 7. Recordings of single synapses from the network of Fig 5.

A: Time course of the average synaptic weights within each cluster. B: Time course of 20 randomly selected synapses within the first cluster. Note that synaptic dynamics is much slower than neural metastable dynamics.

https://doi.org/10.1371/journal.pcbi.1012220.g007

Discussion

Model features and main results

Metastable neural dynamics, defined as the repeated occupancy of a set of discrete neuronal states occurring at seemingly random transition times [2], has recently come to the fore as a potential mechanism mediating sensory coding, expectation, attention, navigation, decision making and behavioral accuracy (see [1, 2] for reviews). At the same time, model variations over a basic network of spiking neurons with clustered architecture [3, 4, 33, 38] have accounted for a wealth of data concerning this metastable dynamics [4, 32, 39, 40]. One is then led to the following question: how can a cortical circuit be shaped by internal dynamics and externally-driven events so as to converge to the clustered architecture producing metastable dynamics?

The model of synaptic plasticity introduced in this work answers this question by offering a simple yet biologically plausible mechanism capable of cluster formation and metastable dynamics. The plasticity rule builds clusters of neurons with strengthened synaptic connections; after training, the neural activity switches among a number of ‘states’ that can be interpreted as neural representations of the stimuli used for training. Notably, the metastable dynamics generated by learning co-exists, after training, with ongoing synaptic plasticity. This coupled dynamical equilibrium of neural activity and synaptic dynamics is achieved via a self-tuning mechanism that keeps the synaptic weights around the instability line between quiescent and persistent activity (by quiescent activity we mean that no clusters are active). Around the instability line, metastable dynamics results from network ingredients (clustered architecture, quenched random connectivity, recurrent inhibition and the finite size of the network), and although it co-exists with synaptic plasticity, it does not require it. This makes it possible to keep stable representations of learned stimuli in the face of ongoing plasticity, a long-standing problem known as the stability-plasticity dilemma [41–44].

Our plasticity rule has several other desirable features. One is biological plausibility, in at least two ways: (i) the plasticity rule is local, i.e., it depends only on quantities that are available to the synapse, and (ii) it works in networks of spiking neurons (rather than artificial neural networks or firing rate models). Another desirable feature of our rule is that it leads to metastable dynamics with properties similar to those observed in experimental data [4, 29–32, 45–47], including an exponential distribution of cluster activation durations with means of a few hundred ms [4]. The distribution of synaptic weights after training is itself a slightly skewed unimodal distribution, as observed in real data [48, 49]. Our model also has a minimal number of mechanisms and parameters within a class of similar models that might produce metastable dynamics: for example, a model wherein the presynaptic spike train is replaced by a low-pass filter of it, or a model that imposes an upper bound on the synaptic weights. Those mechanisms are not needed in our plasticity rule. In addition to an adaptive threshold, our model requires only two mechanisms: an attenuation term for LTP (the term e^(−β wij) in Eq (1)) and a spiking term in the equation for the adaptive threshold (the term γ s̄i in Eq (2)). These mechanisms require the introduction of parameters β and γ, which have a clear biological correlate. β controls the amount of LTP. Mechanisms reducing LTP but not LTD have been described in vitro by [50], who showed how an increase in total excitatory and inhibitory activity (for example due to cluster activation) can rapidly reduce the amplitude of LTP but not LTD. This mechanism could be mediated through changes in calcium dynamics in dendritic spines and has been proposed as an example of a ‘rapid compensatory process’ by [51]. The parameter γ controls the spiking contribution to the threshold’s dynamics. This term could emerge from cellular mechanisms similar to those responsible for afterhyperpolarization currents [52, 53], which are often used in models of firing rate adaptation [54–56]. Additional compensatory mechanisms, such as inhibitory plasticity or synaptic normalization, were not necessary in our model and therefore were not included.

It is interesting that while the mean synaptic weights inside clusters undergo little change post-training (Fig 7A), single synapses can undergo macroscopic changes, as shown in Fig 7B. Changes in synaptic weights (also observed in the absence of learning) have been named ‘synaptic volatility’ and present a challenge to plasticity models underlying the formation of stable memories [37]. In our model, macroscopic changes in synaptic weights are presumably due to the fluctuations of the neural activity in the metastable regime. Our results show that our model can explain at least some form of synaptic volatility as the consequence of the interplay between synaptic plasticity and neural metastable dynamics. This interplay keeps the synaptic weights close to the instability line where memories can be quickly reactivated.

Another notable feature of our model is its behavior under network scaling. Cortical circuits are variable in size but they typically comprise large numbers of neurons [57]. It is therefore important to test whether models of plasticity maintain their properties as networks are scaled up in size. Scaling up a neural circuit is a necessary operation in several neuroscience theories (e.g., the theory of balanced networks [34, 58]) and it can be done in several ways. We have chosen here two ways, one that is most intuitive, with order N clusters of fixed size, and one that is most commonly used, wherein both the size and the number of clusters grow as √N (see [35] for a discussion of the different scaling regimes). In both cases, we have found that training leads to the formation of stable clusters generating metastable dynamics, at least for networks of up to 20,000 neurons (larger networks become computationally expensive to simulate). Importantly, both the properties of the dynamics (S3 Fig) and the learned synaptic weights (Figs 4 and S4) tend to stabilize as the number of neurons in the circuit grows. In the first scaling regime, we have found that the synaptic decay rate decreases quickly with N, i.e., ∝ 1/N.

The behavior of our model under scaling could potentially solve a conceptual problem arising in deterministic network models that try to combine attractor networks with balanced networks [34, 35]. These models produce metastable dynamics due to finite size effects, so that in very large networks stable (rather than metastable) cluster activations would prevail [1, 59, 60]. It is not clear how to obtain metastable dynamics in these models in the limit of very large network size. In the presence of our plasticity mechanism, however, metastable dynamics is the result of a synaptic self-tuning process that keeps the synapses inside the metastable region of the parameter space (Fig 6). We have directly tested this in a network with 20,000 neurons and a few thousand synapses per neuron (a reasonable number for neocortex [57]), and we expect that our self-tuning mechanism applies to even larger networks.

Comparison with previous work

Modeling the self-organization of neural circuits via synaptic plasticity has long been an important topic of research [9, 10] and has recently been studied in a number of works bearing various similarities to our work. However, the first successful simulations of spiking network dynamics with ongoing synaptic plasticity appeared only about 20 years ago [5, 6, 61]. Previous models closest to ours are models of voltage-based STDP rules wherein an internal variable (often interpreted as postsynaptic depolarization) is compared to different thresholds for the induction of LTP and LTD [7, 8, 15, 17]. Part of the motivation for some of these models was to reproduce experimental results on both STDP [11, 62] and the dependence of synaptic plasticity on postsynaptic depolarization [63–65]. When applied to the self-organization of cortical circuits, most previous work in this context has focused on the formation of clusters (often called cell assemblies) for the formation of stable neural activity representing memories. The goal of these works was therefore to obtain stable attractor dynamics after learning. In contrast, the goal of our work was to obtain a stable synaptic matrix that co-exists with a rich, ongoing metastable dynamics.

A notable exception among previous efforts is the model by [7]. In their model, the authors combined the voltage-based STDP rule of [17] with inhibitory plasticity [66] and synaptic renormalization to obtain metastable dynamics in a network of adaptive EIF neurons. Inhibitory plasticity was used to maintain a target firing rate in the excitatory neurons and prevent the formation of winner-take-all clusters during training. This problem arises from the fact that, during training, different stimuli target different numbers of neurons, creating inhomogeneities in cluster size [35] (in our case, the initial inhomogeneity was 10%). We found that in our model neither inhibitory plasticity nor synaptic normalization was required, presumably thanks to the presence of a dynamic threshold which keeps the synaptic weights bounded, as in the BCM rule [19]. In our model, the dynamic threshold also produces LTD after prolonged cluster activation. This, together with LTP attenuation (the term e^(−β wij) in Eq (1); see S1 Fig), helps to prevent the formation of large clusters during training, while also keeping the firing rates within an acceptable range. This confers advantages as it frees up inhibitory plasticity (and possibly other network mechanisms) to accomplish other tasks.

More recently, [67] have proposed a plasticity rule that produces switching dynamics that resembles ours. In their model, rather than comparing a postsynaptic variable to a threshold for LTP/LTD, a Hebbian term is compared to each threshold. The Hebbian term is the low-pass filter of the product of the low-pass filters of the pre- and post-synaptic spike trains, and was interpreted as a calcium signal responsible for synaptic plasticity [68]. The main goal of [67] was to present a learning rule resulting in stable and unimodal weight distributions after learning. The authors show that their learning rule has intrinsic homeostatic properties without having to impose an additional homeostatic mechanism on timescales which are much shorter than observed experimentally [51]. Perhaps due to these homeostatic properties, they reported metastable dynamics with 2 or 3 stimuli after training. However, as the aim of their work was not to investigate the potential for metastable dynamics, training was only accomplished with few stimuli, and no scaling of the network size was attempted.

We have shown that our plasticity rule allows the network to learn new stimuli. A similar result was obtained with the plasticity rule of [7], which however requires inhibitory plasticity and synaptic renormalization. More recently, [69] have proposed a pure STDP plasticity rule that, similarly to our rule, does not require inhibitory plasticity or synaptic renormalization. However, [69] focus on the generation of stable clusters producing persistent activity rather than metastable dynamics. Their persistent activity has the signature of fast Poisson noise due to the stochastic nature of their model neurons, but lacks the slow fluctuations characteristic of ongoing metastable dynamics [3]. We also note that our results on the remapping of sensory stimuli are different from the reorganization of clusters described in [69], which has been related to representational drift [70–73]. In our model, new clusters form due to stimulation by a new set of stimuli, rather than from a drift of single-neuron representations due to the noisy dynamics of plasticity, as occurs in [69].

Our model is reminiscent of the BCM rule [19], and in fact it could be interpreted as a BCM-like rule for spiking neurons in which the non-linearity of the threshold equation is given by a hyperbolic tangent. More common implementations of BCM utilize a power function (often a quadratic function; see e.g. [9, 74]). Another difference with our model is the presence, in the latter, of the additional term γ s̄i in the threshold dynamics, Eq (2). In our plasticity rule, this term is required to enforce LTD when the neuron is persistently active (i.e., when s̄i remains elevated), as explained in Sec. The synaptic plasticity rule.

Limitations and possible extensions

Our model has a minimal number of ingredients and did not use widespread properties of cortical neurons, such as external noise, firing rate adaptation or short-term plasticity. One reason is model economy, i.e., the desire to use only minimal, essential ingredients. Another reason is that external noise, firing rate adaptation and short-term plasticity all facilitate metastable dynamics [33, 75–80], leading to a possible confound regarding the role of long-term synaptic modifications in generating and maintaining ongoing metastable dynamics.

Although synaptic plasticity depends on pre- and postsynaptic calcium [16, 18, 81], we did not directly consider the role of calcium in our model. Rather, our learning rule belongs to the class of voltage-based STDP models [17]. These models aim to capture the dependence of LTP and LTD on the membrane potential [64, 65, 82] while producing STDP curves as an emergent phenomenon. Although voltage-based STDP can capture in vitro results where the membrane potential at the electrode is well defined, in a real neuron with an extended geometry the membrane potential is not generally uniform along the dendritic tree. At the time of a postsynaptic spike, one might assume that a back-propagating action potential imposes a uniform voltage on the dendritic tree, at least in proximal dendrites [83, 84], although faithful propagation depends on many factors including the order of the somatic action potential within a train and the location of the spines [85, 86]. At the moment of a presynaptic spike, however, the voltage at different locations will be different and will probably reflect random subthreshold fluctuations. The plasticity rule in such a case would be at the whim of random fluctuations in membrane potential, unrelated to the timing correlation between pre- and postsynaptic spikes. We argue that this does not pose a problem in our model because, when our postsynaptic variable ϕi follows random subthreshold fluctuations, our plasticity rule produces no net average synaptic change (Fig 1B). On the other hand, when the postsynaptic neuron emits a spike, ϕi closely follows the transient exponential increase in membrane potential (Fig 1C), modeling the more uniform effect of the backpropagating action potential along the dendrites. It is even tempting to speculate that, during a prolonged period of active firing, the amplifying buildup of the postsynaptic signal during spike trains could reflect the facilitation of dendritic calcium spikes by trains of backpropagating action potentials [87].

Our model did not include inhibitory plasticity [88, 89]. A benefit of including inhibitory plasticity could be to autonomously set the level of inhibitory activity best suited for metastable dynamics [8]. This useful function of inhibitory plasticity contrasts with its homeostatic role in keeping the postsynaptic firing rates close to a desired target value [7, 66], a role that in our model is fulfilled by the adaptive threshold for excitatory LTP and LTD.

Conclusion

In conclusion, our plasticity rule can successfully structure a network of spiking neurons into an extensive number of clusters (cell assemblies), in a manner that spontaneously generates and maintains metastable ongoing activity. This results from a learning mechanism that keeps the synaptic weights steadily near the threshold for memory reactivation. As a result, one obtains metastable dynamics that coexists with synaptic plasticity. The metastable dynamics supports seemingly random switching among hidden states representing the stimuli used for training, and has several characteristic traits of the metastable dynamics observed in brain regions of rodents and primates. Both the metastable dynamics and the learned synaptic structure are stable to random stimulus perturbations, yet flexible enough to be reshaped by new repeated stimuli. Our model could also provide an explanation for the existence of metastable dynamics in large deterministic networks of spiking neurons.

Materials and methods

Spiking network model

Here we define the ‘basic’ network, from which all other networks were obtained by scaling up the number of neurons N and the number of stimuli Q (see ‘Network scaling’ below). The basic network comprised NE = 800 excitatory and NI = 200 inhibitory exponential integrate-and-fire (EIF) neurons (described below). The ratio of excitatory to inhibitory neurons was NE/NI = 4, in agreement with previous studies and experimental observations [36, 57, 90]. Before training, neurons were randomly connected with fixed probability (0.2 among excitatory neurons and 0.5 in all the other cases) and constant synaptic efficacies (wEE = 0.005, wEI = −0.34, wIE = 0.54 and wII = −0.46 mV). Only connected excitatory neurons underwent plasticity.

We modeled the impact of Q = 10 random stimuli as follows. Each neuron had a probability (‘coding level’) f = 0.1 to be targeted by a stimulus, meaning that the neuron would receive external input during the presentation of that stimulus (we call such neurons ‘responsive’). This resulted in a typical number fNE = 80 of neurons targeted by each stimulus and a mean fraction (1 − f)^Q ≈ 0.35 of non-responsive neurons in the basic network (in keeping with similar numbers found in the experimental literature, see e.g. [91]). We call a ‘cluster’ a subpopulation of neurons targeted by a given stimulus during training. A mean fraction 1 − (1 − f)^Q ≈ 0.65 of neurons belonged to (overlapping) clusters. A randomly picked neuron had a probability P≥2 = 1 − (1 − f)^Q − Qf(1 − f)^(Q−1) ≈ 0.26 to respond to at least two stimuli, and a neuron belonging to a given cluster had a probability Pov = 1 − (1 − f)^(Q−1) ≈ 0.61 of responding to at least one other stimulus (and therefore of belonging to multiple clusters). Parameter values are summarized in Table 1.
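As a quick sanity check of these numbers, the short sketch below evaluates the formulas above and compares them with a Monte Carlo estimate of cluster membership for the basic network (f = 0.1, Q = 10, NE = 800); it is an illustration, not the code used for the simulations.

import numpy as np

f, Q, NE = 0.1, 10, 800
rng = np.random.default_rng(0)

# Closed-form expressions quoted in the text
p_nonresponsive = (1 - f) ** Q                                   # ~0.35
p_at_least_two  = 1 - (1 - f) ** Q - Q * f * (1 - f) ** (Q - 1)  # ~0.26
p_overlap       = 1 - (1 - f) ** (Q - 1)                         # ~0.61

# Monte Carlo: membership[i, q] = True if neuron i is targeted by stimulus q
membership = rng.random((NE, Q)) < f
counts = membership.sum(axis=1)
mc_nonresponsive = np.mean(counts == 0)
mc_at_least_two  = np.mean(counts >= 2)
# probability that a neuron in cluster 0 also belongs to another cluster
in_c0 = membership[:, 0]
mc_overlap = np.mean(membership[in_c0][:, 1:].any(axis=1))

print(p_nonresponsive, mc_nonresponsive)
print(p_at_least_two,  mc_at_least_two)
print(p_overlap,       mc_overlap)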

Network scaling

For the analysis of Fig 4, N, NE and Q were increased with NE = 0.8N, Q = N/100 and coding level f = 1/Q, resulting in a constant mean cluster size of NQ = fNE = 80 neurons in all cases. For the analysis of S4 Fig, we used f = 1/Q with Q = √(N/10) stimuli, resulting in NQ = fNE = 0.8√(10N) neurons in each cluster (compatible with QNQ = 0.8N = NE excitatory neurons in total). The factors in the second scaling were chosen so as to agree with the ‘basic’ network for N = 1,000 (f = 0.1, Q = 10, NQ = 80).
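For convenience, the two scalings can be summarized in a few lines of code; the helper below (a sketch, using the expressions as written above) returns Q, f and the mean cluster size NQ for a given N.

def scaling_params(N, mode="linear"):
    """Return (Q, f, NQ) for the two scaling procedures.

    'linear': Q proportional to N with fixed cluster size (Fig 4).
    'sqrt'  : Q proportional to sqrt(N), clusters grow as sqrt(N) (S4 Fig).
    Both reduce to the basic network (Q=10, f=0.1, NQ=80) at N=1000.
    """
    NE = 0.8 * N
    if mode == "linear":
        Q = N / 100           # number of clusters grows linearly with N
    elif mode == "sqrt":
        Q = (N / 10) ** 0.5   # number of clusters grows as sqrt(N)
    else:
        raise ValueError(mode)
    f = 1.0 / Q               # coding level
    NQ = f * NE               # mean cluster size
    return Q, f, NQ

for N in (1000, 5000, 20000):
    print(N, scaling_params(N, "linear"), scaling_params(N, "sqrt"))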

Single neuron dynamics

The membrane potential of the i-th EIF neuron followed the dynamical equation

τm dVi/dt = −(Vi − VL) + ΔT e^((Vi − VT)/ΔT) + Ii^stim + Ii^ext + Ii^E + Ii^I,    (5)

where VL = 0 mV is the ‘leak’ potential (practically equal to the resting potential in this model), ΔT = 1 mV is a ‘sharpness’ parameter related to spike width and upstroke velocity, and VT = 20 mV is a reference potential somewhat related to the spike threshold (it would be the spike threshold in the limit ΔT → 0). A spike was said to be emitted when Vi ≥ Vpeak = 25 mV, after which the membrane potential was reset to Vr = 0 mV. The membrane time constant τm was 15 ms for excitatory neurons and 10 ms for inhibitory neurons. The term Ii^stim represents a constant stimulus input current which was present only during stimulus presentation. Ii^ext represents a constant external current, presumably coming from more distant regions or other brain areas. The term Ii^α, where α ∈ {E, I}, is the recurrent synaptic input coming from excitatory and inhibitory neurons of the network, respectively. The synaptic input to neuron i obeyed the equation

τsyn,α dIi^α/dt = −Ii^α + Σ_{j∈α} wij Σ_k δ(t − t_j^k),    (6)

where t_j^k denotes the arrival time of the k-th spike of neuron j, wij is the synaptic weight from neuron j to neuron i, and τsyn,α is the synaptic time constant of the presynaptic inputs, equal to τsyn,E = 3 ms for excitatory inputs and τsyn,I = 2 ms for inhibitory inputs (see Table 1).
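For illustration, a minimal Euler integration of Eq (5) for a single EIF neuron, using the parameters quoted above and an assumed constant drive in place of the stimulus and recurrent inputs, could look as follows (a sketch, not the simulation code used in the paper):

import numpy as np

# EIF parameters from the text (voltages in mV, time in ms)
V_L, V_T, Delta_T = 0.0, 20.0, 1.0
V_peak, V_r, tau_m = 25.0, 0.0, 15.0
dt, T = 0.01, 500.0            # time step and duration (ms)
I_ext = 21.0                   # assumed constant drive (mV), strong enough to fire

V = V_L
spikes = []
for step in range(int(T / dt)):
    dV = (-(V - V_L) + Delta_T * np.exp((V - V_T) / Delta_T) + I_ext) / tau_m
    V += dt * dV
    if V >= V_peak:            # spike emission and reset
        spikes.append(step * dt)
        V = V_r

print(f"{len(spikes)} spikes in {T} ms "
      f"(rate ~ {1000 * len(spikes) / T:.1f} spikes/s)")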

Plasticity rule and training protocol

In our model, only the synapses between excitatory neurons were plastic and they were subject to the following plasticity rule (with wij ≥ 0):

dwij/dt = η sj(t) ( e^(−β wij) [ϕi(t) − θi(t)]+ − [θi(t) − ϕi(t)]+ ),    (7)

where sj(t) = Σ_k δ(t − t_j^k) is the presynaptic spike train and [x]+ ≔ max(x, 0) is the rectified linear function. The dynamical threshold θi(t) evolved according to Eq (2) of the main text:

τθ dθi/dt = tanh( g (ϕi(t) + γ s̄i(t)) + θa − θi(t) ),    (8)

where the notation x̄ indicates a variable obeying the equation τ dx̄/dt = −x̄ + x(t), i.e., x̄ is a low-pass filter of the variable x(t). Specifically, ϕi is the exponential voltage term ΔT e^((Vi − VT)/ΔT) low-pass filtered with time constant τv = 50 ms, while s̄i is the postsynaptic spike train low-pass filtered with time constant τs = 1 s. τθ, θa and γ were constant (see Table 1).

The effective time-scale of the θ dynamics is roughly a linear function of the size of the change in its fixed point (Fig 1A). Specifically, consider the dynamics of Eq (8) when a change of its fixed point by Δ occurs at t = 0 (with θ = 0 for t < 0). The time it takes for θ to reach half of Δ can be computed in closed form, giving a fixed time constant Thalf ≈ τθ ln 2 when Δ ≪ 1 and a linearly growing Thalf ≈ τθ Δ/2 when Δ > 3 (Fig 1A).
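As an illustration of the two limits quoted above, one saturating relaxation that reproduces them exactly is τθ dθ/dt = tanh(Δ − θ); this specific functional form is an assumption made here for illustration only and is not necessarily the exact form of Eq (8):

```latex
% For the illustrative relaxation tau_theta * d(theta)/dt = tanh(Delta - theta):
\[
  T_{\mathrm{half}} \;=\; \tau_\theta \,\ln\frac{\sinh\Delta}{\sinh(\Delta/2)}
  \;\longrightarrow\;
  \begin{cases}
    \tau_\theta \ln 2, & \Delta \ll 1,\\[2pt]
    \tau_\theta\, \Delta/2, & \Delta \gg 1.
  \end{cases}
\]
```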

In the basic network, initial training was performed by randomly presenting one stimulus out of Q every 2000 ms for a duration of 10 minutes (presentation at random times did not affect the results). Each stimulus lasted 500 ms and was presented an average of 30 times during the training period. The training time was scaled linearly in larger networks (i.e., 20 minutes for a network with 2Q clusters, and so on), resulting in the same mean number of presentations per stimulus in all networks. Perturbing stimuli after training (Fig 3), whether recurring or not, were presented for 200 ms at random times with an exponential inter-event distribution with a mean inter-event interval of 10 s.
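The training and perturbation schedule can be generated as in the following sketch (for illustration only; times are in ms and the one-hour post-training window is an arbitrary example):

```python
# Illustrative sketch of the training schedule: one of Q stimuli every 2000 ms for
# 10 minutes (500 ms each), followed by 200 ms perturbations at exponentially
# distributed random times with a 10 s mean inter-event interval.
import numpy as np

rng = np.random.default_rng(2)
Q = 10
train_duration = 10 * 60 * 1000
onsets = np.arange(0, train_duration, 2000)
training = [(t, t + 500, int(rng.integers(Q))) for t in onsets]  # (start, end, stimulus)

ids = np.array([s for _, _, s in training])
print(np.bincount(ids, minlength=Q))          # ~30 presentations per stimulus

post_duration = 60 * 60 * 1000                # one hour of post-training activity (example)
t, perturbations = float(train_duration), []
while t < train_duration + post_duration:
    t += rng.exponential(10_000)              # mean inter-event interval of 10 s
    perturbations.append((t, t + 200, int(rng.integers(Q))))
```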

Quantification of neural and synaptic activity

A cluster is defined as a subpopulation of excitatory neurons targeted by a given stimulus during training. An active cluster (see below) is interpreted as an internal representation of the corresponding stimulus, and therefore as a memory of that stimulus in the absence of external stimulation (active clusters could also represent decision variables and other abstract variables, see e.g. [1]). Cluster activity was measured by the normalized firing rate of its neurons; specifically, for the q-th cluster,

m_q(t) = Σ_i η_{q,i} ν_i(t) / Σ_i ν_i(t),     (9)

where νi(t) is the firing rate of excitatory neuron i at time t, ηq,i = 1 if neuron i is a target of stimulus q (otherwise ηq,i = 0), and the sums run over the excitatory neurons. We call mq the ‘overlap’ between the network activity and stimulus q. By definition, 0 ≤ mq ≤ 1: mq = 1 implies that only neurons responding to stimulus q have non-zero firing rates, while mq = 0 means that the neurons targeted by stimulus q are all silent. A memory for stimulus q was said to be active if mq > 0.5. Since ∑q mq ≈ 1, this guarantees that at any given time there is at most one active memory state (∑q mq is not exactly 1 because neurons can be targeted by multiple stimuli).
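With the overlap written as above, mq(t) and the set of active memories can be computed as in the following sketch (for illustration only):

```python
# Illustrative sketch of the overlap of Eq (9): the fraction of the total excitatory
# firing carried by the neurons targeted by stimulus q.
import numpy as np

def overlaps(nu, eta):
    """nu: firing rates of the N_E excitatory neurons at time t (length N_E).
    eta: (Q x N_E) binary matrix, eta[q, i] = 1 if neuron i is targeted by stimulus q.
    Returns the Q overlaps m_q(t)."""
    total = nu.sum()
    if total == 0:
        return np.zeros(eta.shape[0])
    return eta @ nu / total

def active_memories(nu, eta, threshold=0.5):
    """Indices of active memories (m_q > 0.5); at most one at any time in practice."""
    return np.flatnonzero(overlaps(nu, eta) > threshold)
```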

To quantify the amount of learning, we defined the average synaptic weight among synapses connecting neurons sharing at least one sensory stimulus (dubbed ‘w1’), and the average weight among neurons that did not share any sensory stimuli (‘w0’), according to Eq (3) of the main text, reported here as

w_c = (1/N_c) Σ_{(i,j)∈C_c} w_ij,  c ∈ {0, 1},     (10)

where N_c is the number of synapses of type c and C_c is the set of (i, j) indices of synapses of type c. The mean synaptic change for synapses of type c, Δw_c, was defined analogously, i.e., by replacing wij with Δwij in Eq (10):

Δw_c = (1/N_c) Σ_{(i,j)∈C_c} Δw_ij.     (11)
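The class averages of Eqs (10) and (11) can be computed directly from the weight matrix and the cluster membership, as in the following sketch (for illustration only):

```python
# Illustrative sketch of the class averages w1 and w0: w1 averages over E-to-E synapses
# whose pre- and postsynaptic neurons share at least one stimulus, w0 over the others.
import numpy as np

def w1_w0(W_EE, eta, connected):
    """W_EE: (N_E x N_E) excitatory weight matrix (post x pre).
    eta: (Q x N_E) binary cluster-membership matrix.
    connected: boolean (N_E x N_E) mask of existing E-to-E synapses."""
    shared = (eta.T @ eta) > 0        # shared[i, j]: neurons i and j share a stimulus
    np.fill_diagonal(shared, False)   # exclude self-connections
    w1 = W_EE[connected & shared].mean()
    w0 = W_EE[connected & ~shared].mean()
    return w1, w0
```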

Scaling laws for synaptic decay rate

Here we derive a qualitative estimate of the synaptic rate of change valid for large N under the scaling used in Fig 4, i.e., Q ∼ NE ∼ N and f ∼ 1/Q ∼ 1/N as N → ∞. To derive this result, we make a number of assumptions: (i) the presynaptic spike trains are Poisson processes with fixed firing rate that does not depend on N; (ii) the post-synaptic term of the learning rule, ϕi, remains finite regardless of N; (iii) the initial values (post-training) of the synaptic weights do not depend on N (as evident from Fig 4); (iv) we can temporally separate pre- and postsynaptic terms in the learning rule Eq (7); and (v) only a finite number of clusters is active at any given time.

Assumptions (i)-(iii) are presumably valid when the cluster size is fixed for different N (as is the case in Fig 4), and are empirically corroborated by our simulations. Assumption (iv) depends on changes in ϕi being slow compared to the time scale of single presynaptic spikes (modeled by delta functions). A presynaptic spike ‘samples’ the value of ϕi, and since the spike train and ϕi vary on very different time scales, they can be treated as approximately independent, especially when the neural activity in active clusters is asynchronous. Assumption (v) seems empirically correct based on our simulations (Fig 4), in the sense that at most a few clusters appear to be active simultaneously.
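Assumption (iv) can be illustrated numerically: for a slowly varying postsynaptic factor and an independent Poisson presynaptic spike train, the integral of their product is close to the time average of the postsynaptic factor times the spike count (the signal, rate and duration below are arbitrary illustrative choices):

```python
# Numerical illustration of assumption (iv), with an arbitrary slow postsynaptic factor.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.1, 60_000.0                          # ms; one minute of simulated time
t = np.arange(0.0, T, dt)
phi = 1.0 + np.sin(2 * np.pi * t / 5_000.0)    # slow postsynaptic factor (5 s period)
rate = 10.0 / 1000.0                           # 10 Hz presynaptic rate, in spikes/ms
spikes = rng.random(t.size) < rate * dt

exact = phi[spikes].sum()                      # integral of phi(t) S(t) dt
separated = phi.mean() * spikes.sum()          # <phi> times the presynaptic spike count
print(exact, separated)                        # close when phi is slow and spiking asynchronous
```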

According to the plasticity rule, Eq (7), the rate of change of the synaptic weight wij is proportional to the product of a postsynaptic function of the membrane potential, ϕi, and the presynaptic spike train, which we denote by Sj(t). The total synaptic weight change for synapses of a given class c (see Eq (10)) during an interval Δt is therefore

Δw_c ∝ (1/N_c) Σ_{(i,j)∈C_c} ∫_0^Δt ϕ_i(t) S_j(t) dt,     (12)

where the sum runs over the pairs (i, j) such that the synapse wij is of type c. The postsynaptic term is the sum of two contributions, one coming from active neurons and one coming from inactive neurons, where active neurons are those firing in an active cluster. Of these two contributions, only the one coming from the active neurons has non-zero mean during learning, because occasional pre- and post-synaptic spikes in inactive neurons produce no average synaptic change, as shown in Fig 1C.

We further assume that we can separate the pre- and postsynaptic terms in the integral (on account of the latter being much slower than the former), obtaining

Δw_c ∝ (1/N_c) Σ_{(i,j)∈C_c} ⟨ϕ_i⟩ ∫_0^Δt S_j(t) dt,     (13)

where ⟨ϕi⟩ is the time average of ϕi over the interval. Assuming that only a finite number of clusters is active at any given time, the mean number of active postsynaptic neurons connected by synapses of type c is proportional to fNE ∼ fN (the mean number of neurons in each cluster), and only these neurons contribute appreciably to the sum over i.

Assuming Poisson presynaptic spike trains with average rate νc, defined as the average firing rate of the excitatory presynaptic neurons connected by synapses of type c, each presynaptic term in Eq (13) scales as νc Δt (Eq (14)), while the number of terms in the sum with an active postsynaptic neuron is proportional to pc fNE NE, where pc is the probability that a synapse is of type c. Since Nc ∝ pc NE², putting all together we get Eq (15). From this equation and Eq (11), the rate of change of wc in an interval Δt then scales as Δwc/Δt ∼ f νc (Eq (16)). When f ∼ 1/N (Fig 4), the cluster size is fixed and the firing rates in the active clusters remain nearly constant with N (as empirically found in simulations), leading to Eq (4) of the main text.

We cannot derive a similar conclusion for the scaling of S4 Fig, where f ∼ 1/√N, since in that case the cluster size fNE increases with N. In this case, both the firing rates inside the active clusters and the initial (post-training) synaptic weights change with N in an unknown way.

Mean-field analysis of network dynamics

For the mean field analysis we considered a simplified network model comprising NE excitatory and NI inhibitory exponential integrate-and-fire (EIF) neurons with the same type of random (quenched) connectivity used in simulations. The excitatory population was uniformly partitioned into Q clusters: the synapses connecting neurons of the same cluster are denoted by w+ and those connecting neurons of different clusters are denoted by w−. For simplicity, the clusters were non-overlapping. The w− synapses were empirically small in simulations, and therefore we fixed them to a constant small value and ignored their dynamics. The w+ synapses were assumed to be independent samples from a distribution with given mean μ and variance σ², and were further constrained between 0 and wmax = 4 mV. This choice of maximum synaptic weight was motivated by the observed distribution of synaptic weights shown in Fig 5. For a given mean value μ ≤ wmax, the standard deviation is bounded by σmax(μ) = √(μ(wmax − μ)), and parameter values above the σmax(μ) curve are not allowed (red region in Fig 6); the bound is saturated by the distribution P(w+ = wmax) = μ/wmax, P(w+ = 0) = 1 − μ/wmax.
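The bound can be checked directly (for illustration only):

```python
# Illustrative check of the bound on the standard deviation of w+: for a variable
# confined to [0, w_max] with mean mu, sigma_max(mu) = sqrt(mu * (w_max - mu)),
# attained by the two-point distribution P(w_max) = mu/w_max, P(0) = 1 - mu/w_max.
import numpy as np

w_max = 4.0                                # mV
mu = np.linspace(0.0, w_max, 9)
sigma_max = np.sqrt(mu * (w_max - mu))     # boundary of the allowed region in Fig 6

# variance of the saturating two-point distribution: E[w^2] - mu^2 = mu * w_max - mu^2
var_two_point = w_max**2 * (mu / w_max) - mu**2
print(np.allclose(var_two_point, sigma_max**2))   # True
```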

As customary for spiking networks, we performed the mean field analysis under the diffusion approximation, where the input current is characterized by the infinitesimal mean and variance of the synaptic inputs [36, 92]. The infinitesimal mean and variance of the input to the neurons of the k-th excitatory cluster are given by Eqs (17) and (18), where νE,k denotes the firing rate of the excitatory neurons in the k-th cluster, νI is the firing rate of the inhibitory neurons, and the expectations appearing in those equations are taken over the distribution of the w+ weights. The four terms in the equations represent the contributions from the k-th excitatory cluster, the remaining excitatory clusters, the inhibitory population and the external inputs, respectively. Similarly, the infinitesimal mean and variance of the input to the inhibitory neurons are given by Eqs (19) and (20). The vector of mean firing rates, ν = (νE,1, …, νE,Q, νI), must satisfy the self-consistent equations

να = Fα(μα, sα),     (21)

where Fα(μα, sα) is the transfer function of the EIF neuron [93], with μα and sα determined by the infinitesimal mean and variance of the input to population α. Fα was evaluated numerically by integrating the steady state Fokker-Planck equation describing the EIF model under the diffusion approximation with the algorithm reported in [94].
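The self-consistent solution of Eq (21) can be obtained by damped fixed-point iteration, as in the following sketch (for illustration only; the transfer function below is a placeholder sigmoid, not the EIF transfer function of [93, 94], and the computation of the infinitesimal mean and variance must implement Eqs (17)-(20)):

```python
# Illustrative sketch of the self-consistency step of Eq (21): iterate rates -> inputs
# -> rates to a fixed point.
import numpy as np

def transfer(mu, s):
    """Placeholder rate function (Hz); stands in for the numerical EIF transfer function."""
    return 100.0 / (1.0 + np.exp(-(mu + 0.5 * s - 20.0)))

def solve_rates(mean_and_var, nu0, iters=2000, lr=0.05):
    """Damped fixed-point iteration: nu <- (1 - lr) * nu + lr * F(mu(nu), s(nu)).
    'mean_and_var' must return the infinitesimal mean and variance per population,
    as prescribed by Eqs (17)-(20) for the clustered network at hand."""
    nu = np.array(nu0, dtype=float)
    for _ in range(iters):
        mu, s = mean_and_var(nu)
        nu = (1.0 - lr) * nu + lr * transfer(mu, s)
    return nu
```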

Supporting information

S1 Fig. Results from training the basic network of Fig 2 with β = 0 in Eq (1).

A: Rasterplot of excitatory neurons taken immediately after training (left), 30 minutes after training (middle), and 60 minutes after training (right). Same keys as Fig 2 of the main text. B: Averaged post-training excitatory synaptic weights as a function of time. w1: mean weights across synapses connecting neurons sharing at least one stimulus; w0: mean weights across synapses connecting neurons sharing no stimuli. C: Synaptic matrix of the network at the same times as in A showing the formation of clusters from the block structure of the matrix.

https://doi.org/10.1371/journal.pcbi.1012220.s001

(TIFF)

S2 Fig. Results from training the basic network of Fig 2 with γ = 0 in Eq (2).

Same keys as S1 Fig.

https://doi.org/10.1371/journal.pcbi.1012220.s002

(TIFF)

S3 Fig. Distributions of durations of cluster activations vs. network size for the networks of Fig 4.

A: Distributions of durations right after training (left column), 2 hours post-training (middle) and 4 hours post-training (right) for different network sizes. Means tend to decrease with post-training time and increase with network size, approaching stability 4 hours post-training and for N ≥ 15,000. B: Scatterplots of standard deviation vs. mean of durations for the corresponding networks in A, superimposed on the identity line. Each circle corresponds to a cluster. For the majority of the clusters, the standard deviations are approximately equal to the means, as expected for an exponential distribution.

https://doi.org/10.1371/journal.pcbi.1012220.s003

(TIFF)

S4 Fig. Effect of network size on synaptic dynamics after training.

Same as Fig 4 of the main text but for a different scaling, namely Q ∝ √N (see the main text). A: Time dependence of average synaptic weights w1 and w0 after training for networks of different size N, with NE = 0.8N excitatory neurons. Scaling laws were f = 1/Q with Q ∝ √N, giving NQ = fNE = 0.8N/Q ∝ √N neurons in each cluster (where Q and NQ were rounded to the nearest integer). From left to right, Q = 22, 32, 39 and 45. B: Raster plots of the network’s activity 4 hours after training for the corresponding networks in A. C and D: Measures of the mean synaptic change (cf. Eq (11)) vs. time after training for the different networks in A (note the difference with the corresponding panels of Fig 4 of the main text); observations were taken 0 to 4 hours post-training.

https://doi.org/10.1371/journal.pcbi.1012220.s004

(TIFF)

Acknowledgments

We wish to thank Drs. Arianna Maffei, Yonatan Aljadeff and Paul Adams for a critical reading of the manuscript and very useful discussions.

References

1. La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol. 2019; 58:37–45. pmid:31326722
2. Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, et al. Metastable dynamics of neural circuits and networks. Appl Phys Rev. 2022; 9(1):011313. pmid:35284030
3. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012; 15(11):1498–505. pmid:23001062
4. Mazzucato L, Fontanini A, La Camera G. Dynamics of multistable states during ongoing and evoked cortical activity. J Neurosci. 2015; 35(21):8214–31. pmid:26019337
5. Del Giudice P, Fusi S, Mattia M. Modelling the formation of working memory with networks of integrate-and-fire neurons connected by plastic synapses. J Physiol Paris. 2003; 97(4-6):659–81. pmid:15242673
6. Amit DJ, Mongillo G. Spike-driven synaptic dynamics generating working memory states. Neural Comput. 2003; 15(3):565–96. pmid:12620158
7. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat Commun. 2014; 5:5319. pmid:25395015
8. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun. 2015; 6:6922. pmid:25897632
9. Dayan P, Abbott LF. Theoretical neuroscience: computational and mathematical modeling of neural systems. Cambridge, Mass.: Massachusetts Institute of Technology Press; 2005. Available from: http://www.loc.gov/catdir/toc/fy031/2001044005.html.
10. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge University Press, UK; 2014. Available from: http://neuronaldynamics.epfl.ch.
11. Markram H, Lübke J, Frotscher M, Sakmann B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science. 1997; 275(5297):213–5. pmid:8985014
12. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998; 18(24):10464–72. pmid:9852584
13. Watt AJ, Desai NS. Homeostatic Plasticity and STDP: Keeping a Neuron’s Cool in a Fluctuating World. Front Synaptic Neurosci. 2010; 2:5. pmid:21423491
14. Turrigiano GG. The dialectic of Hebb and homeostasis. Philosophical Transactions of the Royal Society B: Biological Sciences. 2017; 372(1715):20160258. pmid:28093556
15. Fusi S, Annunziato M, Badoni D, Salamon A, Amit DJ. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Comput. 2000; 12(10):2227–58. pmid:11032032
16. Shouval HZ, Bear MF, Cooper LN. A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proc Natl Acad Sci U S A. 2002; 99(16):10831–6. pmid:12136127
17. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci. 2010; 13(3):344–52. pmid:20098420
18. Graupner M, Brunel N. Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location. Proc Natl Acad Sci U S A. 2012; 109(10):3991–6. pmid:22357758
19. Bienenstock EL, Cooper LN, Munro PW. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci. 1982; 2(1):32–48. pmid:7054394
20. Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F. Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Front Comput Neurosci. 2011; 5:47. pmid:22203799
21. Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc Lond B Biol Sci. 2017; 372 (1715). pmid:28093557
22. Weidel P, Duarte R, Morrison A. Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks. Front Comput Neurosci. 2021; 15:543872. pmid:33746728
23. Miyashita Y, Chang HS. Neural correlate of pictorial short-term memory in the primate temporal cortex. Nature. 1988; 331:68–70. pmid:3340148
24. Miyashita Y. Neural correlate of visual associative long-term memory in the primate temporal cortex. Nature. 1988; 335:817–820. pmid:3185711
25. Curti E, Mongillo G, La Camera G, Amit DJ. Mean-Field and capacity in realistic networks of spiking neurons storing sparsely coded random memories. Neural Computation. 2004; 16:2597–2637. pmid:15516275
26. Jezzini A, Mazzucato L, La Camera G, Fontanini A. Processing of hedonic and chemosensory features of taste in medial prefrontal and insular networks. J Neurosci. 2013; 33(48):18966–78. pmid:24285901
27. Fusi S, Miller EK, Rigotti M. Why neurons mix: high dimensionality for higher cognition. Curr Opin Neurobiol. 2016; 37:66–74. pmid:26851755
28. Seidemann E, Meilijson I, Abeles M, Bergman H, Vaadia E. Simultaneously recorded single units in the frontal cortex go through sequences of discrete and stable states in monkeys performing a delayed localization task. J Neurosci. 1996; 16(2):752–768. pmid:8551358
29. Jones LM, Fontanini A, Sadacca BF, Miller P, Katz DB. Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proc Natl Acad Sci U S A. 2007; 104(47):18772–7. pmid:18000059
30. Maboudi K, Ackermann E, de Jong LW, Pfeiffer BE, Foster D, Diba K, et al. Uncovering temporal structure in hippocampal output patterns. Elife. 2018; 7:e34467. pmid:29869611
31. Benozzo D, La Camera G, Genovesio A. Slower prefrontal metastable dynamics during deliberation predicts error trials in a distance discrimination task. Cell Rep. 2021; 35(1):108934. pmid:33826896
32. Lang L, La Camera G, Fontanini A. Temporal progression along discrete coding states during decision-making in the mouse gustatory cortex. PLoS Comput Biol. 2023; 19(2):e1010865. pmid:36749734
33. Deco G, Hugues E. Neural network mechanisms underlying stimulus driven variability reduction. PLoS Comput Biol. 2012; 8(3):e1002395. pmid:22479168
34. Renart A, Moreno-Bote R, Wang XJ, Parga N. Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput. 2007; 19(1):1–46. pmid:17134316
35. Doiron B, Litwin-Kumar A. Balanced neural architecture and the idling brain. Front Comput Neurosci. 2014; 8:56. pmid:24904394
36. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex. 1997; 7(3):237–252. pmid:9143444
37. Mongillo G, Rumpel S, Loewenstein Y. Intrinsic volatility of synaptic connections—a challenge to the synaptic trace theory of memory. Curr Opin Neurobiol. 2017; 46:7–13. pmid:28710971
38. Miller P, Katz DB. Stochastic transitions between neural states in taste processing and decision-making. J Neurosci. 2010; 30(7):2559–70. pmid:20164341
39. Mazzucato L, Fontanini A, La Camera G. Stimuli Reduce the Dimensionality of Cortical Activity. Front Syst Neurosci. 2016; 10:11. pmid:26924968
40. Mazzucato L, La Camera G, Fontanini A. Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli. Nat Neurosci. 2019; 22(5):787–796. pmid:30936557
41. Fusi S. Hebbian spike-driven synaptic plasticity for learning patterns of mean firing rates. Biol Cybern. 2002; 87(5-6):459–70. pmid:12461635
42. Abraham WC, Robins A. Memory retention–the synaptic stability versus plasticity dilemma. Trends Neurosci. 2005; 28(2):73–8. pmid:15667929
43. Mermillod M, Bugaiska A, Bonin P. The stability-plasticity dilemma: investigating the continuum from catastrophic forgetting to age-limited learning effects. Front Psychol. 2013; 4:504. pmid:23935590
44. Benna MK, Fusi S. Computational principles of synaptic memory consolidation. Nat Neurosci. 2016; 19(12):1697–1706. pmid:27694992
45. Abeles M, Bergman H, Gat I, Meilijson I, Seidemann E, Tishby N, et al. Cortical activity flips among quasi-stationary states. Proc Natl Acad Sci U S A. 1995; 92(19):8616–8620. pmid:7567985
46. Ponce-Alvarez A, Nácher V, Luna R, Riehle A, Romo R. Dynamics of cortical neuronal ensembles transit from decision making to storage for later report. J Neurosci. 2012; 32(35):11956–11969. pmid:22933781
47. Recanatesi S, Pereira-Obilinovic U, Murakami M, Mainen Z, Mazzucato L. Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron. 2022; 110(1):139–153.e9. pmid:34717794
48. Brunel N. Is cortical connectivity optimized for storing information? Nat Neurosci. 2016; 19(5):749–755. pmid:27065365
49. Campagnola L, Seeman SC, Chartrand T, Kim L, Hoggarth A, Gamlin C, et al. Local connectivity and synaptic dynamics in mouse and human neocortex. Science. 2022; 375(6585):eabj5861. pmid:35271334
50. Delgado JY, Gómez-González JF, Desai NS. Pyramidal neuron conductance state gates spike-timing-dependent plasticity. J Neurosci. 2010; 30(47):15713–25. pmid:21106811
51. Zenke F, Gerstner W, Ganguli S. The temporal paradox of Hebbian learning and homeostatic plasticity. Curr Opin Neurobiol. 2017; 43:166–176. pmid:28431369
52. Sah P. Ca2+-activated K+ currents in neurons: types, physiological roles and modulation. Trends Neurosci. 1996; 19:150–154. pmid:8658599
53. Andrade R, Foehring RC, Tzingounis AV. The calcium-activated slow AHP: cutting through the Gordian knot. Front Cell Neurosci. 2012; 6:47. pmid:23112761
54. Wang XJ. Calcium coding and adaptive temporal computation in cortical pyramidal neurons. J Neurophysiol. 1998; 79(3):1549–66. pmid:9497431
55. Ermentrout B. Linearization of F-I curves by adaptation. Neural Comput. 1998; 10(7):1721–9. pmid:9744894
56. La Camera G, Rauch A, Senn W, Lüscher HR, Fusi S. Minimal models of adapted neuronal response to in vivo-like input currents. Neural Computation. 2004; 16:2101–2124. pmid:15333209
57. Shepherd GM, Grillner S. Handbook of brain microcircuits. New York: Oxford University Press; 2010.
58. van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Comput. 1998; 10(6):1321–71. pmid:9698348
59. Huang C, Doiron B. Once upon a (slow) time in the land of recurrent neuronal networks…. Curr Opin Neurobiol. 2017; 46:31–38. pmid:28756341
60. Berlemont K, Mongillo G. Glassy phase in dynamically-balanced neuronal networks. bioRxiv. 2022; 2022.03.14.484348.
61. Del Giudice P, Mattia M. Long and short-term synaptic plasticity and the formation of working memory: A case study. Neurocomputing. 2001; 38-40:1175–1180.
62. Zhang LI, Tao HW, Holt CE, Harris WA, Poo M. A critical window for cooperation and competition among developing retinotectal synapses. Nature. 1998; 395(6697):37–44. pmid:9738497
63. Artola A, Bröcher S, Singer W. Different voltage-dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex. Nature. 1990; 347(6288):69–72. pmid:1975639
64. Ngezahayo A, Schachner M, Artola A. Synaptic activity modulates the induction of bidirectional synaptic changes in adult mouse hippocampus. J Neurosci. 2000; 20(7):2451–8. pmid:10729325
65. Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001; 32(6):1149–64. pmid:11754844
66. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011; 334(6062):1569–73. pmid:22075724
67. Wang B, Aljadeff J. Multiplicative Shot-Noise: A New Route to Stability of Plastic Networks. Phys Rev Lett. 2022; 129(6):068101. pmid:36018633
68. Inglebert Y, Aljadeff J, Brunel N, Debanne D. Synaptic plasticity rules with physiological calcium levels. Proc Natl Acad Sci U S A. 2020; 117(52):33639–33648. pmid:33328274
69. Manz P, Memmesheimer RM. Purely STDP-based assembly dynamics: Stability, learning, overlaps, drift and aging. PLoS Comput Biol. 2023; 19(4):e1011006. pmid:37043481
70. Ziv Y, Burns LD, Cocker ED, Hamel EO, Ghosh KK, Kitch LJ, et al. Long-term dynamics of CA1 hippocampal place codes. Nat Neurosci. 2013; 16(3):264–6. pmid:23396101
71. DeNardo LA, Liu CD, Allen WE, Adams EL, Friedmann D, Fu L, et al. Temporal evolution of cortical ensembles promoting remote memory retrieval. Nat Neurosci. 2019; 22(3):460–469. pmid:30692687
72. Rule ME, O’Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol. 2019; 58:141–147. pmid:31569062
73. Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern. 2022; 116(3):253–266. pmid:34993613
74. Intrator N, Cooper LN. Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks. 1992; 5:3–17.
75. Moreno-Bote R, Rinzel J, Rubin N. Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol. 2007; 98:1125–1139. pmid:17615138
76. Giugliano M, La Camera G, Fusi S, Senn W. The response function of cortical neurons: theory and experiment. II. Time-varying and spatially distributed inputs. Biol Cybern. 2008; 99(4-5):279–301.
77. Cao R, Pastukhov A, Mattia M, Braun J. Collective Activity of Many Bistable Assemblies Reproduces Characteristic Dynamics of Multistable Perception. J Neurosci. 2016; 36(26):6957–72. pmid:27358454
78. Jercog D, Roxin A, Barthó P, Luczak A, Compte A, de la Rocha J. UP-DOWN cortical dynamics reflect state transitions in a bistable network. Elife. 2017; 6. pmid:28826485
79. Setareh H, Deger M, Petersen CCH, Gerstner W. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons. Front Comput Neurosci. 2017; 11:52. pmid:28690508
80. Ballintyn B, Shlaer B, Miller P. Spatiotemporal discrimination in attractor networks with short-term synaptic plasticity. J Comput Neurosci. 2019; 46(3):279–297. pmid:31134433
81. Sjöström PJ, Nelson SB. Spike timing, calcium signals and synaptic plasticity. Curr Opin Neurobiol. 2002; 12(3):305–14. pmid:12049938
82. Wang L, Maffei A. Inhibitory plasticity dictates the sign of plasticity at excitatory synapses. J Neurosci. 2014; 34(4):1083–93. pmid:24453301
83. Stuart GJ, Sakmann B. Active propagation of somatic action potentials into neocortical pyramidal cell dendrites. Nature. 1994; 367(6458):69–72. pmid:8107777
84. Nuriya M, Jiang J, Nemet B, Eisenthal KB, Yuste R. Imaging membrane potential in dendritic spines. Proc Natl Acad Sci U S A. 2006; 103(3):786–90. pmid:16407122
85. Spruston N, Schiller Y, Stuart G, Sakmann B. Activity-dependent action potential invasion and calcium influx into hippocampal CA1 dendrites. Science. 1995; 268(5208):297–300. pmid:7716524
86. Waters J, Schaefer A, Sakmann B. Backpropagating action potentials in neurones: measurement, mechanisms and potential functions. Prog Biophys Mol Biol. 2005; 87(1):145–70. pmid:15471594
87. Larkum ME, Zhu JJ, Sakmann B. A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature. 1999; 398(6725):338–41. pmid:10192334
88. Maffei A. The Many Forms and Functions of Long Term Plasticity at GABAergic Synapses. Neural Plasticity. 2011; 2011:1–9. pmid:21789285
89. Vogels TP, Froemke RC, Doyon N, Gilson M, Haas JS, Liu R, et al. Inhibitory synaptic plasticity: spike timing-dependence and putative network function. Front Neural Circuits. 2013; 7:119. pmid:23882186
90. Braitenberg V, Schüz A. Anatomy of the cortex. Berlin: Springer-Verlag; 1991.
91. Perin R, Berger TK, Markram H. A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci U S A. 2011; 108(13):5419–24. pmid:21383177
92. La Camera G. The Mean Field Approach for Populations of Spiking Neurons. In: Giugliano M, Negrello M, Linaro D (eds). Computational Modelling of the Brain. Advances in Experimental Medicine and Biology, vol 1359. Springer, Cham. 2022. p. 125–157. Available from: https://doi.org/10.1007/978-3-030-89439-9_6.
93. Fourcaud-Trocmé N, Hansel H, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J Neurosci. 2003; 23:11628–11640. pmid:14684865
94. Richardson MJE. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Phys Rev E. 2007; 76:021919. pmid:17930077