
Presynaptic inhibition rapidly stabilises recurrent excitation in the face of plasticity

  • Laura Bella Naumann ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    laura-bella.naumann@bccn-berlin.de

    Affiliations Modelling of Cognitive Processes, Berlin Institute of Technology, Berlin, Germany, Bernstein Center for Computational Neuroscience, Berlin, Germany

  • Henning Sprekeler

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Modelling of Cognitive Processes, Berlin Institute of Technology, Berlin, Germany, Bernstein Center for Computational Neuroscience, Berlin, Germany

Abstract

Hebbian plasticity, a mechanism believed to be the substrate of learning and memory, detects and further enhances correlated neural activity. Because this constitutes an unstable positive feedback loop, it requires additional homeostatic control. Computational work suggests that in recurrent networks, the homeostatic mechanisms observed in experiments are too slow to compensate instabilities arising from Hebbian plasticity and need to be complemented by rapid compensatory processes. We suggest presynaptic inhibition as a candidate that rapidly provides stability by compensating recurrent excitation induced by Hebbian changes. Presynaptic inhibition is mediated by presynaptic GABA receptors that effectively and reversibly attenuate transmitter release. Activation of these receptors can be triggered by excess network activity, hence providing a stabilising negative feedback loop that weakens recurrent interactions on sub-second timescales. We study the stabilising effect of presynaptic inhibition in recurrent networks, in which presynaptic inhibition is implemented as a multiplicative reduction of recurrent synaptic weights in response to increasing inhibitory activity. We show that networks with presynaptic inhibition display a gradual increase of firing rates with growing excitatory weights, in contrast to traditional excitatory-inhibitory networks. This alleviates the positive feedback loop between Hebbian plasticity and network activity and thereby allows homeostasis to act on timescales similar to those observed in experiments. Our results generalise to spiking networks with a biophysically more detailed implementation of the presynaptic inhibition mechanism. In conclusion, presynaptic inhibition provides a powerful compensatory mechanism that rapidly reduces effective recurrent interactions and thereby stabilises Hebbian learning.

Author Summary

Synapses between neurons change during learning and memory formation, a process termed synaptic plasticity. Established models of plasticity rely on strengthening synapses of co-active neurons. In recurrent networks, mutually connected neurons tend to be co-active. The emerging positive feedback loop is believed to be counteracted by homeostatic mechanisms that aim to keep neural activity at a given set point. However, theoretical work indicates that experimentally observed forms of homeostasis are too slow to maintain stable network activity. In this article, we suggest that presynaptic inhibition can alleviate this problem. Presynaptic inhibition is an inhibitory mechanism that weakens synapses rather than suppressing neural activity. Using mathematical analyses and computer simulations, we show that presynaptic inhibition can compensate the strengthening of recurrent connections and thus stabilises neural networks subject to synaptic plasticity, even if homeostasis acts on biologically plausible timescales.

Introduction

Synaptic plasticity is widely believed to be the neuronal substrate for learning and memory. Hebbian plasticity [1], in particular, is the long-standing prime candidate for associative learning. It preferentially connects neurons that are co-active, thereby linking neural representations of events that co-occur in time. Notwithstanding its obvious suitability for learning, Hebbian plasticity also has the precarious side-effect of creating a positive feedback loop [2]. Pairs of neurons whose connection was strengthened by Hebbian plasticity tend to be even more co-active, which in turn further strengthens their connection. The standard argument why this vicious circle does not generate runaway activity in the brain is that there is a broad spectrum of homeostatic mechanisms that keep this instability at bay [2–4]. Such mechanisms have been demonstrated in various forms, including homeostatic scaling [5], intrinsic plasticity [6, 7], metaplasticity [8, 9] or plasticity of inhibition [10, 11]. While these mechanisms counteract modifications that take neuronal activity out of a functional regime, they all occur on a relatively long time scale of hours or days. They are therefore poorly suited to protect neural circuits against unwanted consequences of rapid changes in input or connectivity [12, 13]. Zenke et al. [13] recently suggested that the known homeostatic mechanisms must hence be complemented by rapid compensatory processes that render the circuit stable on shorter time scales and thereby give homeostasis the time it needs.

We suggest that presynaptic inhibition—a form of inhibition that has received little attention in computational modelling—could serve as a candidate for such a rapid compensatory process. Presynaptic inhibition is a mechanism that suppresses synaptic transmission by means of presynaptic receptors and can occur through a variety of pathways [14]. At excitatory synapses, activation of presynaptic GABAB receptors causes a reduction of neurotransmitter release [15] by inhibiting voltage-dependent calcium channels [16, 17]. In turn, presynaptic GABAB receptors are activated by interneuron-mediated GABA release [18], potentially by means of GABA spillover [16, 19, 20]. Hence, excess activity in excitatory neurons can recruit inhibitory interneurons, which in turn activate presynaptic inhibition and thereby suppress recurrent excitatory connections. This provides a dynamic and reversible negative feedback loop onto recurrent excitation—a potent source of network instability—on timescales of hundreds of milliseconds [18, 21, 22]. GABAB-mediated presynaptic inhibition has been observed in a range of brain areas of different species (see for example [15, 18, 23–25]), but the functional implications, in particular in recurrent circuits, remain elusive.

Here, we use simulations of rate-based networks as well as mathematical analyses to study the compensatory properties of presynaptic inhibition in recurrent circuits. We show that presynaptic inhibition ensures a gradual increase of firing rates for increases in recurrent excitation, in contrast to traditional excitatory-inhibitory networks [26]. This stabilises neural activity in networks subject to Hebbian plasticity even if homeostasis is slow. We find that the stabilising properties are robust to network parameters and details of the mechanism. Finally, we show that these results generalise to networks of spiking neurons.

Results

To study the compensatory effects of presynaptic inhibition, we simulated networks of excitatory and inhibitory rate-based neurons and analysed the effect of changes in excitatory recurrent weights on network activity. We start by systematically increasing recurrent excitation to illustrate the stabilising properties of presynaptic inhibition. Then, we study the interaction of presynaptic inhibition with the Bienenstock-Cooper-Munro (BCM) rule, a rate-based Hebbian plasticity rule that comprises a homeostatic control mechanism. We vary the timescale of homeostatic control and contrast the behaviour of networks with and without presynaptic inhibition.

Presynaptic inhibition in a recurrent network model

Our model of presynaptic inhibition is based on the following biophysical mechanism: GABA spillover from inhibitory synapses activates presynaptic GABAB receptors at excitatory synapses. This suppresses voltage-dependent calcium (Ca2+) channels, thereby inhibiting neurotransmitter release. Thus, activation of inhibitory neurons can lead to presynaptic inhibition of excitatory synaptic transmission [18] (see Fig 1A for a schematic illustration).

Fig 1. Presynaptic inhibition in a recurrent network.

A. Presynaptic inhibition mechanism: (1) Network activity drives inhibitory interneurons. (2) GABA released by interneurons can spill over to nearby excitatory synapses, where it binds to presynaptic GABAB receptors that (3) in turn inhibit voltage-sensitive calcium (Ca2+) channels. (4) Reduced Ca2+ influx decreases the release of the neurotransmitter glutamate and (5) therefore the amplitude of EPSPs. B. Simplified rate-based model consisting of an excitatory (red) and an inhibitory (blue) population and a presynaptic inhibition mechanism (green). C. Release factor as a linear function of inhibitory neuron activity for different slopes β = 0.01, 0.03, …, 0.09. D. Population rate as a function of excitatory recurrence (wEE) without presynaptic inhibition for increasing strength of postsynaptic inhibition (weight wEI = 1, 1.5, …, 3). E. Same as D but for fixed postsynaptic inhibition and increasing strength of presynaptic inhibition (slope of transfer function β, see C). Markers show simulation results; solid lines show analytically determined steady-state rates (see Methods and models).

https://doi.org/10.1371/journal.pcbi.1008118.g001

In the rate network, we use inhibitory activity as a proxy for GABA spillover, which in turn modulates excitatory synaptic transmission (Fig 1B). We model this presynaptic modulation as a multiplicative “release” factor that scales excitatory synaptic weights and decreases with increasing inhibitory activity. For the sake of analytical tractability, the release factor decreases linearly with the inhibitory firing rate (Fig 1C). Therefore, the sensitivity of presynaptic inhibition to the inhibitory firing rate (i.e. the strength of presynaptic inhibition) is determined by the slope of this linear transfer function. Here, we assume that the effect of presynaptic inhibition is homogeneous and diffuse enough that all inhibitory neurons contribute to the modulation of a global release factor. This global factor then multiplicatively scales all excitatory recurrent weights in the network. Note that the presented results remain unchanged for model variants in which presynaptic inhibition acts more locally, because we exclusively study random networks without spatial structure.
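To make this concrete, the following Python sketch shows one possible forward-Euler implementation of such a global release factor; the parameter values, variable names and the use of the mean inhibitory rate are illustrative choices of ours and are not taken from the simulation code.

import numpy as np
# Illustrative sketch of a global release factor (not the authors' code).
# p scales all recurrent excitatory weights and relaxes towards a linear,
# monotonically decreasing function of the inhibitory activity.
dt, tau_p, beta, w_EI = 1e-3, 0.5, 0.05, 1.0  # time step (s), time constant (s), slope, inh. weight
def update_release_factor(p, r_inh):
    """One Euler step of tau_p * dp/dt = g(inhibitory drive) - p, rectified at zero."""
    g = 1.0 - beta * w_EI * np.mean(r_inh)    # transfer function g(r) = 1 - beta * r
    p = p + dt / tau_p * (g - p)
    return max(p, 0.0)
def effective_weights(W_EE, p):
    """The release factor scales every excitatory recurrent weight multiplicatively."""
    return p * W_EE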

Presynaptic inhibition compensates for recurrent excitation

Hebbian plasticity adjusts recurrent synaptic weights according to correlations in neural activity. At the same time, changing recurrent weights affects the activity of interconnected neurons, forming a potentially destabilising positive feedback loop. Thus, how the overall firing rate increases with changes in recurrent excitatory weights is an indicator of stability in the presence of Hebbian plasticity. We therefore first study the effect of ad-hoc homogeneous increases in excitatory recurrence.

In a network without presynaptic inhibition, minor changes in overall excitatory recurrence cause major increases in the mean population firing rate (Fig 1D). Above a critical value, the recurrent excitation drives the network to pathologically high activity states. Increasing the strength of postsynaptic inhibition does not eliminate the supralinear dependence, but merely shifts it to higher values of the excitatory recurrence. With presynaptic inhibition in place, firing rates have a qualitatively different dependence on recurrence. Network activity gradually increases with excitatory weights for arbitrarily strong recurrent excitation (Fig 1E). We confirm these results in mathematical analyses and show that the mean population rate saturates at a finite value as recurrent weights increase (Methods and models). How much the rate increases with recurrence and where it saturates depends on the strength of presynaptic inhibition (Fig 1E), which is determined by the transfer function’s slope (Fig 1C).

In summary, while conventional networks of excitatory and inhibitory populations are prone to instabilities triggered by increases in excitatory recurrence, adding presynaptic inhibition allows for gradual increases of neural activity with growing excitation. This makes presynaptic inhibition a candidate mechanism to break the positive feedback loop generated by Hebbian plasticity and recurrent excitation.

Presynaptic inhibition prevents runaway excitation in the face of Hebbian plasticity

Does presynaptic inhibition also stabilise Hebbian plasticity at recurrent excitatory synapses? Recent work by Zenke et al. [12] has revealed that stability in spiking networks with Hebbian plasticity on excitatory synapses requires the timescale of homeostasis to be substantially shorter than that of plasticity. To show that presynaptic inhibition increases the range of stability, we first qualitatively reproduce these results in a recurrent rate network with plastic excitatory synapses. To this end, we use the BCM rule [8], which has a correspondence to the triplet rule used in the work of Zenke et al. [27, 28]. Homeostasis is implemented by a sliding threshold in the BCM rule that aims at keeping a running average of single neuron firing rates at a given target rate [8] (Fig 2A, left). The time constant of this running average determines the timescale of homeostasis and is therefore referred to as the homeostatic time constant in the following [12].
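For illustration, a minimal Python sketch of such a BCM-like update with a sliding threshold could look as follows; the discretisation and normalisation are ours, and the exact rule used in the simulations is given in Methods and models.

import numpy as np
# Illustrative BCM-like update with a homeostatic sliding threshold (not the authors' code).
def bcm_step(W, r, r_avg, dt, eta=1e-3, tau_c=5.0, kappa=5.0):
    """Update recurrent E-to-E weights W (post x pre) and the slow rate estimate r_avg."""
    theta = r_avg**2 / kappa                  # sliding threshold from the low-pass filtered rate
    dW = eta * np.outer(r * (r - theta), r)   # Hebbian term: post * (post - threshold) * pre
    W = np.clip(W + dt * dW, 0.0, None)       # keep weights non-negative
    r_avg = r_avg + dt / tau_c * (r - r_avg)  # tau_c sets the timescale of homeostasis
    return W, r_avg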

Fig 2. Presynaptic inhibition increases critical homeostatic time constant and maintains population firing rate at low rates.

A. Circuit diagrams for networks without (top) and with presynaptic inhibition (bottom). Excitatory recurrent synapses are subject to the BCM rule with sliding threshold (illustration on the left). B. Temporal evolution of the average firing rate for different homeostatic time constants τc for networks without (top) and with presynaptic inhibition (bottom). Plasticity is switched on after 6 min of simulation time (indicated by asterisk). C. Mean final population rate as a function of the homeostatic time constant. The vertical dashed line marks the critical homeostatic timescale from theory with presynaptic inhibition (see F). D. Final firing rate distribution for example runs with presynaptic inhibition for a faster (top) and slower (bottom) homeostatic time constant. The average firing rate is indicated by a vertical, dotted line. E. Single neuron firing rates over time at the end of the simulation for τc = 70s. Neurons are sorted according to their activity at the beginning of this period. The target rate (5 Hz) is indicated in yellow. F. Critical homeostatic timescale as a function of presynaptic inhibition strength β. For simulations, vertical bars indicate the range between the largest stable and the smallest unstable timescale tested. β = 0 corresponds to networks without presynaptic inhibition.

https://doi.org/10.1371/journal.pcbi.1008118.g002

In accordance with previous work, we find that as long as homeostasis is fast enough (on the order of a few seconds), the BCM rule keeps network and single neuron activity at the target rate (Fig 2B, top). However, above a critical homeostatic time constant, the sliding threshold is not able to control the positive feedback loop of recurrent excitation and Hebbian plasticity.

Including presynaptic inhibition in the network (Fig 2A, bottom) alleviates this instability. We observe two effects: First, the critical homeostatic time constant, below which all neurons remain at the target rate, is increased more than tenfold (Fig 2B, bottom). Second, presynaptic inhibition prevents a pathological runaway of activity even beyond this critical homeostatic timescale (Fig 2C).

Above the respective critical time constant, the network behaviour is remarkably different between networks with and without presynaptic inhibition. Without it, the firing rate jumps from the target to the maximum rate even for small homeostatic time constants (Fig 2C). With presynaptic inhibition, the mean population rate remains low (Fig 2B, bottom and Fig 2C), although single neuron firing rates show a broad distribution of values (Fig 2D) and large temporal fluctuations (see Fig 2E, and S2A–S2C Fig). The low mean firing rate on the population level is the consequence of presynaptic inhibition, which limits the population rate by suppressing recurrent interactions. The broad distribution of firing rates is a result of the BCM rule: neurons firing above the target rate will further increase their activity due to the Hebbian nature of the BCM rule, and the converse applies to neurons firing below the target rate. The slow sliding threshold provides a delayed negative feedback loop to this diversification, leading to a perpetual—albeit non-periodic—turnover of those neurons firing at high rates (Fig 2E) that depends on the homeostatic time constant (S2 Fig). It is conceivable that this turnover is chaotic, as observed in a feed-forward model of BCM [29]. Since chaos on such a slow timescale may interfere with network functionality, we here concentrate on the dynamic regime below the critical time constant.

To understand what determines the increase in the critical time constant with presynaptic inhibition, we used an approach similar to previous work [12] (Methods and models) to mathematically derive the critical homeostatic time constant with presynaptic inhibition in place (cf. Eq (34)).
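Under our reading of that derivation, which assumes the standard BCM normalisation with a sliding threshold θ = r̄²/κ, the critical timescale takes a form similar to

\tau_c^{\mathrm{crit}} \;\approx\; \frac{\tau_w}{\eta}\,\frac{I - g'(w_I\kappa)\, w\, \kappa^2}{g(w_I\kappa)\,\kappa^4},

where g(wIκ) is the release factor at the target rate, g′ its (negative) change with firing rate, w the recurrent excitatory weight, wI the effective inhibitory feedback weight and I the background input; the exact prefactors depend on the normalisation of the plasticity rule and should be checked against Eq (34) in Methods and models.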

This equation shows that the range of stability increases with

  • lower release factor at the target rate,
  • stronger excitatory recurrence,
  • a steeper decrease in release factor in response to overall GABA spillover,
  • a stronger dependence of GABA spillover on excitatory firing rate and
  • the amount of background input to the network in relation to the target rate.

While most terms in the formulation above have a clear interpretation, it might be counter-intuitive that excitatory recurrence increases the critical timescale. This can be understood in two ways: First, stronger recurrent weights diminish the relative magnitude of weight increases. As the weights are scaled by the release factor, this effectively decelerates Hebbian plasticity. Second, this intuitive version of the critical timescale is not an explicit equation (cf. Eqs (33) and (34)). Because the firing of the network is homeostatically controlled, the learned recurrent weight depends on the background input and can thus not be considered a truly free model parameter.

The mathematical analysis accurately predicts the critical homeostatic timescale above which firing rates in the network deviate from the target, as long as presynaptic inhibition is not too strong (Fig 2F). As the mechanism is neuron-unspecific, stronger presynaptic inhibition introduces more competition between neurons in addition to the BCM rule, which can further amplify existing heterogeneities in the network. These heterogeneities are not captured by the mean population model used in the mathematical analysis and thus the critical timescale in the full network can deviate from the theoretical prediction. If we remove the noise on initial excitatory weights—a prominent source of heterogeneity in these networks—the simulation results match the theory regardless of presynaptic inhibition strength (S3 Fig). Here we focus on the parameter regime in which the theoretical predictions for the critical homeostatic timescale hold despite heterogeneities in the network.

Note that homeostatic timescales need to be considered relative to the timescales of plasticity. To limit simulation time, the timescale of plasticity was about one minute in the simulations—shorter than expected in biology. Zenke et al. [12] estimated the timescale of plasticity based on slice experiments and obtained an estimate of about 50 minutes. In slice experiments, it is commonly observed that after the induction protocol, synaptic potentials only gradually increase within tens of minutes [30]. With these relative timescales in mind, a critical homeostatic time constant similar to the simulated timescale of plasticity (about 60 seconds) corresponds to almost an hour of biological time. The increase mediated by presynaptic inhibition may hence render the experimentally observed forms of homeostasis functional.

Presynaptic inhibition reduces sensitivity to strength of background input

In the model, the BCM rule raises or lowers excitatory recurrent weights to achieve a given target rate. As the background input also contributes to single neuron firing rates, its strength affects the magnitude of recurrent weights necessary to reach the target rate. In particular, providing weaker background input renders the network effectively more recurrent. Because the recurrent weights are critical for the stability of the network, we conjecture that the critical homeostatic timescale is strongly affected by the level of background input. Because presynaptic inhibition provides a negative feedback on the recurrent weights, we expect it to reduce the influence of the background input on network stability.

Indeed, we find that without presynaptic inhibition the range of stability of the network increases strongly with the external input (Fig 3A, top). For the relatively low background input we used so far, homeostasis needs to be at least ten times faster than the effective timescale of plasticity (5 compared to 60 seconds, Fig 3A, top left). Increasing the background input substantially extends the range of stability in networks without presynaptic inhibition (Fig 3A, top middle and right). If the input alone is strong enough to bring the network close to the target rate, homeostasis on the order of the plasticity timescale is sufficient to maintain stable firing (see also Fig 3D).

Fig 3. Sensitivity of networks with and without presynaptic mechanism to levels of external input.

A. Temporal evolution of the average firing rate for different homeostatic time constants τc for networks without (top) or with presynaptic inhibition (bottom) and different levels of background input strength (low—0.5 Hz, medium—1 Hz and high—2.5 Hz). Strength of presynaptic inhibition is β = 0.05. B. Target firing rate in Hz as a function of external input strength compared to mean recurrent excitatory weight in a network without plasticity and presynaptic inhibition. C. Mean population rate as a function of homeostatic time constant in networks with (blue-green) and without (grey) presynaptic inhibition for different strengths of background input. Markers and their colours correspond to A indicating the parametrisation (low, medium or high, respectively). D. Critical homeostatic timescale derived analytically as a function of presynaptic inhibition strength for increasing strengths of background input (I = 0.5, 1, 2, 3, 5).

https://doi.org/10.1371/journal.pcbi.1008118.g003

The (in)stability of networks without presynaptic inhibition can be understood by revisiting the dependence of output firing rate on recurrent excitatory weights (cf. Fig 1D). For lower background input, the mean recurrent weight has to be increased closer to the point of instability for the neurons in the population to reach a given target rate (Fig 3B). In consequence, the network is more prone to destabilisation by the Hebbian contribution of the BCM rule, because small changes in excitatory recurrent weights can push the network over this stability threshold if homeostatic control is too slow.

Presynaptic inhibition removes the sudden point of instability and introduces a gradual increase of population activity with increasing recurrent excitation (cf. Fig 1E). In consequence, the range of stability in networks with presynaptic inhibition is less dependent on the strength of the background input relative to the target rate (Fig 3A, bottom). Although higher inputs increase the critical homeostatic timescale, this increase is less prominent than in networks without presynaptic inhibition (Fig 3C). This observation is in line with the analytically determined critical homeostatic timescale: the stronger the presynaptic inhibition, the smaller the difference in critical timescales when varying the strength of background input (Fig 3D). In addition, when homeostasis is slower than the critical timescale, the mean population rate decreases with higher inputs (Fig 3C). As higher inputs require weaker recurrence (Fig 3B), plasticity-induced fluctuations of single neuron firing rates are smaller, leading to a lower mean population rate.

In summary, the critical homeostatic time constant and, accordingly, network stability crucially depend on the relation of background input to the target rate. Weak inputs require compensation by strong recurrent excitation, bringing network dynamics closer to instability. As networks with presynaptic inhibition do not suffer from such instabilities, they have a weaker dependence on the level of background input or excitatory recurrence.

Presynaptic inhibition in a spiking network confirms results from rate-based model

So far we have considered a strongly simplified linear rate-based neuron model. Moreover, we incorporated GABA spillover indirectly through the inhibitory firing rate. In the following we validate our results in a more complex, spiking network model. To this end, we simulate networks of randomly connected leaky integrate-and-fire neurons with current-based synapses. We include GABA spillover more explicitly: spikes from a given inhibitory neuron cause a local increase in GABA concentration around those excitatory neurons it targets. The accumulated GABA then inhibits afferent excitatory transmission, modelled again as a multiplicative factor onto recurrent synaptic weights (Fig 4A). Thus, higher GABA concentrations lead to a decrease in the amplitude of excitatory postsynaptic potentials (EPSPs) (Fig 4B). To simplify a comparison with the rate-based analysis, we rescale the GABA concentration to values comparable to inhibitory or excitatory firing rates in the system (see Methods and models) and use the same linear transfer function linking GABA concentration to release factor as before (Fig 4C).
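As an illustration of this mechanism, a minimal Python sketch of the per-neuron GABA accumulation and release factor could look as follows; increments, decay and parameter values are placeholders of ours and are not taken from the simulation code.

import numpy as np
# Illustrative sketch of presynaptic inhibition via local GABA accumulation (not the authors' code).
dt, tau_p, beta, A_GABA = 1e-4, 0.3, 0.1, 0.05  # step (s), decay (s), slope, GABA increment per spike
def gaba_step(C, inh_spikes, C_in):
    """C: GABA level at each excitatory neuron; inh_spikes: 0/1 vector of inhibitory spikes in
    this step; C_in[i, j] = 1 if inhibitory neuron j projects to excitatory neuron i."""
    C = C + A_GABA * (C_in @ inh_spikes)        # every inhibitory spike adds a fixed amount of GABA
    C = C - dt / tau_p * C                      # slow decay on the presynaptic-inhibition timescale
    p = np.clip(1.0 - beta * C, 0.0, 1.0)       # linear mapping from GABA level to release factor
    return C, p
# The excitatory input to neuron i is then scaled by p[i] before entering its membrane current.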

Fig 4. Presynaptic inhibition in a spiking network.

A. Circuit diagram of the spiking network model consisting of an excitatory and an inhibitory population of neurons as well as accumulation of local GABA concentration that triggers presynaptic inhibition. B. EPSP amplitude in the presence of presynaptic inhibition (β = 0.1) for low and high GABA concentration (1 and 5, respectively). C. Release factor as a function of GABA concentration for different examples of linear transfer functions. D. Population firing rate as a function of mean excitatory recurrent weight for increasing strengths of postsynaptic inhibition wEI (top) or presynaptic inhibition β (bottom). E. Temporal evolution of the population firing rate for different homeostatic time constants τc in networks without (top) and with presynaptic inhibition (bottom). F. Final firing rate distribution for two example runs: τc = 60s (pink) and τc = 120s (dark blue-grey), without (top) and with presynaptic inhibition (bottom). The average firing rate is indicated by a triangle of the respective colour. G. Mean final population rate as a function of the homeostatic time constant. H. Raster plot of 1000 random neurons in the network, sorted by activity at time t0 (left) and shown with the same ordering 20 minutes of simulation time later.

https://doi.org/10.1371/journal.pcbi.1008118.g004

First we investigate how the mean firing rate depends on the overall excitatory recurrence. As in the rate model, networks without presynaptic inhibition exhibit a high sensitivity to the strength of recurrent excitatory weights (Fig 4D, top) [26, 31]. For weak recurrent excitation the network fires at low rates, whereas for stronger excitation it destabilises and fires at a pathologically high firing rate that is only limited by the refractory period. Stronger postsynaptic inhibition merely shifts the point of destabilisation to higher values of excitatory recurrence. In contrast, networks with presynaptic inhibition allow a gradual increase of the mean population firing rate with growing excitatory recurrence (Fig 4D, bottom). The dependence of the population firing rate on excitatory recurrent weight scales multiplicatively with the strength of presynaptic inhibition (Fig 4C). The qualitatively different behaviour in response to changes in excitatory recurrence for networks with and without presynaptic inhibition is in good agreement with the results obtained in the rate network (cf. Fig 1).

To verify whether presynaptic inhibition also stabilises spiking networks subject to Hebbian plasticity, we incorporate the triplet rule with homeostatically controlled, activity-dependent long-term depression [27] (see Methods and models). The timescale of homeostasis is related to the time constant of the neuron-specific rate detector that scales the amount of long-term depression [12]. As previous work by Zenke et al. [12] has shown, stability in spiking networks without presynaptic inhibition critically depends on this homeostatic timescale. While fast homeostasis keeps the population firing rate close to the target, slow homeostasis above a critical threshold is unable to control the positive feedback loop of recurrent excitation and Hebbian plasticity, such that the network exhibits runaway activity (Fig 4E and 4F, top). Note that we observe a higher critical homeostatic time constant than Zenke et al. [12]. This is a consequence of a higher background input in our spiking network (cf. Fig 3) and may also depend on other differences in model choice, such as current-based rather than conductance-based synapses. Including presynaptic inhibition in the spiking network produces qualitatively similar results as in the rate model: presynaptic inhibition increases the critical homeostatic time constant below which network activity is homogeneous and at the target rate (Fig 4E, bottom). It also maintains low population firing rates beyond this critical timescale (Fig 4G), although the network shows a broader distribution of single neuron firing rates (Fig 4F). For slow homeostatic feedback control, we observe a similar turnover as in the rate network: the set of neurons that are most active changes over the duration of the simulation (Fig 4H).

We conclude that the results we obtained for rate models can be qualitatively reproduced in networks of spiking neurons that include presynaptic inhibition based on local GABA spillover. In consequence, the features of presynaptic inhibition that prevent runaway excitation in the presence of Hebbian plasticity generalise across network models on different levels of abstraction.

Discussion

In the present work we have demonstrated that presynaptic inhibition can robustly alleviate destabilisation of network activity caused by changes in recurrent excitation. In our model of presynaptic inhibition, synaptic transmission at excitatory synapses is inhibited presynaptically by GABA spillover from inhibitory synapses. Increases in recurrent excitation inevitably elevate inhibitory population activity and thus GABA release, forming a negative feedback loop on effective recurrent excitatory weights. This negative feedback loop serves as a rapid compensatory process that attenuates synaptic transmission if network activity is too high, a mechanism that differs fundamentally from classical forms of postsynaptic inhibition. While increases in network activity also recruit strong feedback inhibition, additive negative feedback can only compensate a limited amount of recurrent excitation. In contrast, by directly targeting recurrent excitation itself, presynaptic inhibition provides powerful multiplicative control. We show that it therefore prevents runaway excitation arising from Hebbian plasticity when homeostatic control mechanisms alone are not fast enough. By limiting the overall population activity, presynaptic inhibition allows for homeostasis on biologically realistic timescales without compromising network stability.

Multiplicative gain control

We model presynaptic inhibition of synaptic transmission as a multiplicative factor onto excitatory recurrent synapses, which decreases with (inhibitory) network activity. Thus, presynaptic inhibition can serve as a form of multiplicative inhibition, i.e., a gain control. This was previously acknowledged by Fink et al. [24], who demonstrated in a simple feedback model that presynaptic inhibition at the neuro-muscular junction can control sensory gain. Yet, multiplicative control of neural responses has typically been attributed to different processes such as shunting inhibition [32], short-term depression [33] or synaptic scaling [34].

Whether shunting inhibition—an increase in membrane conductance that short-circuits excitatory currents—can provide multiplicative inhibition [35] has been debated: experimental and theoretical studies have shown that the overall effect of shunting inhibition on neural responses is in fact not multiplicative but subtractive [36–38], unless specific assumptions such as dendritic saturation or certain noise levels are met [37, 39].

Similar to the presynaptic inhibition mechanism studied here, short-term plasticity alters synaptic transmission by affecting transmitter release [40, 41]. Indeed, it has been shown that short-term depression can provide dynamic gain control [33, 42]. A defining difference between presynaptic inhibition and short-term plasticity is that presynaptic inhibition depends on surrounding network activity, whereas short-term plasticity is driven by presynaptic activity in the synapse in question. Hence, presynaptic inhibition scales recurrent interactions globally and preserves the overall network structure, whereas short-term depression reduces the impact of highly active neurons on the network. While the spatial scale plays a minor role in the random homogeneous networks we considered here, we expect to observe marked differences between presynaptic inhibition and short-term plasticity in any network in which heterogeneities between neurons or synapses should be preserved.

Another mechanism that is considered to exert multiplicative control is synaptic scaling [5, 34, 43]. Similar to presynaptic inhibition, synaptic scaling modulates synaptic efficacies in response to changes in neural activity. However, there are a few key differences: First, like other forms of homeostasis, synaptic scaling acts on a timescale of hours [5, 13], whereas presynaptic inhibition acts on a much shorter timescale and can therefore adapt to fluctuations in network activity. Second, synaptic scaling operates on the spatial scale of single neurons [44], thus affecting the relative contribution of different neurons to network activity—albeit differently from short-term depression in that input and not output synapses are modulated. Presynaptic inhibition, in contrast, operates on the network level. Finally, synaptic scaling is homeostatic in that it scales synapses to maintain a certain target activity [3, 5]. Such a “set point” does not exist in our model of presynaptic inhibition.

We conclude that although shunting inhibition as well as short-term depression and synaptic scaling all have been linked to multiplicative gain control, their network effects are fundamentally different from presynaptic inhibition.

Role of presynaptic inhibition in learning

Here, we have focused on the potential of presynaptic inhibition to stabilise Hebbian plasticity at recurrent excitatory synapses. It conserves the underlying network connectivity and might therefore set the stage for stable learning. GABAB receptors are highly expressed in several regions traditionally linked to learning and memory [45, 46] and presynaptic inhibition is prominent in the hippocampus (see for example [15, 17, 47, 48]). It is therefore conceivable that presynaptic inhibition plays other roles in learning and memory beyond the stabilisation of ongoing plasticity demonstrated here. For example, experiments in hippocampus suggest that local, rapid and activity-dependent regulation of release probability serves to maintain synapses in an operational range, ensuring that synapses are optimally placed to undergo changes induced by learning mechanisms [49]. In addition, many models of learning rely on a coincidence of pre- and postsynaptic activity (i.e., Hebbian plasticity). By regulating information flow, presynaptic inhibition could thus act as a gating mechanism for the induction of plasticity [18, 50]. Indeed, defects in GABAB receptor expression have been shown to compromise long-term plasticity, leading to impairments in hippocampus-dependent memory [51].

Gating of long-term plasticity by presynaptic inhibition was also observed in the amygdala, where loss of presynaptic GABAB receptors led to a generalisation of conditioned fear to non-conditioned stimuli [52]. In this context it was suggested that presynaptic inhibition sets the balance between associative and non-associative long-term potentiation. In cerebellum, stimulation with physiological activity patterns leads to changes in presynaptic GABAB receptor expression, which was suggested to complement other forms of plasticity: A reduction in presynaptic inhibition increases synaptic transmission and could thus enhance long-term plasticity [53].

In summary, a range of experiments indicate that beyond providing stability during ongoing plasticity, presynaptic inhibition could serve as an activity-dependent gating mechanism for long-term plasticity.

Phenomenology and limitations

Like all computational models, the present one contains simplifying design choices. Because our goal was to investigate consequences of presynaptic inhibition on the network level, we adopted a phenomenological description of the mechanism. Although we motivated the model by a specific pathway that suppresses presynaptic calcium channels, activation of presynaptic GABAB receptors can impair the release machinery in other ways, e.g. by activation of potassium channels [17]. Furthermore, presynaptic terminals also express GABAA receptors that have been implicated in presynaptic inhibitory effects [54, 55]. Such alternative pathways also operate on sub-second timescales and should therefore not influence the validity of the mechanisms we suggest. Furthermore, we show that the compensation of recurrent excitation does not require presynaptic inhibition to exclusively target recurrent excitatory connections (S1 Fig), supporting the generalisability of the mechanism.

Urban-Ciecko et al. have attributed presynaptic inhibition to somatostatin-positive (SOM) but not parvalbumin-positive (PV) interneurons [18]. Modelling circuits with different interneuron types, in which presynaptic inhibition is mediated by SOM interneurons, is, however, beyond the scope of this work.

For analytical tractability we mostly used linear functions for presynaptic inhibition, but show that the stabilising effect is robust to the specific choice of transfer function (S4 Fig). Another simplifying assumption in our work is that we consider synaptic transmission to be deterministic, such that presynaptic inhibition merely affects a synaptic release factor. A natural extension would be to include a probabilistic release mechanism and a dynamic model of short-term plasticity. However, we expect that our results will still hold qualitatively, because on the slow timescales of synaptic plasticity, short-term plasticity will act mainly through changes in steady state [56], rather than through a short-term redistribution of synaptic release [57].

Finally, we concentrated on recurrent synapses. The problem of Hebbian instability is particularly drastic in this setting, because not only post- but also presynaptic activity is affected by weight changes. For plasticity in feedforward synapses this is not the case, and the dynamics are therefore less explosive. We therefore suspect that the suggested stabilisation mechanism would not suffer from additional plasticity in the input synapses.

Spatial specificity of presynaptic inhibition

It is unclear how specific the mechanism of presynaptic inhibition is. A critical determinant of this specificity is the source of GABA at excitatory synapses. One possibility is the presence of axo-axonal synapses that release GABA in very close proximity to excitatory synapses and thereby mediate a potentially highly specific form of presynaptic inhibition [58]. For example, presynaptic inhibition at the neuro-muscular junction ensures smooth and stable movement patterns through axo-axonal synapses [24]. However, in most brain areas axo-axonal synapses are not numerous enough to account for the full range of presynaptic inhibition effects [18]. Presynaptic inhibition also occurs in the absence of axo-axonal synapses, potentially through GABA that diffuses from nearby inhibitory synapses [19, 59]. In fact, a specific class of interneurons—neurogliaform cells—has been found to release GABA in a target-independent way, generating non-specific forms of inhibitory control [60]. Regardless of the exact source of GABA spillover, the spatial specificity of presynaptic inhibition depends on the diffusion coefficient of GABA as well as on the input specificity of the respective cells.

We were interested in the interactions of presynaptic inhibition with plasticity in recurrent connections. We therefore considered a small network in which synaptic connections can essentially be considered random. Assuming that GABA release and spillover provide a spatially unspecific signal on this scale, we modelled presynaptic inhibition as a global effect. More specifically, activity of all inhibitory neurons modulates a single release factor that acts at every recurrent excitatory synapse. This leads to the control of mean population activity by presynaptic inhibition, whereas single neurons can exhibit heterogeneous firing rates. However, we do not expect that in the brain, presynaptic inhibition can act sufficiently locally to allow highly specific control, e.g., on a single neuron level. The reason is that the mechanism does not provide direct feedback but acts through a population of inhibitory neurons, thus losing spatial specificity.

Considering morphologically more complex neurons might give insights into the computational properties of presynaptic inhibition at the level of single compartments or even dendritic branches. Experiments have revealed that release probabilities within a dendritic branch are similar and change depending on recent dendritic activity [49]. However, an extension to networks of multi-compartment neurons is also beyond the scope of this project.

In conclusion, the spatial specificity of presynaptic inhibition is not yet resolved. Both local, specific mechanisms via axo-axonal synapses and broader mechanisms mediated by GABA spillover from other synapses have been demonstrated [20]. It is also conceivable that the degree of specificity of presynaptic inhibition varies between brain areas depending on the computational demands.

Outlook

In the present work, we focused on the role of presynaptic inhibition in stabilising recurrent excitation in the presence of plasticity. Besides the extensions mentioned in previous paragraphs, it would also be interesting to study the effects of presynaptic inhibition on sensory information processing, e.g., by providing a gating mechanism for sensory information at an early stage [20]. Presynaptic inhibition has been repeatedly observed in early sensory systems including the retina [61], lateral geniculate nucleus [62], somatosensory cortex [18] and the olfactory system of Drosophila [25], and theoretical work has implicated presynaptic inhibition in extending the dynamic range in sensory processing [63]. Further theoretical work will be required to pinpoint the full functional repertoire of presynaptic inhibition.

Methods and models

To investigate the effect of presynaptic inhibition on network stability, we simulate large rate-based and spiking recurrent networks and mathematically analyse mean population dynamics.

Recurrent rate network

The recurrent rate network consists of NE = 1024 excitatory and NI = 256 inhibitory neurons described by their firing rates riE and riI. To ensure positive and finite firing rates, riE and riI are rectified and saturated at 200 Hz. The neurons are randomly connected with connection probability c = 100 × 2^−10 ≈ 0.1 and fixed indegree. Neuron numbers were chosen as powers of two to increase simulation speed, and the connection probability guarantees an exact ratio of 4:1 (100 and 25) excitatory and inhibitory ingoing synapses at each neuron. The firing rate dynamics are given by Eqs (1) and (2), with synaptic weights wijXY (X, Y ∈ {E, I}) and time constants τE = τI = 20 ms. Presynaptic excitatory or inhibitory input populations are given by indices from neuron-specific subsets exc and inh, and excitatory neurons receive noisy, uncorrelated background input Ibg. Synaptic weights are scaled by connection probability and network size (see Table 1) and are fixed, with the exception of the excitatory recurrent weights wijEE. Excitatory weights are either a free parameter (Fig 1) or subject to plasticity. Weights are chosen such that the network is balanced [26] and excitatory and inhibitory firing rates are similar.
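Written out, the rate dynamics of Eqs (1) and (2) should take a form similar to the following (our reconstruction from the description above; the exact placement of the rectification and saturation is an assumption):

\tau_E \frac{dr_i^E}{dt} = -r_i^E + p \sum_{j \in \mathrm{exc}} w_{ij}^{EE}\, r_j^E \;-\; \sum_{j \in \mathrm{inh}} w_{ij}^{EI}\, r_j^I \;+\; I_i^{\mathrm{bg}},
\qquad
\tau_I \frac{dr_i^I}{dt} = -r_i^I + \sum_{j \in \mathrm{exc}} w_{ij}^{IE}\, r_j^E \;-\; \sum_{j \in \mathrm{inh}} w_{ij}^{II}\, r_j^I,

where p is the global release factor described below.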

Table 1. Rate network parameters (unless specified differently).

https://doi.org/10.1371/journal.pcbi.1008118.t001

Presynaptic inhibition mechanism.

The global release factor p in Eq (1) multiplicatively scales the total excitatory recurrent input to each neuron. In the absence of presynaptic inhibition we set p ≡ 1, which leaves a standard recurrent rate model of excitatory and inhibitory neurons. If the presynaptic inhibition mechanism is included, p is modulated by the total inhibitory input rate according to Eq (3), where g is a monotonically decreasing function and τp = 500 ms. The time constant of presynaptic inhibition was chosen to reflect the timescale of metabotropic GABAB receptors [18]. For simplicity, we use a linear function g(r) = 1 − βr with slope parameter β, and rectify p to enforce positivity. The mechanism is global, meaning that a single release factor p is modulated by the total weighted inhibitory activity. The slope β as well as the inhibitory weight wEI determine the strength of the mechanism. To account for the scaling of wEI, β is also normalised by connection probability c (see Table 2).
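A form of Eq (3) consistent with this description (our reconstruction) is

\tau_p \frac{dp}{dt} = g\Big(\sum_j w_{EI}\, r_j^I\Big) - p, \qquad g(r) = 1 - \beta r, \qquad p \ge 0.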

Table 2. Timescales, presynaptic inhibition and plasticity parameters in the rate network (unless specified differently).

https://doi.org/10.1371/journal.pcbi.1008118.t002

In S4 Fig we test sigmoid and exponential functions as alternatives. The sigmoid transfer function is parametrised by a slope parameter βs and a shift (or reversal point) rshift; the exponentially decaying transfer function is parametrised by a slope parameter βe.

Plasticity model.

We used the Bienenstock-Cooper-Munro (BCM) rule [8] as a model for synaptic plasticity. Plasticity only affects connections between excitatory neurons, changing the weights according to Eq (4).

The effective timescale of plasticity is determined by the time constant τw, the initial weight w0 and the learning rate η; the resulting effective time constant of plasticity is given by Eq (5).

As in the original formulation, this recurrent BCM rule is Hebbian and will attempt to potentiate afferent synapses of neurons above the target rate κ and depress those of neurons below it, which is captured by the nonlinear threshold in Eq (4). The neuron-specific threshold depends on the target rate κ as well as on a running estimate of neuron activity, given by a low-pass filter of that neuron’s firing rate, Eq (6).
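A standard BCM formulation consistent with this description (our reconstruction; the normalisation that enters the effective timescale in Eq (5) is an assumption) is

\tau_w \frac{dw_{ij}^{EE}}{dt} = \eta\, r_j^E\, r_i^E \big(r_i^E - \theta_i\big), \qquad \theta_i = \frac{\bar r_i^{\,2}}{\kappa}, \qquad \tau_c \frac{d\bar r_i}{dt} = r_i^E - \bar r_i,

where \bar r_i is the running estimate of the firing rate of neuron i.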

For constant input, the plasticity rule together with the threshold’s dependence on the rate estimate creates a homeostatic system that tries to bring the firing rate of every neuron to the target κ = 5 Hz. The network is initialised close to the fixed point determined by mathematical analysis: firing rates at the target rate κ, release factor p* = 1 − βκ and recurrent weights at the corresponding steady-state value w* (cf. Eq (18)). Excitatory recurrent weights are limited to the interval from zero to 10w*.

A critical parameter for stability is the time constant τc of rate estimation [12], which in this context acts as the timescale of homeostasis and is a free parameter of the system. Note that the homeostatic timescale needs to be seen in relation to the effective timescale of plasticity. To improve simulation speed in the rate network, we rescaled the timescale of plasticity by a factor of 50 (τplast = 60s instead of the 3000s used in previous work [12]). As all other timescales in the system are at least a factor of 100 faster, this does not influence the stability of the network. Thus, a homeostatic timescale of 60 seconds (τplast) in our network actually corresponds to homeostasis on the order of almost an hour of biological time.

Mean population rate model

To understand the dynamics of the recurrent rate network in more detail, we derive a mean population rate model from the full network. Assuming homogeneous neuron activity as well as a sufficiently large network, the excitatory and inhibitory populations can be described by their mean population firing rates rE and rI, respectively. Both populations receive recurrent and reciprocal input with a mean weight, such that their dynamics simplify to Eqs (7) and (8), with mean weights derived from the synaptic weight parameters wXY of the full rate network (Table 1). As in the full model, firing rates are rectified and saturated at 200 Hz, and presynaptic inhibition is modelled by a monotonic decrease of the release factor p (see Eq (3)) with the total inhibitory input as its argument. As this quantity does not depend on the connectivity or number of neurons, no further normalisation of β is needed.

In the mean population rate model, the BCM rule with homeostatically sliding threshold reduces to Eqs (9) and (10), as pre- and postsynaptic firing rates are both described by the mean excitatory population rate. Thus, the mean population rate model with presynaptic inhibition simplifies to a five-dimensional dynamical system.
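Under our reading, the mean population model of Eqs (7)–(10) is given by (a reconstruction; mean weights are written with the same symbols wXY as in the full network)

\tau_E \frac{dr_E}{dt} = -r_E + p\, w_{EE}\, r_E - w_{EI}\, r_I + I, \qquad \tau_I \frac{dr_I}{dt} = -r_I + w_{IE}\, r_E - w_{II}\, r_I,
\qquad
\tau_w \frac{dw_{EE}}{dt} = \eta\, r_E^{2}\Big(r_E - \frac{\bar r^{\,2}}{\kappa}\Big), \qquad \tau_c \frac{d\bar r}{dt} = r_E - \bar r.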

Analytical steady-state firing rates

To compare the steady-state behaviour of the mean population rate model in the presence and absence of presynaptic inhibition, we derive the fixed point of Eqs (7) and (8) for unsaturated rates and the linear presynaptic inhibition transfer function in Eq (3). In steady state, the inhibitory rate is given by Eq (11), and thus we can write the steady-state excitatory firing rate without presynaptic inhibition as Eq (12).

Note that this fixed point only exists as long as w < 1 + wI (in the notation introduced below).

In the following, we denote the total inhibition recruited by the excitatory population through the inhibitory population by wI, and the excitatory weight by w (Eq (13)).

First, we assume that the mean background input I is weak enough to ensure that rE ≤ 1/(wI β). In this regime the transfer function g is linear, such that Eq (7) can be rewritten as Eq (14).

Solving this quadratic equation for rE gives the steady-state firing rate with presynaptic inhibition for a linear transfer function, Eq (15).

Note that the solution with a negative contribution of the square root does not give positive firing rates and thus is not relevant for this system.

It can be shown that for sufficiently weak mean background input, presynaptic inhibition imposes an upper bound on the steady-state firing rate with respect to the recurrent excitatory weight w. Taking the limit of w to infinity gives Eq (16).

Therefore, the upper bound of the excitatory firing rate for moderate external inputs depends on the strength of presynaptic inhibition β as well as on the total effective inhibitory weight wI (see definition above).

For stronger external inputs, the excitatory rate rE is larger than 1/(βwI), leading to p = [1 − βwI rE]+ = 0. In this case the steady-state firing rate reduces to Eq (17) and hence is independent of the excitatory recurrent weight w.
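Collecting these results (our reconstruction, with w = wEE and wI = wEI wIE/(1 + wII) as defined in Eq (13)): without presynaptic inhibition the fixed point is

r_E^{*} = \frac{I}{1 + w_I - w} \qquad (w < 1 + w_I),

whereas with linear presynaptic inhibition, p = 1 - \beta w_I r_E, the fixed point solves \beta w_I w\, r_E^2 + (1 + w_I - w)\, r_E - I = 0, giving

r_E^{*} = \frac{w - 1 - w_I + \sqrt{(w - 1 - w_I)^2 + 4\,\beta\, w_I\, w\, I}}{2\,\beta\, w_I\, w},

which approaches 1/(\beta w_I) as w \to \infty; once the input is strong enough that p = 0, the rate is r_E^{*} = I/(1 + w_I).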

Derivation of critical homeostatic timescale

To derive the critical homeostatic timescales in the presence and absence of presynaptic inhibition, we revisit the five-dimensional dynamical system given by Eqs (3) and (7)–(10), and follow an approach similar to Zenke et al. [12]: first we reduce the system to two dimensions, then perform a linear stability analysis around the fixed point and finally determine how the stability of the fixed point depends on the timescale of homeostasis.

Reduction to two-dimensional system.

As the dynamics of rE, rI and p are much faster than those of w and the rate estimate (i.e., τE, τI, τp ≪ τw, τc), we can use a separation of timescales to reduce the full system to two dimensions. With respect to the slow plasticity and homeostasis dynamics, the fast variables are at their steady state. For the excitatory firing rate we can therefore write Eq (18), where we again use the definitions of w and wI from Eq (13) for readability. Taking the derivative with respect to w on both sides gives Eq (19), where g′ is the derivative of g with respect to rE. Solving for the derivative of rE with respect to w leads to Eq (20).

To eliminate w from the partial derivative above, we reorder Eq (18) for w, insert the result and, after simplifying the expression, finally obtain Eq (21).

Using the chain rule, we can express the slow dynamics of the BCM-type plasticity rule in Eq (9) through its effect on the steady state of rE. Thus, the firing rate rE becomes the slow dynamic variable together with its running estimate, and the final two-dimensional system is described by Eqs (22) and (23).
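Schematically, and with the BCM form assumed above, Eqs (22) and (23) then read (our reconstruction)

\frac{dr_E}{dt} = \frac{\partial r_E^{*}}{\partial w}\,\frac{dw}{dt} = \frac{\partial r_E^{*}}{\partial w}\;\frac{\eta}{\tau_w}\, r_E^{2}\Big(r_E - \frac{\bar r^{\,2}}{\kappa}\Big), \qquad \tau_c \frac{d\bar r}{dt} = r_E - \bar r,

with \partial r_E^{*}/\partial w given by Eq (21).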

Linear stability analysis.

The system described by Eqs (22) and (23) is in the stationary state if the rate estimate and the excitatory firing rate are both at the target rate κ. To obtain the Jacobian of the system, we calculate the partial derivatives of the right-hand side of Eqs (22) and (23) with respect to rE and the rate estimate and evaluate them at this fixed point, which yields Eq (24), where we introduced the auxiliary expression Ψ (Eq (25)).

The characteristic polynomial, Eq (26), determines the eigenvalues of the system linearised around the fixed point, which are given in Eq (27).

If the real part of both eigenvalues is negative, the fixed point is stable (see for example [64]). In the following, we use reasoning analogous to previous work [12] to prove that this condition is fulfilled as long as Ψκ^4 τc < τw. First of all, Ψ is positive for moderate external input (I < κ(1 + wI)), because g is a monotonically decreasing function and all other variables are positive. The stability condition follows immediately if the square root is imaginary. In case the square root is real, we can write the larger eigenvalue as in Eqs (28) and (29), where γ is the second term of the product in the square root and has to be real given the previous assumption. If Ψκ^4 τc > τw, the larger eigenvalue is positive and thus the fixed point is unstable.

If, on the other hand, Ψκ^4 τc < τw, it follows from Eqs (30)–(32) that both eigenvalues are negative and therefore the fixed point is stable.

Critical homeostatic timescale.

Using the stability condition derived above, we can identify the critical homeostatic timescale as Eq (33).

To obtain a more intuitive expression, we reorder Eq (18) at the fixed point (rE = κ) to give g(wIκ)wκ = κ + wIκ − I, and insert this into Eq (33) to obtain Eq (34).

In summary, we have shown that stability of the fixed point at the target rate κ is guaranteed as long as the homeostatic time constant τc is smaller than a critical value, which depends on the release factor at the target rate g(wIκ), the excitatory recurrence w, the change g′(wIκ) of the release factor with inhibitory activity (i.e. GABA spillover) and the dependence wI of this inhibitory activity on the excitatory firing rate. Note that the critical homeostatic timescale needs to be seen in relation to the effective time constant of plasticity (Eq (5)).

Without presynaptic inhibition, the two-dimensional system simplifies to Eqs (35) and (36), which were previously analysed by Zenke et al. [12]. We can recover their results by inserting g(r) ≡ 1 (implying g′(r) ≡ 0) into Eq (33), which leads to the critical timescale without presynaptic inhibition, Eq (37).
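With g ≡ 1 and g′ ≡ 0, our reconstruction of the critical timescale reduces to

\tau_c^{\mathrm{crit}} \;\approx\; \frac{\tau_w}{\eta}\,\frac{I}{\kappa^{4}},

again up to prefactors set by the normalisation of the plasticity rule.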

S5 Fig shows the eigenvalues for increasing homeostatic time constant as well as the phase plane dynamics with fast and slow homeostasis, both with and without presynaptic inhibition.

Spiking network model

The recurrent spiking network we studied consists of 5000 leaky integrate-and-fire neurons (4000 excitatory and 1000 inhibitory neurons), which are randomly connected with current-based synapses [65]. In addition to the recurrent connections, both excitatory and inhibitory neurons receive random background input from an external population modelled by 2000 Poisson units with a firing rate of 5 Hz. Recurrent and external connections are sparse (connection probability 10%).

The membrane potential Vi of neuron i evolves according to (38) with reversal potential VR, membrane time constant τm and synaptic input currents from the excitatory (IE), inhibitory (II) and external populations (Iext). If the voltage crosses the threshold VT, the neuron emits a spike and the voltage is reset to VR for an absolute refractory period τref (see Table 3 for an overview of neuron and network parameters). The spike train of a single neuron i is defined as a sum over its spikes k, Si(t) = Σk δ(t − ti,k), with spike times ti,k. Input currents are given by a temporally filtered, linear, weighted sum of the spikes arriving at neuron i from the respective population, where wij is the synaptic weight from neuron j to neuron i.
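
A minimal Brian2 skeleton of this network is sketched below. Since Table 3 is not reproduced here, all numerical parameter values (membrane, threshold, reset, refractory and synaptic constants, as well as the weights) are placeholders, and the exponential synaptic filter is an assumption about the form of the temporal filtering.

```python
from brian2 import *

# Skeleton of the recurrent LIF network described above. All numerical values are
# placeholders (Table 3 is not reproduced here), and the exponential synaptic
# filter is an assumption about the form of the temporal filtering.
N_E, N_I, N_ext = 4000, 1000, 2000
tau_m, tau_syn, tau_ref = 20*ms, 5*ms, 5*ms             # assumed
V_T, V_R = -50*mV, -60*mV                               # threshold and reset (assumed)
w_EE, w_IE, w_EI, w_II, w_ext = 0.1*mV, 0.1*mV, 0.4*mV, 0.4*mV, 0.1*mV  # assumed

eqs = '''
dv/dt     = (V_R - v + I_E + I_I + I_ext) / tau_m : volt (unless refractory)
dI_E/dt   = -I_E / tau_syn   : volt
dI_I/dt   = -I_I / tau_syn   : volt
dI_ext/dt = -I_ext / tau_syn : volt
'''
neurons = NeuronGroup(N_E + N_I, eqs, threshold='v > V_T', reset='v = V_R',
                      refractory=tau_ref, method='euler')
neurons.v = V_R
exc, inh = neurons[:N_E], neurons[N_E:]
ext = PoissonGroup(N_ext, rates=5*Hz)                   # external background population

# Sparse random connectivity (10%) for recurrent and external connections.
syn_EE = Synapses(exc, exc, on_pre='I_E_post += w_EE'); syn_EE.connect(p=0.1)
syn_IE = Synapses(exc, inh, on_pre='I_E_post += w_IE'); syn_IE.connect(p=0.1)
syn_EI = Synapses(inh, exc, on_pre='I_I_post -= w_EI'); syn_EI.connect(p=0.1)
syn_II = Synapses(inh, inh, on_pre='I_I_post -= w_II'); syn_II.connect(p=0.1)
syn_X  = Synapses(ext, neurons, on_pre='I_ext_post += w_ext'); syn_X.connect(p=0.1)

run(1*second)
```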

Presynaptic inhibition by GABA spillover.

In analogy to the rate network, the effect of presynaptic inhibition on synaptic transmission is modelled as a multiplicative factor (“release factor”) applied to the excitatory synaptic weights. However, in the spiking network we consider a more local version of the mechanism, with neuron-specific release factors pi that are modulated by GABA spillover. More specifically, excitatory synaptic inputs follow

We estimate local GABA levels (which can also be interpreted as a concentration) by accumulating a fixed amount of GABA, given by AGABA, for every inhibitory spike according to

The local GABA levels are neuron-specific and affected by all inhibitory neurons projecting to the respective neuron. Thus, an inhibitory spike leads to both an inhibitory current and an increase in local GABA level at downstream excitatory neurons. The inhibition of release through GABA spillover is then again modelled by a linear decrease in release factor with increasing GABA levels, that is (39)

We set the time constant of this process to τp = 300 ms to roughly match the timescale of presynaptic inhibition observed experimentally [18]. As before, the slope β determines the strength of presynaptic inhibition and p ≡ 1 recovers the traditionally used current-based network.

Although we consider CGABA to be related to the local GABA concentration, it is unitless and cannot be mapped to a specific biophysical quantity. Nevertheless, we normalise AGABA by the number of incoming inhibitory synapses, such that the local GABA level CGABA roughly corresponds to the inhibitory firing rate. This allows us to use a presynaptic inhibition strength parameter β on the same order of magnitude as in the rate and mean population models (without further scaling). Table 4 contains a parameter summary for presynaptic inhibition and the synaptic weights. We omitted the units of the weights as they have no biophysical meaning in this simple current-based spiking model. Note that, for consistency of units in Eq (38), the weights would formally have to be expressed in volts.
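
The following Brian2 sketch illustrates how such a neuron-specific release factor can be implemented: each inhibitory spike both delivers an inhibitory current and increments a per-neuron GABA trace that decays with τp, and excitatory transmission is scaled by a release factor that decreases linearly with this trace. Apart from τp = 300 ms, all parameter values are assumptions, and the clipping of the release factor to [0, 1] is added here only as a safeguard.

```python
from brian2 import *

# Minimal, self-contained sketch of presynaptic inhibition by GABA spillover.
# Only tau_p follows the text; all other values are assumptions.
tau_m, tau_syn, tau_ref = 20*ms, 5*ms, 5*ms
V_T, V_R = -50*mV, -60*mV
tau_p  = 300*ms        # decay of the local GABA level (as in the text)
beta   = 0.2           # presynaptic inhibition strength (assumed)
A_GABA = 1.0 / 100     # increment per inhibitory spike, normalised by an assumed
                       # number (~100) of incoming inhibitory synapses
w_EE, w_EI = 0.1*mV, 0.4*mV

eqs = '''
dv/dt      = (V_R - v + I_E + I_I) / tau_m : volt (unless refractory)
dI_E/dt    = -I_E / tau_syn : volt
dI_I/dt    = -I_I / tau_syn : volt
dC_GABA/dt = -C_GABA / tau_p : 1
'''
exc = NeuronGroup(4000, eqs, threshold='v > V_T', reset='v = V_R',
                  refractory=tau_ref, method='euler')
inh = NeuronGroup(1000, eqs, threshold='v > V_T', reset='v = V_R',
                  refractory=tau_ref, method='euler')
exc.v = V_R; inh.v = V_R

# An inhibitory spike delivers an inhibitory current AND raises the local GABA level.
syn_EI = Synapses(inh, exc, on_pre='''I_I_post -= w_EI
                                      C_GABA_post += A_GABA''')
syn_EI.connect(p=0.1)

# Excitatory transmission is scaled by the neuron-specific release factor
# p_i = 1 - beta * C_GABA_i (cf. Eq 39), clipped to [0, 1] at spike arrival.
syn_EE = Synapses(exc, exc,
                  on_pre='I_E_post += w_EE * clip(1 - beta*C_GABA_post, 0, 1)')
syn_EE.connect(p=0.1)

run(100*ms)  # skeleton only: without background drive the network stays silent
```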

The triplet plasticity rule.

We model plasticity of excitatory synapses in the spiking network using the minimal triplet STDP model tuned to visual cortex data [27], with metaplasticity implemented by homeostatically regulating the amount of long-term depression (LTD) depending on the postsynaptic firing rates [12, 27]. The amplitude of LTD is given by (40) where A+ is the amount of long-term potentiation (LTP), τ+, τslow and τ are timescales of the triplet STDP model (see [12]), and κ is the target rate at which this definition ensures a balance of LTD and LTP. Changes in the LTD amplitude of neuron i are driven by the moving average of its postsynaptic firing rate (41)

The overall timescale of this homeostatic component of the metaplastic triplet STDP rule is therefore given by τc, which, as in the rate network, is a crucial parameter for network stability [12].

Weight modifications at excitatory recurrent synapses are limited to the interval 0 < wij < wmax during ongoing plasticity, whereas structural changes are not allowed (synapses initialised at zero strength remain absent). A summary of parameters related to plasticity is shown in Table 5. The effective time constant of plasticity can be approximated in a mean field model from the plasticity parameters by considering the expected mean weight update. Given the default parameters of this network, the effective timescale of plasticity amounts to τw = 2975 s (see [12] for details).
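
The sketch below shows how the structure of this metaplastic triplet rule can be expressed in Brian2: event-driven pre- and postsynaptic traces implement the triplet interactions, a per-neuron variable rbar implements the moving-average rate of Eq (41), and weights are clipped to [0, wmax]. The quadratic dependence of the LTD amplitude on rbar/κ is only an assumption following the structure of [12]; the actual amplitude is given by Eq (40), and all numerical constants are placeholders.

```python
from brian2 import *

# Structural sketch of the metaplastic triplet STDP rule on recurrent E->E synapses.
# All numerical constants are placeholders; the LTD amplitude below (quadratic in
# rbar/kappa) is an assumed stand-in for Eq (40), following the structure of [12].
tau_plus, tau_minus, tau_slow = 17*ms, 34*ms, 100*ms   # triplet timescales (assumed)
tau_c  = 60*second     # homeostatic timescale of the rate estimate (Eq 41)
kappa  = 5*Hz          # target rate
A_plus = 6.5e-3        # LTP amplitude (assumed)
w_max  = 0.5           # weights are confined to [0, w_max]

# Toy excitatory population; rbar is the per-neuron moving-average firing rate.
exc = NeuronGroup(4000,
                  '''dv/dt = -v / (20*ms) : 1
                     drbar/dt = -rbar / tau_c : Hz''',
                  threshold='v > 1',
                  reset='v = 0; rbar += 1/tau_c',   # each spike bumps the rate estimate
                  method='euler')

syn = Synapses(exc, exc,
               model='''w : 1
                        dzpre/dt  = -zpre / tau_plus   : 1 (event-driven)
                        dzpost/dt = -zpost / tau_minus : 1 (event-driven)
                        dzslow/dt = -zslow / tau_slow  : 1 (event-driven)''',
               # Pre spike: transmission, trace update, rate-dependent LTD.
               on_pre='''v_post += w
                         zpre += 1
                         w = clip(w - A_plus * (rbar_post/kappa)**2 * zpost, 0, w_max)''',
               # Post spike: triplet LTP using the slow trace, then trace updates.
               on_post='''w = clip(w + A_plus * zpre * zslow, 0, w_max)
                          zpost += 1
                          zslow += 1''')
syn.connect(p=0.1)
```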

Numerical simulations

The differential equations of the rate network were integrated using the forward Euler method with a timestep of 1 ms. Simulations were run in Python, using Cython to improve speed.
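
For illustration, the loop below advances a generic rate equation with the forward Euler scheme at the 1 ms timestep used here; the drift function is a toy placeholder, not the rate equations of the model.

```python
import numpy as np

def drift(r, w=1.2, w_I=0.5, I_ext=2.0, tau=0.05):
    """Toy placeholder for the rate dynamics (not the model's actual equations)."""
    return (-r + max(w * r - w_I * r + I_ext, 0.0)) / tau

dt = 1e-3                      # 1 ms timestep, as used for the rate network
r = np.zeros(int(10.0 / dt))   # 10 s of simulated time
for t in range(1, len(r)):
    r[t] = r[t - 1] + dt * drift(r[t - 1])   # forward Euler update
```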

The spiking network was simulated using the Brian2 package [66] for Python. The neuron equations were integrated using a 4th-order Runge-Kutta method and the synaptic interactions using forward Euler, with an integration timestep of 0.1 ms. Source code for the network models is available on GitHub.

Supporting information

S1 Fig. Presynaptic inhibition also compensates for increases in recurrent excitation in alternative circuit motifs.

Top: Mean population models in which presynaptic inhibition not only affects excitatory recurrent synapses, but also A. inhibitory synapses onto excitatory cells, B. excitatory synapses onto inhibitory cells, C. all recurrent synapses (including recurrent inhibitory connections) or D. background input. Bottom: Firing rate of the excitatory population in the respective circuit model as a function of excitatory recurrence for increasing strengths of presynaptic inhibition (β).

https://doi.org/10.1371/journal.pcbi.1008118.s001

(TIF)

S2 Fig. The homeostatic timescale affects the distribution and temporal correlation of single neuron firing rates.

Fast homeostasis (τc = 30 s) produces a narrow distribution around the target rate of 5 Hz. Slower homeostasis leads to a broad long-tailed distribution of firing rates with an inherent turnover but no temporal oscillations. A. Mean (solid line) and standard deviation (shaded area) of firing rate over time. Plasticity is switched on after 6 minutes of simulation time (asterisk). B. Distribution of single neuron firing rates. C. Firing rate of single neurons over time. Target rate is indicated by yellow colour. D. Temporal correlation of single neuron firing rates.

https://doi.org/10.1371/journal.pcbi.1008118.s002

(TIF)

S3 Fig. Removing heterogeneity from the network improves the fit of simulated to theoretical critical homeostatic timescale.

With heterogeneity in the initial excitatory recurrent weights, the theory holds as long as presynaptic inhibition is not too strong. Removing this source of heterogeneity from the network increases the critical timescale for strong presynaptic inhibition, and thereby provides a better match to the analytically determined critical timescales. A. Critical homeostatic timescale as a function of the presynaptic inhibition strength β in networks with (left) and without (right) heterogeneity. For the simulations, vertical bars indicate the range between the largest stable and the smallest unstable timescale tested. B. Temporal evolution of the average firing rate for different homeostatic time constants τc with (left) and without (right) heterogeneity in the network for different strengths of presynaptic inhibition β.

https://doi.org/10.1371/journal.pcbi.1008118.s003

(TIF)

S4 Fig. The stabilising effect of presynaptic inhibition is robust to details of the transfer function.

We qualitatively recover the results obtained for linear transfer functions if the release probability decreases exponentially or as a sigmoid with increasing inhibitory rate. A. Exponentially decreasing (top) and sigmoid transfer functions (bottom two) linking inhibitory firing rate to release factor for different slope parameters (βe = 0.05, 0.1, 0.15; βs = 0.1, 0.2, 0.3; shift of sigmoid: 4 (middle) and 5 (bottom)). The target rate of 5 Hz is indicated by a grey dashed line. B. Increase in the analytical critical homeostatic timescale as a function of the transfer function slope at the target rate. Parametrisations in A are indicated by markers of the same colour. C. Release factor at the target rate as a function of the increase in homeostatic time constant. Markers correspond to the transfer functions shown in A. D. Temporal evolution of the average firing rate in networks with presynaptic inhibition for different homeostatic time constants τc. Plasticity is switched on after 6 minutes of simulation time (asterisk). The coloured marker in the top right corner specifies the parametrisation of the transfer function (see A).

https://doi.org/10.1371/journal.pcbi.1008118.s004

(TIF)

S5 Fig. Linear stability in the reduced (two-dimensional) mean population model critically depends on the time constant of homeostasis.

Besides increasing the critical homeostatic time constant, presynaptic inhibition preserves low firing rates beyond the point of stability. In systems without presynaptic inhibition, trajectories diverge from the fixed point (i.e. the target rate, 5 Hz) if homeostasis is close to the timescale of plasticity, whereas systems with presynaptic inhibition merely oscillate around it. A. Eigenvalues of the reduced system with and without presynaptic inhibition as a function of the homeostatic time constant. The time constant is given in units of the effective plasticity time constant τplast. The critical homeostatic timescale (where the eigenvalues become positive) is marked by a red dashed line. B. Phase plane dynamics with nullclines and example trajectories for short and long homeostatic timescales in systems with and without presynaptic inhibition.

https://doi.org/10.1371/journal.pcbi.1008118.s005

(TIF)

Acknowledgments

We thank Loreen Hertäg for helpful discussions and critical reading of the manuscript. We thank Owen Mackwood for considerable technical support with running simulations on a compute cluster.

References

  1. Hebb DO. The Organisation of Behaviour: A Neuropsychological Theory. Wiley; 1949.
  2. Abbott LF, Nelson SB. Synaptic plasticity: taming the beast. Nature neuroscience. 2000;3(11):1178–1183. pmid:11127835
  3. Turrigiano GG, Nelson SB. Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience. 2004;5(2):97. pmid:14735113
  4. Keck T, Toyoizumi T, Chen L, Doiron B, Feldman DE, Fox K, et al. Integrating Hebbian and homeostatic plasticity: the current state of the field and future research directions. Philosophical Transactions of the Royal Society B: Biological Sciences. 2017;372(1715):20160158.
  5. Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB. Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature. 1998;391(6670):892–896. pmid:9495341
  6. Desai NS, Rutherford LC, Turrigiano GG. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature neuroscience. 1999;2(6):515. pmid:10448215
  7. O’Leary T, van Rossum MC, Wyllie DJ. Homeostasis of intrinsic excitability in hippocampal neurones: dynamics and mechanism of the response to chronic depolarization. The Journal of physiology. 2010;588(1):157–170. pmid:19917565
  8. Bienenstock EL, Cooper LN, Munro PW. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience. 1982;2(1):32–48. pmid:7054394
  9. Li J, Park E, Zhong LR, Chen L. Homeostatic synaptic plasticity as a metaplasticity mechanism—a molecular and cellular perspective. Current opinion in neurobiology. 2019;54:44–53. pmid:30212714
  10. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011;334(6062):1569–1573. pmid:22075724
  11. Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Current opinion in neurobiology. 2017;43:198–203. pmid:28500933
  12. Zenke F, Hennequin G, Gerstner W. Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Computational Biology. 2013;9(11). pmid:24244138
  13. Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philosophical Transactions of the Royal Society B: Biological Sciences. 2017;372(1715):20160259.
  14. Miller RJ. Presynaptic receptors. Annual review of pharmacology and toxicology. 1998;38(1):201–227. pmid:9597154
  15. Laviv T, Riven I, Dolev I, Vertkin I, Balana B, Slesinger PA, et al. Basal GABA regulates GABABR conformation and release probability at single hippocampal synapses. Neuron. 2010;67(2):253–267. pmid:20670833
  16. Isaacson JS, Solis JM, Nicoll RA. Local and diffuse synaptic actions of GABA in the hippocampus. Neuron. 1993;10(2):165–175. pmid:7679913
  17. Wu LG, Saggau P. Presynaptic inhibition of elicited neurotransmitter release. Trends in Neurosciences. 1997;20(5):204–212. pmid:9141196
  18. Urban-Ciecko J, Fanselow E, Barth A. Neocortical somatostatin neurons reversibly silence excitatory transmission via GABAb receptors. Current Biology. 2015;25(6):722–731. pmid:25728691
  19. Dittman JS, Regehr WG. Mechanism and kinetics of heterosynaptic depression at a cerebellar synapse. Journal of Neuroscience. 1997;17(23):9048–9059. pmid:9364051
  20. Alford S, Schwartz E. Presynaptic Inhibition. In: Encyclopedia of Neuroscience. Academic Press; 2009. p. 1001–1006. https://doi.org/10.1016/B978-008045046-9.00814-7
  21. Molyneaux BJ, Hasselmo ME. GABAB presynaptic inhibition has an in vivo time constant sufficiently rapid to allow modulation at theta frequency. Journal of Neurophysiology. 2002;87(3):1196–1205. pmid:11877493
  22. Branco T, Staras K. The probability of neurotransmitter release: variability and feedback control at single synapses. Nature Reviews Neuroscience. 2009;10(5):373–383. pmid:19377502
  23. Delaney AJ, Crane JW, Holmes NM, Fam J, Westbrook RF. Baclofen acts in the central amygdala to reduce synaptic transmission and impair context fear conditioning. Scientific Reports. 2018;8(1):9908. pmid:29967489
  24. Fink AJP, Croce KR, Huang ZJ, Abbott LF, Jessell TM, Azim E. Presynaptic inhibition of spinal sensory feedback ensures smooth movement. Nature. 2014;509(7498):43–48. pmid:24784215
  25. Olsen SR, Wilson RI. Lateral presynaptic inhibition mediates gain control in an olfactory circuit. Nature. 2008;452(7190):956–960. pmid:18344978
  26. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience. 2000;8(3):183–208. pmid:10809012
  27. Pfister JP. Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity. Journal of Neuroscience. 2006;26(38):9673–9682. pmid:16988038
  28. Gjorgjieva J, Clopath C, Audet J, Pfister JP. A triplet spike-timing–dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proceedings of the National Academy of Sciences. 2011;108(48):19383–19388.
  29. Udeigwe LC, Munro PW, Ermentrout GB. Emergent dynamical properties of the BCM learning rule. The Journal of Mathematical Neuroscience. 2017;7(1):2. pmid:28220467
  30. Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001;32(6):1149–1164. pmid:11754844
  31. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral cortex (New York, NY: 1991). 1997;7(3):237–252.
  32. Mitchell SJ, Silver RA. Shunting inhibition modulates neuronal gain during synaptic excitation. Neuron. 2003;38(3):433–445. pmid:12741990
  33. Abbott LF, Varela J, Sen K, Nelson S. Synaptic depression and cortical gain control. Science. 1997;275(5297):221–224.
  34. Turrigiano GG. The self-tuning neuron: synaptic scaling of excitatory synapses. Cell. 2008;135(3):422–435. pmid:18984155
  35. Blomfield S. Arithmetical operations performed by nerve cells. Brain research. 1974;69(1):115–124. pmid:4817903
  36. Holt GR, Koch C. Shunting inhibition does not have a divisive effect on firing rates. Neural computation. 1997;9(5):1001–1013. pmid:9188191
  37. Chance FS, Abbott LF, Reyes AD. Gain modulation from background synaptic input. Neuron. 2002;35(4):773–782. pmid:12194875
  38. Abbott L, Chance FS. Drivers and modulators from push-pull and balanced synaptic input. Progress in brain research. 2005;149:147–155. pmid:16226582
  39. Prescott SA, De Koninck Y. Gain control of firing rate by shunting inhibition: roles of synaptic noise and dendritic saturation. Proceedings of the National Academy of Sciences. 2003;100(4):2076–2081.
  40. Markram H, Tsodyks M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature. 1996;382(6594):807. pmid:8752273
  41. Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the national academy of sciences. 1997;94(2):719–723.
  42. Rothman JS, Cathala L, Steuber V, Silver RA. Synaptic depression enables neuronal gain control. Nature. 2009;457(7232):1015. pmid:19145233
  43. Van Rossum MC, Bi GQ, Turrigiano GG. Stable Hebbian learning from spike timing-dependent plasticity. Journal of neuroscience. 2000;20(23):8812–8821. pmid:11102489
  44. Turrigiano G. Homeostatic Synaptic Plasticity: Local and Global Mechanisms for Stabilizing Neuronal Function. Cold Spring Harbor Perspectives in Biology. 2012;4(1):a005736–a005736. pmid:22086977
  45. Bowery N, Hudson A, Price G. GABAA and GABAB receptor site distribution in the rat central nervous system. Neuroscience. 1987;20(2):365–383. pmid:3035421
  46. Serrats J, Cunningham MO, Davies CH. GABAB receptor modulation—to B or not to be B a pro-cognitive medicine? Current opinion in pharmacology. 2017;35:125–132. pmid:29050820
  47. Thompson SM, Capogna M, Scanziani M. Presynaptic inhibition in the hippocampus. Trends in Neurosciences. 1993;16(6):222–227. pmid:7688163
  48. Slomowitz E, Styr B, Vertkin I, Milshtein-Parush H, Nelken I, Slutsky M, et al. Interplay between population firing stability and single neuron dynamics in hippocampal networks. eLife. 2015;4:e04378.
  49. Branco T, Staras K, Darcy KJ, Goda Y. Local Dendritic Activity Sets Release Probability at Hippocampal Synapses. Neuron. 2008;59(3):475–485. pmid:18701072
  50. Ohta H, Gunji YP. Recurrent neural network architecture with pre-synaptic inhibition for incremental learning. Neural Networks. 2006;19(8):1106–1119. pmid:16989983
  51. Vigot R, Barbieri S, Bräuner-Osborne H, Turecek R, Shigemoto R, Zhang YP, et al. Differential compartmentalization and distinct functions of GABAB receptor variants. Neuron. 2006;50(4):589–601. pmid:16701209
  52. Shaban H, Humeau Y, Herry C, Cassasus G, Shigemoto R, Ciocchi S, et al. Generalization of amygdala LTP and conditioned fear in the absence of presynaptic inhibition. Nature Neuroscience. 2006;9(8):1028–1035. pmid:16819521
  53. Orts-Del’Immagine A, Pugh JR. Activity-dependent plasticity of presynaptic GABAB receptors at parallel fiber synapses. Synapse. 2018;72(5):e22027. pmid:29360168
  54. Engelman HS, MacDermott AB. Presynaptic ionotropic receptors and control of transmitter release. Nature Reviews Neuroscience. 2004;5(2):135. pmid:14735116
  55. Wang L, Kloc M, Maher E, Erisir A, Maffei A. Presynaptic GABAa receptors modulate thalamocortical inputs in layer 4 of rat V1. Cerebral Cortex. 2018;29(3):921–936.
  56. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural computation. 1998;10(4):821–835. pmid:9573407
  57. Markram H, Wang Y, Tsodyks M. Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences. 1998;95(9):5323–5328.
  58. MacDermott AB, Role LW, Siegelbaum SA. Presynaptic ionotropic receptors and the control of transmitter release. Annual review of neuroscience. 1999;22(1):443–485. pmid:10202545
  59. Nicoll R, Alger B. Presynaptic inhibition: transmitter and ionic mechanisms. In: International review of neurobiology. vol. 21. Elsevier; 1979. p. 217–258.
  60. Overstreet-Wadiche L, McBain CJ. Neurogliaform cells in cortical circuits. Nature Reviews Neuroscience. 2015;16(8):458. pmid:26189693
  61. Tachibana M, Kaneko A. Retinal bipolar cells receive negative feedback input from GABAergic amacrine cells. Visual neuroscience. 1988;1(3):297–305. pmid:2856476
  62. Chen C, Regehr WG. Presynaptic modulation of the retinogeniculate synapse. The Journal of Neuroscience. 2003;23(8):3130–3135. pmid:12716920
  63. Zhang D, Wu S, Rasch MJ. Circuit motifs for contrast-adaptive differentiation in early sensory systems: the role of presynaptic inhibition and short-term plasticity. PloS one. 2015;10(2):e0118125. pmid:25723493
  64. Izhikevich EM. Dynamical systems in neuroscience. MIT press; 2007.
  65. Vogels TP, Abbott LF. Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of neuroscience. 2005;25(46):10786–10795. pmid:16291952
  66. Stimberg M, Brette R, Goodman DF. Brian 2, an intuitive and efficient neural simulator. eLife. 2019;8. pmid:31429824