
Structural influences on synaptic plasticity: The role of presynaptic connectivity in the emergence of E/I co-tuning

  • Emmanouil Giannakakis,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Computer Science, University of Tübingen, Tübingen, Germany, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

  • Oleg Vinogradov,

    Roles Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Computer Science, University of Tübingen, Tübingen, Germany, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

  • Victor Buendía,

    Roles Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Computer Science, University of Tübingen, Tübingen, Germany, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

  • Anna Levina

    Roles Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    anna.levina@uni-tuebingen.de

    Affiliations Department of Computer Science, University of Tübingen, Tübingen, Germany, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Abstract

Cortical neurons are versatile and efficient coding units that develop strong preferences for specific stimulus characteristics. The sharpness of tuning and coding efficiency is hypothesized to be controlled by delicately balanced excitation and inhibition. These observations suggest a need for detailed co-tuning of excitatory and inhibitory populations. Theoretical studies have demonstrated that a combination of plasticity rules can lead to the emergence of excitation/inhibition (E/I) co-tuning in neurons driven by independent, low-noise signals. However, cortical signals are typically noisy and originate from highly recurrent networks, generating correlations in the inputs. This raises questions about the ability of plasticity mechanisms to self-organize co-tuned connectivity in neurons receiving noisy, correlated inputs. Here, we study the emergence of input selectivity and weight co-tuning in a neuron receiving input from a recurrent network via plastic feedforward connections. We demonstrate that while strong noise levels destroy the emergence of co-tuning in the readout neuron, introducing specific structures in the non-plastic pre-synaptic connectivity can re-establish it by generating a favourable correlation structure in the population activity. We further show that structured recurrent connectivity can impact the statistics in fully plastic recurrent networks, driving the formation of co-tuning in neurons that do not receive direct input from other areas. Our findings indicate that the network dynamics created by simple, biologically plausible structural connectivity patterns can enhance the ability of synaptic plasticity to learn input-output relationships in higher brain areas.

Author summary

Many studies of learning in biological neural networks have focused on how plausible plasticity rules shape individual connections between neurons in a recurrent network or in feed-forward projections. However, in the latter case, the presynaptic network properties, such as clustered connectivity, strongly influence population dynamics and, thus, the learning process of the projections. Here, we aim to close this gap by showing how non-plastic network structure can strongly influence the outcomes of synaptic learning. We show that unstructured recurrent connectivity and the presence of noise can significantly reduce the ability of synaptic plasticity to separate presynaptic input populations and coordinate excitatory and inhibitory connection development, while the introduction of overlapping clustered structures in fixed or plastic recurrent connectivity can boost synaptic learning. Using Bayesian inference, we identify the optimal connectivity structures for a recurrent network and demonstrate the strong effects that relatively simple connectivity patterns can have on the ability of a network to learn via local plasticity.

Introduction

Stimulus selectivity, the ability of neurons to respond differently to distinct stimuli, is one of the primary mechanisms for encoding information in the nervous system. This selectivity can range from simple orientation selectivity in lower sensory areas [1–3] to more complex spatiotemporal pattern selectivity in higher areas [4–6]. Such selectivity has been shown to self-organize under the influence of structured input, enabling, for example, the emergence of visual orientation preference in non-visual sensory areas upon rewiring [7] or changes in the whisker representation in the barrel cortex of rats depending on the level of sensory input [8]. The mechanisms underlying the emergence of input selectivity have been the subject of extensive experimental and computational modelling investigation and remain under active discussion [9–13].

Although stimulus selectivity was initially attributed to excitatory neurons and their network structure, we now know that inhibitory neurons are also tuned to stimuli and that the coordination of E/I currents is a central component of efficient neural computation [14, 15]. In particular, it has been shown that excitatory and inhibitory inputs are often correlated [16], with preferred stimuli eliciting stronger excitatory and, with a small delay, stronger inhibitory responses compared to non-preferred stimuli [17, 18]. This co-tuning of excitation and inhibition is theorized to be beneficial for a variety of computations, such as gain control [19, 20], visual surround suppression [21, 22], novelty detection [23] and optimal spike-timing [15, 24].

Although it is still unclear how E/I co-tuning emerges, the dominant view is that it arises via the interaction of several synaptic plasticity mechanisms [25], a hypothesis that has been reinforced by the findings of multiple theoretical studies over the last decade. First, it has been demonstrated that different inhibitory plasticity rules can match static excitatory connectivity [26–29]. More recently, it was also shown that various combinations of plasticity and diverse normalisation mechanisms allow for the simultaneous development of matching excitatory and inhibitory connectivity in feedforward settings [12, 30–32]. Moreover, a variety of plasticity mechanisms have been associated with the formation of stable assemblies [33–37], the creation of E/I balance [38] and the emergence of tuning selectivity [39] in recurrent networks.

However, most theoretical studies of synaptic plasticity have so far focused on identifying the optimal parameters of individual learning rules and normalization mechanisms for specific tasks, ignoring the ways in which these mechanisms act within complex network structures that may influence their function. Specifically, biological networks are characterized by highly non-trivial connectivity structures that display varying degrees of clustering and neuron type-specific connectivity patterns [40]. Such network structures give rise to distinct neural dynamics; for example, clustering in recurrent network structure introduces correlations in the activity of similarly tuned neurons [41] and complex interactions between subpopulations of neurons [42]. These dynamics fundamentally alter the statistics of the population activity that most synaptic plasticity mechanisms rely on to modify synaptic strength.

Here, we investigate how the development of matching E/I input selectivity in a downstream neuron via synaptic plasticity is shaped by the structure of recurrent connectivity in the input network. We combine excitatory and inhibitory plasticity rules [26, 31, 43] in the feedforward connections of a spiking network to develop detailed co-tuning of excitatory and inhibitory connectivity, and we demonstrate that the ability of these plasticity mechanisms to create co-tuning is significantly reduced in the presence of noise and (non-plastic) random recurrent connectivity between the input neurons. We further show that the detrimental effects of recurrence and noise on the population activity, which drives the formation of matching E/I feedforward weights on a downstream neuron, can be fully counteracted by introducing synapse-type specific assemblies of neurons, characterized by local excitation and relatively homogeneous inhibition, an often-observed pattern of cortical connectivity [44, 45]. Our findings demonstrate that network structure can, by shaping population dynamics, significantly modulate the capacity of synaptic plasticity to generate input selectivity in downstream neurons. This highlights a synergistic interaction between structural connectivity and learning mechanisms that can enhance the computational capabilities of brain networks.

Results

We begin by reproducing previously reported results, validating the emergence of co-tuning in a plastic feedforward network, and introducing measures to capture weight diversity and co-tuning for subsequent analysis. Next, we show that the introduction of strong noise or a random static recurrent connectivity in the presynaptic networks impairs the development of co-tuning by destroying the correlation structure in the activity of the presynaptic population. We then illustrate how specific structures in the static recurrent connectivity can restore the ability of plastic synapses to generate co-tuning. Using analytical results from a reduced linear neural mass model and Bayesian inference for the full network, we identify the optimal connectivity structures in static networks, demonstrating that optimal connectivity is influenced by network sparsity. Finally, we simulate fully plastic networks, confirming that our key observations hold true.

Co-tuning and its self-organization by synaptic plasticity in a low-noise feedforward setting

We simulate a single postsynaptic read-out unit driven by a population of N = 1000 neurons. The pre-synaptic population is divided into M groups Gi, i ∈ {1, …, M}. Each group comprises n = N/M neurons, of which 80% are excitatory and 20% are inhibitory. These neurons are driven by an identical, group-specific Poisson spike train—a shared external input. Additionally, each neuron receives low-intensity independent external noise [26] that prevents unrealistic total synchrony between the input neurons. This setup, depicted in (Fig 1a), leads to correlated firing among neurons of the same input group (Fig 1b) and is a common setting for studying the effect of different plasticity rules [12, 26, 31].

Fig 1. Emergence of co-tuning in a feedforward network.

a. A diagram of the feedforward network with plastic connections from the different input groups to the readout neuron. b. The correlation matrix of the network’s activity. In the absence of noise, neurons of the same input group are highly correlated. c. The development of average E and I weights in an ideal feedforward network with very low noise leads to co-tuned and diverse feedforward connectivity. d. An illustration of feedforward connectivity that exhibits both co-tuning and diversity. Different input groups are clearly distinct, and the E and I weights for each group are correlated. The colours of the distributions indicate different groups, and the shading (light to dark) is matched for the corresponding E and I populations; the points and error bars indicate the mean and std of the E and I connectivity of each group. e. A co-tuned but not diverse connectivity. E/I weight correlation is maintained, but there is hardly any distinction between groups. f. A diverse connectivity without E/I weight co-tuning. While each group is distinct from the others, there is no coordination of the E and I connections from the same group. g. In the absence of weight co-tuning and diversity, the feedforward connectivity lacks any discernible pattern.

https://doi.org/10.1371/journal.pcbi.1012510.g001

Input selectivity develops when the post-synaptic neuron responds differently to inputs from different groups (by adjusting its firing rate). This happens when the average excitatory feedforward projections are sufficiently diverse between pre-synaptic groups (Fig 1d and 1f), so that groups with stronger feedforward connections elicit stronger post-synaptic responses upon activation. Moreover, connections from neurons with highly correlated firing (i.e., from the same group) should have a similar strength. To quantify this feature of the network, we define a diversity metric,

$$D = 1 - \frac{\left\langle \mathrm{Std}\!\left(W^E_i\right)\right\rangle_i}{\mathrm{Std}\!\left(W^E\right)}, \tag{1}$$

where $W^E$ is the set of excitatory feedforward connection weights, $W^E_i$ is the subset of excitatory feedforward connection weights from input group i, $\langle\cdot\rangle_i$ denotes the average over input groups, and $\mathrm{Std}(\cdot)$ denotes the standard deviation. Diversity D ∈ [0, 1] equals unity when the feedforward connections from the same group are the same but differ across groups; D is close to zero when the feedforward connections from each group follow the same distribution and different groups cannot be meaningfully distinguished (Fig 1e and 1g).
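As a concrete point of reference, a minimal NumPy sketch of the diversity metric follows; it assumes the reconstruction of Eq 1 above and an illustrative array layout (a flat weight vector plus a group label per connection), and is not the authors' code.

```python
import numpy as np

def diversity(weights, group_ids):
    """Diversity D (Eq 1): one minus the mean within-group standard
    deviation of excitatory feedforward weights, normalized by the
    standard deviation over all weights."""
    weights, group_ids = np.asarray(weights, float), np.asarray(group_ids)
    within = [weights[group_ids == g].std() for g in np.unique(group_ids)]
    return 1.0 - np.mean(within) / weights.std()

# Example: 8 groups of 100 excitatory weights each.
rng = np.random.default_rng(0)
ids = np.repeat(np.arange(8), 100)
w_separated = np.repeat(rng.uniform(0.5, 2.0, 8), 100)  # identical within a group
w_unstructured = rng.uniform(0.5, 2.0, 800)             # same distribution everywhere
print(diversity(w_separated, ids))      # ~1: groups clearly distinguishable
print(diversity(w_unstructured, ids))   # ~0: groups indistinguishable
```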

Brain networks are characterized by a balance of excitatory and inhibitory currents [18, 46, 47]. For a neuron to be balanced, the average inhibitory current must equal the average excitatory current during a (relatively short—usually a few ms) time window. Depending on the temporal resolution of this cancellation, the balance can be more “loose” or “tight”, with detailed (“tight”) balance associated with efficient coding [14, 15] and the ability to encode multiple stimuli [19] (further discussion in S1 Text Section A). In our specific setting, due to the highly correlated firing in each group, detailed balance can be achieved by matching the relative strengths of the excitatory and inhibitory weights from each group.

To quantify the E/I weight co-tuning, which generates the detailed balance in our simplified network, we use the Pearson correlation coefficient between the mean excitatory and inhibitory weights of each group,

$$CT_W = \frac{\mathrm{Cov}\!\left(\bar{W}^E, \bar{W}^I\right)}{\mathrm{Std}\!\left(\bar{W}^E\right)\mathrm{Std}\!\left(\bar{W}^I\right)}, \tag{2}$$

where $\bar{W}^A = \left(\bar{W}^A_1, \ldots, \bar{W}^A_M\right)$, A ∈ {I, E}, $\bar{W}^A_i$ is the average projection weight from the excitatory (A = E) or inhibitory (A = I) neurons in group i, and Cov(x1, x2) denotes the covariance of the variables x1 and x2. In networks with strong weight co-tuning CTW ≈ 1, the strengths of the incoming E and I currents are highly correlated (Fig 1d).
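A corresponding sketch for the co-tuning metric of Eq 2; the layout of the weight and group-label arrays is again our illustrative assumption.

```python
import numpy as np

def weight_cotuning(w_exc, exc_ids, w_inh, inh_ids):
    """Weight co-tuning CT_W (Eq 2): Pearson correlation between the
    group-averaged excitatory and inhibitory feedforward weights."""
    groups = np.unique(exc_ids)
    mean_e = np.array([w_exc[exc_ids == g].mean() for g in groups])
    mean_i = np.array([w_inh[inh_ids == g].mean() for g in groups])
    return np.corrcoef(mean_e, mean_i)[0, 1]

# Example: inhibition tracks excitation group by group -> CT_W close to 1.
rng = np.random.default_rng(0)
e_ids, i_ids = np.repeat(np.arange(8), 100), np.repeat(np.arange(8), 25)
strength = rng.uniform(0.5, 2.0, 8)             # one tuning value per group
w_e = np.repeat(strength, 100) + 0.05 * rng.standard_normal(800)
w_i = np.repeat(strength, 25) + 0.05 * rng.standard_normal(200)
print(weight_cotuning(w_e, e_ids, w_i, i_ids))
```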

We verify that high diversity (D ≈ 1) and weight co-tuning (CTW ≈ 1) can organically emerge via a combination of plasticity mechanisms in the feedforward connections, which are all initialized at the same (small) value. Specifically, the excitatory connections follow the triplet Hebbian STDP rule [48], and the inhibitory connections follow a symmetric rule [26]. We additionally use competitive normalization in both the inhibitory and excitatory connections, which amplifies small transient differences between the firing rates of different input groups and leads to the development of input selectivity [31]. Since neurons in different groups are independent, while neurons in the same group are strongly correlated (Fig 1b), the plasticity protocol generates very strongly correlated E/I weights and strong input selectivity, as shown in (Fig 1c). (For more details on the network’s firing activity, see S1 Text Section C.)

Postsynaptic weights with high diversity can be well separated, while co-tuned weights produce a match between the incoming E and I connections. (Fig 1d–1g) illustrates the four possible situations. Observe that it is possible to have weights that are, e.g., diverse (so they can be distinguished) but not co-tuned (so E and I are not correlated) and vice versa: E and I weights are correlated, but the groups cannot be separated.

Noise and recurrent connectivity compromise the ability of STDP to produce E/I co-tuning

Strictly feedforward networks with relatively low noise levels are unrealistic approximations of complex cortical circuits, which are characterized by noisy inputs and complex recurrent connectivity, and their dynamics might therefore deviate significantly from those observed in experiments. We thus introduce noise and non-plastic recurrent connectivity in our pre-synaptic network, both of which are ubiquitous in biological networks [49, 50]. First, we investigate how each individually affects the emerging E/I co-tuning by changing the structure of the correlations between the neurons of the input network. Then, we examine ways in which different connectivity structures can ameliorate these effects.

We vary the level of noise by changing the fraction of input spikes that are specific to each neuron (noise) vs. those shared within a group (signal) (Fig 2a). This allows us to control the signal-to-noise ratio while keeping the average number of incoming spikes constant. As the noise intensity increases, the cross-correlations within each input group decrease, while the cross-correlations between neurons of different input groups remain very low (Fig 2b). This in-group decorrelation increases the variability of the learned projections from neurons of the same input group onto the postsynaptic neuron and thus decreases the resulting diversity (Fig 2c). At the same time, the decorrelation has a much weaker effect on the ability of the plasticity to match E and I feedforward weights from the same group. This is reflected in the slower reduction of the E/I weight co-tuning, which visibly declines only once the noise becomes overwhelmingly stronger than the input (more than 80% of incoming spikes are not shared between neurons of the same group, Fig 2c). Raster plots illustrating the dynamics for different noise levels are shown in (Fig 2d).

Fig 2. Noise and Recurrence Destroy E/I Co-Tuning.

a. An illustration of increasing levels of noise in a single input group. In low-noise settings, all neurons of the same group fire at the same time, while as the noise level increases, each neuron fires more and more individual spikes. Joint input spikes are shown in black, and individual noise spikes in yellow. b. The increase in noise leads to a decrease in the in-group correlation (orange), while the between-group correlation (blue) remains low. c. An increase in the input noise leads to a reduction in diversity (teal) and, for larger noise intensities, also in the co-tuning of E/I weights (purple). d. Raster plots of the input populations’ activity (red, inhibitory neurons; blue, excitatory ones; gray, corresponding firing rate). As the noise increases, the spiking in each group becomes more asynchronous. The traces below (gray line) show the spike count over all neurons in 2 ms bins. e. An illustration of recurrent connectivity (for three input groups). The coupling parameter W controls the mean connection strength. As W increases (indicated by thicker connections in the diagram), more of the input a neuron receives comes from other neurons in the recurrently connected network rather than from the feedforward input. f. An increase in the recurrent coupling strength W leads to an increase in the between-group correlation (blue), while the in-group correlation (orange) remains high. g. The decrease in weight co-tuning and diversity with an increase in the coupling strength. h. The spiking activity becomes more synchronous across groups as the coupling strength increases. The traces below (gray line) show the spike count over all neurons in 2 ms bins. Simulation parameters not indicated in the text can be found in Tables A, B and C in S1 Text.

https://doi.org/10.1371/journal.pcbi.1012510.g002

Recurrent connectivity in the pre-synaptic network introduces cross-correlations between neurons from different input groups, which compromises both diversity and E/I weight co-tuning. To test the extent of this effect, we connect the N presynaptic neurons (creating a non-plastic recurrent network) with connection probability p and use the coupling strength W (the mean synaptic strength) as a control parameter. Initially, we only consider fully connected networks (p = 1). By changing W, we can control the ratio between the input a neuron receives from the feedforward connections (whose rate and connection strength are fixed) and from the other neurons in the network via recurrent connectivity (Fig 2e). The recurrent connectivity increases cross-correlations between groups while maintaining the high correlation within each input group (Fig 2f). The effect of these cross-correlations is stronger than the effect of the noise, since they affect both the diversity and the weight co-tuning, which both decline as the recurrent connections become stronger (Fig 2g). As with noise, raster plots for different recurrent connectivity strengths are shown in (Fig 2h).

The combination of noise and recurrent connectivity affects both in-group and between-group correlations (Fig 3c and 3d), resulting in a reduction of weight co-tuning and diversity as the noise and recurrent connection strength increase (Fig 3e and 3f). The effects of combined noise and unstructured recurrence are not simply the sum of their independent effects; they can compound in ways that further impair the emergence of input selectivity. For example, for strong recurrence (Fig 3d), increasing noise levels can lead to an increase in between-group correlations (a counter-intuitive effect, given the decorrelating effect of noise). This is because the absence of synchronous input to neurons of the same group (due to increased noise, i.e., reduced in-group correlation) makes synchronisation across groups via the recurrent input easier, thereby increasing the between-group correlation. More details on the effects of noise and recurrence on network firing are discussed in S1 Text Sections C and D.

Fig 3. Optimized assemblies of neurons restore the co-tuning in recurrent noisy networks.

a. Diagram of the network with uniform connectivity. b. The network activity is characterized by synchronous events across groups. The traces below (gray line) show the spike count over all neurons in 2 ms bins. The in-group (c.) and between-group (d.) correlations for different levels of noise and recurrent connection strength in the uniformly connected network lead to a reduction in the weight co-tuning (e.) and weight diversity (f.) metrics. g. Diagram of the network with the optimal assembly structure. h. The network activity becomes more decorrelated across groups. The traces below (gray line) show the spike count over all neurons in 2 ms bins. i. Approximate posterior distributions of the optimal excitatory and inhibitory assembly strengths. The in-group (j.) and between-group (k.) correlations for different levels of noise and recurrent connection strength can be almost fully restored by the assembly structure, leading to the restoration of strong weight co-tuning (l.) and weight diversity (m.).

https://doi.org/10.1371/journal.pcbi.1012510.g003

We develop a formal description of the effect of noise and recurrence on the correlation structure in a simplified linear neural mass model. To this end, we consider M = 8 mesoscopic units in place of the previously studied M interconnected groups of spiking neurons, represented by continuous rate variables xj(t), j = 1, …, M. These units evolve in time, subject to stochastic white noise. The linear approximation is justified for any system at a stationary state with a constant average firing rate, and it serves as a simplified model for a wide range of parameters of the spiking network (for details on the linear model and its relation to the spiking network, see S1 Text Sections F and H).

In this simplified case, it is possible to derive analytical equations for all the relevant in- and between-group covariances, which yield the correlation coefficients. These correlations are the solution to a linear system of equations, which can be obtained exactly using numerical methods. Furthermore, one can find closed-form solutions in some simple scenarios. For example, in the case of a completely homogeneous network, where all coupling weights are the same, the correlation coefficients can be written explicitly (see S1 Text Sections F and H). If the coupling strength increases, W → +∞, all correlations grow to 1. On the contrary, if we reduce the noise, r → 0+, the correlations decrease towards zero. Both cases eliminate any possible differentiation between the groups, thus compromising the ability of the plasticity mechanisms to create high diversity D ≈ 1. Another observation is that in the linear network, increasing noise affects the correlation coefficient quadratically, while coupling increases it linearly. Therefore, since r < 1, increasing the coupling has a larger impact on the co-tuning, a consequence that is recovered in the spiking network, consistent with the results shown in (Fig 2b) and (Fig 2e).

Neuronal type-specific assemblies restore the ability of STDP to produce co-tuning

The homogeneous all-to-all connectivity (Fig 3a and 3b) that we have examined so far is not a realistic assumption and could be particularly detrimental to the self-organization of co-tuning in higher areas. Thus, we examine the impact of different types of inhomogeneous connectivity. In particular, building on the idea of functional assemblies (strongly connected neurons that are tuned to the same stimulus [25]), we study whether stronger recurrent connectivity between neurons of the same input group can introduce the correlation structure in the population activity that allows the plasticity to produce weight diversity and co-tuning.

We maintain the total recurrent input to a neuron constant (the fraction of input coming from the signal/noise vs. the other neurons in the recurrent network, excluding the input from the feedforward connections) while using the ratio of input coming from neurons of the same vs. other input groups as a control parameter. Since we want to vary this ratio independently for each connection type, we define a metric of assembly strength as

$$r_{ab} = \frac{I^{ab}_{\mathrm{in}} - I^{ab}_{\mathrm{out}}/(M-1)}{I^{ab}_{\mathrm{in}} + I^{ab}_{\mathrm{out}}}, \qquad I^{ab}_{\mathrm{in}} = n_a W^{ab}_{\mathrm{in}}\, \bar{f}, \quad I^{ab}_{\mathrm{out}} = (M-1)\, n_a W^{ab}_{\mathrm{out}}\, \bar{f}, \tag{3}$$

where $I^{ab}_{\mathrm{in}}$ is the total recurrent input a neuron of type b receives from neurons of type a, for a, b ∈ {E, I}, of its own input group, $I^{ab}_{\mathrm{out}}$ is the total recurrent input the neuron receives from other input groups, $W^{ab}_{\mathrm{in}}$ is the connection strength between neurons of the same group, $W^{ab}_{\mathrm{out}}$ is the connection strength between neurons of different groups, $n_a$ is the number of type-a neurons per group, and $\bar{f}$ is the average firing rate of network neurons (we assume uniform firing across the network, so $\bar{f}$ and $n_a$ cancel out of the equation). With this definition, $r_{ab} = 0$ corresponds to uniform connectivity and $r_{ab} = 1$ to recurrent input arriving exclusively from the neuron's own group.

We vary the assembly strengths for each connection type, rEE, rEI, rIE, and rII, while keeping the total recurrent input to a neuron constant; here W is the average coupling strength, and p is the recurrent connection probability. As in the network without assemblies, we consider for now fully connected networks (p = 1). Thus, we can vary the fraction of input coming to a neuron from its own input group without changing the total recurrent E or I input it receives.
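To make this parametrization concrete, the sketch below converts an assembly strength r into in-group and between-group mean couplings under the reconstruction of Eq 3 above, while keeping the total recurrent input fixed; the closed-form expressions are ours and should be read as illustrative.

```python
import numpy as np

def assembly_weights(r, W, M):
    """Map an assembly strength r in [0, 1] (Eq 3, as reconstructed
    above) onto in-group and between-group mean couplings, keeping the
    total recurrent input per neuron at its uniform-network value:
    r = 0 gives W_in = W_out = W, r = 1 puts all input in-group."""
    W_in = W * (1.0 + r * (M - 1))
    W_out = W * (1.0 - r)
    # invariant: W_in + (M - 1) * W_out == M * W for any r
    assert np.isclose(W_in + (M - 1) * W_out, M * W)
    return W_in, W_out

print(assembly_weights(r=0.8, W=1.7, M=8))  # strong assemblies: in-group dominates
print(assembly_weights(r=0.0, W=1.7, M=8))  # uniform coupling
```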

Since the structure of the feedforward connectivity (diversity and weight co-tuning) that the plasticity converges to is controlled by the correlation structure of the inputs, we can use the correlations as a proxy that is easier to optimize than the connectivity metrics. Specifically, we want to maximize the in-group and minimize the between-group correlations, and we seek the assembly structure that achieves this objective. In the reduced linear neural mass model, we compute the optimal assembly strengths analytically (see S1 Text Section H) and find that strong local excitation and dispersed inhibition restore the desired correlation structure in the network’s activity (see S1 Text Section I). We find that for all combinations of noise and sufficiently strong recurrent connectivity, strong excitatory assemblies (high rEE and rEI) and uniform inhibitory connectivity (low rIE and rII) allow the correlating excitatory currents to remain mostly within the input group/assembly and maintain high in-group correlation, while the diffused inhibitory currents reduce correlations between groups (for a more detailed discussion of this mechanism, see S1 Text Section J). Still, since the reduced model does not account for many essential features of the spiking network, like the sparsity of connections, in-group interactions between neurons of the same type, and non-stationary dynamical states of the groups, the analytic solution obtained for the linear neural mass model can serve to develop intuition, but the results need to be validated for the recurrent spiking network.

We now study the effect of various assembly strengths on weight co-tuning and diversity. Instead of assessing it directly, we turn again to the impact of assembly strength on the correlation structure of the spiking network’s activity. Thus, we search for combinations of rEE, rEI, rIE, rII that lead to the correlation structure (high in-group and low between-group correlations) associated with strong E/I weight co-tuning (CTW ≈ 1) and maximum weight diversity (D ≈ 1). To this end, we use sequential Approximate Bayesian Computation (ABC) [51] to minimize a loss function defined to be zero when the in-group correlations are equal to one and all between-group correlations vanish (for details, see Methods).

This method allows us to find the approximate posterior distribution of network parameters (the four assembly strengths) that minimize the loss. Afterward, we verify whether connectivity parameters sampled from the approximate posterior lead to the emergence of diversity and co-tuning in the post-synaptic neuron.

Networks with optimized assemblies largely regain the ability to develop E/I co-tuning despite the noise and the non-plastic recurrent connectivity. Assembly strengths drawn from the approximate posterior result in a correlation structure very similar to the one observed in a feedforward/low-noise network (Fig 3j and 3k), which allows the plasticity to produce a near-optimal structure in the feedforward connections (Fig 3l and 3m). We find that the optimal assembly structure involves very strong EE and IE assemblies and medium-strength EI and II ones (Fig 3g and 3i). For details on the impact of assemblies on the network firing and the learned connectivity, see S1 Text Sections C and D.

This connectivity pattern is similar to the optimal pattern of the reduced linear model, albeit with the difference that the reduced model predicted optimal performance for uniform inhibitory weights (i.e., no inhibitory assemblies, rIE = rII = 0). This difference can be attributed to the more complex dynamics of the spiking network that require some degree of local inhibition to prevent extreme synchronization (see S1 Text Section G), which can negatively impact the STDP’s ability to produce co-tuning.

This partial specificity of inhibitory recurrent connectivity can be linked to the role of inhibitory tuning in stabilizing network dynamics at the cost of reduced network feature selectivity [52]. In theory, the optimal connectivity pattern in the network would promote competition between different groups, for which completely uniform inhibitory connectivity would be ideal [42]. However, the instability in the population activity induced by such connectivity is detrimental to the emergence of E/I weight co-tuning and input selectivity. Therefore, an intermediate level of specificity in inhibitory recurrent connectivity achieves a balance by maximizing between-group competition (and thus the desired correlation structure) while maintaining stable network dynamics (see S1 Text Section G).

The sparsity of a network’s recurrent connectivity shifts the optimal assembly structure

Biological neural networks are usually very sparsely connected [53–55], and the sparsity of connections is associated with distinct dynamics [56]. We observed that the impact of noise and recurrence on the deterioration of weight co-tuning and diversity in sparse networks without assemblies is qualitatively similar to that in fully connected networks. Thus, we examined the ability of neuronal assemblies to produce activity that restores weight co-tuning and diversity in sparsely connected recurrent networks that receive noisy input.

The optimal assembly strength values depend on the sparsity level. We use ABC to discover the approximate posterior distribution of assembly strengths for five different levels of sparsity, corresponding to connection probabilities p = 1.0, 0.75, 0.5, 0.25, and 0.1 (Fig 4a). We preserve the total input per neuron across different sparsity levels by scaling the coupling strength inversely proportionally to p.

Fig 4. Assemblies improve co-tuning and allow for co-tuning in sparse networks.

a. Weight co-tuning (purple) and diversity (teal) in the networks with assemblies compared to non-structured networks (dashed lines); error bars show the standard deviation. The noise level is 0.1 for all sparsities, and the coupling is 1.7 (scaled by 1/p). b. The loss for sparser networks is higher, which results in overall worse performance; the inset shows the loss for 50 accepted samples at the last ABC step. c. Posterior distributions of all assembly strengths change with sparsity. Sparser networks require weaker inhibitory assemblies (more uniform connections) to produce co-tuning.

https://doi.org/10.1371/journal.pcbi.1012510.g004

As sparsity increases, the ability of assemblies to improve the tuning diminishes. After 21 ABC steps, the overall loss is larger for sparse networks than for fully connected ones and increases with sparseness (Fig 4b). Therefore, despite an improvement in the tuning metrics for most sparse networks (compare the dashed and solid lines in Fig 4a), diversity in particular is strongly affected by sparseness and cannot be recovered by assemblies to the same extent as in fully connected networks (Fig 4a). This reduced effectiveness is expected, given the smaller number of connections and the greater variance in the network’s connectivity.

The optimal strength of most assemblies decreases as the connection probability decreases (Fig 4c). Specifically, we find that all but the EE assemblies should be weaker in sparser networks, with the greatest decrease observed in the II assemblies, which completely disappear for very sparse networks. This could be due to a reduced need (compared to fully connected networks) for within-group recurrent inhibition to prevent completely synchronized behaviours.

Structured connectivity promotes the emergence of co-tuning in fully plastic recurrent networks

In the previous sections, we analyzed a fixed recurrent presynaptic network that projected onto a single postsynaptic neuron via plastic synapses. While this setup provides valuable insights into how structural features can shape population activity for input selectivity to emerge via STDP, it does not fully capture the dynamics of biological neural networks. In this section, we extend our model to a fully plastic and fully recurrent network, offering a more realistic approximation of the behaviour observed in actual biological systems.

In addition to making the recurrent connections between input neurons plastic, we treat the readout neuron as part of the network, and thus we also introduce sparse feedback connections from the readout neuron to the input neurons (Fig 5a). Since the recurrent connections are now plastic, we cannot implement the assemblies by changing the in-group connection strength. Thus, we use a sparse network (connection probability p = 0.25) and control the in-group connection probability. Following the same logic as for the in-group and between-group connection weights (described in Eq 3), we produce in-group and between-group connection probabilities such that the total connection probability (and consequently the total recurrent input) remains constant while the ratio of recurrent input a neuron receives from its own vs. other input groups varies, as sketched below.
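A minimal sketch of this construction, under the same assumptions as the weight parametrization above; the clipping of the in-group probability at 1 is our addition.

```python
import numpy as np

def assembly_probabilities(r, p, M):
    """Same parametrization as for the weights, applied to connection
    probabilities; the clipping at 1 is our assumption for strong r."""
    return min(p * (1.0 + r * (M - 1)), 1.0), p * (1.0 - r)

def sample_adjacency(group_ids, p_in, p_out, rng):
    """Bernoulli adjacency matrix with group-dependent probabilities."""
    same = group_ids[:, None] == group_ids[None, :]
    A = rng.random((len(group_ids), len(group_ids))) < np.where(same, p_in, p_out)
    np.fill_diagonal(A, False)          # no self-connections
    return A

rng = np.random.default_rng(1)
ids = np.repeat(np.arange(8), 125)      # 1000 neurons in 8 groups
p_in, p_out = assembly_probabilities(r=0.3, p=0.25, M=8)
A = sample_adjacency(ids, p_in, p_out, rng)
print(A.mean())                         # overall density stays near p
```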

Fig 5. Structured recurrent connectivity can drive synaptic learning even in fully plastic networks.

a. A diagram of the fully recurrent plastic network. A comparison of the (b.) weight co-tuning (CTw) and (c.) diversity (D) metrics for plastic networks with fully random, fully clustered and optimally clustered connectivity structures. Spike trains of the converged networks with (d.) fully random, (e.) fully clustered, and (f.) optimal recurrent connectivity structures. In the last network, the activity within each input group is much more correlated, allowing for easier discrimination between groups.

https://doi.org/10.1371/journal.pcbi.1012510.g005

We simulate the network with three connectivity structures: a fully random one, where the sparse connections are implemented without any regard for the neurons’ input groups; a fully clustered one, where each input group is almost fully connected and connections between groups are extremely sparse; and finally, a putative optimally clustered network, which is structured according to the connectivity derived in the previous section for the fixed-weight sparse recurrent network via Bayesian optimization at sparsity p = 0.25 (Fig 4c). We measure the co-tuning and weight diversity of the readout neuron, which is now embedded in the network, projecting randomly with plastic connections to the neurons in the presynaptic network with probability p.

The fully random network displays very low co-tuning and diversity; the fully clustered network develops stronger co-tuning and diversity, which are then surpassed by the optimally clustered network (Fig 5b and 5c). Thus, the putatively optimally clustered network outperforms the other two: after the convergence of the plasticity, the population activity becomes much less noisy, and the activity of different groups can be more easily distinguished, which presumably also enables the development of input selectivity in the post-synaptic neuron (Fig 5d, 5e and 5f). The correlation structure of the converged plastic networks verifies this observation, with the putative optimally clustered network having very high in-group and near-zero between-group correlation. This structure is introduced at the beginning of the simulation by the structural connectivity and is maintained throughout the learning process, driving the development of input selectivity in the postsynaptic neuron.

Directly optimizing the fully plastic network is not computationally feasible because, for simulation-based inference, we would need to simulate the network’s activity until all the plastic connections converge in every simulation. However, the connectivity from the static sparse presynaptic network appears to provide a good prior for beneficial connectivity in fully plastic networks.

Our findings suggest that structured connectivity can drive the statistics of activity even in fully plastic networks and enable the development of specific connectivity in neurons that do not receive direct input from other areas.

Discussion

Synaptic plasticity is theorized to be responsible for the formation of input selectivity across brain hierarchies, including in brain areas that only receive input from highly recurrent networks. Here, we demonstrated how the structure of non-plastic presynaptic recurrent connectivity could hinder or boost the ability of synaptic plasticity mechanisms [12, 26, 27, 31] to generate input selectivity in neurons of higher areas. We find that strong excitatory connectivity among neurons tuned to the same input, combined with broader inhibition, creates population activity with a beneficial statistical structure that enables the formation of co-tuned projections by plasticity, potentially fostering input selectivity.

How different plasticity mechanisms shape neural connectivity, such as the formation of E/I co-tuning [12, 26, 27, 31] in feedforward networks or neural assemblies in recurrent networks [26, 34–36], has been a topic of extensive theoretical research. Nevertheless, the opposite effect, namely the ways in which fixed connectivity can shape the outcome of synaptic plasticity, has only been studied in very specialized cases [57]. This omission partially obscures the two-way interaction between connectivity and synaptic plasticity in biological neural networks. While synaptic plasticity constantly modifies some aspects of neural connectivity, it acts within the many constraints of network structure that are either constant throughout an organism’s lifetime or change via structural adaptation mechanisms that act on timescales slower than synaptic plasticity [58, 59]. Effectively, this means that synaptic plasticity relies on population activity originating from networks with highly intricate connectivity structures very different from those of random networks [33, 60]. Given that most synaptic plasticity mechanisms fundamentally depend on the statistics imposed by network activity, it is reasonable to assume that the network structure strongly impacts the behaviour of synaptic plasticity.

Cortical connectivity is known to be highly clustered [61], and the clustering has functional as well as spatial determinants. For example, neurons that share common inputs [62] or targets [63] are more likely to form recurrent connections between themselves [64, 65]. Moreover, excitatory cells with similar receptive fields are known to form strong reciprocal connections [66], which determine neural responses. Additionally, cortical networks have been shown to present specific correlation structures early in development [67], suggesting that recurrent cortical connectivity is at least partially structured before sensory inputs are present.

Clustered networks display distinct dynamics, including competition between clusters [42, 68] and slower timescales [41, 69], both of which can be useful for computations. Additionally, there is strong evidence that groups of highly interconnected neurons (neuronal assemblies) share common functions within recurrent networks [37, 70, 71]. Moreover, evidence has accumulated [72, 73] that different neuron types (excitatory and inhibitory subtypes) follow distinct spatial connectivity patterns, which have implications for neural computation [74]. Thus, our findings complement the ongoing research on computational implications of recurrent neural connectivity in biological networks by suggesting a link between specific, fixed connectivity patterns and local learning in feedforward projections to downstream populations via synaptic plasticity.

While the connectivity pattern we identify in our study is biologically plausible, the extent to which it is realized in vivo remains unclear. Further experimental studies of the connectivity patterns of different neuron types are necessary to model different network connectivities and study their dynamics effectively. On the theoretical side, more research is needed to uncover whether the synapse-type specific assembly structures we identify can emerge in plastic networks without any prior structure. Specifically, while there have been several studies investigating the emergence of stable assemblies via STDP [33–35, 75], the protocols by which assemblies of different pre-selected strengths for each connection type could arise remain unclear. Potential mechanisms include structural plasticity and variation in the learning rates of different synaptic types. Additionally, the presence of different regulatory interneurons, which have already been studied in the context of assembly formation [34], could play a role in modulating the relative assembly strengths of different connections.

For our study, we parameterized the network connectivity by adopting a quantitative metric for the strength of different types of neuronal assemblies. This resulted in a low-dimensional parameter space and allowed us to use rejection sampling-based ABC [51] to infer the optimal assembly strengths. One limitation of this technique is that it suffers from the curse of dimensionality and typically requires a large simulation budget [76, 77]. Therefore, extensions of the current work using higher dimensional connectivity parameters or simultaneous inference of the connectivity and neuron parameters will require more efficient simulation-based methods such as neural posterior estimation [78]. Alternatively, direct optimization of each recurrent weight via gradient-based methods [79, 80] may uncover more intricate connectivity patterns that are not limited to the specific network parametrization we chose.

To summarize, we identified how particular presynaptic connectivity structures could be a favourable or detrimental substrate for plasticity to develop co-tuning of excitation and inhibition on neuronal projections. Our study is the first step in illuminating the two-way dependence between the non-plastic structural features of a network’s connectivity and synaptic plasticity, which can motivate further research on this intricate interaction.

Materials and methods

Neuron model

We modelled all neurons of our networks as leaky integrate-and-fire (LIF) neurons with leaky synapses [81]. The evolution of their membrane potential is given by the ODE

$$\tau_m \frac{dV}{dt} = \left(V_{\mathrm{rest}} - V\right) + \frac{g_E}{g_{\mathrm{leak}}}\left(V_E - V\right) + \frac{g_I}{g_{\mathrm{leak}}}\left(V_I - V\right), \tag{4}$$

where V_rest is the neuron’s resting potential, V_E, V_I are the excitatory and inhibitory reversal potentials, τ_m is the membrane time constant, and g_leak is the leak conductance. Additionally, the excitatory and inhibitory conductances g_E, g_I decay exponentially over time and get boosts upon excitatory or inhibitory pre-synaptic spiking, respectively, as

$$\tau_{E/I} \frac{dg_{E/I}}{dt} = -g_{E/I} + \tau_{E/I} \sum_j w_j \sum_f \delta\!\left(t - t^f_j\right). \tag{5}$$

Here $t^f_j$ denotes the time at which the f-th spike of the j-th neuron happened, $w_j$ is the weight of the corresponding synapse, and δ(t) is the Dirac delta function. When the membrane potential reaches the spiking threshold V_th, a spike is emitted, and the potential is changed to a reset potential V_reset. Finally, the neurons have an absolute refractory period (during which no spikes are emitted even when the spiking threshold is reached) between spikes of τ_ref = 5 ms.

We would like to remark that both g_E, g_I ≥ 0, since the effect of inhibition is encoded in Eq (4) through the inhibitory reversal potential. However, for illustration purposes, inhibitory weights and currents are shown as negative; for instance, this happens in (Fig 1b). An alternative neuron model is discussed in S1 Text Section B.
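For orientation, a forward-Euler sketch of Eqs (4)–(5) for a single neuron follows; all numerical values are placeholders rather than the paper's parameters (those are listed in S1 Text Section K), and the Poisson drive stands in for the actual network input.

```python
import numpy as np

# Forward-Euler integration of Eqs (4)-(5) for one neuron. All values
# are illustrative placeholders; conductances are in units of g_leak.
dt, T = 1e-4, 1.0                                   # timestep and duration (s)
tau_m, V_rest, V_E, V_I = 20e-3, -60e-3, 0.0, -80e-3
V_th, V_reset, tau_ref = -50e-3, -60e-3, 5e-3
tau_E, tau_I, w_E, w_I = 5e-3, 10e-3, 0.04, 0.04

rng = np.random.default_rng(0)
V, gE, gI, last_spike, spikes = V_rest, 0.0, 0.0, -np.inf, []
for step in range(int(T / dt)):
    t = step * dt
    # conductances decay exponentially and jump at presynaptic spikes (Eq 5)
    gE += -dt * gE / tau_E + w_E * rng.poisson(1200 * dt)
    gI += -dt * gI / tau_I + w_I * rng.poisson(200 * dt)
    if t - last_spike > tau_ref:                    # integrate unless refractory (Eq 4)
        V += dt / tau_m * ((V_rest - V) + gE * (V_E - V) + gI * (V_I - V))
        if V >= V_th:                               # threshold crossing: emit and reset
            spikes.append(t)
            V, last_spike = V_reset, t
print(len(spikes), "spikes in", T, "s")
```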

Network input

The external input to each of the 1000 pre-synaptic neurons is the mixture of two Poisson spike trains. The first Poisson spike train is shared with all the other neurons of the same group, while the second Poisson spike train is the individual noise of the neuron,

$$C_{\mathrm{input}} = C_{\mathrm{signal}} + C_{\mathrm{noise}}, \tag{6}$$

where C_signal ∼ Poisson((1 − c) ⋅ f0) and C_noise ∼ Poisson(c ⋅ f0). Here, f0 is the total firing rate of the input, and c is the strength of the noise. C_signal is the same for all neurons of the same input group, while C_noise is individual to each neuron.
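A minimal sketch of this input construction (Eq 6), using binned Bernoulli approximations of the Poisson trains; the bin size and rates are illustrative.

```python
import numpy as np

def group_input_spikes(n_neurons, f0, c, T, dt, rng):
    """Spike trains for one input group (Eq 6): all neurons share a
    Poisson signal of rate (1 - c) * f0 and add private Poisson noise
    of rate c * f0, so the total input rate stays ~f0 for any c."""
    n_bins = int(T / dt)
    shared = rng.random(n_bins) < (1 - c) * f0 * dt          # one signal train
    private = rng.random((n_neurons, n_bins)) < c * f0 * dt  # per-neuron noise
    return shared[None, :] | private

rng = np.random.default_rng(0)
spikes = group_input_spikes(n_neurons=125, f0=30.0, c=0.2, T=5.0, dt=1e-3, rng=rng)
print(spikes.mean() / 1e-3, "Hz per neuron")                 # close to f0
```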

Recurrent connectivity

The recurrent connectivity is implemented differently for the networks with fixed and with plastic recurrent connections.

Non-plastic recurrent connectivity.

The non-plastic recurrent connectivity between the input neurons is defined by a coupling strength parameter W, which sets the average synaptic strength, and a connection probability p, which sets the sparsity. The connectivity is implemented as follows:

At first, an adjacency matrix A is defined, which implements an Erdős–Rényi connectivity with connection probability p (i.e., a connection between any two neurons is created with probability p, independently of any other connection). Then, using the coupling strength parameter W and the given assembly strength for each connection type rEE, rEI, rIE, and rII, we extract the parameters $W^{ab}_{\mathrm{in}}$ and $W^{ab}_{\mathrm{out}}$ for each connection type (a, b ∈ {E, I}) according to Eq 3. The inhibitory weights $W^{Ib}_{\mathrm{in}}$ and $W^{Ib}_{\mathrm{out}}$ are scaled by a parameter gs, which is set to counterbalance the slower inhibitory synapse dynamics and the smaller number of I neurons. This scaling leads to an approximately balanced network across implementations. Once the connectivity strengths are calculated, for each pre- and post-synaptic neuron pair i and j, we set the connection between them as

$$w_{ij} = A_{ij} \cdot \mathcal{N}\!\left(W_{ij},\, \left(c\, W_{ij}\right)^2\right), \tag{7}$$

where W_ij is the appropriate connectivity strength ($W^{ab}_{\mathrm{in}}$ or $W^{ab}_{\mathrm{out}}$, for a, b ∈ {E, I}), depending on the types of neurons i and j and whether they belong to the same assembly or not, and $\mathcal{N}(\mu, \sigma^2)$ denotes a sample from a normal distribution. The parameter c, which scales the standard deviation, was normally set to 0.1, but we also examined narrower and broader distributions with similar results.

We also considered an alternative, log-normal distribution of weights, which increased variability but largely led to the same results.
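A compact sketch of the construction in Eq 7; for brevity it uses a single pair of in-group/out-group means instead of the four type-specific values, which is our simplification of the procedure described above.

```python
import numpy as np

def recurrent_weights(group_ids, is_inh, W_in, W_out, p, gs, c, rng):
    """Fixed recurrent weights: Erdos-Renyi adjacency with probability
    p, Gaussian weights around the group-dependent mean (Eq 7). For
    brevity, one W_in/W_out pair replaces the four type-specific
    values; inhibitory rows (presynaptic I neurons) are scaled by gs."""
    N = len(group_ids)
    A = rng.random((N, N)) < p
    np.fill_diagonal(A, False)
    mean = np.where(group_ids[:, None] == group_ids[None, :], W_in, W_out)
    mean = np.where(is_inh[:, None], gs * mean, mean)
    w = rng.normal(mean, c * mean)              # std is c times the mean
    return np.clip(w, 0.0, None) * A            # weights stay non-negative

rng = np.random.default_rng(2)
ids = np.repeat(np.arange(8), 125)
inh = np.tile(np.r_[np.zeros(100, bool), np.ones(25, bool)], 8)
W = recurrent_weights(ids, inh, W_in=2.5, W_out=0.9, p=0.25, gs=4.0, c=0.1, rng=rng)
```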

Plastic recurrent connectivity.

In the plastic recurrent network case, instead of varying the connection strengths for in-group and between-group connections, we use the total connection probability p and the given assembly strength for each connection type rEE, rEI, rIE, and rII to extract the parameters $p^{ab}_{\mathrm{in}}$ and $p^{ab}_{\mathrm{out}}$ for each connection type (a, b ∈ {E, I}) according to Eq 3. These parameters give the probability that a connection of a specific type is implemented in the recurrent network, which creates an inhomogeneous adjacency matrix A that implements the different levels of clustering for each connection type.

Once the adjacency matrix has been defined, we set the initial connection strength for each pre- and post-synaptic neuron pair i and j as

$$w_{ij} = A_{ij}\, W, \tag{8}$$

where W is the coupling strength parameter, which we scale for inhibitory connections similarly to the non-plastic network. The resulting connectivity constitutes the initial conditions for the plastic recurrent network, and the non-zero connections are updated according to the same plasticity protocol that is used to learn the feedforward connectivity in the networks with fixed recurrent connectivity.

Plasticity

Triplet excitatory STDP.

The excitatory connections are modified according to a simplified form of the triplet STDP rule [43], which has been shown to generalize the Bienenstock–Cooper–Munro (BCM) rule [9] for higher-order correlations [48]. In our implementation of the triplet rule, the firing rates of the pre-synaptic excitatory neurons and the post-synaptic neuron are approximated by traces with two different timescales (we use the same timescales for the fast and slow traces of the pre- and postsynaptic neurons):

$$\tau_{\mathrm{fast}} \frac{dx^{\mathrm{fast}}_1}{dt} = -x^{\mathrm{fast}}_1 + \sum_f \delta\!\left(t - t^f_k\right), \tag{9a}$$
$$\tau_{\mathrm{slow}} \frac{dx^{\mathrm{slow}}_1}{dt} = -x^{\mathrm{slow}}_1 + \sum_f \delta\!\left(t - t^f_k\right), \tag{9b}$$
$$\tau_{\mathrm{fast}} \frac{dx^{\mathrm{fast}}_2}{dt} = -x^{\mathrm{fast}}_2 + \sum_f \delta\!\left(t - t^f_{\mathrm{post}}\right), \tag{9c}$$
$$\tau_{\mathrm{slow}} \frac{dx^{\mathrm{slow}}_2}{dt} = -x^{\mathrm{slow}}_2 + \sum_f \delta\!\left(t - t^f_{\mathrm{post}}\right), \tag{9d}$$

where τ_fast, τ_slow are the two timescales of the plasticity rule, x1(t), x2(t) represent the traces of the k-th excitatory pre-synaptic and the single post-synaptic neuron, respectively, and $t^f_k$ and $t^f_{\mathrm{post}}$ are their respective firing times. The function δ(x) represents a Dirac delta. The connection weights are updated upon pre- and post-synaptic spiking according to

$$\frac{dw_k}{dt} = \eta_E \left[ A_{\mathrm{LTP}}\, x^{\mathrm{fast}}_1\, x^{\mathrm{slow}}_2\, R\!\left(t - t_{\mathrm{post}}\right) - A_{\mathrm{LTD}}\, x^{\mathrm{fast}}_2\, R\!\left(t - t_k\right) \right], \tag{10}$$

where ηE is the excitatory learning rate and ALTP, ALTD are the amplitudes of long-term potentiation and depression, respectively. Despite scaling down the LTD amplitude to account for higher presynaptic firing rates, the rule remains slightly LTD-dominated in our experiments, a setting that has been observed in experimental studies [82, 83]. The function R(x) is defined as

$$R(x) = \begin{cases} 1 & \text{if } x \in [-\delta t,\, 0], \\ 0 & \text{otherwise}. \end{cases} \tag{11}$$

In numerical simulations, R(t − t_spike) equals one if a spike has been produced in the last few timesteps, so in practice R(t) = 1 for t ∈ [−δt, 0] for a small δt. Hence, R(t) is a rectangular function. Different parameters and learning rules for the excitatory plasticity are discussed in S1 Text Sections E.i and E.ii.
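An event-driven sketch of the triplet rule, under the reconstruction of Eqs 9–10 above; the spike trains and all parameter values are placeholders.

```python
import numpy as np

# Event-driven sketch of the simplified triplet rule. Traces decay
# exponentially and jump by 1 at spikes; LTD is triggered by
# presynaptic spikes via the fast postsynaptic trace, LTP by
# postsynaptic spikes via the fast presynaptic trace times the slow
# postsynaptic trace.
rng = np.random.default_rng(0)
dt, tau_fast, tau_slow = 1e-4, 20e-3, 100e-3
eta_E, A_LTP, A_LTD = 1e-3, 1.0, 1.1            # slightly LTD-dominated

x1_fast = x1_slow = x2_fast = x2_slow = 0.0     # 1 = pre, 2 = post
w = 0.1
for _ in range(200000):                         # 20 s of simulated time
    pre = rng.random() < 10 * dt                # 10 Hz presynaptic stand-in
    post = rng.random() < 5 * dt                # 5 Hz postsynaptic stand-in
    if pre:
        w = max(w - eta_E * A_LTD * x2_fast, 0.0)    # LTD term of Eq 10
    if post:
        w += eta_E * A_LTP * x1_fast * x2_slow       # LTP term of Eq 10
    # exponential decay of all traces (Eqs 9a-9d)
    x1_fast -= dt * x1_fast / tau_fast; x1_slow -= dt * x1_slow / tau_slow
    x2_fast -= dt * x2_fast / tau_fast; x2_slow -= dt * x2_slow / tau_slow
    if pre:  x1_fast += 1; x1_slow += 1
    if post: x2_fast += 1; x2_slow += 1
print(f"final weight: {w:.4f}")
```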

Inhibitory STDP.

We used the learning rule first proposed in [26] for the inhibitory connections. Approximations of the firing rates are kept via a trace for each of the pre-synaptic inhibitory neurons as well as the post-synaptic neuron,

$$\tau_{\mathrm{istdp}} \frac{dx_k}{dt} = -x_k + \sum_f \delta\!\left(t - t^f_k\right), \tag{12a}$$
$$\tau_{\mathrm{istdp}} \frac{dx_{\mathrm{post}}}{dt} = -x_{\mathrm{post}} + \sum_f \delta\!\left(t - t^f_{\mathrm{post}}\right), \tag{12b}$$

where τ_istdp is the single timescale of the plasticity rule, x_k and x_post are the traces of the k-th inhibitory pre-synaptic and the single post-synaptic neuron, and $t^f_k$ and $t^f_{\mathrm{post}}$ are their respective spike times. The connection weights are updated upon pre- and post-synaptic spiking as

$$\frac{dw_k}{dt} = \eta_I \left[ \left(x_{\mathrm{post}} - 2\rho_0 \tau_{\mathrm{istdp}}\right) R\!\left(t - t_k\right) + x_k\, R\!\left(t - t_{\mathrm{post}}\right) \right]. \tag{13}$$

Here, ηI is the inhibitory learning rate, and ρ0 is the target firing rate of the post-synaptic neuron. The rectangular function R(t) is defined in Eq (11).
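An analogous sketch for the inhibitory rule of Eqs 12–13; again, spike trains and parameters are placeholders.

```python
import numpy as np

# Event-driven sketch of the symmetric inhibitory rule: each spike
# potentiates by the partner's trace, and presynaptic spikes carry an
# extra depression term set by the target rate rho_0 (Eq 13).
rng = np.random.default_rng(0)
dt, tau_istdp, eta_I, rho_0 = 1e-4, 20e-3, 1e-3, 5.0
alpha = 2 * rho_0 * tau_istdp                   # depression bias in Eq 13

x_pre = x_post = 0.0
w = 0.5
for _ in range(200000):                         # 20 s of simulated time
    pre = rng.random() < 20 * dt                # 20 Hz inhibitory presynaptic stand-in
    post = rng.random() < 5 * dt                # 5 Hz postsynaptic stand-in
    if pre:
        # weakens inhibition when the postsynaptic rate is below rho_0
        w = max(w + eta_I * (x_post - alpha), 0.0)
    if post:
        w += eta_I * x_pre
    x_pre -= dt * x_pre / tau_istdp             # trace decay (Eqs 12a-12b)
    x_post -= dt * x_post / tau_istdp
    if pre:  x_pre += 1
    if post: x_post += 1
print(f"final inhibitory weight: {w:.4f}")
```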

Weight normalization.

Due to the instability of the triplet STDP rule, some normalization mechanism is needed to constrain weight development. We use a modified version of the competitive normalization protocol proposed in [31], which we adapt for spiking neurons.

Specifically, we normalize the k-th connection every time there is a weight update (i.e., upon pre- or postsynaptic spiking):

$$w_k \leftarrow w_k + \eta_N \left( \frac{W_{\mathrm{total}}}{\sum_j w_j} - 1 \right) w_k, \tag{14}$$

where W_total is the target total weight of each connection type and ηN is the normalization rate. In the recurrent plastic networks, the W_total for the recurrent connections is determined by the coupling strength W. The normalization pushes the sum of the excitatory and the sum of the inhibitory feedforward connection weights close to the set target total weights $W^E_{\mathrm{total}}$ and $W^I_{\mathrm{total}}$ over time. The implications of implementing a regular normalization step (on every time step rather than only when spiking occurs) are discussed in S1 Text Section E.iv.1. Moreover, the implications of using different normalization rates ηN are discussed in S1 Text Section E.iv.2.

An alternative way to stabilize the weights, via subtractive normalization of only the excitatory synapses [12, 23, 33], was also considered, leading to comparable results (see S1 Text Section E.v).
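A minimal sketch of the competitive normalization step, assuming the multiplicative form reconstructed in Eq 14; a common rescaling factor keeps the sum near the target while preserving the relative differences between weights that the plasticity has created.

```python
import numpy as np

def normalize(w, W_total, eta_N):
    """Competitive normalization (Eq 14, as reconstructed above): all
    weights are rescaled by a common factor, so their sum relaxes
    toward W_total while relative differences are preserved."""
    return w + eta_N * (W_total / w.sum() - 1.0) * w

w = np.array([0.1, 0.4, 0.2, 0.3])
for _ in range(200):
    w = normalize(w, W_total=2.0, eta_N=0.1)
print(w, w.sum())        # sum -> 2.0, ratios between weights unchanged
```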

Approximating the posterior distribution of the model parameters

To estimate the set of parameters that lead to high in-group correlations and low between-group correlations, we used simulation-based inference [76]. The basic idea is to use simulations with known parameters to approximate the full posterior distribution of the model given the required output, i.e., the distribution of parameters, samples from which produce the required correlation structure. We use sequential Approximate Bayesian Computation (ABC) [51] to approximate the posterior distribution. We define a loss function that is minimized when in-group correlations are maximal and between-group correlations are minimal:

$$\mathcal{L} = \frac{1}{M} \sum_{i=1}^{M} \left(1 - C^{i}_{\mathrm{in}}\right) + \frac{1}{M(M-1)} \sum_{i \neq j} \left| C^{ij}_{\mathrm{between}} \right|, \tag{15}$$

where $C^{i}_{\mathrm{in}}$ is the average pairwise correlation within group i and $C^{ij}_{\mathrm{between}}$ is the average pairwise correlation between groups i and j; the loss is zero when all in-group correlations equal one and all between-group correlations vanish.

We define a uniform prior p(θ). A set of parameters θ = [rEE, rEI, rIE, rII] is sampled from it and used to run a simulation for 3 seconds. From the simulation results, the correlations are computed, which allows us to obtain the loss. We accept a parameter set if the loss is below the error ϵ and keep sampling until the number of accepted samples reaches 60. We use a kernel density estimate on the accepted samples to obtain an approximate posterior. Next, we rescale this approximate posterior with the original prior to obtain a proposal distribution that we use as a prior in the next step of the ABC. In each step, we reduce ϵ by setting it to the 75th percentile of the losses of the accepted samples (see [51] for more details). As a rule, we run 20 to 30 steps of the sequential ABC until the loss converges. We run separate fits for networks with different levels of sparsity, with connection probabilities p = 0.1, 0.25, 0.5, 0.75, 1.0. The fitting was done using a modified version of the simple-abc toolbox for Python (https://github.com/rcmorehead/simpleabc/).
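A skeleton of one step of this rejection-based loop is sketched below; `simulate_correlations` is a toy placeholder for the 3-second network simulation, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_correlations(theta):
    # toy placeholder: favours strong E assemblies and weak I assemblies
    c_in = np.clip(0.5 + 0.5 * theta[:2].mean() - 0.1 * rng.random(), 0, 1)
    c_between = np.clip(0.3 * theta[2:].mean() + 0.1 * rng.random(), 0, 1)
    return np.array([c_in]), np.array([c_between])

def loss(c_in, c_between):
    # zero when in-group correlations are 1 and between-group ones vanish (Eq 15)
    return np.mean(1 - c_in) + np.mean(np.abs(c_between))

def abc_step(eps, n_accept=60):
    accepted, losses = [], []
    while len(accepted) < n_accept:
        theta = rng.uniform(0, 1, 4)    # uniform prior over assembly strengths
        l = loss(*simulate_correlations(theta))
        if l < eps:
            accepted.append(theta); losses.append(l)
    # the next step would fit a KDE to `accepted` as the proposal and
    # shrink eps to the 75th percentile of the accepted losses
    return np.array(accepted), np.percentile(losses, 75)

samples, next_eps = abc_step(eps=0.8)
print(samples.mean(axis=0), next_eps)
```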

Reduced model

The dynamics of the system can be studied analytically using a simplified, reduced linear model. Here, each pair of variables (xi, yi) represents the excitatory and inhibitory mean firing rates of a neuron group. In general, these variables display complicated non-linear interactions that arise from the microscopic details of the LIF spiking network and synapse dynamics. However, in the stationary state, and away from any critical point, a linearised model can capture the essential features of the correlations between different populations.

Internal noise, modelled as independent Poisson trains to each individual neuron, becomes Gaussian white noise in the large-population limit, characterized by zero mean and variance σint. Each population is affected by different internal fluctuations. For simplicity, the external input, which is applied as the same train of Poisson spikes to all the neurons inside an input group, is also approximated as Gaussian white noise with mean η0 and variance σext.

Therefore, the simplified linear model reads

$$\frac{dx_i}{dt} = -x_i + a\,x_i + b\,y_i + \sum_{j \neq i} \left( W_{EE}\,x_j + W_{EI}\,y_j \right) + \xi^x_i(t) + \eta_i(t), \tag{16a}$$
$$\frac{dy_i}{dt} = -y_i + c\,x_i + d\,y_i + \sum_{j \neq i} \left( W_{IE}\,x_j + W_{II}\,y_j \right) + \xi^y_i(t) + \eta_i(t), \tag{16b}$$

for i = 1, …, M, where M is the number of populations, a, b, c, d are parameters controlling the in-group recurrent coupling, and WEE, WEI, WIE, WII are the couplings between different clusters. Internal noise for each population is represented by $\xi^x_i(t)$ and $\xi^y_i(t)$, while external noise is notated as ηi(t). All noises are uncorrelated, meaning that

$$\left\langle \xi^{c}_i(t)\, \xi^{c'}_j(t') \right\rangle = \sigma_{\mathrm{int}}^2\, \delta_{ij}\, \delta_{cc'}\, \delta(t - t'), \tag{17a}$$
$$\left\langle \left(\eta_i(t) - \eta_0\right)\left(\eta_j(t') - \eta_0\right) \right\rangle = \sigma_{\mathrm{ext}}^2\, \delta_{ij}\, \delta(t - t'), \tag{17b}$$
$$\left\langle \xi^{c}_i(t)\, \eta_j(t') \right\rangle = 0, \tag{17c}$$

with c, c′ ∈ {x, y}, and where 〈…〉 represents an ensemble average, i.e., an average over noise realizations. From this model, it is possible to obtain closed equations for the Pearson correlation coefficients (see S1 Text Section H for details). Notice that stochastic differential equations are never complete without an interpretation; we choose to interpret these in the Itô sense, which is relevant for the computations. Tables of all the parameters used in our simulations are given in S1 Text Section K, Tables A, B and C.
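An Euler–Maruyama sketch of this model, under the reconstruction of Eqs 16–17 above; coupling and noise values are illustrative, and the inhibitory couplings enter with a negative sign.

```python
import numpy as np

# Euler-Maruyama integration of the reduced linear model (Eqs 16-17).
M, dt, steps = 8, 1e-3, 100000
a, b, c_, d = 0.3, -0.4, 0.3, -0.2                  # in-group couplings
W_EE, W_EI, W_IE, W_II = 0.05, -0.06, 0.05, -0.03   # between-group couplings
sigma_int, sigma_ext, eta0 = 0.5, 0.5, 1.0

rng = np.random.default_rng(0)
x, y = np.zeros(M), np.zeros(M)
xs = np.empty((steps, M))
for t in range(steps):
    sx, sy = x.sum(), y.sum()
    rec_x = W_EE * (sx - x) + W_EI * (sy - y)       # input from the other groups
    rec_y = W_IE * (sx - x) + W_II * (sy - y)
    eta = eta0 + sigma_ext * rng.normal(size=M) / np.sqrt(dt)  # shared per group
    dx = -x + a * x + b * y + rec_x + eta
    dy = -y + c_ * x + d * y + rec_y + eta
    x = x + dt * dx + np.sqrt(dt) * sigma_int * rng.normal(size=M)
    y = y + dt * dy + np.sqrt(dt) * sigma_int * rng.normal(size=M)
    xs[t] = x
print("between-group correlation:", np.corrcoef(xs[steps // 2:].T)[0, 1])
```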

Supporting information

S1 Text. Supplementary analysis and parameters, referenced in the main text (Section H: derivation of the closed equations for the correlation coefficients; Section K: Tables A–C of simulation parameters).

Acknowledgments

EG thanks the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for support, and OV thanks the International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction and the Joachim Herz Foundation for support. AL is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1—Project number 39072764.

References

1. Priebe NJ. Mechanisms of orientation selectivity in the primary visual cortex. Annual Review of Vision Science. 2016;2(1):85–107. pmid:28532362
2. Osorio D. Directionally selective cells in the locust medulla. Journal of Comparative Physiology A. 1986;159(6):841–847. pmid:3806440
3. Ringach DL, Shapley RM, Hawken MJ. Orientation selectivity in macaque V1: diversity and laminar dependence. Journal of Neuroscience. 2002;22(13):5639–5651. pmid:12097515
4. Riesenhuber M, Poggio T. Hierarchical models of object recognition in cortex. Nature Neuroscience. 1999;2:1019–25. pmid:10526343
5. Scholl B, Tan AYY, Corey J, Priebe NJ. Emergence of Orientation Selectivity in the Mammalian Visual Pathway. Journal of Neuroscience. 2013;33(26):10616–10624. pmid:23804085
6. Rossi LF, Harris KD, Carandini M. Spatial connectivity matches direction selectivity in visual cortex. Nature. 2020;588(7839):648–652. pmid:33177719
7. Sharma J, Angelucci A, Sur M. Induction of visual orientation modules in auditory cortex. Nature. 2000;404(6780):841–847. pmid:10786784
8. Feldman DE, Brecht M. Map plasticity in somatosensory cortex. Science. 2005;310(5749):810–815. pmid:16272113
9. Bienenstock E, Cooper L, Munro P. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience. 1982;2(1):32–48. pmid:7054394
10. Blais BS, Intrator N, Shouval H, Cooper LN. Receptive Field Formation in Natural Scene Environments: Comparison of Single-Cell Learning Rules. Neural Computation. 1998;10(7):1797–1813. pmid:9744898
11. Hansel D, van Vreeswijk C. The mechanism of orientation selectivity in primary visual cortex without a functional map. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience. 2012;32(12):4049–4064. pmid:22442071
12. Clopath C, Vogels TP, Froemke RC, Sprekeler H. Receptive field formation by interacting excitatory and inhibitory synaptic plasticity. bioRxiv. 2016.
13. Brito CSN, Gerstner W. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation. PLOS Computational Biology. 2016;12(9):1–24. pmid:27690349
14. Deneve S, Machens C. Efficient codes and balanced networks. Nature Neuroscience. 2016;19:375–382. pmid:26906504
15. Zhou S, Yu Y. Synaptic E-I Balance Underlies Efficient Neural Coding. Frontiers in Neuroscience. 2018;12. pmid:29456491
16. Singer W. Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge. Proceedings of the National Academy of Sciences. 2021;118(33):e2101043118. pmid:34362837
17. Isaacson J, Scanziani M. How Inhibition Shapes Cortical Activity. Neuron. 2011;72(2):231–243. pmid:22017986
18. Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neuroscience. 2008;11(5):535–537. pmid:18376400
19. Vogels T, Abbott LF. Gating Multiple Signals through Detailed Balance of Excitation and Inhibition in Spiking Networks. Nature Neuroscience. 2009;12:483–91. pmid:19305402
20. Bhatia A, Moza S, Bhalla US. Precise excitation-inhibition balance controls gain and timing in the hippocampus. eLife. 2019;8:e43415. pmid:31021319
21. Ozeki H, Finn I, Schaffer E, Miller K, Ferster D. Inhibitory Stabilization of the Cortical Network Underlies Visual Surround Suppression. Neuron. 2009;62:578–92. pmid:19477158
22. Rubin D, Van Hooser S, Miller K. The Stabilized Supralinear Network: A Unifying Circuit Motif Underlying Multi-Input Integration in Sensory Cortex. Neuron. 2015;85(2):402–417. pmid:25611511
23. Schulz A, Miehl C, Berry MJ II, Gjorgjieva J. The generation of cortical novelty responses through inhibitory plasticity. eLife. 2021;10:e65309. pmid:34647889
24. Wehr M, Zador AM. Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature. 2003;426:442–446. pmid:14647382
25. Wu YK, Miehl C, Gjorgjieva J. Regulation of circuit organization and function through inhibitory synaptic plasticity. Trends in Neurosciences. 2022;45(12):884–898. pmid:36404455
26. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science. 2011;334(6062):1569–1573. pmid:22075724
27. Luz Y, Shamir M. Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity. PLOS Computational Biology. 2012;8(1):1–12. pmid:22291583
28. Hellyer PJ, Jachs B, Clopath C, Leech R. Local inhibitory plasticity tunes macroscopic brain dynamics and allows the emergence of functional brain networks. NeuroImage. 2016;124:85–95. pmid:26348562
29. Khajehabdollahi S, Giannakakis E, Prosi J, Levina A. Reservoir computing with self-organizing neural oscillators. The 2021 Conference on Artificial Life Proceedings. 2021.
30. Agnes EJ, Luppi AI, Vogels TP. Complementary Inhibitory Weight Profiles Emerge from Plasticity and Allow Flexible Switching of Receptive Fields. Journal of Neuroscience. 2020;40(50):9634–9649. pmid:33168622
31. Eckmann S, Young EJ, Gjorgjieva J. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks. Proceedings of the National Academy of Sciences. 2024;121(25):e2305326121. pmid:38870059
32. Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nature Neuroscience. 2024;27(5):964–974. pmid:38509348
33. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications. 2014;5:5319. pmid:25395015
34. Lagzi F, Bustos MC, Oswald AM, Doiron B. Assembly formation is stabilized by Parvalbumin neurons and accelerated by Somatostatin neurons. bioRxiv. 2021.
35. Mackwood O, Naumann LB, Sprekeler H. Learning excitatory-inhibitory neuronal assemblies in recurrent networks. eLife. 2021;10:e59715. pmid:33900199
36. Zenke F, Agnes E, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications. 2015;6:6922. pmid:25897632
37. Miehl C, Gjorgjieva J. Stability and learning in excitatory synapses by nonlinear inhibitory plasticity. PLOS Computational Biology. 2022;18(12):1–34. pmid:36459503
38. Effenberger F, Jost J, Levina A. Self-organization in balanced state networks by STDP and homeostatic plasticity. PLoS Computational Biology. 2015;11(9):e1004420. pmid:26335425
39. Larisch R, Gönner L, Teichmann M, Hamker FH. Sensory coding and contrast invariance emerge from the control of plastic inhibition over emergent selectivity. PLOS Computational Biology. 2021;17(11):1–37. pmid:34843455
40. Jiang X, Shen S, Cadwell CR, Berens P, Sinz F, Ecker AS, et al. Principles of connectivity among morphologically defined cell types in adult neocortex. Science. 2015;350(6264):aac9462. pmid:26612957
41. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. 2012;15:1498–1505. pmid:23001062
42. Lagzi F, Atay F, Rotter S. Bifurcation analysis of the dynamics of interacting subnetworks of a spiking network. Scientific Reports. 2019;9:1–17. pmid:31388027
43. Pfister JP, Gerstner W. Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity. Journal of Neuroscience. 2006;26(38):9673–9682. pmid:16988038
44. Wu GK, Arbuckle R, Liu BH, Tao HW, Zhang LI. Lateral Sharpening of Cortical Frequency Tuning by Approximately Balanced Inhibition. Neuron. 2008;58(1):132–143. pmid:18400169
45. Sutter ML, Loftus WC. Excitatory and Inhibitory Intensity Tuning in Auditory Cortex: Evidence for Multiple Inhibitory Mechanisms. Journal of Neurophysiology. 2003;90(4):2629–2647. pmid:12801894
46. Liu G. Local structural balance and functional interaction of excitatory and inhibitory synapses in hippocampal dendrites. Nature Neuroscience. 2004;7(4):373–379. pmid:15004561
47. Sukenik N, Vinogradov O, Weinreb E, Segal M, Levina A, Moses E. Neuronal circuits overcome imbalance in excitation and inhibition by adjusting connection numbers. Proceedings of the National Academy of Sciences. 2021;118(12). pmid:33723048
48. Gjorgjieva J, Clopath C, Audet J, Pfister JP. A triplet spike-timing–dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proceedings of the National Academy of Sciences. 2011;108(48):19383–19388. pmid:22080608
49. Ecker AS, Tolias AS. Is there signal in the noise? Nature Neuroscience. 2014;17(6):750–751. pmid:24866037
50. Aponte DA, Handy G, Kline AM, Tsukano H, Doiron B, Kato HK. Recurrent network dynamics shape direction selectivity in primary auditory cortex. Nature Communications. 2021;12(1):314. pmid:33436635
51. Beaumont MA, Cornuet JM, Marin JM, Robert CP. Adaptive approximate Bayesian computation. Biometrika. 2009;96(4):983–990.
52. Lagzi F, Fairhall AL. Emergence of co-tuning in inhibitory neurons as a network phenomenon mediated by randomness, correlations, and homeostatic plasticity. Science Advances. 2024;10(12):eadi4350. pmid:38507489
53. Seeman SC, Campagnola L, Davoudian PA, Hoggarth A, Hage TA, Bosma-Moody A, et al. Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex. eLife. 2018;7:e37349. pmid:30256194
54. Wildenberg GA, Rosen MR, Lundell J, Paukner D, Freedman DJ, Kasthuri N. Primate neuronal connections are sparse in cortex as compared to mouse. Cell Reports. 2021;36(11):109709. pmid:34525373
55. Barral J, Reyes AD. Synaptic scaling rule preserves excitatory–inhibitory balance and salient neuronal network dynamics. Nature Neuroscience. 2016;19(12):1690–1696. pmid:27749827
56. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience. 2000;8. pmid:10809012
57. Giannakakis E, Hutchings F, Papasavvas CA, Han CE, Weber B, Zhang C, et al. Computational modelling of the long-term effects of brain stimulation on the local and global structural connectivity of epileptic patients. PLOS ONE. 2020;15(2):1–21. pmid:32027654
58. Tetzlaff C, Kolodziejski C, Markelic I, Wörgötter F. Time scales of memory, learning, and plasticity. Biological Cybernetics. 2012;106. pmid:23160712
59. Zenke F, Gerstner W, Ganguli S. The temporal paradox of Hebbian learning and homeostatic plasticity. Current Opinion in Neurobiology. 2017;43:166–176. pmid:28431369
60. Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv. 2023. pmid:37162867
61. Hilgetag C, Kaiser M. Clustered Organization of Cortical Connectivity. Neuroinformatics. 2004;2:353–60. pmid:15365196
62. Yoshimura Y, Dantzker J, Callaway E. Excitatory cortical neurons form fine-scale functional networks. Nature. 2005;433:868–73. pmid:15729343
63. Brown S, Hestrin S. Intracortical circuits of pyramidal neurons reflect their long-range axonal targets. Nature. 2009;457:1133–6. pmid:19151698
64. Rossi LF, Harris KD, Carandini M. Spatial connectivity matches direction selectivity in visual cortex. Nature. 2020;588(7839):648–652. pmid:33177719
65. Ding Z, Fahey PG, Papadopoulos S, Wang EY, Celii B, Papadopoulos C, et al. Functional connectomics reveals general wiring rule in mouse visual cortex. bioRxiv. 2023.
66. Cossell L, Iacaruso M, Muir D, Houlton R, Sader E, Ko H, et al. Functional organization of excitatory synaptic strength in primary visual cortex. Nature. 2015;518. pmid:25652823
67. Smith G, Hein B, Whitney D, Fitzpatrick D, Kaschube M. Distributed network interactions and their emergence in developing neocortex. Nature Neuroscience. 2018;21. pmid:30349107
68. Lagzi F, Rotter S. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State. PLOS ONE. 2015;10(9):1–28. pmid:26407178
69. Zeraati R, Shi YL, Steinmetz N, Gieselmann M, Thiele A, Moore T, et al. Intrinsic timescales in the visual cortex change with selective attention and reflect spatial connectivity. Nature Communications. 2023;14. pmid:37012299
70. Badin AS, Fermani F, Greenfield SA. The Features and Functions of Neuronal Assemblies: Possible Dependency on Mechanisms beyond Synaptic Transmission. Frontiers in Neural Circuits. 2017;10. pmid:28119576
71. Umbach GS, Tan RJ, Jacobs J, Pfeiffer BE, Lega BC. Flexibility of functional neuronal assemblies supports human memory. Nature Communications. 2021;13.
72. Hofer S, Ko H, Pichler B, Vogelstein J, Ros H, Zeng H, et al. Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nature Neuroscience. 2011;14:1045–52. pmid:21765421
73. Levy RB, Reyes AD. Spatial Profile of Excitatory and Inhibitory Synaptic Connectivity in Mouse Primary Auditory Cortex. Journal of Neuroscience. 2012;32(16):5609–5619. pmid:22514322
74. Mongillo G, Rumpel S, Loewenstein Y. Inhibitory connectivity defines the realm of excitatory plasticity. Nature Neuroscience. 2018;21(10):1463–1470. pmid:30224809
75. Zenke F, Hennequin G, Gerstner W. Synaptic Plasticity in Neural Networks Needs Homeostasis with a Fast Rate Detector. PLOS Computational Biology. 2013;9(11):1–14. pmid:24244138
76. Cranmer K, Brehmer J, Louppe G. The frontier of simulation-based inference. Proceedings of the National Academy of Sciences. 2020; p. 201912789. pmid:32471948
77. Lueckmann JM, Boelts J, Greenberg D, Goncalves P, Macke J. Benchmarking simulation-based inference. In: International Conference on Artificial Intelligence and Statistics. PMLR; 2021. p. 343–351.
78. Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, et al. Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife. 2020;9:e56261. pmid:32940606
79. Bellec G, Scherr F, Subramoney A, Hajek E, Salaj D, Legenstein R, et al. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications. 2020;11.
80. Zenke F, Vogels TP. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Computation. 2021;33(4):899–925. pmid:33513328
81. Destexhe A, Rudolph M, Paré D. The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience. 2003;4(9):739–751. pmid:12951566
82. Debanne D, Gähwiler BH, Thompson SM. Long-term synaptic plasticity between pairs of individual CA3 pyramidal cells in rat hippocampal slice cultures. The Journal of Physiology. 1998;507(Pt 1):237. pmid:9490845
83. Feldman DE. Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex. Neuron. 2000;27(1):45–56. pmid:10939330