## Abstract

The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, has found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity, independent of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner comparable to maximum likelihood inference.

## Author summary

Brains are composed of complex networks, with communication taking place along synaptic connections between neurons. These connections can change and adapt, a process we call “plasticity”. However, the specific rules that dictate these changes remain largely unknown. In our visual system, which is responsible for vision and perception, connections are primarily thought to get stronger between neurons exhibiting similar activity, a principle known as ‘Hebbian plasticity’. A recent study revealed results that seemed to contradict this idea, showing that a neuron's stimulus preference reflects the number of activated synapses rather than their individual strength. Prompted by these findings, we developed a computational model with a hypothesis about how these changes could occur. We discovered that this new model, with a plasticity mechanism based on presynaptic activity, could capture the experimental findings and lead to benefits in population decoding. Our model doesn’t necessarily contradict a Hebbian model; rather, the two likely co-exist.

**Citation:** Gallinaro JV, Scholl B, Clopath C (2023) Synaptic weights that correlate with presynaptic selectivity increase decoding performance. PLoS Comput Biol 19(8): e1011362. https://doi.org/10.1371/journal.pcbi.1011362

**Editor: **Lyle J. Graham,
Université Paris Descartes, Centre National de la Recherche Scientifique, FRANCE

**Received: **November 28, 2022; **Accepted: **July 16, 2023; **Published: ** August 7, 2023

**Copyright: ** © 2023 Gallinaro et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability: **The code for simulation and figures is available on GitHub https://github.com/juliavg/decoding.

**Funding: **This work was supported by BBSRC (BB/N013956/1) (CC), Wellcome Trust (200790/Z/16/Z) the Simons Foundation (564408) (CC) and EPSRC (EP/R035806/1) (CC). Whitehall Foundation and NIH EY031137 (BS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

**Competing interests: ** The authors have declared that no competing interests exist.

## Introduction

Neurons in the visual cortex are selectively driven by specific features of sensory stimuli. This selective neural activity is proposed to be shaped by Hebbian plasticity [1] during developmental stages and sensory experience. Hebbian plasticity can be described as the strengthening of synaptic weights in a manner dependent on the correlation of the pre- and postsynaptic neurons [2, 3]. Thus, in the development of visual cortical circuits, Hebbian plasticity is thought to lead to a functional distribution in synaptic weights: weights are larger between pre- and postsynaptic neurons with similar functional preferences, i.e. similar preferred orientation [4], and only a few synaptic inputs are needed to define postsynaptic sensory feature selectivity [5]. In support of this framework, excitatory pyramidal neurons in layer 2/3 of mouse primary visual cortex have been shown to form functional subnetworks; the synaptic weights between neurons and the probability of connectivity between neurons reflect the similarity in tuning to visual features, such as orientation preference [6–10].

A recent study [11], however, has shown different results in the visual cortex of ferrets. Instead of a functionally-defined weight distribution, [11] found that preference for orientation stimuli is derived from the total number of excitatory synaptic inputs activated by a given stimulus. That is, strong and weak synapses were recruited for all visual stimuli presented. While this result appears to contradict the previous literature, there are many factors which impact how a given synaptic input might influence activity at the somatic output and postsynaptic selectivity. Some of these factors include synapse weight and number, reliability [12], location within the dendritic tree [13], co-activity with their neighbors [14], and presence of dendritic inhibition [15]. In fact, given that pairs of cortical neurons tend to be connected through multiple synaptic contacts [16–19], an overall stronger weight between pairs of neurons with correlated activity could be achieved through many contacts with a mixture of weights. In this case, it is possible that individual synaptic weights might depend on a learning rule that is independent of the postsynaptic neuron’s activity. Such a learning rule would allow flexibility in weighting and encode something other than covariance between presynaptic and postsynaptic activity. For example, [11] also showed that anatomical correlates of synaptic strength correlate with spine selectivity, i.e. the sharpness of the tuning curve or how selectively a spine responds to specific orientations. This observation leads to an interesting set of questions: (1) Can a presynaptic learning rule generate strong, selective synapses? (2) Is a presynaptic learning rule sufficient to generate selective postsynaptic neurons? (3) Does a presynaptic learning rule lead to efficient neuronal population decoding?

To address these questions, we model a feedforward circuit with a plasticity rule leading to synaptic weights that are correlated with presynaptic selectivity (termed ‘variance rule’), independent of the difference between pre- and postsynaptic preferred orientation (ΔPO). Using this model, we explore how synaptic weights based on presynaptic selectivity, rather than on functional similarity between presynaptic and postsynaptic activity, might be achieved by cortical neurons and what impact this weight distribution has on neuronal decoding. We show that this plasticity rule can lead to postsynaptic neurons selectively tuned to specific stimuli. Furthermore, we show that a decoder which uses the weights emerging from the plasticity rule performs comparably to a decoder derived from maximum likelihood inference. Overall, our results suggest a decoding model where the somatic preference is defined primarily by the number of activated spines given a certain stimulus, while the strength of individual spines correlates with the tuning selectivity of the corresponding inputs and does not entirely depend on the tuning similarity with the postsynaptic neuron.

## Results

Here we study the functional implications of a plasticity rule based on presynaptic variance, rather than covariance between presynaptic and postsynaptic activity (i.e. classic Hebbian), in a feedforward circuit representing neurons in the primary visual cortex. Our circuit is composed of one postsynaptic neuron, modeled as a point neuron (see Methods for details), which receives input from *N* presynaptic neurons through plastic weights *w*_{i}, and from an untuned inhibitory source (Fig 1A). Presynaptic neurons are orientation selective and their activity is modulated by tuning curves (Fig 1B). In order to model a diverse population (i.e. differences in orientation preference and selectivity), the tuning curve of each presynaptic neuron *i* has its own preferred orientation (PO, $\theta_i^{\mathrm{PO}}$) and width (*κ*_{i}) (Fig 1B).

(A) A postsynaptic point neuron receives input from 50 presynaptic neurons. Synaptic weights, represented by the spine sizes, are plastic according to a rule based on presynaptic variance. (B) The activity of presynaptic neurons is modulated by von Mises tuning curves. The tuning of the presynaptic neuron *i* is defined by its PO and its selectivity, given by the width parameter *κ*_{i}. (C) Synaptic weights as a function of time. Each line shows the synaptic weight of a single presynaptic neuron. The vertical dotted line indicates the moment when the stimulation protocol starts. (D) Distribution of synaptic weights at the end of the stimulation protocol. (E) Synaptic weights as a function of the width parameter *κ*. Green dots show data obtained from simulation and black line from theory. (F) Selectivity of presynaptic neurons as a function of their width parameter *κ*. (G) Synaptic weight at the end of simulation as a function of presynaptic selectivity. (D-G) Shown are data pooled from 100 independent simulation runs.

### A plasticity rule based on presynaptic variance results in weights that correlate with presynaptic selectivity

In order to build a model where synaptic weights correlate with presynaptic selectivity, we first study how synaptic plasticity might lead to this correlation. Based on previous studies showing that some forms of LTP can be induced without the need for postsynaptic depolarization [20–23], we propose a plasticity rule in which potentiation is based on presynaptic activity only; more specifically, a rule in which potentiation is based on presynaptic variance (variance rule). In this way, changes in synaptic weight, Δ*w*, can be summarized as

(1) $\Delta w = \eta_1 \left(r_{\mathrm{pre}} - \mu\right)^2 - \eta_0 w$

where *r*_{pre} is the presynaptic activity, *μ* is a constant representing the mean presynaptic activity, which is the same for all presynaptic neurons, *η*_{1} is the learning rate and *η*_{0} is the weight decay rate. We then stimulate the presynaptic neurons by showing all of them a stimulus of orientation *θ*. Every 200 ms, a new stimulus *θ* is randomly chosen from a uniform distribution over $[-\pi/2, \pi/2)$ and shown to all presynaptic neurons. We then observe the evolution of the synaptic weights (Fig 1C).

Once converged, the synaptic weights have a skewed distribution (Fig 1D), and are essentially a function of the width of the presynaptic tuning curve (Fig 1E). Since the selectivity of a presynaptic neuron is given by the width of its tuning curve *κ* (Fig 1F), the converged weights are also correlated with presynaptic selectivity (Fig 1G). The section “Plasticity weights” in Methods provides the equation for the equilibrium weight from a presynaptic neuron *i*, showing that it is indeed a function of its selectivity *κ*_{i}. Accordingly, initializing the weights from random values drawn from a uniform distribution leads to similar results (S7 Fig). Thus, we show that a feedforward model with a plasticity rule based on the variance of presynaptic activity leads to synaptic weights that are correlated with presynaptic selectivity.
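A minimal sketch of this simulation can illustrate the result. This is not the paper's exact implementation: the von Mises normalization, the value of *μ* (set to the stimulus-averaged rate), and the integration step are assumptions, noted in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(theta, kappa, theta_po, r_ref=125.0):
    # von Mises tuning curve; the 2*pi*I0 normalization is an assumption
    return r_ref * np.exp(kappa * np.cos(2 * (theta - theta_po))) / (2 * np.pi * np.i0(kappa))

N = 50
kappa = rng.uniform(0.01, 1.0, N)                  # width parameters
theta_po = rng.uniform(-np.pi / 2, np.pi / 2, N)   # preferred orientations

eta1, eta0 = 0.1, 0.03      # learning and decay rates (Methods)
mu = 125.0 / (2 * np.pi)    # assumed mean rate over uniformly drawn stimuli
w = np.zeros(N)
dt, T = 0.01, 0.2           # Euler step and stimulus duration in seconds (assumed step)

for _ in range(1000):                              # 1000 random stimuli
    r = tuning(rng.uniform(-np.pi / 2, np.pi / 2), kappa, theta_po)
    for _ in range(int(T / dt)):
        w += dt * (eta1 * (r - mu) ** 2 - eta0 * w)  # variance rule

# weights should grow with the width parameter kappa, as in Fig 1E
print(np.corrcoef(kappa, w)[0, 1])
```

Broadly tuned inputs (small *κ*) have low firing-rate variance across stimuli and therefore accumulate little potentiation, ending with weak weights.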

### Postsynaptic neurons are orientation selective and synaptic weights are uncorrelated with the difference in preferred orientation between pre- and postsynaptic neuron (ΔPO)

In the previous simulations, we find that, similar to the presynaptic neurons, the postsynaptic neuron is also orientation tuned (Fig 2B). Extracting the relevant parameters from the postsynaptic tuning curve across multiple simulation runs shows that individual postsynaptic neurons respond preferentially to orientations spread across the full range of stimuli and with selectivity values that cover almost the entire range [0, 1] (Fig 2C). Note that the value of selectivity depends on the mean activity level [24], which in our simulations is strongly influenced by the amount of inhibition received by the postsynaptic neuron (S1 Fig).

(A) Schematic representation illustrating that the postsynaptic neuron from Fig 1A is orientation selective and its synaptic weights are uncorrelated with presynaptic preferred orientation. The shadings show preferred orientation and illustrate that there are strong weights from presynaptic inputs that have the same preferred orientation as the postsynaptic neuron (ΔPO = 0, dark shading) and from presynaptic inputs that have a different preferred orientation than the postsynaptic neuron (ΔPO ≠ 0, light shading). (B) Examples of tuning curves for the postsynaptic neuron from 4 independent simulation runs. Firing rates are normalized by the maximum rate of each individual tuning curve. (C) Statistics of postsynaptic selectivity. Data pooled from 100 independent simulation runs. *Top*: Probability density function (pdf) of the postsynaptic PO. *Bottom*: Probability density function (pdf) of the postsynaptic selectivity. (D) Synaptic weights as a function of ΔPO. (E) Pearson’s correlation coefficient between pre- and postsynaptic neurons as a function of ΔPO. (D-E) Colors indicate presynaptic selectivity.

Synaptic plasticity rules that are based on the covariance between pre- and postsynaptic activity generate circuits in which the strength of synaptic weights is anti-correlated with ΔPO (S2 Fig). That is, if the pre- and postsynaptic neurons have similar tuning, they are typically connected with a strong weight. But this is not necessarily the case for a rule based on the variance of presynaptic activity only. Therefore, we next test whether the synaptic weights are anti-correlated with ΔPO. Similar to experimental data [11], we find that the synaptic weights are not correlated with ΔPO and depend mostly on presynaptic selectivity (Fig 2A and 2D). Even though there is no clear relationship between the synaptic weights and ΔPO, the activity of the postsynaptic neuron is still more correlated with the activity of presynaptic neurons that have similar POs (Fig 2E).
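The dissociation between weights and activity correlations can be sketched numerically. Below, weights are a *κ*-dependent stand-in for the learned equilibrium weights (an assumption, not the simulated values), yet the activity correlation between each input and the postsynaptic rate still falls off with |ΔPO|:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50
kappa = rng.uniform(0.1, 1.0, N)
theta_po = rng.uniform(-np.pi / 2, np.pi / 2, N)
# assumed stand-in for learned weights: monotonically increasing in kappa, blind to PO
w = np.i0(2 * kappa) / np.i0(kappa) ** 2 - 1

thetas = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
r = np.exp(kappa[:, None] * np.cos(2 * (thetas[None, :] - theta_po[:, None])))
r /= (2 * np.pi * np.i0(kappa))[:, None]
y = w @ r                                 # postsynaptic rate for each stimulus

# postsynaptic PO from the circular mean of its tuning curve (double angle)
post_po = 0.5 * np.angle(np.sum(y * np.exp(2j * thetas)))
dpo = np.abs((theta_po - post_po + np.pi / 2) % np.pi - np.pi / 2)

corr = np.array([np.corrcoef(r[i], y)[0, 1] for i in range(N)])
# activity correlation decreases with |dPO| even though w ignores dPO
print(np.corrcoef(dpo, corr)[0, 1])
```

The printed value is strongly negative: co-tuned inputs covary with the soma simply because their tuning curves overlap, without any ΔPO dependence in the weights.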

### Postsynaptic neurons inherit preferred orientation from the number of presynaptic inputs with similar preference

If it is not true that synaptic weights are stronger when pre- and postsynaptic neurons have similar POs, what is then defining the postsynaptic orientation preference? While some previous work has proposed that somatic functional preference is derived from the functional preference of a few stronger presynaptic inputs [9, 10], others have proposed that it is actually defined by the number of active spines with a given PO [11]. Since in our model the synaptic weights are not correlated with ΔPO (Fig 2D), we expect that somatic preference will be defined by the number of presynaptic inputs with similar preferences.

In order to test this, we wanted to know whether all orientations within the interval $[-\pi/2, \pi/2)$ were equally represented within presynaptic populations or whether there was any bias towards the PO of the postsynaptic neuron. Even though the POs of the presynaptic inputs were independently drawn from a uniform distribution (Fig 3A), we find that there are indeed slightly more presynaptic inputs that are co-tuned with the postsynaptic neuron than inputs that are not (Fig 3B). To further explore this result, we calculate the total input current to the postsynaptic neuron when different orientations are shown to the presynaptic population (Fig 3C). We then split the total input current into two values: the total number of active inputs and the mean synaptic weight of those active inputs. Inputs are considered to be active when their firing rate is above a certain threshold during the presentation of a stimulus (see Methods for details). We find that, similar to the total input current (Fig 3C), the number of active inputs is also modulated by the stimulus, such that a larger number is active when the PO is shown (Fig 3D). Importantly, we did not observe any difference in the main results when varying the value of the threshold (S10 and S11 Figs). The bias in the total input current is still present when varying the number of presynaptic neurons (N = 10, 100, 1000) and is enhanced by decreasing the total number of presynaptic inputs (S4–S6 Figs). Finally, the bias is amplified by a bias in the preferred orientations of the presynaptic neurons themselves, simulated by drawing presynaptic POs from a normal distribution (S9 Fig).

(A) Histogram of the PO of presynaptic neurons. (B) Histogram of the difference in PO between pre- and postsynaptic neurons. (C) Total input current to postsynaptic neuron when different orientations are being shown. (D) Number of active inputs when different orientations are being shown. (E) Mean weight of active inputs when different orientations are being shown. (F) A postsynaptic neuron receives inputs from 50 presynaptic neurons. When an orientation is shown to the presynaptic neurons, only a fraction of them have their activity above a certain threshold (active inputs, represented as colored spines), while the rest have their activity below the threshold (inactive inputs, represented as grey spines). When the PO of the postsynaptic neuron is shown ($\theta = \theta^{\mathrm{PO}}$, *left*), there are on average more active inputs than when the orthogonal orientation is shown (*θ* = 0, *right*).

Unexpectedly, however, we find that the mean weight of the active inputs is also modulated by the difference between the shown stimulus and the postsynaptic preferred stimulus (Fig 3E). This differs from what was observed in experiments [11] and is counter-intuitive given that the plasticity rule used here is based on presynaptic activity only. However, even though correlated activity between pre- and postsynaptic neurons does not lead directly to stronger weights, the postsynaptic PO is given by a sum of the presynaptic POs weighted by their selectivity and by their input weights (S3 Fig). Therefore, the postsynaptic PO will be biased by presynaptic neurons with stronger weights, which are also the ones with higher selectivity (S3 Fig).

### A decoder based on the variance rule performs comparably to a decoder based on maximum likelihood

Hebbian plasticity has been previously shown to enhance orientation selectivity [25], in accordance with experimental studies showing an increase in orientation selectivity after visual experience [8, 26]. Since Hebbian plasticity shapes circuits in which synaptic weights are anti-correlated with ΔPO (S2 Fig), the question emerges of whether there is any computational advantage of synaptic weights that correlate with presynaptic selectivity instead. Therefore we next study a feedforward circuit where the synaptic weights are correlated with presynaptic selectivity in the context of decoding stimuli.

We assume there is a population of input neurons with different tuning curves (Fig 4A). The same orientation *θ* is shown to all input neurons and the firing rate of input neuron *i* is given by the shown stimulus and its tuning curve, defined by its PO and width *κ*_{i}. A decoder receives inputs from all neurons in this population and estimates the shown orientation based on their tuning curves and firing rates [27]. We then derive a decoder based on maximum likelihood inference (ML decoder, see Methods for details) [28]. Assuming the input tuning curves to be modeled as von Mises functions, the ML decoder estimates the shown orientation $\hat{\theta}$ with (Fig 4B):

(2) $\hat{\theta} = \frac{1}{2}\arctan\left(\frac{\sum_i p_i \kappa_i \sin\left(2\theta_i^{\mathrm{PO}}\right)}{\sum_i p_i \kappa_i \cos\left(2\theta_i^{\mathrm{PO}}\right)}\right)$

where *p*_{i} is the firing rate of neuron *i* (*r*_{i} with added Poisson noise, see Methods for details), *κ*_{i} is a parameter of the von Mises tuning curve defining its width, $\theta_i^{\mathrm{PO}}$ is the preferred stimulus of input neuron *i* and $\hat{\theta}$ is the orientation estimated by the ML decoder.

(A) A decoder receives input from 50 input neurons and decodes the orientation that was shown to the whole population using the presented formula. (B) Decoded orientation ($\hat{\theta}$) plotted against the actual orientation shown to the input population (*θ*) for a decoder based on maximum likelihood inference (ML decoder, *w*_{i} = *κ*_{i}). (C) Weight as a function of the presynaptic width parameter *κ* for five different decoders. (D) Performance of 5 decoders. *left*: Absolute value of bias, *middle*: variance and *right*: error. Bars show mean and standard error of the mean across 100 independent simulation runs.

Similar to Bayesian cue integration, in which inputs are weighted by their reliability when integrated [29–31], this decoder multiplies the firing rate *p*_{i} of input neuron *i* by the width of its tuning curve *κ*_{i}, which defines its selectivity (Eq 2). This multiplication *p*_{i}*κ*_{i} could be easily implemented by synaptic weights which are correlated with presynaptic selectivity (Fig 1). Therefore, we next compare the performance of the ML decoder with that of a decoder that uses the weights obtained from the variance rule (Fig 1). In order to do this, we assume that the decoder multiplies the firing rate of each presynaptic neuron *i* by a weight *w*_{i} in order to estimate the shown orientation *θ*:

(3) $\hat{\theta} = \frac{1}{2}\arctan\left(\frac{\sum_i w_i p_i \sin\left(2\theta_i^{\mathrm{PO}}\right)}{\sum_i w_i p_i \cos\left(2\theta_i^{\mathrm{PO}}\right)}\right)$

where *w*_{i} are the weights used by the decoder, with *w*_{i} = *κ*_{i} in the case of the ML decoder.
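A minimal implementation of this weighted circular estimator (a sketch; `arctan2` is used in place of a bare arctangent so the estimate lands in the correct half-plane):

```python
import numpy as np

def decode(p, w, theta_po):
    # theta_hat = 0.5 * atan( sum_i w_i p_i sin(2 theta_i) / sum_i w_i p_i cos(2 theta_i) )
    s = np.sum(w * p * np.sin(2 * theta_po))
    c = np.sum(w * p * np.cos(2 * theta_po))
    return 0.5 * np.arctan2(s, c)

rng = np.random.default_rng(1)
N = 50
kappa = rng.uniform(0.1, 1.0, N)
theta_po = rng.uniform(-np.pi / 2, np.pi / 2, N)
theta = 0.3                                      # hypothetical shown orientation
rates = np.exp(kappa * np.cos(2 * (theta - theta_po))) / (2 * np.pi * np.i0(kappa))

theta_hat = decode(rates, kappa, theta_po)       # ML weighting: w_i = kappa_i
print(theta, theta_hat)
```

With a finite, randomly tuned population the estimate scatters around the true orientation; for a dense, evenly spaced population it recovers it almost exactly.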

We then compare the performance of the decoder that uses the weights obtained from the variance rule (Variance Rule) to other decoders (Fig 4C), namely: (i) a decoder where *w*_{i} = *κ*_{i} (ML); (ii) a decoder that uses the same weight for all input neurons, *w*_{i} = 1 (Uniform); (iii) a decoder that uses the same weights as the Variance Rule decoder but shuffled, and therefore uncorrelated with presynaptic selectivity (Shuffled); (iv) a decoder that uses the weights obtained from a simulation using a covariance plasticity rule (Covariance, see Methods). We find that the presynaptic variance decoder performs comparably to the ML decoder and better than the others (Fig 4D). We did not find any qualitative difference in the main results when adding noise to the input coming from the presynaptic neurons (S12 and S13 Figs).

In conclusion, we demonstrate how a plasticity rule based on presynaptic variance can lead to the formation of feedforward circuits where the postsynaptic neurons are selective for specific stimuli. Under this framework, we find that postsynaptic neurons can act as decoders whose performance rivals that of maximum likelihood inference. These results suggest a decoding model where individual inputs received by the postsynaptic neuron are weighted according to presynaptic selectivity rather than functional-similarity between pre- and postsynaptic activity.

## Discussion

In this work, we simulated a feedforward network in which weights were changing according to a plasticity rule based on the variance of presynaptic activity. The presynaptic population was composed of neurons with diverse preferred orientation (PO) and selectivity. We showed that, by stimulating the input population with different orientations, synaptic weights converged to values that were a function of presynaptic selectivity. We then showed that a decoder which used those weights to decode a stimulus had a performance comparable to a decoder based on maximum likelihood inference.

We focused on the computational implications of synaptic weights that correlate with presynaptic selectivity, but did not explore the functional advantages of somatic preference being defined by the number of spines rather than by their strength [11]. This could be interesting to explore in view of recent experimental evidence showing that learning is associated with the formation of new spines [32–35] and previous theoretical work suggesting that having multiple synaptic contacts between the same pair of neurons, instead of having a single strong one, could lead to more robust circuits with stable memories [36–38]. A possible extension to our model could be to include structural plasticity that causes multiple contacts between pre- and postsynaptic neurons to form. In addition, this model could be extended to include both a covariance rule and a presynaptic variance rule for individual neurons, as including only the latter results in fixed synaptic weights for any given presynaptic neuron. Finally, in the future, it will be interesting to study how a presynaptic variance rule can be applied to a recurrent network receiving inputs from multiple sources (e.g. feed-forward, long-range) and whether such models could better describe the development of visual cortical circuits.

Different structural plasticity rules have been proposed to describe activity-dependent spine turnover [39]. A structural plasticity rule forming stronger connectivity between neurons with similar orientation preference [40], could lead to a circuit where more synaptic contacts are formed between the same pair of pre- and postsynaptic neurons when their activity is correlated (Fig 2E). As a consequence, there could be an even stronger bias in the number of presynaptic inputs that are co-tuned with the soma (larger number of inputs with |ΔPO| = 0 in Fig 3B). Another possible consequence could be a stronger overall synaptic weight due to an increased number of synaptic contacts between co-tuned pre- and postsynaptic neurons. Using a modified simulation of our model to include multiple contacts, such a scenario appears very plausible and could reconcile different plasticity rules (see S14 Fig). This could be interesting when comparing experimental data derived from measuring synaptic weights as anatomical features of spines versus measuring synaptic weights as amplitudes of postsynaptic potentials recorded from the soma. On the other hand, using a structural plasticity rule in which weaker weights are more likely to be deleted than stronger ones [37, 41] could lead to different functional circuits.

Previous work has shown that the diversity of synaptic weights found in the visual cortex of ferrets could be explained by a model where a decoder reads information from a population of diverse PO and selectivity [42]. Here, we propose that the synaptic weights could reflect presynaptic selectivity. Such diversity in synaptic weights, therefore, would not be necessary if all presynaptic inputs had the same selectivity. Why then would there be a population of neurons with diverse selectivity in the visual cortex? One possibility is that decoding from a population of input neurons with a range of selectivity provides better discrimination capabilities for natural images [43]. Another option is that selectivity could somehow encode presynaptic uncertainty, but how exactly uncertainty is encoded in neural representation is still unclear [44].

According to the rule presented in this paper, synaptic weights would fluctuate as selectivity changes. Assuming that selectivity reflects uncertainty, synaptic weight fluctuations would also reflect presynaptic uncertainty. In this case, we refer to uncertainty as the amount of information a particular synapse is able to convey to its postsynaptic target, which would change either through changes in the presynaptic neuron’s activity or through noise within incoming sensory information. This is similar to what has recently been proposed in a theoretical paper [45], and in accordance with experimental data showing that synaptic weights fluctuate over time [46, 47]. In contrast to our model, however, experimental studies have shown that weights can also fluctuate in an activity-independent manner. Moreover, we might expect changes in synaptic weights to cause changes in the postsynaptic neuron’s activity. The postsynaptic PO would therefore change over time, reflecting changes in the reliability of presynaptic inputs. Interestingly, this possibility might correspond to recent experimental findings of ‘representational drift’, where neurons of the visual cortex and other brain regions reportedly exhibit fluctuations in their tuning properties over time [48].

In conclusion, our results suggest a model of the visual cortex in which postsynaptic preferred orientation is defined by the number of spines with a given preferred orientation, while synaptic strength is used as a weight reflecting presynaptic selectivity.

## Methods

The code for simulations and figures is available at https://github.com/juliavg/decoding.

### Neuron model

#### Output neuron.

The output neuron is modeled as a rate-based neuron. Its firing rate *y* at time *t* is given by the equation:

(4) $\tau_y \frac{dy}{dt} = -y + \alpha\left[\frac{w_{\mathrm{ref}}}{N} \sum_{i=1}^{N} w_i r_i + w_I r_I\right]_+$

where *τ*_{y} = 1 ms is the rate time constant, *α* = 0.1 nA^{−1} is the slope of the neuron’s transfer function, *w*_{ref} = 16 nA is a reference weight for the *N* excitatory inputs, *w*_{i} and *r*_{i} are respectively the weight and firing rate of input neuron *i*, and *w*_{I} = −1700 pA and *r*_{I} = 100 Hz are respectively the weight and firing rate of the inhibitory source. The inhibitory source could be understood as a single neuron firing at 100 Hz or multiple neurons firing at a lower rate and adding up to the same value. The notation $[\cdot]_+$ indicates a rectification that sets all negative values to 0.
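These dynamics can be sketched with forward-Euler integration. The units are collapsed to a single consistent scale (*w*_{I} written as −1.7 nA) and the *w*_{ref}/*N* scaling of the excitatory sum is an assumption; the point is only that *y* relaxes to the rectified, scaled input current.

```python
import numpy as np

def simulate_output(w, r, w_ref=16.0, w_I=-1.7, r_I=100.0,
                    alpha=0.1, tau_y=1e-3, dt=1e-4, t_end=0.02):
    # Euler integration of tau_y dy/dt = -y + alpha*[w_ref/N * sum(w*r) + w_I*r_I]_+
    # (w_ref/N scaling and unit conventions are assumptions)
    N = len(w)
    current = w_ref / N * np.sum(w * r) + w_I * r_I
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt / tau_y * (-y + alpha * max(current, 0.0))
    return y

w = np.full(50, 2.0)    # illustrative weights
r = np.full(50, 30.0)   # illustrative input rates (Hz)
y = simulate_output(w, r)
print(y)
```

After a few time constants *y* sits at the fixed point *α*[*I*]₊; here the current is 16/50·3000 − 170 = 790, so *y* converges to 79.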

#### Input neurons.

The input neurons are modeled as rate-based neurons with von Mises tuning curves, which are “bump”-like curves with circular boundary conditions. The firing rate of input neuron *i* at time *t* is given by:

(5) $r_i(t) = \frac{r_{\mathrm{ref}}}{2\pi I_0(\kappa_i)} \exp\left(\kappa_i \cos\left(2\left(\theta - \theta_i^{\mathrm{PO}}\right)\right)\right)$

where *r*_{ref} = 125 Hz is a reference firing rate, *θ* is the stimulus being shown to neuron *i* at time *t*, *κ*_{i} is a parameter defining the width of the tuning curve of neuron *i*, $\theta_i^{\mathrm{PO}}$ is the preferred orientation of neuron *i* and *I*_{0} is the modified Bessel function of order 0.
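A useful property of this parameterization (assuming the 2π*I*₀ normalization above) is that sharpening the curve raises the peak but leaves the stimulus-averaged rate unchanged, which is consistent with using a single mean *μ* for all inputs in the plasticity rule:

```python
import numpy as np

def von_mises_rate(theta, kappa, theta_po, r_ref=125.0):
    # Assumed normalization (2*pi*I0): the peak grows with kappa while the
    # stimulus-averaged rate stays fixed at r_ref / (2*pi) for every kappa
    return r_ref * np.exp(kappa * np.cos(2 * (theta - theta_po))) / (2 * np.pi * np.i0(kappa))

theta = np.linspace(-np.pi / 2, np.pi / 2, 1000, endpoint=False)
for kappa in (0.1, 0.5, 1.0):
    r = von_mises_rate(theta, kappa, 0.0)
    print(kappa, round(r.max(), 2), round(r.mean(), 2))
```

The mean column stays at 125/(2π) ≈ 19.9 Hz for all three widths, while the peak rises with *κ*.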

### Plasticity models

#### Variance (Figs 1–3).

The weights from the excitatory inputs are plastic, and evolve according to:

(6) $\frac{dw_i}{dt} = \eta_1 \left(r_i - \mu\right)^2 - \eta_0 w_i$

where *η*_{1} = 0.1 and *η*_{0} = 0.03 are learning rates and *μ* is a constant representing the mean activity of the presynaptic neurons. The same constant is used for all presynaptic neurons.

#### Covariance (S2 Fig).

For the simulations with the covariance rule, the excitatory weights evolve according to:

(7) $\frac{dw_i}{dt} = \eta_1 \left(r_i - \mu\right)\left(y - \gamma\right) - \eta_0 w_i$

where *γ* = 0.24 is a constant and the remaining parameters are the same as for the rule based on presynaptic variance.

### Plasticity simulation (Figs 1–3)

One output neuron receives input from *N* = 50 neurons. Each input neuron *i* has a tuning curve defined by its width *κ*_{i}, which is drawn randomly and independently for each neuron from a uniform distribution ]0, 1], and by its preferred orientation $\theta_i^{\mathrm{PO}}$, which is drawn randomly and independently for each neuron from a uniform distribution over $[-\pi/2, \pi/2)$.

The firing rate of the input neurons is initially set to *r*_{i} = 20 Hz for a warm-up period of 200 s, after which the stimulation protocol starts. During the stimulation protocol, a new orientation *θ* is chosen randomly from a uniform distribution over $[-\pi/2, \pi/2)$ every *T* = 200 ms. The same orientation *θ* is shown to all input neurons and the whole stimulation protocol consists of 1 000 stimuli. We run 100 independent simulation runs; the figures show data pooled from all of them.

### Orientation selectivity

Preferred orientation and orientation selectivity of neuron *i* are calculated from the circular mean of the neuronal response *R*_{i}:

(8) $R_i = \frac{\sum_{\theta} c_i(\theta)\, e^{i 2\theta}}{\sum_{\theta} c_i(\theta)}$

where *c*_{i}(*θ*) is the calculated tuning curve of neuron *i*. We calculate *c*_{i}(*θ*) as the mean response across stimuli using 20 bins on the interval $[-\pi/2, \pi/2)$ and using data from the last 500 stimuli in the simulation. Preferred orientation and the selectivity of neuron *i* are then calculated as the angle and the length of the resultant *R*_{i}, respectively.
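A sketch of this computation (assuming the double-angle convention for orientation, so the PO is half the angle of the resultant):

```python
import numpy as np

def po_and_selectivity(theta_bins, tuning):
    # Circular mean at the double angle (orientation has period pi)
    R = np.sum(tuning * np.exp(2j * theta_bins)) / np.sum(tuning)
    return 0.5 * np.angle(R), np.abs(R)   # preferred orientation, selectivity in [0, 1]

theta_bins = np.linspace(-np.pi / 2, np.pi / 2, 20, endpoint=False)
for kappa in (0.2, 1.0):
    tuning = np.exp(kappa * np.cos(2 * (theta_bins - 0.4)))  # toy tuning curve, PO = 0.4
    print(kappa, po_and_selectivity(theta_bins, tuning))
```

Both widths recover the PO of 0.4 rad, and the sharper curve yields the higher selectivity value.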

### Plasticity weights

The equilibrium weights for the plasticity rule based on presynaptic variance can be calculated by setting the left hand side of Eq 6 to zero:
(9)
Assuming that the weights reach the steady state *W* within the interval *T* (the duration for which each orientation is shown), we substitute *θ*(*t*) by the random variable Θ and take expected values:
(10)
Finding and substituting it in Eq 10 gives the equilibrium weight from presynaptic neuron *i*:
(11)

### Active input analysis (Fig 3)

For the active input analysis, we compare the mean activity of each input neuron during the presentation of a single stimulus to a threshold *β*, and consider the neuron active (inactive) when its activity is above (below) *β*.
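As a sketch (with `beta` the threshold from the text):

```python
import numpy as np

def active_inputs(mean_rates, beta):
    """Boolean mask of inputs whose mean rate during one stimulus exceeds beta."""
    return np.asarray(mean_rates) > beta
```

The number of active inputs for a stimulus is then simply `active_inputs(rates, beta).sum()`.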

### Maximum likelihood decoder

From [28], the estimated stimulus () of a decoder based on maximum likelihood inference (ML decoder) can be determined by:
(12)
where *p*_{i} is the firing rate of neuron *i*, *r*_{i} (*θ*) is the tuning curve of neuron *i*, and the prime denotes derivative. Assuming the tuning curves of the simulation input neurons (Eq 5):
(13)
then the estimated stimulus can be determined by:
(14)
Solving for gives:
(15)
See also [49] for a similar derivation.
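Eqs 12–15 are not reproduced above. Assuming Eq 5 is a von Mises tuning curve, *r*_{i}(*θ*) ∝ exp[*κ*_{i} cos 2(*θ* − *θ*_{i})], the ML condition has the following closed form (a sketch of the standard derivation, cf. [28]; the paper's exact notation may differ):

```latex
0 \;=\; \sum_i p_i \,\frac{r_i'(\hat{\theta})}{r_i(\hat{\theta})}
  \;=\; -2\sum_i p_i\,\kappa_i \sin 2\bigl(\hat{\theta} - \theta_i\bigr)
\;\Longrightarrow\;
\tan 2\hat{\theta}
  \;=\; \frac{\sum_i p_i\,\kappa_i \sin 2\theta_i}{\sum_i p_i\,\kappa_i \cos 2\theta_i}.
```

This is consistent with the decoder comparison below, where the ML decoder uses weights *w*_{i} = *κ*_{i}.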

### Decoder simulation (Fig 4)

A decoder infers the stimulus *θ* shown to a population of 50 input neurons. The activity of input neuron *i* in response to stimulus *θ* (*p*_{i}) is given as a sample randomly drawn from the Poisson distribution:
(16)
where *f*(*k*, *r*_{i} (*θ*)) describes the probability of neuron *i* firing with rate *k* in response to stimulus *θ*, and *r*_{i} (*θ*) is the tuning curve of neuron *i* (Eq 5).

The decoder estimates the shown stimulus according to:
(17)
where *p*_{i} is the firing rate of neuron *i* (obtained from sampling from Eq 16), *w*_{i} are the weights used by the decoder and is the preferred orientation of neuron *i*. The tuning curves and parameters of input neurons are taken from the 100 independent plasticity simulations.

We compare the performance of 5 different decoders, which differ in the weights *w*_{i} they use: (i) *variance*: the weights obtained from the corresponding plasticity simulation; (ii) *ML*: *w*_{i} = *κ*_{i}; (iii) *uniform*: *w*_{i} = 1; (iv) *shuffled*: the same weights as *variance*, but shuffled across inputs; (v) *covariance*: the weights obtained from the plasticity simulations using the covariance plasticity rule.
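A sketch of the decoder simulation, assuming Eq 17 is the weighted circular estimator implied by the ML closed form and the decoder list (setting *w*_{i} = *κ*_{i} recovers the ML decoder, *w*_{i} = 1 the uniform one); the paper's exact expression may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(p, w, theta_pref):
    """Weighted circular decoder (a sketch of Eq 17)."""
    z = np.sum(p * w * np.exp(2j * theta_pref))
    return (np.angle(z) / 2.0) % np.pi

# Toy population (parameters drawn as in the plasticity simulations)
N = 50
kappa = rng.uniform(0.1, 1.0, N)
theta_pref = rng.uniform(0.0, np.pi, N)
theta = 1.0                                   # true stimulus
rates = 125.0 * np.exp(kappa * np.cos(2 * (theta - theta_pref))) \
    / (2 * np.pi * np.i0(kappa))

p = rng.poisson(rates)                        # Eq 16: Poisson population response
theta_ml = decode(p, kappa, theta_pref)       # "ML" decoder, w_i = kappa_i
theta_uni = decode(p, np.ones(N), theta_pref) # "uniform" decoder, w_i = 1
```

With noise-free responses and a homogeneous, evenly spaced population, this estimator recovers the stimulus exactly; the different weight choices matter once Poisson noise and heterogeneous tuning widths are introduced.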

For each independent plasticity simulation, we evaluate the decoder by showing 20 orientations equally spaced in the interval . Each orientation is shown repeatedly for 100 trials, and we calculate the bias *b*_{est}(*θ*), variance and error *e*_{est}(*θ*) using:
(18) (19) (20)
where 〈〉 indicates the average across trials. For each independent plasticity simulation, we average the bias *b*_{est}(*θ*), variance and error *e*_{est}(*θ*) across stimuli *θ* to obtain a single value per simulation.
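These statistics can be sketched as follows, using a wrapped difference on the π-periodic orientation circle (an assumption about how wrap-around is handled):

```python
import numpy as np

def circ_diff(a, b):
    """Signed difference on the pi-periodic orientation circle."""
    return (np.asarray(a) - b + np.pi / 2) % np.pi - np.pi / 2

def estimator_stats(theta_hat, theta):
    """Bias, variance and mean squared error of estimates for one stimulus."""
    d = circ_diff(theta_hat, theta)
    bias = d.mean()
    var = d.var()
    err = np.mean(d ** 2)   # err = var + bias**2
    return bias, var, err
```

Averaging these three quantities over the 20 stimuli then yields the single per-simulation values reported in Fig 4.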

## Supporting information

### S1 Fig. The effect of weaker inhibition on the selectivity of the postsynaptic neuron.

The simulations performed for this figure are the same as those performed for Fig 2, except that the weight from the inhibitory source is 5 times weaker. (A) Examples of tuning curves for the postsynaptic neuron from 4 independent simulation runs. (B) Statistics of postsynaptic selectivity. Data pooled from 100 independent simulation runs. *Top*: Probability density function (pdf) of the postsynaptic PO. *Bottom*: Probability density function (pdf) of the postsynaptic selectivity.

https://doi.org/10.1371/journal.pcbi.1011362.s001

(EPS)

### S2 Fig. Relationship between synaptic weights and selectivity of presynaptic inputs in a simulation using the covariance plasticity rule.

The simulations performed for this figure are the same as those performed for Figs 1 and 2, except that the plasticity rule used is the covariance rule (see Methods for details). (A) Synaptic weights as a function of time. Each line shows the synaptic weight of a single presynaptic neuron. The vertical dotted line indicates the moment when the stimulation protocol starts. (B) Distribution of synaptic weights at the end of the stimulation protocol. (C) Synaptic weights as a function of presynaptic selectivity. (D) Synaptic weights as a function of ΔPO. (E) Pearson’s correlation coefficient between pre- and postsynaptic neurons as a function of ΔPO. (D-E) Colors indicate presynaptic selectivity.

https://doi.org/10.1371/journal.pcbi.1011362.s002

(EPS)

### S3 Fig. Postsynaptic PO is influenced by presynaptic PO, presynaptic selectivity and synaptic weights.

(A) The postsynaptic tuning curve is estimated by adding the presynaptic tuning curves multiplied by the corresponding synaptic weights. The estimated postsynaptic PO is then extracted from the estimated postsynaptic tuning curve. (B) Same as in (A) but all weights are considered to be equal *w*_{i} = 1. (C) Same as in (A) but all presynaptic tuning curves are considered to have the same width *κ*_{i} = 1. (D) Same as in (A) but with *w*_{i} = 1 and *κ*_{i} = 1.

https://doi.org/10.1371/journal.pcbi.1011362.s003

(EPS)

### S4 Fig. Same as Fig 3 in the main text, but with *N* = 1000 input neurons.

Inhibitory rate was adapted to 1000 Hz.

https://doi.org/10.1371/journal.pcbi.1011362.s004

(EPS)

### S5 Fig. Same as Fig 3 in the main text, but with *N* = 100 input neurons.

Inhibitory rate was adapted to 200 Hz.

https://doi.org/10.1371/journal.pcbi.1011362.s005

(EPS)

### S6 Fig. Same as Fig 3 in the main text, but with *N* = 10 input neurons.

Inhibitory rate was adapted to 15 Hz.

https://doi.org/10.1371/journal.pcbi.1011362.s006

(EPS)

### S7 Fig. Same as Fig 1 in the main text, but weights are initialized from a uniform distribution between 0 and 0.05.

https://doi.org/10.1371/journal.pcbi.1011362.s007

(EPS)

### S8 Fig. Same as Fig 2 in the main text, but the preferred orientations of the input neurons are drawn from a normal distribution with mean 0 and standard deviation .

https://doi.org/10.1371/journal.pcbi.1011362.s008

(EPS)

### S9 Fig. Same as Fig 3 in the main text, but the preferred orientations of the input neurons are drawn from a normal distribution with mean 0 and standard deviation .

https://doi.org/10.1371/journal.pcbi.1011362.s009

(EPS)

### S10 Fig. Same as Fig 3 in the main text, but the neurons are considered to be active when their activity is above the threshold 1.5*β*, where .

https://doi.org/10.1371/journal.pcbi.1011362.s010

(EPS)

### S11 Fig. Same as Fig 3 in the main text, but the neurons are considered to be active when their activity is above the threshold 0.5*β*, where .

https://doi.org/10.1371/journal.pcbi.1011362.s011

(EPS)

### S12 Fig. Same as Fig 4 in the main text, but the firing rate of each input neuron was calculated using Eq 5 and substituting *θ* by *θ* + *σ*, where *σ* is randomly drawn for each input neuron from a normal distribution with mean 0 and standard deviation .

https://doi.org/10.1371/journal.pcbi.1011362.s012

(EPS)

### S13 Fig. Same as Fig 4 in the main text, but instead of adding Poisson noise to the firing rate of the input neurons, we added noise drawn from a normal distribution with mean 0 and standard deviation 5 Hz.

https://doi.org/10.1371/journal.pcbi.1011362.s013

(EPS)

### S14 Fig. Simulation considering multiple synaptic contacts between a pair of pre- and postsynaptic neurons with correlated activity.

We start by establishing a simulation with a setup similar to S8 Fig, where the preferred orientations of the input neurons are drawn from a normal distribution with mean 0 and standard deviation . We then purposely manipulate the connectivity of two presynaptic neurons *N*_{1} and *N*_{2}. For the first presynaptic neuron *N*_{1}, we choose its preferred orientation to be the same as that of the postsynaptic neuron (which is given by the mean 0 of the distribution of presynaptic preferred orientations) and assume it has low selectivity by setting its *κ* = 0.5. Assuming that the number of synaptic contacts between a pre- and postsynaptic pair reflects the correlation between them, we create 5 contacts between *N*_{1} and the postsynaptic neuron. The individual strengths of all 5 contacts are subject to the plasticity rule based on presynaptic variance. For the second presynaptic neuron *N*_{2}, we choose its preferred orientation to be orthogonal to that of the postsynaptic neuron () and assume it has high selectivity by setting its *κ* = 1. Since *N*_{2} has low correlation with the postsynaptic neuron, we create only a single synaptic contact between them. All other presynaptic neurons have a single synaptic contact to the postsynaptic neuron; their preferred orientations are drawn randomly from a normal distribution with mean 0 and standard deviation , and their selectivities *κ* are drawn randomly from a uniform distribution, as in the main simulations of the manuscript. We then run the simulation in the same way as the main simulation in the manuscript.
As a result, while the weight of each individual synaptic contact is smaller between *N*_{1} and the postsynaptic neuron than between *N*_{2} and the postsynaptic neuron, the overall pre–post synaptic weight (which accounts for both the number of contacts and the strength of each individual contact) is larger for *N*_{1} than for *N*_{2}.

https://doi.org/10.1371/journal.pcbi.1011362.s014

(EPS)

## References

- 1.
Hebb DO. The organization of behavior. New York: Wiley; 1949.
- 2.
Gerstner W, Kistler WM. Spiking neuron models: Single neurons, populations, plasticity. Cambridge University Press; 2002.
- 3. Gerstner W, Kistler WM. Mathematical formulations of Hebbian learning. Biological Cybernetics. 2002;87(5-6):404–415. pmid:12461630
- 4. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience. 2010;13(3):344–352. pmid:20098420
- 5. Goetz L, Roth A, Häusser M. Active dendrites enable strong but sparse inputs to determine orientation selectivity. Proceedings of the National Academy of Sciences. 2021;118(30). pmid:34301882
- 6. Harris KD, Mrsic-Flogel TD. Cortical connectivity and sensory coding. Nature. 2013;503(7474):51–58. pmid:24201278
- 7. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011;473(7345):87–91. pmid:21478872
- 8. Ko H, Cossell L, Baragli C, Antolik J, Clopath C, Hofer SB, et al. The emergence of functional microcircuits in visual cortex. Nature. 2013;496(7443):96–100. pmid:23552948
- 9. Cossell L, Iacaruso MF, Muir DR, Houlton R, Sader EN, Ko H, et al. Functional organization of excitatory synaptic strength in primary visual cortex. Nature. 2015;518(7539):399–403. pmid:25652823
- 10. Lee WCA, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, et al. Anatomy and function of an excitatory network in the visual cortex. Nature. 2016;532(7599):370–374. pmid:27018655
- 11. Scholl B, Thomas CI, Ryan MA, Kamasawa N, Fitzpatrick D. Cortical response selectivity derives from strength in numbers of synapses. Nature. 2021;590(7844):111–114. pmid:33328635
- 12. Branco T, Staras K, Darcy KJ, Goda Y. Local Dendritic Activity Sets Release Probability at Hippocampal Synapses. Neuron. 2008;59(3):475–485. pmid:18701072
- 13. Stuart G, Spruston N. Determinants of Voltage Attenuation in Neocortical Pyramidal Neuron Dendrites. Journal of Neuroscience. 1998;18(10):3501–3510. pmid:9570781
- 14. Scholl B, Wilson DE, Fitzpatrick D. Local Order within Global Disorder: Synaptic Architecture of Visual Space. Neuron. 2017;96(5):1127–1138. pmid:29103806
- 15. Gidon A, Segev I. Principles Governing the Operation of Synaptic Inhibition in Dendrites. Neuron. 2012;75(2):330–341. pmid:22841317
- 16. Feldmeyer D, Egger V, Lübke J, Sakmann B. Reliable synaptic connections between pairs of excitatory layer 4 neurones within a single ‘barrel’ of developing rat somatosensory cortex. The Journal of Physiology. 1999;521(Pt 1):169. pmid:10562343
- 17. Feldmeyer D, Lübke J, Silver RA, Sakmann B. Synaptic connections between layer 4 spiny neurone- layer 2/3 pyramidal cell pairs in juvenile rat barrel cortex: physiology and anatomy of interlaminar signalling within a cortical column. The Journal of Physiology. 2002;538(3):803–822. pmid:11826166
- 18. Feldmeyer D, Lübke J, Sakmann B. Efficacy and connectivity of intracolumnar pairs of layer 2/3 pyramidal cells in the barrel cortex of juvenile rats. The Journal of physiology. 2006;575(Pt 2):583–602. pmid:16793907
- 19. Markram H, Lübke J, Frotscher M, Roth A, Sakmann B. Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. The Journal of Physiology. 1997;500(Pt 2):409. pmid:9147328
- 20. Urban NN, Barrionuevo G. Induction of Hebbian and Non-Hebbian Mossy Fiber Long-Term Potentiation by Distinct Patterns of High-Frequency Stimulation. Journal of Neuroscience. 1996;16(13):4293–4299. pmid:8753890
- 21. Ito I, Sugiyama H. Roles of glutamate receptors in long-term potentiation at hippocampal mossy fiber synapses. Neuroreport. 1991;2(6):333–336. pmid:1680482
- 22. Katsuki H, Kaneko S, Tajima A, Satoh M. Separate mechanisms of long-term potentiation in two input systems to CA3 pyramidal neurons of rat hippocampal slices as revealed by the whole-cell patch-clamp technique. Neuroscience Research. 1991;12(3):393–402. pmid:1686310
- 23. Zalutsky RA, Nicoll RA. Comparison of two forms of long-term potentiation in single hippocampal neurons. Science. 1990;248(4963):1619–1624. pmid:2114039
- 24. Merkt B, Schüßler F, Rotter S. Propagation of orientation selectivity in a spiking network model of layered primary visual cortex. PLOS Computational Biology. 2019;15(7):e1007080. pmid:31323031
- 25. Sadeh S, Clopath C, Rotter S. Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity. PLOS Computational Biology. 2015;11(6):e1004307. pmid:26090844
- 26. Hoy JL, Niell CM. Layer-Specific Refinement of Visual Cortex Function after Eye Opening in the Awake Mouse. Journal of Neuroscience. 2015;35(8):3370–3383. pmid:25716837
- 27. Yates JL, Scholl B. Unraveling Functional Diversity of Cortical Synaptic Architecture Through the Lens of Population Coding. Frontiers in Synaptic Neuroscience. 2022;14:888214. pmid:35957943
- 28.
Dayan P, Abbott LF. Theoretical Neuroscience. The MIT Press; 2005.
- 29. Knill DC, Pouget A. The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences. 2004;27(12):712–719. pmid:15541511
- 30. Ma WJ, Beck JM, Latham PE, Pouget A. Bayesian inference with probabilistic population codes. Nature Neuroscience. 2006;9(11):1432–1438. pmid:17057707
- 31. Echeveste R, Lengyel M. The Redemption of Noise: Inference with Neural Populations. Trends in Neurosciences. 2018;41(11):767. pmid:30366563
- 32. Hedrick NG, Lu Z, Bushong E, Singhi S, Nguyen P, Magaña Y, et al. Learning binds new inputs into functional synaptic clusters via spinogenesis. Nature Neuroscience. 2022;25(6):726–737. pmid:35654957
- 33. Fu M, Yu X, Lu J, Zuo Y. Repetitive motor learning induces coordinated formation of clustered dendritic spines in vivo. Nature. 2012;483(7387):92–95. pmid:22343892
- 34. Hofer SB, Mrsic-Flogel TD, Bonhoeffer T, Hübener M. Experience leaves a lasting structural trace in cortical circuits. Nature. 2009;457(7227):313–317. pmid:19005470
- 35. Xu T, Yu X, Perlik AJ, Tobin WF, Zweig JA, Tennant K, et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature. 2009;462(7275):915–919. pmid:19946267
- 36. Deger M, Seeholzer A, Gerstner W. Multicontact co-operativity in spike-timing-dependent structural plasticity stabilizes networks. Cerebral Cortex. 2018;28(4):1396–1415. pmid:29300903
- 37. Fauth M, Wörgötter F, Tetzlaff C. The Formation of Multi-synaptic Connections by the Interaction of Synaptic and Structural Plasticity and Their Functional Consequences. PLOS Computational Biology. 2015;11(1):e1004031. pmid:25590330
- 38. Fauth MJ, van Rossum MC. Self-organized reactivation maintains and reinforces memories despite synaptic turnover. eLife. 2019;8. pmid:31074745
- 39. Fauth M, Tetzlaff C. Opposing Effects of Neuronal Activity on Structural Plasticity. Frontiers in Neuroanatomy. 2016;10:75. pmid:27445713
- 40. Gallinaro JV, Rotter S. Associative properties of structural plasticity based on firing rate homeostasis in recurrent neuronal networks. Scientific Reports. 2018;8(1):3754. pmid:29491474
- 41. Le Bé JV, Markram H. Spontaneous and evoked synaptic rewiring in the neonatal neocortex. Proceedings of the National Academy of Sciences of the United States of America. 2006;103(35):13214–13219. pmid:16924105
- 42. Yates JL, Scholl B. Synaptic diversity naturally arises from neural decoding of heterogeneous populations. bioRxiv. 2021; p. 2020.10.15.341131.
- 43. Goris RLT, Simoncelli EP, Movshon JA. Origin and Function of Tuning Diversity in Macaque Visual Cortex. Neuron. 2015;88(4):819–831. pmid:26549331
- 44. Koblinger Á, Fiser J, Lengyel M. Representations of uncertainty: where art thou? Current Opinion in Behavioral Sciences. 2021;38:150–162. pmid:34026948
- 45. Aitchison L, Jegminat J, Menendez JA, Pfister JP, Pouget A, Latham PE. Synaptic plasticity as Bayesian inference. Nature Neuroscience. 2021;24(4):565–571. pmid:33707754
- 46. Hazan L, Ziv NE. Activity dependent and independent determinants of synaptic size diversity. Journal of Neuroscience. 2020;40(14):2828–2848. pmid:32127494
- 47. Ziv NE, Brenner N. Synaptic Tenacity or Lack Thereof: Spontaneous Remodeling of Synapses. Trends in Neurosciences. 2018;41(2):89–99. pmid:29275902
- 48. Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Current Opinion in Neurobiology. 2022;76:102609. pmid:35939861
- 49. Keemink SW, Boucsein C, van Rossum MCW. Effects of V1 surround modulation tuning on visual saliency and the tilt illusion. Journal of Neurophysiology. 2018;120(3):942–952. pmid:29847234