The authors have declared that no competing interests exist.

Conceived and designed the experiments: PZ JT. Performed the experiments: PZ. Analyzed the data: PZ CD. Wrote the paper: JT PZ.

The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.

The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large and ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism, which is kept in check by several homeostatic plasticity mechanisms that stabilize the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.

The computations performed by cortical circuits depend on their detailed patterns of synaptic connection strengths. While the gross patterning of connections across different cortical layers has been well described in some cases

Self-organization typically relies on self-reinforcing (positive feedback) processes combined with competition for limited resources. In neuroscience, an example of a self-reinforcing process is correlated firing of two groups of neurons strengthening the synaptic connections between them according to Hebb's postulate of synaptic plasticity, with the strengthened connections in turn amplifying the correlated firing of the neurons. An example of competition for a limited resource is a synaptic scaling mechanism that limits the sum of a neuron's synaptic efficacies, such that one synapse can only grow at the expense of others. The combination of self-reinforcing mechanisms with limited resources often gives rise to the formation of structural patterns, which may or may not have specific functional advantages. Here, we offer an explanation for fundamental aspects of the fluctuations of synaptic strengths and the distribution of synaptic efficacies based on self-organization.
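The interplay of these two ingredients can be illustrated with a toy simulation (a minimal sketch, not the model itself: the growth rule, learning rate, and population size are illustrative assumptions, with a growth rate that increases with current strength standing in for Hebbian amplification):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
w = rng.random(n)
w /= w.sum()                          # random initial efficacies, fixed total

for _ in range(5000):
    # positive feedback: a connection's growth rate increases with its
    # own strength (a stand-in for Hebbian amplification of correlations)
    w = w + 0.01 * w * (w / w.max())
    # competition: the total synaptic resource is held fixed, so one
    # connection can only grow at the expense of the others
    w /= w.sum()

# the initially unstructured weights differentiate into few strong and
# many weak connections
```

Even though all weights start within the same order of magnitude, the positive feedback amplifies small initial differences while the normalization forces the rest of the population to shrink, producing strong structural differentiation.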

Specifically, recent evidence shows that the distribution of synaptic efficacies is highly skewed

To investigate whether and how these properties can arise from self-organization induced by neuronal plasticity mechanisms, we have developed a self-organizing recurrent network (SORN) model. It extends a previous model

We simulated networks of 200 excitatory and 40 inhibitory neurons for 10,000 time steps and observed the resulting activity patterns (

A: spike trains of six randomly selected excitatory neurons during 200 time steps. B: example of an ISI distribution and exponential fit for a typical excitatory neuron. C: histogram of CV values of a network's excitatory units. D: correlations between all neurons. Neurons 201–240 are inhibitory. Network activity within the first 3,000 steps is discarded to allow the arbitrary initial state to wash out.

A: histogram of EPSP sizes from

To estimate the probability distribution governing excitatory-to-excitatory synaptic strengths, we bin connection strengths and divide the number of occurrences in each bin by the bin size. The bin sizes are uniform on the log scale. To mimic experimental procedures
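This log-binned density estimate can be sketched as follows (a minimal sketch; the bin count, surrogate data, and function name `log_binned_density` are illustrative choices, not the paper's exact analysis code):

```python
import numpy as np

def log_binned_density(weights, n_bins=30):
    """Estimate a probability density using bins that are uniform on a
    log scale: count samples per bin, then divide by the linear bin width
    and the sample size."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]                      # log bins require positive values
    edges = np.logspace(np.log10(w.min()), np.log10(w.max()), n_bins + 1)
    edges[0], edges[-1] = w.min(), w.max()   # guard against edge rounding
    counts, _ = np.histogram(w, bins=edges)
    density = counts / (np.diff(edges) * w.size)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    return centers, density, edges

# example: apply to lognormally distributed surrogate "weights"
rng = np.random.default_rng(0)
centers, density, edges = log_binned_density(rng.lognormal(-1.0, 0.8, 5000))
```

Dividing counts by the linear bin width keeps the estimate a proper density even though the bins widen toward the tail, so a lognormal distribution appears as a parabola when plotted on log-log axes.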

As the network evolves it goes through different phases (

A: fraction of existing excitatory-to-excitatory connections recorded over 5 million time steps. The inset shows an enlargement of the last 1,000 steps. B: synaptic weight distribution recorded at the 20,000th time step. C: synaptic weight distribution recorded at the 500,000th time step. D: synaptic weight distributions recorded at the 3,000,000th (blue dots) and 4,000,000th (red dots) time steps. Blue and red curves in B–D are lognormal fits.

As a next step, we assessed the dynamics of synaptic connection strengths in SORN.

A: example traces of different synaptic weights. B: distribution of lifetimes of newly created synapses matches a power law with exponent close to −3/2. C: distribution of relative spine volume changes across one day from

To better understand the mechanism through which the network self-organizes its connectivity and dynamics, we examined how the strength of a synaptic connection influences its probability of undergoing further growth or decline. Among all the plasticity mechanisms, only STDP and synaptic normalization adjust the weights of EE connections. While synaptic normalization will only scale all incoming excitatory-to-excitatory connections linearly, STDP has the power to change the shape of the distribution of synaptic weights impinging onto a neuron. When we recorded the isolated effect of STDP,

A: the average fraction of synaptic connections that increase due to STDP in one time step, as a function of connection weight. B: same as A but for weight decreases due to STDP. C: the average number of increased weights minus the average number of decreased weights, divided by the total number of weights of this size. D: mean absolute change of synaptic weight due to STDP and synaptic scaling over 200 time steps.

With all forms of plasticity present, the network will show irregular firing activity and develop a lognormal-like weight distribution. These results are stable over a large range of parameter values (see

Intrinsic plasticity and inhibitory STDP both act to maintain a low average firing rate of excitatory cells, and both are important for maintaining healthy network dynamics. If both are switched off, some units exhibit very high firing rates while others remain essentially silent, and all the phenomena shown in
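As a rough illustration of how firing rates can be regulated through thresholds, an intrinsic plasticity rule of this general kind can be sketched as follows (the learning rate `eta_ip` and target rate are illustrative assumptions, not the model's values):

```python
import numpy as np

def intrinsic_plasticity(T_E, x, target_rate=0.1, eta_ip=0.01):
    """Raise thresholds of units that just fired (pushing their rate
    down toward the target) and lower thresholds of silent units."""
    return T_E + eta_ip * (x - target_rate)

T_E = np.full(5, 0.5)                 # illustrative initial thresholds
x = np.array([1, 0, 1, 0, 0])         # current binary activity
T_E = intrinsic_plasticity(T_E, x)
# thresholds of the active units rose; those of silent units fell
```

Averaged over time, each unit's threshold settles where its firing rate matches the target, preventing both runaway firing and complete silence.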

Understanding the structure and dynamics of neural circuits and reproducing them in neural network models remains a major challenge. Classic models of STDP have been shown to lead to physiologically unrealistic bimodal weight distributions under certain conditions

It is important, however, to also consider alternative explanations. One of the simplest ways to obtain lognormal distributions is by virtue of Gibrat's law, which was originally developed in economics. It describes the growth of companies by random annual growth rates that are independent of the companies' sizes. This process by itself, when applied to the growth of synaptic connections, would predict that the variance of the synaptic weight distribution grows without bounds, which is clearly at odds with biological reality. Adding a multiplicative normalization mechanism such as our synaptic normalization rule to Gibrat's proportionate growth process retains the development of a lognormal-like distribution while avoiding the problem of unbounded growth. However, this model does not reproduce the pattern of weight fluctuations observed experimentally. Furthermore, such a model is purely phenomenological and does not describe the mechanism that
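Gibrat's proportionate growth with an added multiplicative normalization can be sketched as follows (a minimal sketch; the growth-rate noise level, population size, and step count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
w = np.ones(n)                        # identical initial "weights"

for _ in range(2000):
    # Gibrat: multiplicative growth rates independent of current size
    w *= rng.lognormal(0.0, 0.05, n)
    # multiplicative normalization fixes the summed efficacy, preventing
    # the variance of the distribution from growing without bounds
    w *= n / w.sum()

# the population is now approximately lognormal: the mean exceeds the
# median, as expected for a long-tailed distribution
```

This illustrates why a lognormal-like distribution alone is weak evidence for any particular mechanism: the phenomenological growth process reproduces the distribution but, as noted above, not the experimentally observed fluctuation pattern.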

Similarly, the models proposed in

If our model is essentially correct, despite its very abstract formulation, then one should be able to replicate the present results in more realistic network models of spiking neurons. As a first step in this direction, we have constructed a version of the model using leaky integrate-and-fire neurons with realistic parameter values. We have also adapted the plasticity mechanisms for this network. Initial explorations show that major features such as the lognormal-like weight distribution and the pattern of synaptic fluctuations can also be found in this less abstract network model. Future work will elaborate on these preliminary results.

Since the structure of cortical circuits determines the dynamics of neuronal activity, it also determines how information is encoded and propagated. The existence of a small number of very strong synaptic connections may greatly facilitate the highly reliable propagation of signals along pools of neurons

Many computational models of local cortical circuits assume random network structure

We use a SORN (self-organizing recurrent neural network) model

The network's activity state, at a discrete time

The

The time scale of a single iteration step in the model corresponds to typical membrane time constants and widths of spike-timing-dependent plasticity (STDP) windows, which lie roughly in the range of 10 to 20 ms. Note that, to save computation time, the homeostatic plasticity mechanisms described below are simulated much faster than in reality.

The network relies on several forms of plasticity: STDP of EE and EI connections, synaptic scaling and structural plasticity of EE connections, and intrinsic plasticity regulating the thresholds of excitatory neurons.
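Conceptually, a single update step of such a binary threshold network can be sketched as follows (a hedged sketch; the matrix shapes, connection density, thresholds, and the exact order of excitatory and inhibitory updates are illustrative assumptions, not the model's definition):

```python
import numpy as np

def sorn_step(x, y, W_EE, W_EI, W_IE, T_E, T_I):
    """One update of binary threshold units: a unit fires iff its summed
    excitatory input minus inhibitory input exceeds its threshold."""
    x_new = (W_EE @ x - W_EI @ y - T_E > 0).astype(int)   # excitatory
    y_new = (W_IE @ x_new - T_I > 0).astype(int)          # inhibitory
    return x_new, y_new

rng = np.random.default_rng(0)
nE, nI = 200, 40                      # sizes as in the simulated networks
W_EE = rng.random((nE, nE)) * (rng.random((nE, nE)) < 0.05)  # sparse EE
W_EI = rng.random((nE, nI)) * 0.5
W_IE = rng.random((nI, nE)) * 0.1
T_E, T_I = np.full(nE, 0.5), np.full(nI, 0.5)
x = rng.integers(0, 2, nE)
y = rng.integers(0, 2, nI)
x, y = sorn_step(x, y, W_EE, W_EI, W_IE, T_E, T_I)
```

The plasticity mechanisms listed above then act on the weight matrices and thresholds between such update steps.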

The set of

An

Note that the synaptic normalization and intrinsic plasticity mechanisms operate faster in the model than they would in biological brains. This choice is warranted by a separation of time scales and speeds up the simulations.

Compared to the original SORN model, we introduce two additional forms of plasticity.

Equivalently, we can write:

Comparison of SORN weight distribution to experimental data.

(PDF)

Parameter robustness analysis and long-term dynamics.

(PDF)

The authors thank Matthias Kaschube and Erin Schuman for comments on an earlier version of this manuscript. We thank Dr. Dmitri Chklovskii and Prof. Haruo Kasai for sharing their experimental data. We thank the three anonymous reviewers for helping to improve the quality of the manuscript.