Correction
4 Mar 2016: The PLOS Computational Biology Staff (2016) Correction: Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring. PLOS Computational Biology 12(3): e1004810. https://doi.org/10.1371/journal.pcbi.1004810
Abstract
Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both the structure and function of cortical synaptic wiring.
Author Summary
The problem of how the brain wires itself up has important implications for the understanding of both brain development and cognition. The microscopic structure of the circuits of the adult neocortex, often considered the seat of our highest cognitive abilities, is still poorly understood. Recent experiments have provided a first set of findings on the structural features of these circuits, but it is unknown how these features come about and how they are maintained. Here we present a neural network model that shows how these features might come about. It gives rise to numerous connectivity features that have been observed in experiments but have never before been simultaneously produced by a single model. Our model explains the development of these structural features as the result of a process of self-organization. The results imply that only a few simple mechanisms and constraints are required to produce, at least to a first approximation, various characteristic features of a typical fragment of brain microcircuitry. In the absence of any of these mechanisms, simultaneous production of all desired features fails, suggesting a minimal set of necessary mechanisms for their production.
Citation: Miner D, Triesch J (2016) Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring. PLoS Comput Biol 12(2): e1004759. https://doi.org/10.1371/journal.pcbi.1004759
Editor: Olaf Sporns, Indiana University, UNITED STATES
Received: August 21, 2015; Accepted: January 18, 2016; Published: February 11, 2016
Copyright: © 2016 Miner, Triesch. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data and code are linked from the permanent project webspace at https://fias.uni-frankfurt.de/pm/projects/lif-sorn/wiki/Wiki
Funding: JT is a Johanna Quandt Research Professor at the Frankfurt Institute for Advanced Studies. Funding for this research was provided by the Johanna Quandt Stiftung (http://www.johanna-quandt-stiftung.de/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The patterns of synaptic connectivity in our brains are thought to be the neurophysiological substrate of our memories, and the framework upon which our cognitive functions are computed. Understanding the development of micro-structure in the cortex has significant implications for the understanding of both developmental and cognitive/computational processes. Such insight would be invaluable in understanding the root causes of cognitive and developmental impairments, as well as in better understanding the nature of the computations realized by the cortex. It is believed that a small population of strong synapses forms a relatively stable backbone in recurrent cortical networks—perhaps the basis of long-term memories—while a larger population of weaker connections forms a more dynamic pool with a high rate of turnover [1–3]. It has been shown that much of the lateral recurrent connectivity of the layers of the cortex is significantly non-random [4–6], with a focus on layer 5 (L5), as this is more conventionally examined via slice studies. It is an open question which non-random features are developed as a result of direct genetic programming, neural plasticity under structured input, and spontaneous self-organization. We examine here several noted non-random features of recurrent cortical wiring that we believe can be explained as the result of spontaneous self-organization—specifically, self-organization driven by the interaction of multiple neural plasticity mechanisms. The features we will examine are the heavy-tailed, log-normal-like distribution of synaptic efficacies or dendritic spine sizes [6–10] and their associated synaptic dynamics, and the overrepresentation of bidirectional connectivity and certain triangular graph motifs [6].
The interaction of multiple plasticity mechanisms, such as synaptic scaling and Hebbian plasticity, has been studied before [11–14], with results suggesting that the interactions of such mechanisms are useful for the formation and stability of patterns of representation. However, we desire a more detailed look at how such self-organization might take place in the cortex. The predecessor to the model we use to address these issues is the Self-Organizing Recurrent Neural Network, or SORN [11]. The SORN is a recurrent network model of excitatory and inhibitory binary neurons which incorporates both Hebbian and homeostatic plasticity mechanisms. Specifically, it incorporates binarized spike timing dependent plasticity (STDP), synaptic normalization (SN), and intrinsic homeostatic plasticity (IP). Certain variants also employ structural plasticity. It has been demonstrated to be computationally powerful and flexible for unsupervised sequence and pattern learning, exhibiting apparent approximate Bayesian inference and sampling-like behavior [15–17]. Additionally, it has been used to reproduce synaptic weight distributions and growth dynamics observed in the cortex [18].
In this paper, we introduce the LIF-SORN, a leaky integrate-and-fire based SORN-inspired network model that incorporates a spatial topology with a distance-dependent connection probability, in addition to more biologically plausible variants of and extensions to the plasticity mechanisms of the SORN. The LIF-SORN models a recurrently connected network of excitatory and inhibitory neurons in L5 of the neocortex, or a slice thereof. This new model is the first to reproduce numerous elements of the synaptic phenomena examined in [10, 19], and [18] in combination with the sort of non-random graph connectivity phenomena observed in [6]. The simultaneous reproduction of all these elements with a minimal set of plasticity mechanisms and constraints represents an unprecedented success in explaining noted features of the cortical micro-connectome in terms of self-organization.
Materials and Methods
Simulation methods
We randomly populate a 1000 × 1000 μm grid with 400 LIF neurons with intrinsic Ornstein-Uhlenbeck membrane noise as the excitatory pool, and a similar (though faster refracting) population of 80 noisy LIF neurons as the inhibitory pool. All synapses are inserted into the network with a Gaussian distance-dependent connection probability profile with a half-width of 200 μm. This particular profile is chosen as a middle ground between the results of [6], which finds no distance dependence up to a scale of 80–100 μm, and the results of [5], which finds an exponential distance dependence at a scale of 200–300 μm. Recurrent excitatory synapses are not populated, as they will be grown “naturally” with the structural plasticity. Excitatory to inhibitory and inhibitory to excitatory synapses are populated to a connection fraction of 0.1 and inhibitory recurrent synapses to a connection fraction of 0.5, in approximate accordance with L5 experimental data [20]. Excitatory to inhibitory, inhibitory to excitatory, and inhibitory to inhibitory connections are given fixed efficacies and connectivities. Recurrent excitatory connectivity begins empty and is grown over the course of the simulation. The relevant parameters are summarized in Tables 1 and 2.
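As an illustration, the following minimal sketch (not the authors' code) shows how such a distance-dependent connectivity profile can be sampled. The peak probability p0 and the rescaling to the target connection fraction are assumptions made here for illustration, and "half-width" is read as the width at half maximum of the Gaussian profile.

```python
import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 400, 80
pos_E = rng.uniform(0, 1000, size=(N_E, 2))   # um, on the 1000 x 1000 grid
pos_I = rng.uniform(0, 1000, size=(N_I, 2))

def connection_prob(pre_pos, post_pos, half_width=200.0, p0=0.3):
    """Gaussian profile; half_width is interpreted as width at half maximum."""
    d = np.linalg.norm(pre_pos[:, None, :] - post_pos[None, :, :], axis=2)
    sigma = half_width / np.sqrt(2 * np.log(2))   # convert to a std deviation
    return p0 * np.exp(-d**2 / (2 * sigma**2))

# Populate E->I connections to an approximate target fraction of 0.1
P = connection_prob(pos_E, pos_I)
P = np.clip(P * 0.1 / P.mean(), 0.0, 1.0)         # rescale mean to the target
conn_EI = rng.random(P.shape) < P
print("E->I connection fraction:", conn_EI.mean())
```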
We use the Brian spiking neural network simulator [21]. The neuron model is a leaky integrate-and-fire (LIF) neuron, the behavior of which is defined by:

\tau \frac{dV}{dt} = (E_l - V) + \sigma \xi(t) \qquad (1)

where V is the membrane potential, El is the resting membrane potential, τ is the membrane time constant, σ is the standard deviation of the intrinsic membrane noise, and ξ is the Ornstein-Uhlenbeck process which drives the noise. In our model, the standard deviation σ of the noise is 5 mV. When V reaches a threshold VT, the neuron spikes, and the membrane potential V is returned to Vreset (which may be lower than El in order to provide effective refractoriness). The parameters used are given in Table 3.
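A minimal sketch of this neuron model in Brian 2 syntax is shown below; Brian's built-in white-noise term xi makes V itself an Ornstein-Uhlenbeck process. The values of El, VT0, Vreset, and tau are placeholders standing in for Table 3 (not reproduced here), and the threshold is treated as fixed at this point rather than adaptive.

```python
from brian2 import NeuronGroup, mV, ms

El, VT0, Vreset = -60*mV, -50*mV, -70*mV   # assumed placeholder values
tau, sigma = 20*ms, 5*mV                   # sigma = 5 mV as in the text

# White-noise drive on the membrane makes V an OU process around El
eqs = 'dV/dt = (El - V)/tau + sigma*xi*tau**-0.5 : volt'
exc = NeuronGroup(400, eqs, threshold='V > VT0', reset='V = Vreset',
                  method='euler')
exc.V = El
```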
A simple transmitting synapse model is used, connecting neuron i to neuron j. When neuron i spikes, the synaptic weight is added to the membrane potential V of neuron j following the conduction delay for the type of connection (as in Table 2). To improve network activity stability, this synaptic weight is modulated by a short term plasticity (STP) mechanism [22] implementing a rapid synaptic depression combined with a slightly slower facilitation. The STP mechanism consists of a two variable system:

\frac{du}{dt} = -\frac{u}{\tau_f}, \qquad \frac{dx}{dt} = \frac{1 - x}{\tau_d} \qquad (2)

Upon each presynaptic spike, the variables are updated according to the following rules:

u \rightarrow u + U(1 - u), \qquad x \rightarrow x(1 - u) \qquad (3)

The synaptic weight w is then modulated as w_eff = w u x / U. We select U = 0.04, τd = 500 ms, and τf = 2000 ms as the respective depression and facilitation timescales, corresponding to approximate experimentally observed values [22, 23]. The presence of the STP adds a significant degree of stability to network activity and provides a more robust parameter range for other mechanisms, reducing the need for parameter tuning.
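The following sketch spells out an event-driven implementation of this STP rule for a single synapse, following the standard Tsodyks–Markram formulation cited above; the normalization of the effective weight by U is an assumption chosen so that an isolated spike after a long pause transmits with the unmodulated weight.

```python
import math

U, tau_d, tau_f = 0.04, 500.0, 2000.0   # ms

def stp_on_spike(u, x, dt):
    """Update STP state at a presynaptic spike; dt is the time (ms)
    since the previous presynaptic spike at this synapse."""
    u = u * math.exp(-dt / tau_f)             # facilitation decays toward 0
    x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)  # resources recover toward 1
    u = u + U * (1.0 - u)                     # facilitate (Eq 3)
    w_factor = u * x / U                      # weight modulation factor
    x = x * (1.0 - u)                         # deplete resources (Eq 3)
    return u, x, w_factor
```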
As in the original binary SORN, we include multiple plasticity mechanisms. The first is exponential spike timing dependent plasticity (STDP), which is executed at a biologically realistic timescale [24–29]. This defines the weight change to a synapse caused by a pair of pre- and post-synaptic spikes as in Eqs 4, 5 and 6:

\Delta w_{ij} = \sum_f \sum_n W(t_j^n - t_i^f) \qquad (4)

W(\Delta t) = A_+ e^{-\Delta t / \tau_+} \quad \text{for } \Delta t > 0 \qquad (5)

W(\Delta t) = -A_- e^{\Delta t / \tau_-} \quad \text{for } \Delta t \leq 0 \qquad (6)

Here, i and j index the synapse via its pre- and postsynaptic neurons respectively, f indexes presynaptic spikes, and n indexes postsynaptic spikes. A+ and A− are the maximal amplitudes of the weight changes, and τ+ and τ− are the time constants of the decay windows. Values are set to approximate experimental data; in particular, round numbers were chosen that roughly approximate data in [24] and [25], with τ+ = 15 ms, A+ = 15 mV, τ− = 30 ms, and A− = 7.5 mV. We use the “nearest neighbor” approximation in order to implement this efficiently online, in which only the closest pairs of pre- and post-synaptic spikes are used. This is implemented in an event-based fashion, using a spike memory buffer with a timestep equal to that of the simulation itself (0.1 ms) and with the full calculation only evaluated upon a spike.
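A minimal sketch of the nearest-neighbor, event-based form of this rule follows: each neuron only needs to remember the time of its most recent spike, and the weight update is evaluated whenever either side of the synapse fires. Variable names are illustrative; times are in ms and weights in mV, as above.

```python
import math

tau_p, A_p = 15.0, 15.0    # potentiation window (ms) / amplitude (mV)
tau_m, A_m = 30.0, 7.5     # depression window (ms) / amplitude (mV)

def stdp_on_post(w, t_post, t_last_pre):
    """Called when the postsynaptic neuron spikes: potentiation (Eq 5)."""
    if t_last_pre is not None:
        w += A_p * math.exp(-(t_post - t_last_pre) / tau_p)
    return w

def stdp_on_pre(w, t_pre, t_last_post):
    """Called when the presynaptic neuron spikes: depression (Eq 6)."""
    if t_last_post is not None:
        w -= A_m * math.exp(-(t_pre - t_last_post) / tau_m)
    return w
```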
In the brain, several mechanisms appear to regulate the amount of synaptic drive that a neuron is receiving. [30] demonstrated the phenomenon of synaptic normalization during long-term potentiation (LTP). The overall density of postsynaptic AMPA receptors per micrometer of dendrite stays roughly constant, but the density at individual synapses increases (for some) while the total number of synapses per micrometer of dendrite decreases. This suggests that synaptic efficacies are mainly redistributed over the dendritic tree during the typical time course of an LTP experiment, but the sum of these efficacies (roughly corresponding to the sum of the active zone areas) stays approximately constant. Another phenomenon regulating the synaptic drive a neuron is receiving is homeostatic synaptic scaling [31], which is thought to regulate synaptic efficacies in a multiplicative fashion on a very slow time scale (on the order of days) in order to maintain a certain desired level of neural activity. For the sake of simplicity, we use here only a multiplicative form of normalization that drives the sum of synaptic efficacies to a desired target value on a fast time scale:

W_{ij} \rightarrow W_{ij} \left(1 + \eta_{SN} \, \frac{W_{total} - \sum_k W_{ik}}{\sum_k W_{ik}}\right) \qquad (7)

Here, Wi is the vector of incoming weights for any neuron i, Wij are the weights of the individual synapses, Wtotal is the target total input for each neuron, and ηSN is a rate variable which, together with the size of the timestep, determines the timescale of the normalization. Wtotal is calculated before the simulation run for each of the four types of synapse (E to E, E to I, I to E, and I to I) by multiplying the connection fraction for that type of connection by the mean synapse strength and the size of the incoming neuron population. The timescale we use is on the order of seconds and is therefore accelerated relative to biology, corresponding to an application of the process once per second with ηSN = 1.0. We have also tested applying the normalization at every single simulation timestep with smaller values of ηSN; except for very small values of ηSN, this has no significant effect on any of our observables. The accelerated timescale is sufficiently separated from that of the STDP, which operates on the order of tens of milliseconds, to avoid unwanted interactions while decreasing the necessary simulation time.
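As a sketch, the normalization of Eq 7 amounts to the following operation on a matrix of incoming weights (rows indexing postsynaptic neurons); with ηSN = 1.0, each row is rescaled exactly to the target sum in a single step.

```python
import numpy as np

def normalize_rows(W, W_total, eta_SN=1.0):
    """Drive each neuron's summed input toward W_total (Eq 7).
    Neurons with no incoming synapses are left untouched."""
    sums = W.sum(axis=1, keepdims=True)
    safe = np.where(sums > 0, sums, 1.0)               # avoid divide-by-zero
    factor = 1.0 + eta_SN * (W_total - sums) / safe
    return np.where(sums > 0, W * factor, W)
```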
Neuronal excitability is regulated by various mechanisms and over different time scales in the brain. On a very fast milliseconds time scale, a neuron’s refractory mechanism prevents it from exhibiting excessive activity in response to very strong inputs [32]. This is inherently included in the neuron model we use. At a somewhat slower time scale, spike rate adaptation reduces many types of neurons’ responses to continuous drive [33]. Given that our model lacks any strong external drive, we neglect this. At very slow time scales of hours to days, intrinsic plasticity mechanisms change a neuron’s excitability through the modification of voltage gated ion channels that can modify its firing threshold and the slope of its frequency-current curve in a homeostatic fashion. Additional regulation of neuronal activity has been observed over multiple timescales [34, 35]. In order to capture the essence of such mechanisms in a simple fashion, we adopt a simple regulatory mechanism for the firing threshold, which, in combination with the previously discussed STP mechanism, phenomenologically captures the majority of these adaptive behaviors over short and medium timescales. Though relatively stable network activity can be achieved without this mechanism, it requires hand tuning of thresholds dependent on other network parameters, which we wish to avoid. The mechanism is implemented at discrete time steps in the following way:

V_T \rightarrow V_T + \eta_{IP} \, (N_{spikes} - h_{IP}) \qquad (8)

N_{spikes} \rightarrow 0 \qquad (9)

Here, VT is the threshold for an individual neuron, ηIP is a learning rate, hIP is the target number of spikes per update interval, and Nspikes is the number of times a neuron has spiked since the last time a homeostatic plasticity step was executed. The right arrow indicates that the counter is reset after each evaluation of the window. This operation is performed at a biologically accelerated timescale. The desired target rate is chosen to be 3.0 Hz, so hIP = 3.0 Hz × 0.1 ms = 0.0003 and ηIP is set to 0.1 mV. In our implementation, the operation is performed at every timestep of the simulation (0.1 ms), so Nspikes effectively becomes a binary variable and Eq 9 becomes irrelevant. In this case, the action of the mechanism is that every spike increases the threshold by a small amount, and the absence of a spike decreases it by a small amount. Like the SN process, the accelerated (relative to biology) timescale is sufficiently separated from the timescale of the STDP to avoid unwanted interactions while decreasing the necessary simulation time.
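In code, the per-timestep form of Eqs 8 and 9 reduces to a one-line update (a sketch; spiked is a boolean vector over neurons):

```python
import numpy as np

eta_IP = 0.1        # mV
h_IP = 3.0 * 1e-4   # 3.0 Hz target x 0.1 ms timestep = 0.0003

def ip_step(V_T, spiked):
    """V_T: per-neuron thresholds (mV); spiked: True where the neuron fired
    this timestep. Spiking nudges the threshold up, silence nudges it down
    (Eq 8); with per-timestep updates the counter reset (Eq 9) is implicit."""
    return V_T + eta_IP * (spiked.astype(float) - h_IP)
```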
We implement structural plasticity for the recurrent excitatory synapses via simultaneous synaptic pruning and synaptic growth. Synaptic pruning is implemented in a direct fashion in which synapses whose strength has been driven below a near-zero threshold (0.000001 mV) by the other plasticity mechanisms are eliminated. At the same time, new synapses are stochastically added with a strength of 0.0001 mV, according to the distance-dependent per-pair connection probabilities, at a regular rate. This is done at an accelerated timescale by adding a random number of synapses (drawn from an appropriately scaled and integer-rounded normal distribution) once a second. A mean growth rate is hand-tuned to lead to the desired excitatory-excitatory connection fraction. In this case, the mean growth rate is 920 synapses per second (with standard deviation of ) and the target connection fraction is 0.1 [6, 20]. The synapses are added according to pre-calculated connection probabilities determined by the Gaussian connectivity profile described in the first paragraph of this section. Like the previous two plasticity mechanisms, the acceleration of the timescale from biology is justified by the principle of separation of timescales. At certain points in the Results and Supplementary Material, the results of the simulation are compared to those of a purely topological network. This is generated simply by performing the batch structural growth operation, as described, a single time, but adding instead a number of connections equal to the total number of connections at the target connection fraction.
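A sketch of one such once-per-second structural step is given below, using a precomputed matrix P of pairwise connection probabilities (e.g., from the earlier connectivity sketch). The standard deviation of the growth rate is not specified in the text above, so the value used here is an arbitrary placeholder; absent synapses are represented by zero weights.

```python
import numpy as np

rng = np.random.default_rng(1)
PRUNE_THRESH, W_NEW = 1e-6, 1e-4    # mV
GROWTH_MEAN, GROWTH_STD = 920, 30   # per second; the std value is assumed

def structural_step(W, P):
    """One structural plasticity step on the E->E weight matrix W."""
    W = W.copy()
    W[W < PRUNE_THRESH] = 0.0                    # prune near-zero synapses
    empty = (W == 0)                             # candidate (absent) pairs
    np.fill_diagonal(empty, False)               # no self-connections
    probs = (P * empty).ravel()
    probs = probs / probs.sum()                  # distance-dependent sampling
    n_new = max(0, int(round(rng.normal(GROWTH_MEAN, GROWTH_STD))))
    idx = rng.choice(probs.size, size=n_new, replace=False, p=probs)
    W.ravel()[idx] = W_NEW                       # nascent synapses
    return W
```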
Results
Network growth and abundance of bidirectional connections
As the fully simulated network runs, new recurrent excitatory synapses are allowed to grow and, if their strengths are driven close to zero, be pruned. The network first enters a growth phase, which lasts 100–200 seconds of simulation time, and then a stable phase in which the growth rate balances the pruning rate. The network is allowed to run for 500 seconds and the state of the excitatory connectivity and the dynamics of the connection changes during the final epoch are then examined.
We first examine, alongside the smooth growth of the network, the prevalence of bidirectional connections as compared to chance, a phenomenon noted to be significantly above chance in [4] and [6], as shown in Fig 1. The total connection fraction reliably reaches the selected value of 0.1. We observe a stable-phase value of 0.018 for the bidirectional connection fraction, translating to a factor of 1.83 above chance. Our control for chance is the expected number of bidirectional connections for an Erdős-Rényi graph containing the same number of nodes and edges as the simulated network. For comparison purposes, a value of approximately 4 times chance is observed in [6]. We note that an otherwise equivalent non-topological network, in which the probability of connection between neurons is uniform rather than distance-dependent, produces a slight underrepresentation of bidirectional connections, reinforcing the well-known expectation that classical STDP, in the absence of other factors, favors unidirectional connectivity.
Connection fraction evolution for plastic networks with and without topology, as well as flat values for topology only. (top) Growth and subsequent stabilization of the connection fraction of the network with simulation time. (middle) Growth of the bidirectional connection fraction. (bottom) Evolution of the bidirectional connection fraction with time as it relates to chance level (i.e. compared to the value for an Erdős-Rényi graph with the same number of nodes and edges). Data averaged over ten trials; standard deviation is shaded.
Regarding the growth of the network and the stabilization of its activity, we note one additional point. In Fig 2, we observe that the distribution of interspike intervals (ISIs) and their coefficients of variation (CVs) are consistent with approximately Poisson-like spiking with an effective refractory period, as is observed in cortical circuits. That is to say, the distribution of ISIs follows an exponential decay with a distortion, induced by the refractory period, at the low end, and the CVs of the ISIs tend to be close to one.
(top) Pooled (over all neurons) distribution of ISIs with exponential fit, suggesting Poisson-like behavior with a refractory period. Individual neuron distributions have been tested to be similar. (bottom) Distribution of CVs of ISIs, suggesting Poisson-like behavior. Single trial data.
We would like to briefly consider how a model using classical STDP, which is known to drive the formation of unidirectional connections, can still produce such an abundance of bidirectional connections. In this model, the existence of clustering topology strongly drives the initial overrepresentation of bidirectional connections (as well as likely seeding higher order clustering effects, which are then selected and tuned via the plasticity mechanisms, and will be examined later). A simple mathematical argument will serve to demonstrate this (and, in fact, that any inhomogeneity in unidirectional connection probability will lead to an overrepresentation of bidirectional connections). Consider a single neuron in the center of a two dimensional sheet (this generalizes to volumes as well) which is populated with additional neurons at a uniform density. Assume that the central neuron has formed distance-dependent but otherwise random connections to the other neurons as follows: There is a local neighborhood containing a fraction f of all the neurons in the sampled area which have been connected with a high probability ph, while the remaining area contains the fraction 1 − f of all neurons, which connect with a lower probability pl. We can then treat the connection probability as a random variable P which takes the value ph with probability f and pl with probability 1 − f (this generalizes as well to additional neighborhoods, and, as the number of neighborhoods goes to infinity, to a continuous density of connection probability). The average overall connection probability of the neuron is then given by E[P] = ph f + pl(1 − f). We now want to consider the average probability of finding a bidirectional connection. We assume that all neurons share the same distance-dependent connection probability, and therefore, the probability that a neuron within the local neighborhood has formed a connection to the central neuron is the same ph with which the central neuron is likely to form a connection to the neuron in the local neighborhood. Thus, the probability of a bidirectional connection in the local neighborhood is ph², and by the same reasoning, the probability of forming a bidirectional connection with a neuron outside the local neighborhood is pl². Then, the average overall bidirectional connection probability of the neuron is given by E[P²] = ph² f + pl²(1 − f). Since the squaring operation is convex, Jensen’s inequality applies, stating that for any convex function g(P) of a random variable P, g(E[P]) ≤ E[g(P)]. It then follows that with g(P) = P², E[P²] ≥ E[P]². Thus, bidirectional connections can occur more frequently than would be expected from the average unidirectional connection probability. Equality holds if and only if P is constant. It follows then that any inhomogeneity in unidirectional connection probability will lead to an overrepresentation of bidirectional connections. In the case of our model, the inhomogeneity is the distance-dependent connection probability, though any number of other factors could come into play.
For the above argument to apply to a structurally dynamic model such as ours, all that need be true is that bidirectional connections are added at a sufficiently high rate compared to their rate of removal due to STDP and pruning. The high number of bidirectional connections in the purely topological network, the low values for the purely plastic network, and the intermediate number of bidirectional connections for the full network model in Fig 1 serve to demonstrate the competition between the distance-dependent structural plasticity, which tends to boost bidirectional connectivity, and STDP and pruning, which tend to reduce bidirectional connectivity.
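The argument can be checked numerically in a few lines; the neighborhood fraction and probabilities below are illustrative values, not parameters of the model.

```python
# Jensen's-inequality check: any spread in the unidirectional connection
# probability P inflates bidirectional connections relative to E[P]^2.
f, p_h, p_l = 0.2, 0.3, 0.05         # illustrative values only

EP  = f * p_h + (1 - f) * p_l        # mean connection probability E[P]
EP2 = f * p_h**2 + (1 - f) * p_l**2  # mean bidirectional probability E[P^2]
print(f"E[P]^2 = {EP**2:.4f},  E[P^2] = {EP2:.4f},  "
      f"overrepresentation = {EP2 / EP**2:.2f}x")   # here: 2.00x
```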
Markov model of bidirectional overrepresentation
Furthermore, this competition can be captured and described by a simple Markov model in which each bidirectional connection pair develops independently of all the others. The model considers a pair of excitatory neurons and has three states {U, S, D} representing that the pair of neurons is either unconnected, singly connected, or doubly connected, respectively. We define transition probabilities denoting the probability of transitioning from one state to another during a fixed time interval. For example, pUS is the probability for transitioning from the unconnected state U to the singly connected state S. The transition matrix T is the matrix formed by all transition probabilities and is given by:

T = \begin{pmatrix} p_{UU} & p_{US} & 0 \\ p_{SU} & p_{SS} & p_{SD} \\ 0 & p_{DS} & p_{DD} \end{pmatrix}

given the assumption that transitions from the unconnected state U to the doubly connected state D and vice versa are sufficiently unlikely to be considered negligible. Since the sum of the elements in each row of T has to equal one, T can be rewritten as:

T = \begin{pmatrix} 1 - p_{US} & p_{US} & 0 \\ p_{SU} & 1 - p_{SU} - p_{SD} & p_{SD} \\ 0 & p_{DS} & 1 - p_{DS} \end{pmatrix}

which depends on the four parameters pUS, pSU, pSD, and pDS. If all of them are greater than zero, then the Markov chain is regular and we can find its stationary distribution by finding the left eigenvector of T:

(u, s, d)\,T = (u, s, d)

with u + s + d = 1. The resulting system of linear equations can be written as:

s = \alpha u, \qquad d = \beta s = \alpha\beta u

where we have defined α = pUS/pSU and β = pSD/pDS. Thus, the behavior of the system depends only on the two transition probability ratios α and β. We can express u as a function of α and β to arrive at the final solution:

u = \frac{1}{1 + \alpha + \alpha\beta}, \qquad s = \frac{\alpha}{1 + \alpha + \alpha\beta}, \qquad d = \frac{\alpha\beta}{1 + \alpha + \alpha\beta}

We can now consider the conditions under which the model leads to an overrepresentation of bidirectional connections. The overall connection probability in the Markov model is p = s/2 + d. For a random graph, we then expect:

d_{rand} = p^2 = \left(\frac{s}{2} + d\right)^2

We consider an overrepresentation of bidirectional connections to be in comparison to a random graph. Therefore, using the previously defined transition ratios and a bit of algebra, we arrive at the following expression for the overrepresentation A:

A = \frac{d}{(s/2 + d)^2} = \frac{\beta\,(1 + \alpha + \alpha\beta)}{\alpha\,(1/2 + \beta)^2}

We can then empirically check this Markov model against our simulation. Counting and averaging connections and transitions over the last 100 seconds of a standard 500 second run of our model, we obtain α = 0.194 and β = 0.105. This leads the Markov model to predict an overrepresentation of A = 1.80, which is, in fact, also the measured value for the average overrepresentation over the observed time period.
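The stationary distribution and the overrepresentation A follow directly from the measured ratios, as the following few lines verify:

```python
# Markov-model calculation from the measured transition ratios
alpha, beta = 0.194, 0.105

u = 1 / (1 + alpha + alpha * beta)   # stationary unconnected fraction
s = alpha * u                        # singly connected fraction
d = alpha * beta * u                 # doubly connected fraction
p = s / 2 + d                        # overall connection probability
A = d / p**2                         # overrepresentation vs. random graph
print(f"u={u:.3f}, s={s:.3f}, d={d:.4f}, p={p:.4f}, A={A:.2f}")  # A ~ 1.80
```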
Statistics and fluctuations of synaptic efficacies
During the growth phase of the simulation, we note the reproduction of some of the results of [19], specifically that during network growth there is a tendency for larger synaptic weights to be more likely to shrink than smaller synaptic weights, as seen in Fig 3.
Synaptic change dynamics during network growth epochs, before stabilization. Change is over entire epoch. “Bunching” in earliest epoch is an artifact of normalization under a small number of synapses. Single trial data.
Once the stable phase is reached, we examine the distribution of synaptic weights via histogramming, shown in Fig 4. This is in qualitative agreement with the heavy-tailed, log-normal-like shape typically observed in experimental data [6–10]. Several theoretical explanations for this distribution have been proposed, including a self-scaling rich-get-richer dynamic [18] and a confluence of additive and multiplicative processes [36, 37], both of which are consistent with our model. We note that the topology of the network seems to have a minimal effect on this result, as would be expected from the results of [18].
The distribution of the base ten logarithm of synaptic weights for plastic networks with and without topology. Data averaged over ten trials; error bars are standard deviation.
We observe next the synaptic change dynamics in the stable phase of the network. We follow the format used in [10], comparing initial synaptic weight during a test epoch to both absolute and relative changes in synaptic weight, and demonstrate in Fig 5 that strong synaptic weights exhibit relatively smaller fluctuations over time, as experimentally observed [10]. Additionally, this serves to reinforce the earlier success of [18] in modeling such synaptic dynamics as the result of self-organization, and demonstrates that such results carry over into a biologically more realistic model.
The above plots show the distributions of change in synaptic weight as a function of initial synaptic weight over a ten second simulation time period. The plots on the left are from the simulated network and are in electrophysiological units. The plots on the right are from experiment [10] and are in units of volume as estimated from fluorescence data. The plots on the top show the absolute change in synaptic weight / size. The plots on the bottom show the relative change in synaptic weight / size. Single trial data.
We examine, as well, the distribution of synaptic lifetimes (see Fig 6). It has been predicted that the lifetimes of fluctuating synapses may follow a power law distribution [18]; our model makes this prediction as well. Recent experimental evidence supports this prediction [38]. We expand upon previous predictions with two interesting observations. In its current form, our model produces a slope of approximately 5/3 in the stable phase (for comparison, the experimentally observed slope is approximately 1.38). This decreases slightly in the growth phase. Secondly, we have also observed that the slope can be modified by adjusting the balance of potentiation and depression in the STDP rule, varying from near 1 to greater than 2, depending on the chosen parameters. For example, doubling the amplitude of the depression term in the STDP rule leads to a slope of approximately 5/2, while halving it leads to a slope of approximately 5/4. This is, in retrospect, an intuitive phenomenon. A preponderance of potentiation will lead to synapses being depressed to a value below the pruning threshold less frequently, thereby decreasing the slope of the power law. Similarly, in a depression-dominated scenario, synapses will be driven below the pruning threshold more frequently, leading to a higher power law slope. Returning to the slight decrease in slope during the growth phase, this makes sense, as a reduction in the effective pruning rate is necessary for the network to continue to grow. We believe that with a more extensive investigation of the effects of other model parameters on the power law, the slope of this distribution could be used as a meaningful measure of the potentiation-depression balance in a recurrent cortical network.
The above plot shows the distribution of synaptic lifetimes during the stable phase. The slope is approximately 5/3; the equivalent slope in the growth phase is slightly smaller. Here, we define entries in the growth phase as having synaptic end times of less than 150 seconds, and entries in the stable phase as having synaptic start times of greater than 350 seconds. Slopes are approximated via linear regression to the data points before the drop-off. Single trial data.
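For reference, a sketch of such a slope estimate, i.e. a linear regression on the log-log histogram of lifetimes restricted to points before the drop-off (the bin count and cutoff handling here are our choices, not taken from the text):

```python
import numpy as np

def powerlaw_slope(lifetimes, n_bins=30, cutoff=None):
    """Return the magnitude of the log-log slope of the lifetime histogram;
    `cutoff` (same units as lifetimes) excludes the censored tail."""
    lifetimes = np.asarray(lifetimes, dtype=float)
    bins = np.logspace(np.log10(lifetimes.min()),
                       np.log10(lifetimes.max()), n_bins)
    counts, edges = np.histogram(lifetimes, bins=bins)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
    keep = counts > 0
    if cutoff is not None:
        keep &= centers < cutoff
    slope, _ = np.polyfit(np.log10(centers[keep]),
                          np.log10(counts[keep]), 1)
    return -slope
```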
Motif properties
We subsequently examine the prevalence of triadic motifs in the graph of the simulated network. An overrepresentation of certain motifs was noted in [6]. We used a script written for the NetworkX Python module [39, 40] to acquire a motif count for the graph of the simulated network. As the overrepresentation of bidirectional connections will trivially lead to an overrepresentation of graph motifs containing bidirectional edges, the control for chance is, in this case, a modified Erdős-Rényi graph with the same number of nodes, same number of unidirectional edges, and same number of bidirectional edges as the graph of the simulated network, with the unidirectional and bidirectional edges being independently populated. A similar control is used in [6]. We observe in Fig 7 a similar pattern of “closed loop” triadic motifs being overrepresented, as experimentally observed in [6]. We note that a non-topological plastic network with classical STDP, in the absence of additional factors, does not, relatively speaking, strongly select for any particular family of motifs. We similarly note that while distance-dependent topology does select for the observed family of motifs, it does not do so at the experimentally observed level. It is only the combination of topology and plasticity that strongly selects for the desired family of motifs while simultaneously producing all other noted effects. Approximate experimental data for comparison was extracted from [6] using GraphClick [41].
Triadic motif counts (in the same order as [6]) for a simulated network as a multiple of chance value. The counts have been corrected for the observed overrepresentation of bidirectional connections. Results are shown for a complete network, a purely topological construction, an equivalent network with no topology, and approximate experimental data. For the topology-free network, the count of motif 16 is out of range due to the extremely low expected count after bidirectionality corrections. Data averaged over ten trials; error bars are standard deviation. Horizontal axis has been jittered slightly to increase readability.
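For readers wishing to reproduce the motif counts, NetworkX's built-in triadic census offers an equivalent starting point to the custom script used here (a sketch; the correction for bidirectional overrepresentation described above is not included):

```python
import networkx as nx
import numpy as np

def triad_counts(W):
    """W: weight matrix; an edge i->j exists where W[i, j] > 0."""
    G = nx.from_numpy_array((np.asarray(W) > 0).astype(int),
                            create_using=nx.DiGraph)
    census = nx.triadic_census(G)   # dict keyed by triad type, e.g. '030T'
    # keep only the 13 connected triads (drop the disconnected types)
    return {k: v for k, v in census.items() if k not in ('003', '012', '102')}
```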
Discussion
The problem of how the non-random micro-connectivity of the cortex arises is a nontrivial one with significant implications for the understanding of both cognition and development. We attempt, in this paper, to provide insight into this problem by presenting a plausible model by which such non-random connectivity arises as the self-organized result of the interaction of multiple plasticity mechanisms under physiological constraints. Some models attempt to describe elements of the graph structure of the micro-connectome in purely physiological and topological terms [42]. However, such models necessarily lack an active network, and are thus unable to simultaneously account for synaptic dynamics, as our model does. Our model is, of course, a simple model, but the degree to which it accounts for observed non-random features of the typical cortical microcircuit without detailed structural features, metabolic factors, or structured input to drive the plasticity in a particular fashion is highly suggestive in terms of what is necessary at a bare minimum to drive the development and maintenance of the complex microstructure of the brain.
As mentioned in the introduction, it is hypothesized that a small backbone of strong synapses may form the basis of stable long-term memory. The fact that in our model, strong weights remain stable in the presence of ongoing plasticity and despite significant fluctuations of smaller weights (which has been modeled as a stochastic Kesten process [37]), and the naturalness with which such a dynamic arises out of the interactions of known plasticity mechanisms, is both suggestive and supportive of this theory. On a related note, the heavy-tailed distribution of synaptic efficacies (often described as log-normal or log-normal-like) is an experimentally observed phenomenon seemingly fitting this narrative [6–10]. A theoretical explanation connecting log-normal firing rates with a log-normal synaptic efficacy distribution was one of the first proposed [43]. However, further studies have suggested that such a firing rate distribution is not necessary to create a heavy-tailed distribution of synaptic efficacies, using either a self-scaling rich-get-richer dynamic [18] or a combination of additive and multiplicative dynamics [36, 37].
An additional noted non-random feature of cortical recordings that has been passed over in this model is the observed log-normal distribution of cortical firing rates (touched upon in the previous paragraph). Our intrinsic plasticity mechanism necessarily negates this feature, which may be self-organized via mechanisms not included in our model, such as diffusive homeostasis [44, 45]. In order to maximize simplicity, a single target firing rate is chosen for all neurons. Additional testing in which the target firing rate is drawn from a log-normal distribution produces minimal qualitative effects on the observed features (except, trivially, the ISI distribution, see S1 Fig). Another issue is that as things stand, the exact statistics of the micro-connectome are difficult to discern—though strong inferences can be made in the right direction—due to inherent sampling biases in paired patch-clamp reconstructions of limited size [46]. It is our hope and belief that advances in fluorescence imaging, automated electron microscopy reconstruction [47, 48], and massive multi-unit array recordings will help to alleviate these biases. One might imagine that additional biases may be caused by the relatively small model size of 400 excitatory neurons, when realistic cortical densities would result in thousands of neurons in such an equivalent volume. We have tested the network at much larger sizes of up to 2000 neurons and found no notable qualitative change to our observed results (S2 Fig; all other features remain the same as well), so we maintained a relatively small network size to increase computational ease. It should be noted that except for this check, all supplementary checks, tests, and additional analyses were performed with the standard 400 + 80 neuron network size.
We have described the formation of the overrepresentation of bidirectional connections in terms of the competition between structural growth and structural pruning in the presence of a topological inhomogeneity. Other possibilities for increasing the prevalence of bidirectional connections include an STDP window with an integral greater than zero (i.e. biased toward potentiation), or one in which the asymmetries are finely tuned so that, given the homeostatic target firing rate, connections are, on the whole, more likely to potentiate (making the STDP window fully symmetric has, in our model, only a minimal effect). Additionally, more complicated STDP models [50, 51] are known to produce an overrepresentation of bidirectional connections in high-frequency firing regimes.
One other computational study has reproduced similar motif overrepresentations; however, that model was significantly more complex and required specific structured input [49]. Some might view the fact that, in this model, the primary driver behind the overrepresentation of bidirectional connections is topology as a shortcoming. We do not view this as a problem: after all, topology exists in the cortex, and the rest of the study’s results suggest it is an important factor in the self-organization of cortical circuits. There are the previously mentioned mechanisms utilizing non-classical STDP, such as the so-called triplet and voltage rules [50, 51], which, in the presence of high-frequency activity, are capable of producing and maintaining bidirectional connections. Introducing such mechanisms into a similar model would be a welcome and interesting future study, and could potentially lead to an even stronger and more precise motif selectivity. To further illustrate the importance for self-organization of the various mechanisms we have introduced, we have included a brief analysis of the behavior of the network in the absence of individual mechanisms (see Table 4 below and S3 and S4 Figs). Essentially, removal of the topology leaves the synaptic dynamics mostly unchanged, but significantly alters the connectivity structure. Removal of structural plasticity trivially leads to failure of the network to form or, in the case of the removal of pruning only, divergent network growth. Similarly, removal of STDP leads to divergent network growth because LTD is necessary to trigger pruning. Removal of the STP leads to “epileptic” behavior, resulting in dynamic and structural disruptions. Removal of SN leads to a small subset of synapses experiencing runaway growth, with the others shrinking to near zero and being pruned. Finally, removal of the IP leads to small changes to the structural properties, but requires fine tuning of the thresholds to run in this regime. Failure to tune the thresholds in this case leads to either silent or “epileptic” networks.
Additionally, with the aim of understanding the relationship between the activity correlation, the synaptic weights, and the intersomatic separation, a Spearman’s rank correlation analysis was performed on such data from an example trial (results in S1 Table). In summary, a strong and highly significant positive correlation was found between the spike correlation and the synaptic weight. However, only a weak (negative) correlation was found between the spike correlation and the intersomatic separation, and no significant correlation was found between the intersomatic separation and synaptic weight.
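A sketch of this analysis using SciPy follows; the per-pair arrays are assumed to have been extracted from the simulation beforehand (separation in μm, weight in mV, and binned spike correlations as described in S1 Table).

```python
from scipy.stats import spearmanr

def pairwise_correlations(separation, weight, spike_corr):
    """Each argument: 1-D array with one entry per connected neuron pair."""
    for name, a, b in [("spike corr vs weight", spike_corr, weight),
                       ("spike corr vs separation", spike_corr, separation),
                       ("separation vs weight", separation, weight)]:
        rho, p = spearmanr(a, b)   # rank correlation and associated p-value
        print(f"{name}: rho={rho:.3f}, p={p:.3g}")
```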
As a concluding point, models of cortical microcircuits are often described as random graphs, such as the classical random balanced network [52]. However, experiments have demonstrated that the structure of cortical microcircuitry is significantly non-random [5, 6], suggesting that random networks may be insufficient for modeling cortical development and activity. Lacking structural plasticity or topology, such random graph based balanced networks are incapable of producing the sort of results we have observed. Having provided a mechanism with which one may generate a cortex-like non-random structure, it would be enlightening to determine if said structure provides any significant computational or metabolic advantage as compared to a random graph. Similarly, limitations in online plasticity capabilities significantly hinder the use of such random networks and their relatives in reservoir computing [53] for unsupervised learning and inference tasks (though progress has recently been made in this direction [54]), while earlier studies with the original SORN model [11, 15] suggest that the particular combination of plasticity mechanisms in our model can endow networks with impressive learning and inference capabilities. A logical next step is therefore to study the learning and inference capabilities of LIF-SORN networks and relate them to neurophysiological experiments. Our rapidly developing ability to manipulate neural circuits in vivo suggests this as an exciting direction for future research. It is our belief that the future of modeling cortical computation and related biological processes lies in the incorporation of multiple plasticity and homeostatic mechanisms under simple sets of constraints and biases.
Supporting Information
S1 Fig. Triadic motif counts as a multiple of chance for lognormal firing rates, corrected for bidirectional overrepresentation.
Triadic motif counts (in the same order as [6]) for a simulated network as a multiple of chance value. The counts have been corrected for the observed overrepresentation of bidirectional connections. Results are shown for a complete network with IP target rates drawn from a log-normal distribution (mean of 3.0, standard deviation of 1.0 Hz) instead of a single value and approximate experimental data. Other parameters remain the same, aside from scaling of growth rate to obtain stable phase connection fraction of 0.1. Error bars are standard deviation. Horizontal axis has been jittered slightly to increase readability.
https://doi.org/10.1371/journal.pcbi.1004759.s001
(TIF)
S2 Fig. Triadic motif counts as a multiple of chance for a larger (2000 neuron) network, corrected for bidirectional overrepresentation.
Triadic motif counts (in the same order as [6]) for a simulated network as a multiple of chance value. The counts have been corrected for the observed overrepresentation of bidirectional connections. Results are shown for a complete network of 2000 neurons and approximate experimental data. Other parameters remain the same, aside from scaling of growth rate to obtain stable phase connection fraction of 0.1. Error bars are standard deviation. Horizontal axis has been jittered slightly to increase readability.
https://doi.org/10.1371/journal.pcbi.1004759.s002
(TIF)
S3 Fig. Triadic motif counts as a multiple of chance for networks with plasticity mechanisms removed, corrected for bidirectional overrepresentation.
Triadic motif counts (in the same order as [6]) for a simulated network as a multiple of chance value. The counts have been corrected for the observed overrepresentation of bidirectional connections. Results are shown for a network with all plasticity mechanisms, a network without IP, a network without SN, and approximate experimental data. Error bars are standard deviation. Horizontal axis has been jittered slightly to increase readability. Upper and lower plots show the same data with a different scaling of the y-axis.
https://doi.org/10.1371/journal.pcbi.1004759.s003
(TIF)
S4 Fig. Log distribution of synaptic weights for networks with plasticity mechanisms removed.
The distribution of the base ten logarithm of synaptic weights for a network with all plasticity mechanisms (ten trials), a single network without IP, and a single network without SN. Error bars are standard deviation. Upper and lower plots show the same data with a different scaling of the y-axis.
https://doi.org/10.1371/journal.pcbi.1004759.s004
(TIF)
S1 Table. Spearman’s rank correlation and associated P-value between intersomatic separation, synaptic weight, and pairwise spike correlation.
Representative single trial example data. Spike correlation was taken from 50 s activity with 50 ms bins [55].
https://doi.org/10.1371/journal.pcbi.1004759.s005
(PDF)
Acknowledgments
We would like to thank the authors of “Principles of long-term dynamics of dendritic spines” for sharing their data. We would like as well to thank Christoph Hartmann and Pengsheng Zheng for their valuable consultation.
Author Contributions
Conceived and designed the experiments: DM JT. Performed the experiments: DM. Analyzed the data: DM. Contributed reagents/materials/analysis tools: DM. Wrote the paper: DM JT. Performed the programming, analysis, and writing, and contributed to the conceptualization: DM. Provided significant expertise and consultation, provided initial editing, and contributed to the conceptualization: JT.
References
- 1. Gilson M, Fukai T (2011) Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PloS one 6: e25339. pmid:22003389
- 2. Grutzendler J, Kasthuri N, Gan W (2002) Long-term dendritic spine stability in the adult cortex. Nature 420. pmid:12490949
- 3. Trachtenberg JT, Chen BE, Knott GW, Feng G, Sanes JR, et al. (2002) Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex. Nature 420: 788–94. pmid:12490942
- 4. Markram H (1997) A network of tufted layer 5 pyramidal neurons. Cerebral cortex (New York, NY: 1991) 7: 523–33.
- 5. Perin R, Berger TK, Markram H (2011) A synaptic organizing principle for cortical neuronal groups. Proceedings of the National Academy of Sciences of the United States of America 108: 5419–24. pmid:21383177
- 6. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB (2005) Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS biology 3: e68. pmid:15737062
- 7. Harris KM, Stevens JK (1989) Dendritic spines of CA 1 pyramidal cells in the rat hippocampus: serial electron microscopy with reference to their biophysical characteristics. The Journal of neuroscience: the official journal of the Society for Neuroscience 9: 2982–97.
- 8. Lisman JE, Harris KM (1993) Quantal analysis and synaptic anatomy–integrating two views of hippocampal plasticity. Trends in neurosciences 16: 141–7. pmid:7682347
- 9. Thomson AM, Deuchars J, West DC (1993) Large, deep layer pyramid-pyramid single axon EPSPs in slices of rat motor cortex display paired pulse and frequency-dependent depression, mediated presynaptically and self-facilitation, mediated postsynaptically. Journal of neurophysiology 70: 2354–69. pmid:8120587
- 10. Yasumatsu N, Matsuzaki M, Miyazaki T, Noguchi J, Kasai H (2008) Principles of long-term dynamics of dendritic spines. The Journal of neuroscience: the official journal of the Society for Neuroscience 28: 13592–608.
- 11. Lazar A, Pipa G, Triesch J (2009) SORN: a self-organizing recurrent neural network. Frontiers in computational neuroscience 3: 23. pmid:19893759
- 12. Savin C, Joshi P, Triesch J (2010) Independent component analysis in spiking neurons. PLoS computational biology 6: e1000757. pmid:20421937
- 13. Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F (2011) Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Frontiers in Computational Neuroscience 5: 1–15.
- 14. Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F (2012) Analysis of Synaptic Scaling in Combination with Hebbian Plasticity in Several Simple Networks. Frontiers in Computational Neuroscience 6: 1–17.
- 15. Lazar A, Pipa G, Triesch J (2011) Emerging Bayesian priors in a self-organizing recurrent network. Neural Networks and Machine Learning: 1–8.
- 16. Duarte R, Seriès P, Morrison A (2014) Self-Organized Artificial Grammar Learning in Spiking Neural Networks. Proceedings of the 36th Annual Conference of the Cognitive Science Society: 427–432.
- 17. Hartmann C, Lazar A, Triesch J (2015) Where’s the noise? key features of neuronal variability and inference emerge from self-organized learning. PLoS computational biology 11: e1004640. pmid:26714277
- 18. Zheng P, Dimitrakakis C, Triesch J (2013) Network self-organization explains the statistics and dynamics of synaptic connection strengths in cortex. PLoS computational biology 9: e1002848. pmid:23300431
- 19. Minerbi A, Kahana R, Goldfeld L, Kaufman M, Marom S, et al. (2009) Long-term relationships between synaptic tenacity, synaptic remodeling, and network activity. PLoS biology 7: e1000136. pmid:19554080
- 20. Thomson AM, West DC, Wang Y, Bannister aP (2002) Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2–5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro. Cerebral cortex (New York, NY: 1991) 12: 936–53.
- 21. Goodman D, Brette R (2008) Brian: a simulator for spiking neural networks in python. Frontiers in neuroinformatics 2: 5. pmid:19115011
- 22. Markram H, Wang Y, Tsodyks M (1998) Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences of the United States of America 95: 5323–8. pmid:9560274
- 23. Zucker RS, Regehr WG (2002) Short-term synaptic plasticity. Annual Review of Physiology 64: 355–405. pmid:11826273
- 24. Bi GQ, Poo MM (1998) Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of neuroscience: the official journal of the Society for Neuroscience 18: 10464–72.
- 25. Froemke R, Poo Mm, Dan Y (2005) Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature 2033: 2032–2033.
- 26. Gerstner W, Kempter R, van Hemmen JL, Wagner H (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature 383: 76–78.
- 27. Kempter R, Gerstner W, van Hemmen J (1999) Hebbian learning and spiking neurons. Physical Review E 59: 4498–4514.
- 28. Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature neuroscience 3: 919–26. pmid:10966623
- 29. Zhang LI, Tao HW, Holt CE, Harris WA, Poo Mm (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395: 37–44. pmid:9738497
- 30. Ibata K, Sun Q, Turrigiano GG (2008) Rapid synaptic scaling induced by changes in postsynaptic firing. Neuron 57: 819–26. pmid:18367083
- 31. Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB (1998) Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391: 892–6. pmid:9495341
- 32. Hill A (1936) Excitation and accommodation in nerve. Proceedings of the Royal Society of London Series B—Biological Sciences 119: 305–355.
- 33. Benda J, Herz AVM (2003) A universal model for spike-frequency adaptation. Neural computation 15: 2523–64. pmid:14577853
- 34. Desai NS, Rutherford LC, Turrigiano GG (1999) Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature neuroscience 2: 515–20. pmid:10448215
- 35. Zhang W, Linden DJ (2003) The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nature reviews Neuroscience 4: 885–900. pmid:14595400
- 36. Loewenstein Y, Kuras A, Rumpel S (2011) Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. The Journal of neuroscience: the official journal of the Society for Neuroscience 31: 9481–8.
- 37. Statman A, Kaufman M, Minerbi A, Ziv NE, Brenner N (2014) Synaptic Size Dynamics as an Effectively Stochastic Process. PLoS computational biology 10: e1003846. pmid:25275505
- 38. Loewenstein Y, Yanover U, Rumpel S (2015) Predicting the Dynamics of Network Connectivity in the Neocortex. Journal of Neuroscience 35: 12535–12544. pmid:26354919
- 39. Hagberg AA, Schult DA, Swart PJ (2008) Exploring network structure, dynamics, and function using NetworkX. In: Proceedings of the 7th Python in Science Conference (SciPy2008). Pasadena, CA USA, pp. 11–15.
- 40. Levenson A, van Liere D (2011) triadic census. https://networkx.lanl.gov/trac/ticket/190.
- 41. Arizona Software (2012) GraphClick. http://www.arizona-software.ch/graphclick/.
- 42. Aćimović J, Mäki-Marttunen T, Linne ML (2015) The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model. Frontiers in Neuroanatomy 9. pmid:26113811
- 43. Koulakov aa, Hromadka T, Zador aM (2009) Correlated Connectivity and the Distribution of Firing Rates in the Neocortex. Journal of Neuroscience 29: 3685–3694. pmid:19321765
- 44. Savin C, Triesch J, Meyer-Hermann M (2009) Epileptogenesis due to glia-mediated synaptic scaling. Journal of the Royal Society, Interface / the Royal Society 6: 655–68. pmid:18986963
- 45. Sweeney Y, Hellgren Kotaleski J, Hennig MH (2015) A Diffusive Homeostatic Signal Maintains Neural Heterogeneity and Responsiveness in Cortical Networks. PLOS Computational Biology 11: e1004389. pmid:26158556
- 46. Miner DC, Triesch J (2014) Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures. Frontiers in Neuroanatomy 8: 1–9.
- 47. Chklovskii DB, Vitaladevuni S, Scheffer LK (2010) Semi-automated reconstruction of neural circuits using electron microscopy. Current Opinion in Neurobiology 20: 667–675. pmid:20833533
- 48. Plaza SM, Scheffer LK, Chklovskii DB (2014) Toward large-scale connectome reconstructions. Current Opinion in Neurobiology 25: 201–210. pmid:24598270
- 49. Bourjaily MA, Miller P (2011) Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks. Frontiers in Computational Neuroscience 5: 1–24.
- 50. Pfister JP, Gerstner W (2006) Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of neuroscience: the official journal of the Society for Neuroscience 26: 9673–82.
- 51. Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature neuroscience 13: 344–52. pmid:20098420
- 52. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science (New York, NY) 274: 1724–6.
- 53. Lukoševičius M, Jaeger H (2009) Reservoir computing approaches to recurrent neural network training. Computer Science Review 3: 127–149.
- 54. Tetzlaff C, Dasgupta S, Kulvicius T, Wörgötter F (2015) The Use of Hebbian Cell Assemblies for Nonlinear Computation. Scientific Reports 5: 12866. pmid:26249242
- 55. Dayan P, Abbott LF (2001) Theoretical Neuroscience. Cambridge, MA: MIT Press.