Reliable Neuronal Systems: The Importance of Heterogeneity

For every engineer it goes without saying that a reliable system requires components that consistently behave as they should. It is equally well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type show huge variance in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both of these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems; instead, predictable and reliable functional behavior of neural systems depends crucially on this variability. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.


Introduction
A main difference between computers and the human brain is that computers are composed of extremely reliable components with failure rates as small as 10^-15 [1], while the failure rate of vesicle release at a synaptic site is 60-80% [2], meaning that failure of vesicle release is the rule rather than the exception. Another difference is that computers contain billions of identical gates, while neurons in the brain are highly individual [3,4]. These seemingly different aspects of the brain have a joint effect: both add variance to signal processing. At first glance, this appears to be a hindrance to reliable neuronal networks. However, in other contexts variance has proven to enhance the inherent information in a system. In particular, it has been shown that extrinsic noise can lead to more reliable and efficient signal processing in the crayfish [5] and other animals [6,7]. This effect is known as ''stochastic resonance'' and is particularly well-established in the theory of coupled oscillators [8]. In this article we demonstrate a similar, but intrinsic, mechanism for neuronal networks. Our simulations show that the heterogeneity of neurons and the unreliability of synaptic transmission increase the speed, responsiveness, and even robustness of networks of spiking neurons, as depicted in Figure 1.
Over the last decades much theoretical work has been invested in understanding neuronal coding and signal processing [9,10]. Most of these investigations studied neural behavior in abstract networks consisting of highly simplified and identical neurons. This approach is based on a principle that has proved very successful in mathematics and many other disciplines: first understand how a system works in a pure setting, and then generalize it step by step in order to transfer it to a noisier real-world scenario. In this paper we argue that in the case of neuroscience such an approach may well lead to misconceptions about fundamental principles of information processing in the brain. Our simulations of populations of neurons whose connectivities and properties are closely matched to biological data (cf. methods) show that variance in the synapses and neurons crucially changes the dynamics of the network. For example, the spikes in reliable, homogeneous networks tend to synchronize to a precision of a few milliseconds (and thus to a precision considerably higher than observed in behaving humans and animals). The same network with unreliable and heterogeneous synapses and neurons, on the other hand, decreases these correlations (cf. Fig. 2 c,d). We also show that the amount of input activity needed to elicit activity is significantly smaller, and thus more energy efficient, in a heterogeneous setup (Fig. 3 a-c). The same is true for the time it takes a population of neurons to react to an external stimulation (Fig. 3d). The differences become even more distinctive when the output of one population is used as the input of another one, as depicted in Figure 4.
Our results can also explain a discrepancy between experimental results and theoretical modeling and simulations in current efforts to understand neuronal signal propagation. While some experiments show decorrelating effects [13], most simulations report increasing correlations [14-20]. And those that found propagation modes of stable or decreasing correlation needed to incorporate additional assumptions such as (i) a high level of extrinsic noise [21], (ii) simultaneous convergence of multiple signals [20,22], or (iii) unrealistically strong feed-forward synapses [19] (see also the section on related work). Here we show that decorrelation is possible without any of these additional assumptions. The existing discrepancy vanishes if we incorporate a realistic amount of heterogeneity in the network.
Another and perhaps more surprising result is that variability in the neuronal parameters guarantees stability. In order to study this we considered the reaction of a recurrent network (see below) to distorted input. By this we mean the following. Two types of input that are both intensively studied are (i) 'Poisson input', where the input neurons spike independently, and (ii) synchronized input, where the input arrives as a 'flank', meaning that the input neurons spike more or less simultaneously. Both modes have been observed in biological neural systems [23], and most likely both play important roles in information processing in the brain. While it is easy to generate independent Poisson spikes in simulations, it is still unclear how neuronal ensembles in the brain can generate Poisson spiking: wherever activity arises there are lateral/local connections and thus some dependencies between spike times. That is, even if Poisson-like spiking occurs, it will most likely be flawed by synchronization (as is well known from simulations [18]). In Fig. 2 c-f we show the effect of homogeneity/heterogeneity in such a scenario. We find that a heterogeneous network is not only immune to a certain amount of synchronized activity but can even remove or weaken it, while homogeneous networks amplify it.

Results and Discussion
We simulated populations of 1000 excitatory and 250 inhibitory conductance-based leaky integrate-and-fire neurons, randomly interconnected with a connectivity of 10% [24]. The neuronal and synaptic models come with many parameters, all of which we drew randomly for each neuron and synapse from a distribution based on physiological data in mammalian cortex (''heterogeneous network'', see methods). Moreover, we incorporated the fact that vesicle release at a synaptic site is unreliable [2]. In a second setup, we set all parameters to the mean values of their distributions and made the vesicle release 100% reliable (''homogeneous network''), normalizing the synaptic weights such that the expected postsynaptic current was the same in both setups. We studied the reaction of a population to perfect Poisson input and to Poisson input that was flawed by spontaneous synchronized activity in which all input neurons produced a spike within 5 ms. We control the amount of synchronization by a parameter λ ∈ [0,1] giving the fraction of spikes belonging to the flanks (so λ = 0 means Poisson spiking, and for λ = 1 all spikes belong to flanks). The input was modeled by 250 excitatory neurons, each projecting to a randomly chosen 10% of the target population. The mean synaptic weights of lateral connections were chosen in such a way that excitation and inhibition were balanced [25]. That is, the total activity in the population shows input normalization [26] for a large range of input rates (Fig. 2 a). This does not mean that all neurons behave the same. For example, in the experiment from Figure 2 a with the heterogeneous setup and 40 Hz input, there were more than 400 neurons firing at less than 1 Hz, while ≈230 neurons fired at >10 Hz, among them 49 at >50 Hz. For other input rates, and also for the homogeneous setup, the corresponding numbers were very similar.
Finally, the transmission delays of spikes were drawn randomly in both cases, as they depend on the geometry of the network and thus differ even if all neurons are identical (see methods).
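The flawed Poisson input can be made concrete with a short sketch. The following Python function is illustrative only (the actual simulations used NEST); the function name, the `flank_width_ms` parameter, and the exact way flank times are drawn are our own assumptions, not taken from the paper:

```python
import random

def flawed_poisson(n_neurons, rate_hz, duration_ms, lam, flank_width_ms=5.0, rng=None):
    """Spike trains for a population of input neurons.

    A fraction lam of the population rate is delivered as shared "flanks":
    short windows of width flank_width_ms in which every neuron fires once.
    lam = 0 yields independent Poisson trains; lam = 1 puts all spikes
    into flanks.  (Illustrative sketch, not the authors' NEST code.)
    """
    rng = rng or random.Random()
    expected = rate_hz * duration_ms / 1000.0          # spikes per neuron
    n_flanks = round(lam * expected)                   # flanks carry fraction lam
    onsets = [rng.uniform(0.0, duration_ms - flank_width_ms) for _ in range(n_flanks)]
    trains = []
    for _ in range(n_neurons):
        spikes = []
        if lam < 1.0:                                  # independent Poisson part
            t = rng.expovariate((1.0 - lam) * rate_hz / 1000.0)
            while t < duration_ms:
                spikes.append(t)
                t += rng.expovariate((1.0 - lam) * rate_hz / 1000.0)
        for onset in onsets:                           # synchronized part
            spikes.append(onset + rng.uniform(0.0, flank_width_ms))
        trains.append(sorted(spikes))
    return trains
```

For λ = 0.2 and 40 Hz input, each neuron contributes on average 32 Hz of independent Poisson spikes plus one spike in each of roughly eight shared 5 ms windows per second.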
We found that the global response of a population to Poisson input is very similar in the heterogeneous and the homogeneous case in terms of average output rate (Fig. 2 a). This was no surprise, as in a population of some thousand neurons the law of large numbers should diminish the effect of variations in the neuronal parameters. As a measure of synchrony, we computed the average cross-correlation (CC) of the binned spike trains of pairs of neurons (see methods for reasons), as done in the experimental paper [13]. The cross-correlation is a measure of the correlation between neuronal activity in a small time interval (''bin'') over several trials. A high CC indicates that in every bin neurons tend to fire jointly. A CC close to zero indicates that the precise spike time of one neuron does not have strong implications for the spike times of other neurons in the same bin. The CC generally tends to be smaller for small bin sizes, since the number of spikes per bin is small in this case, and it has been criticized for this reason (see [27] for a review). Therefore, we also compute the coefficient of variation (CV) of the interspike intervals of all spike events in each population. The CV is a measure of the irregularity of the interspike intervals and does not suffer from the drawbacks of the CC. It is equal to 1 for Poisson spiking and greater than 1 for correlated spike times. Note that the interspike intervals are not taken between spikes of the same neuron, but between spikes of any neurons of the network (see methods for the reasons).
We observed that the CV is slightly higher for the homogeneous network than in the heterogeneous case. In addition, the CV reacts much more strongly to flawed Poisson input in the homogeneous case (Fig. 2 c-d). We also observed that this increase was not due to a single parameter, but was caused by the interplay of many different sources of variance (Fig. 2 c). Moreover, different sources of variance add up in a non-linear way. For the CV, variance in the inhibitory neurons is especially important, since synchronous inhibition can drastically diminish the output of the population.
The picture became even more distinctive when we let the signal propagate along a feed-forward network of several such populations, as depicted in Figure 4. It is known that synchrony tends to increase along such a feed-forward network (see the section on related work). We studied a sequence of eight neuronal populations, with the excitatory cells in population n projecting to randomly chosen neurons in population n+1 (see methods). In this way we could investigate how correlations evolve when a signal is propagated through several populations. While in the homogeneous network the CV increased as the signal propagated, the heterogeneous network remained close to Poisson (CV approximately one) even in subsequent populations (Fig. 2e). Fig. 2f shows the evolution of the cross-correlation over several populations in a flawed Poisson setup (λ = 0.2). The cross-correlations of the homogeneous networks (red) were larger than for the input (green) and increased from population to population. In contrast, the heterogeneous network (blue) decorrelated the small disturbances in the input and then remained close to Poisson. Note also that this implies that in the heterogeneous case all eight populations behaved very similarly, while the increasing synchronization in the homogeneous network led to significant changes in activity between populations (data not shown).
At first sight it may seem that such a decorrelating network would perform poorly in processing input flanks like the ones appearing in gamma oscillations. However, recent work on purely excitatory networks has shown that varying spike thresholds can improve sensitivity to the input [28]. Our simulations show that this effect persists in decorrelating networks with balanced excitation and inhibition. When we tested the reaction of the networks to input flanks, heterogeneous networks not only showed a stronger response (Fig. 3 a-c), but also reacted faster (Fig. 3d). Moreover, the heterogeneous network was activated by fewer spikes, which makes it more energy efficient. Note that flanks in the input like those in Fig. 3 (which may be seen as carriers of information), as well as the synchronization in the flawed Poisson inputs, are characterized by a high level of synchrony. The difference lies in the time scale: while for flawed Poisson input the spikes are synchronized up to a few milliseconds, the spikes of the input flank in Fig. 3 are scattered over 10-30 ms. Moreover, as the synchronization within Poisson input arises spontaneously and randomly [18], it does not carry meaningful information. The symmetry-breaking properties of heterogeneous networks allow them to distinguish between these two cases: they desynchronize the spontaneous synchronization and at the same time increase the effect of flanks as carriers of information.
For other types of recurrent physical networks, a bit more is known. In particular, synchronization effects have been studied for networks of coupled oscillators [8,35-38], which have sometimes been interpreted as networks of neuronal ensembles, or for networks of neurons whose membrane potentials are directly coupled to each other [39]. Hansel and Mato [40] studied synchronization in networks of rate-based approximations to neurons. Networks of spiking neurons have also been investigated for purely excitatory neurons [41,42], and for purely inhibitory neurons receiving constant input currents [42,43]. Closest to our work are the embedded synfire chains considered in [44] and the Dale networks studied in [45]. Both are networks of spiking neurons, similar to the present paper. However, as both groups model the synaptic currents as delta peaks and use uniform synaptic delays for all connections, the impact of synaptic time dynamics on synchronization was not investigated in these studies.
Recently, Mejias and Longtin [28] have studied the effect of varying spike thresholds on synchrony in purely excitatory networks. They found that higher variance leads to stronger synchronization, an effect opposite to the one we observe. One important difference in our setup is that we use a balanced system of inhibition and excitation, cf. Figure 2 a. Mejias and Longtin observe that increasing the variance in spike thresholds also increases the output rates. For single neurons it is well known [47] that synchronization rises with the output rate. In our balanced system the main effect of the variances is not a change of the output rate but a more subtle reaction of the population to varying input strengths, which in turn results in more asynchronous behaviour.
In behaving animals and humans, the activity of clusters of neurons is oscillatory with frequencies of >15 Hz [48-51], with most excitatory neurons firing highly irregularly [52]; this is named the synchronous irregular (SI) state in [47]. Many experiments have also reported strong spike count correlations at the single-cell level, for example of pyramidal cells in V1 with similar receptive fields [12,53,54]. Recently, Ecker et al. [55], using permanently implanted tetrodes, reported, in contrast to these results, that the correlations are in fact negligibly low when a high temporal resolution (<10 ms) is applied. They reasoned that previous, contradictory findings were an artifact of measurement [55] or analysis techniques [56], or were due to exceptionally high and polysynaptic input from the LGN [53]. In light of their findings, Ecker et al. speculated about an active decorrelation process in the brain. Nevertheless, Cohen and Kohn [27] have in turn challenged the measurements and the interpretation of Ecker et al., so that a conclusive bottom line cannot yet be drawn. Our experiments may be viewed as support for the speculations in [55].
Our results can be explained as follows. In accordance with the law of large numbers, the variance in the parameters plays only a negligible role if we study simple input-output systems without complicated dynamics. For the considered input ranges, our system is of this type, despite the recurrent connections. In particular, the input-response curve does not change much (Fig. 2a). However, variations in the neuronal parameters do have a symmetry-breaking effect that tremendously influences the local reactivity to changes in the input, cf. Fig. 2 c-d. Concretely, in a homogeneous network interneurons tend to react groupwise, thus easily over- or underreacting to pyramidal activity. When each interneuron has different integration properties, they can counterbalance the pyramidal activity more accurately. And as it is well known that the functionality of the neural system crucially depends on a careful and balanced interplay of excitation and inhibition [11,12], such symmetry-breaking effects make the system react in a more subtle and balanced way than in a homogeneous setup.

Conclusion
While the benefits of high variance are generally accepted in terms of the biodiversity of ecological systems [57], the potential benefits for neural signal processing are still largely unexplored. We hope that a systematic exploration will be as fruitful as the study of noise in the field of stochastic resonance. In this paper we have undertaken a first step by showing that heterogeneity can enhance speed, responsiveness, and - counterintuitively - robustness of networks of spiking neurons. Our simulations show that various kinds of variance - from variances in neuronal parameters to unreliability of synapses - contribute to these effects. Quantifying the effects of the various parameters is hard, as the contributions do not seem to add up linearly but depend on each other. We leave a more thorough study of these interdependencies to future work.

Neuron Model
1.1 Leaky Integrate and Fire Dynamics. All model neurons in our simulations are conductance-based leaky integrate-and-fire (LIF) [58,59] neurons. Gerstner gives a thorough overview of LIF neurons [60]. All simulations were implemented within the NEST framework [61]. The dynamics of the conductance-based LIF model are governed by the following differential equation:

    dV_m(t)/dt = -(V_m(t) - V_rest)/tau_m + I(t)/C_m,    (1)

where V_m(t) is the membrane voltage, V_rest is the resting potential, tau_m is the membrane time constant, and C_m is the capacitance of the neuron's membrane. The post-synaptic current (PSC) I(t) is determined by the time-dependent voltage and the time-dependent membrane conductances,

    I(t) = G_ex(t) (E_ex - V_m(t)) + G_in(t) (E_in - V_m(t)),    (2)

where E_ex and E_in are the reversal potentials of excitatory ions and inhibitory (potassium) ions, respectively. We did not estimate conductances and capacitances separately, but only their quotients (cf. section 2.5). In case synaptic input raises the membrane potential V_m above the threshold potential V_th, the cell elicits an action potential (spike), and all the neurons the cell projects to receive conductance changes that express excitatory postsynaptic currents (EPSCs) if the projecting cell is excitatory, or inhibitory postsynaptic currents (IPSCs) if the projecting cell is an interneuron (see also below). After the generation of such a spike event, the neuron undergoes an absolute refractory period of t_ref milliseconds (ms) in which it is incapable of generating further spikes. At the end of the absolute refractory period the cell's V_m value is reset to V_reset. The conductance G_ex(t) induced by the n_e excitatory synapses is given by

    G_ex(t) = sum_{j=1}^{n_e} g_j sum_k S_jk(t),    (3)

where every presynaptic spike event k contributes S_jk(t) given by Equation (5) below, and g_j is the dimensionless strength of the connection, defined as the integrated conductance change induced at the soma divided by its capacitance C_m.
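Under the assumption of a simple forward-Euler scheme, the membrane dynamics described above can be sketched as follows. This is an illustrative Python transcription with made-up parameter values, not the NEST implementation; the conductance traces are taken as already divided by C_m:

```python
def simulate_lif(g_ex_trace, g_in_trace, dt=0.1,
                 v_rest=-70.0, v_th=-50.0, v_reset=-60.0,
                 tau_m=20.0, e_ex=0.0, e_in=-80.0, t_ref=2.0):
    """Forward-Euler integration of a conductance-based LIF neuron.

    g_ex_trace / g_in_trace: dimensionless conductance time courses
    (conductance divided by C_m, units 1/ms), sampled every dt ms.
    Returns the list of spike times.  All parameter values are
    illustrative placeholders.
    """
    v = v_rest
    refractory_until = -1.0
    spikes = []
    for step, (g_ex, g_in) in enumerate(zip(g_ex_trace, g_in_trace)):
        t = step * dt
        if t < refractory_until:
            continue                      # membrane frozen during refractoriness
        # dV/dt = -(V - V_rest)/tau_m + g_ex (E_ex - V) + g_in (E_in - V)
        dv = (-(v - v_rest) / tau_m
              + g_ex * (e_ex - v) + g_in * (e_in - v))
        v += dt * dv
        if v >= v_th:                     # threshold crossing: spike and reset
            spikes.append(t)
            refractory_until = t + t_ref
            v = v_reset
    return spikes
```

With a constant excitatory drive the neuron fires regularly; with no drive it stays at rest.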
The conductance G_in(t) is analogously given by

    G_in(t) = sum_{j=1}^{n_i} g_j sum_k S_jk(t),    (4)

where n_i is the number of inhibitory synapses, and S_jk(t) is given by Equation (6) below. The response curve S_jk(t) consists of alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartate (NMDA) components for excitatory synapses,

    S_jk(t) = r S_jk,AMPA(t) + (1 - r) S_jk,NMDA(t),    (5)

where r ∈ [0,1] determines the ratio between AMPA- and NMDA-mediated conductance changes. For inhibitory synapses, the response curve is given by gamma-aminobutyric acid (GABA), so in this case we have

    S_jk(t) = S_jk,GABA(t).    (6)

AMPA and GABA triggered conductance changes are modeled by (normalized) single exponentials [62],

    S_jk,kappa(t) = (1/tau_kappa) exp(-(t - t_jk - d_jk)/tau_kappa)  for t >= t_jk + d_jk, and 0 otherwise,

where kappa ∈ {AMPA, GABA}, t_jk is the time of the kth spike of presynaptic neuron j, and the decay time constant tau_kappa typically is 7 ms at pyramidal cells [62-64] and 2.5 ms at interneurons [62]. Note that the integral over S_jk,kappa(t) is normalized to 1. The term d_jk accounts for the axonal, synaptic, and dendritic delay of the synaptic connection. NMDA triggered currents are modeled by double exponentials [65],

    S_jk,NMDA(t) = A (exp(-(t - t_jk - d_jk)/tau_decay) - exp(-(t - t_jk - d_jk)/tau_rise))  for t >= t_jk + d_jk, and 0 otherwise,

where the constant A normalizes the integral over S_jk,NMDA(t) to 1. Table 1 and Table 2 provide the values for the parameters of neurons and synapses that we used in our simulations. Each parameter was drawn uniformly at random from an interval; mean value, upper and lower bound of the interval are given in the tables. Note that the standard deviation of such a uniform distribution on [m - d, m + d] is given by d/sqrt(3) ≈ 0.577 d. In the subsequent sections we provide an overview of experimental data on cortical pyramidal cells and cortical basket cells that justify the choice of these values. All the collected animal data was measured in cats, ferrets, and rodents.
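The normalized kernels and the uniform parameter sampling described above can be sketched in a few lines. The following is an illustrative Python version (function names are ours; the time constants used in the example are merely representative values):

```python
import math
import random

def psc_single_exp(t, tau):
    """Normalized single-exponential kernel (AMPA/GABA): integral over t >= 0 is 1."""
    return math.exp(-t / tau) / tau if t >= 0 else 0.0

def psc_double_exp(t, tau_rise, tau_decay):
    """Normalized double-exponential kernel (NMDA): integral over t >= 0 is 1.

    The un-normalized difference integrates to tau_decay - tau_rise,
    which is therefore the normalization constant (requires tau_decay > tau_rise).
    """
    if t < 0:
        return 0.0
    norm = tau_decay - tau_rise
    return (math.exp(-t / tau_decay) - math.exp(-t / tau_rise)) / norm

def draw_uniform(mean, half_width, rng):
    """Draw a parameter from [mean - d, mean + d]; its std is d / sqrt(3)."""
    return rng.uniform(mean - half_width, mean + half_width)
```

Numerically integrating either kernel recovers an area of 1, and the sample standard deviation of `draw_uniform` matches d/sqrt(3) ≈ 0.577 d.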
2.2 The Threshold Potential. In vitro data for the threshold potential V_th of individual excitatory or inhibitory neurons can be found in [72,73,79-81,93]. Detailed in vivo data in rat prefrontal cortex can be found in Degenetais et al. [85]. For pyramidal cells they find parameter ranges from -50.8±5.8 mV to -48.5±3.9 mV. In vitro data for basket cells can be found in [73] (-49±8 mV) or [93] (-38±6 mV). Since varying the threshold potential has essentially the same effect as varying the resting potential, we restricted variations to the resting potential.
2.5 Synaptic strength. To avoid estimating the synaptic capacitance and conductance separately, we fitted the total integral over the quotient G(t)/C_m, which we call the synaptic efficacy w. Intuitively, this parameter corresponds to the weight of the synapse. Table 2 shows the amplitude of the EPSP that a single incoming spike evokes (for mean values of the other synaptic and neuronal parameters) if the voltage is initially at the resting potential.
We did not derive the synaptic efficacies from the literature; rather, we chose them such that inhibition in our populations balanced excitation (the gain in activity is close to 0 for a wide range of inputs). This was important for the homogeneous network to function properly, but not for the heterogeneous one, which also showed a large stable operating range if the gain was positive (data not shown).
Although we did not directly fit efficacies to the literature, all used values are within the reported bounds.
Depending on the nature of the synaptic currents, w will be denoted with appropriate suffixes like w AMPA , w NMDA , w GABA , or w ex and w in for total excitatory and inhibitory currents.
2.6 AMPA mediated PSPs and PSCs. The AMPA time constant tau_AMPA that we use is rarely estimated directly; more often the half-width of the EPSP is considered [99]. In general, the time constant and its variance are higher if only NMDA is blocked, but not other neurotransmitters like kainate [99]. We used the data from Karayannis et al. [62], since they measured both pyramidal cells and interneurons. Furthermore, the corresponding rise times and half-widths of AMPA mediated PSPs in [62] match other studies [76,100], except that those studies find higher variances.
2.8 AMPA/NMDA Ratio. We used data on the AMPA/NMDA ratio r_AMPA for synapses onto pyramidal neurons from [65,101,102], and from a review by Thomson et al. [103] for synapses onto inhibitory neurons. For pyramidal neurons r_AMPA ranges from 69±7% [101] to 81.3% [65]. For excitatory connections onto inhibitory neurons the NMDA component seems to be much smaller or even absent [103]. Therefore we did not incorporate NMDA components in the latter case.
2.9 GABA mediated PSCs. As for the EPSC, there is only sparse data on the decay time constant tau_GABA of inhibitory postsynaptic currents (IPSCs). For pyramidal cells, Wang et al. [75] find 8.30±2.67 ms; for fast-spiking cells Tamas et al. [88] report 5.6±3.4 ms. This is consistent with the finding of Thomson et al. [104] that the IPSP rise time is about twice as large in pyramidal cells compared to interneurons.
2.10 Synaptic Sites and Reliability. There are typically 3-8 synaptic contacts between any two pyramidal cells in layer 5 [101,105,106], each of them potentially having more than one vesicle release site. The transmission probability of the total dendritic tree within one layer is ≈0.5 [107], while it is 0.8-0.9 for projections between layers. Unfortunately, estimates of quantal count and quantal release probability are highly contradictory (see [108] for an overview). Therefore, we decided to assume 10 vesicle release sites [106] with a resting release probability of 0.1 for all glutamatergic synapses, regardless of the innervating cell type. For GABAergic synapses, we assumed 5 vesicle release sites with a resting release probability of 0.5 to account for the low transmission failure rate of basket cells [71].
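This release scheme amounts to an independent binomial draw over the release sites at each presynaptic spike. A minimal sketch (the function name and the per-quantum amplitude parameter are illustrative placeholders):

```python
import random

def release_amplitude(n_sites, p_release, quantal_amplitude, rng):
    """Stochastic synapse: each of n_sites vesicle release sites releases
    independently with probability p_release when a presynaptic spike arrives.
    Returns the evoked amplitude (quantal_amplitude per released vesicle);
    0.0 corresponds to a complete transmission failure.
    """
    released = sum(1 for _ in range(n_sites) if rng.random() < p_release)
    return released * quantal_amplitude
```

With the glutamatergic setting above (10 sites, p = 0.1) the expected yield is one quantum per spike, and complete failures occur in roughly 0.9^10 ≈ 35% of spikes; the GABAergic setting (5 sites, p = 0.5) fails far less often.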
2.11 Latencies. The delay d accounts for the time lag between the generation of an action potential in a presynaptic neuron and the arrival of the EPSC at the postsynaptic soma. Hence it includes axonal, synaptic, and dendritic delay. For close-by neurons (≤500 µm), connections between pyramidal cells have a delay between 1.2±0.6 ms and 1.7±0.8 ms [82,109,110]. Connections to and from basket cells have been reported to be faster: between 0.63±0.05 ms and 1.3±0.3 ms from pyramidal cells to basket cells [74,75,109]; between 0.66±0.09 ms and 1.02±0.14 ms from basket cells to pyramidal cells [75,77,90,94]; and between 0.7 ms and 1.5 ms from basket cells to basket cells (no standard deviation given) [109]. In all the latter cases, a majority of measurements supports a value of roughly 0.9 ms [75,77,94,109].

3 Feed-Forward Network

3.1 Populations. The network consisted of a feed-forward chain of up to 10 populations. Each population consisted of 1000 pyramidal cells and 250 interneurons. Within each population, each neuron projected to each other neuron with probability 10% [111], as depicted in Figure 4. The probability was the same for pyramidal cells and interneurons. Moreover, each pyramidal cell in population i projected to each (excitatory or inhibitory) neuron in population i+1 with probability 10%. Interneurons did not project to other populations, as cortical basket cells rarely project into other layers [112].
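The random connectivity of the chain can be sketched as follows. This is an illustrative Python construction (not the NEST network setup); the example uses small population sizes, whereas the paper uses 1000 pyramidal cells plus 250 interneurons per population:

```python
import random

def build_chain(n_pops=8, n_exc=1000, n_inh=250, p_conn=0.1, rng=None):
    """Random connectivity of the feed-forward chain.

    Within each population every neuron projects to every other neuron with
    probability p_conn; additionally, each excitatory neuron (index < n_exc)
    projects to every neuron of the next population with probability p_conn.
    Returns edges as (src_pop, src, dst_pop, dst) tuples.
    """
    rng = rng or random.Random()
    n = n_exc + n_inh
    edges = []
    for p in range(n_pops):
        for src in range(n):
            for dst in range(n):                      # lateral connections
                if src != dst and rng.random() < p_conn:
                    edges.append((p, src, p, dst))
            if src < n_exc and p + 1 < n_pops:        # only pyramidal cells project onward
                for dst in range(n):
                    if rng.random() < p_conn:
                        edges.append((p, src, p + 1, dst))
    return edges
```

By construction there are no self-connections, interneurons only project laterally, and the empirical connection probability matches p_conn.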
3.2 Input. There are 250 excitatory input neurons giving input to the network. For Fig. 2 (f,g) the input goes only to the first population. As for the connections between populations, each input neuron projects to each neuron in the first population with probability 10%. For Figure 2, each input neuron emits a Poisson spike train of fixed rate. For Figure 3, a random subset of the input neurons each emits a spike at a randomly chosen time in some small, predefined interval. We speak of a ''flank'' of input spikes in this case.

Measures of synchrony
The coefficient of variation (CV) of the spike time intervals was computed as follows. Let l_i be the time between the ith and the (i+1)st pyramidal spike (not necessarily of the same neuron), and let n be the number of such intervals. Then we computed the mean m, the standard deviation s, and the coefficient of variation CV as

    m = (1/n) sum_{i=1}^{n} l_i,    s = sqrt( (1/n) sum_{i=1}^{n} (l_i - m)^2 ),    CV = s/m.

Many experimental and theoretical papers consider the CV of the spike time intervals of a single neuron (between two consecutive spikes of this specific neuron), and possibly average this value over many neurons. This individual CV serves other purposes and should not be confused with the population-CV that we compute. In particular, the individual CVs cannot serve as measures of synchrony: consider a perfectly synchronous system of neurons which all spike at exactly the same times, but these times are random. Then each individual neuron will have a CV close to 1.
The population-CV of such a system, in contrast, will be extremely high: while most spike time intervals are close to 0, there are some (comparably) extremely long intervals in which the complete system is silent.
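The population-CV and its contrasting behavior on Poisson versus synchronous spiking can be sketched directly (illustrative Python, not the authors' analysis code):

```python
import math

def population_cv(spike_times):
    """CV of the intervals between consecutive spikes of the whole
    population, i.e. between spikes of *any* neuron, as described above."""
    ts = sorted(spike_times)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    m = sum(intervals) / len(intervals)
    s = math.sqrt(sum((x - m) ** 2 for x in intervals) / len(intervals))
    return s / m
```

For a merged Poisson process the intervals are exponential and the CV is close to 1; for perfectly synchronous spiking (many neurons with identical random spike times) most intervals are 0 with a few long gaps, so the population-CV is far above 1.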
The cross-correlation (CC) of the binned spike times was computed as follows. We repeated the experiment R = 100 times. Then we binned the time course of the experiments with some bin size b (for the value of b, see the figures). For each neuron i and each trial r we counted the number of spikes s_i(r,k) of neuron i that occurred in the kth bin of trial r. We computed the average number of spikes s̄_i(k) = (1/R) sum_{r=1}^{R} s_i(r,k) of neuron i in bin k. Then the cross-correlation between two neurons i and j in the kth bin was calculated as

    CC_{i,j,k} = (1/R) sum_{r=1}^{R} (s_i(r,k) - s̄_i(k)) (s_j(r,k) - s̄_j(k)) / (sigma_i(k) sigma_j(k)),

where sigma_i(k) and sigma_j(k) are the standard deviations of s_i(·,k) and s_j(·,k), respectively. Finally, the cross-correlation CC was computed as the mean of CC_{i,j,k}, taken over all pairs of neurons i and j and all bins k.
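A direct transcription of this computation reads as follows (illustrative Python; skipping bins with zero across-trial variability, to avoid division by zero, is an implementation detail not specified in the text):

```python
import math

def mean_cross_correlation(counts, eps=1e-12):
    """counts[i][r][k]: spikes of neuron i in trial r and bin k.

    Returns the average, over neuron pairs (i, j) with i != j and bins k,
    of the across-trial correlation of the bin counts, as described above.
    """
    n_neurons = len(counts)
    R = len(counts[0])
    K = len(counts[0][0])
    # per-neuron, per-bin mean and standard deviation across trials
    mean = [[sum(counts[i][r][k] for r in range(R)) / R for k in range(K)]
            for i in range(n_neurons)]
    std = [[math.sqrt(sum((counts[i][r][k] - mean[i][k]) ** 2
                          for r in range(R)) / R) for k in range(K)]
           for i in range(n_neurons)]
    total, terms = 0.0, 0
    for i in range(n_neurons):
        for j in range(i + 1, n_neurons):
            for k in range(K):
                if std[i][k] < eps or std[j][k] < eps:
                    continue              # no variability: correlation undefined
                cov = sum((counts[i][r][k] - mean[i][k]) *
                          (counts[j][r][k] - mean[j][k]) for r in range(R)) / R
                total += cov / (std[i][k] * std[j][k])
                terms += 1
    return total / terms if terms else 0.0
```

Two neurons with identical trial-to-trial counts yield a CC of 1, while independent counts yield a CC near 0.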