Sampling effects and measurement overlap can bias the inference of neuronal avalanches

To date, it is still impossible to sample the entire mammalian brain with single-neuron precision. This forces one to either use spikes (focusing on a few neurons) or to use coarse-sampled activity (averaging over many neurons, e.g. LFP). Naturally, the sampling technique impacts inference about collective properties. Here, we emulate both sampling techniques on a simple spiking model to quantify how they alter observed correlations and signatures of criticality. We describe a general effect: when the inter-electrode distance is small, electrodes sample overlapping regions in space, which increases the correlation between the signals. For coarse-sampled activity, this can produce power-law distributions even for non-critical systems. In contrast, spike recordings do not suffer from this particular bias, and the underlying dynamics can be identified. This may resolve why coarse measures and spikes have produced contradictory results in the past.


Sampling bias remains under alternative topologies
The network topology used in the main paper is local: on average, each neuron is connected to its ∼10³ nearest neighbors. It is of interest to check whether alternative topologies impact the distinguishability of the underlying dynamic state under coarse-sampling.
For that, we select two additional topologies. The first ("Orlandi") mimics the growth process of a neuronal culture. In short, axons grow outward on a semiflexible path of limited length and have a given probability of forming a synapse when they intersect the (circular) dendritic tree of another neuron. Thereby, this topology is local without requiring distance-dependent synaptic weights (refer to [1] for more details). The second ("Random") implements a purely random connectivity, with each neuron being connected to ∼10³ randomly chosen neurons. Note that this is an unrealistic setup, as this topology is completely non-local.
We find that, under coarse-sampling, reverberating and critical dynamics remain indistinguishable with the alternative topologies (Fig A, left). Meanwhile, under sub-sampling, all dynamic states are clearly distinguishable for all topologies (Fig A, right).
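For illustration, the two limiting connectivity schemes can be sketched as follows (a minimal construction with illustrative neuron counts and function names, not the simulation code used in the paper):

```python
import numpy as np

def local_topology(positions, k):
    """'Local': connect each neuron to its k nearest neighbors."""
    n = len(positions)
    # pairwise Euclidean distances between all neurons
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # forbid self-connections
    targets = np.argsort(d, axis=1)[:, :k]   # k closest neurons per row
    adj = np.zeros((n, n), dtype=bool)
    adj[np.repeat(np.arange(n), k), targets.ravel()] = True
    return adj

def random_topology(n, k, rng):
    """'Random': connect each neuron to k uniformly chosen other neurons (non-local)."""
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        choices = rng.choice(np.delete(np.arange(n), i), size=k, replace=False)
        adj[i, choices] = True
    return adj

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(200, 2))  # toy culture: 200 neurons on a unit square
A_local = local_topology(pos, k=10)
A_rand = random_topology(200, 10, rng)
# both have out-degree k, but only the local graph has short-range connections
```

In both cases the out-degree is fixed; the two graphs differ only in whether connection probability depends on distance.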

Influence of the electrode field-of-view
In the main paper we considered that the contribution of a spiking neuron to the electrode signal decays with distance d as ∼ 1/d (see Refs. [3,4]). The precise way neuronal activity is recorded by extracellular electrodes depends on many factors, such as neuronal morphology and the level of correlation between synapses [2,4]. Because it is difficult to account for all relevant factors, we instead prioritized a simple, mechanistic approach.
In more detail, we are interested in the impact of a spike as a function of the distance d to the electrode. Two main contributions arise. First, the transmembrane currents along the pre-synaptic neuron and the currents at the synapses and post-synaptic neurons (which do not necessarily cause further spikes) create a distance-dependence of the LFP contribution that depends on the neuron's morphology and the connectivity profile. Second, the LFP signal has a distance-dependent decay, depending on whether the source corresponds to an electric monopole (1/d) or dipole (1/d²). The particular decay was found to depend, among other factors, on neuron type and morphology, and it effectively varies from 1/d near the soma to 1/d² (and steeper) in the far-field limit [2][3][4]. Together, a more realistic LFP model would need to incorporate the connectivity profile of the neurons and the signal decay (the final effect of a neuron would be a convolution of both functions).
In our simplified model, we neglect neuron morphology (because our simple binary neurons only feature a single compartment) and the connectivity profile (because our topology is homogeneous and local). Thus, our implemented distance-dependence of the electrode signal merely serves as an effective description that sensibly depends on the considered population. As a sensitivity analysis for our effective description, we here study the impact of a varying electrode field of view (Fig B). An important detail that we neglect in the main manuscript is that the 1/d contribution does not extend into the far-field limit (beyond 100-1000 µm) [2,3]. We implemented various different effective distance-dependencies, which are motivated by the work of Einevoll and colleagues, especially Fig 2D of Ref. [2]. Note, however, that the reported shape function describes the LFP contribution as a function of a neuron receiving input, whereas, in our case, the potential describes the contribution of a spiking neuron.

Fig A: Avalanche-size probability p(S) from coarse-sampled activity (left) and sub-sampled activity (right) for subcritical, reverberating and critical dynamics. Top: results for the topology used in the main paper ("Local"). Middle: results for a topology that mimics culture growth [1] ("Orlandi"). Bottom: results for a random topology. Under coarse-sampling, reverberating and critical dynamics are indistinguishable with all topologies. Parameters: d_E = 400 µm and Δt = 8 ms.
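A minimal sketch of such an effective electrode model, assuming a simple sum of per-neuron contributions weighted by 1/d^γ with an optional hard field-of-view cut-off (function names and parameter values are illustrative, not the paper's implementation):

```python
import numpy as np

def electrode_signal(spikes, dists, gamma=1.0, cutoff=np.inf):
    """Signal of one electrode: every spiking neuron i contributes 1/d_i**gamma;
    neurons beyond `cutoff` are excluded (hard field-of-view limit).
    spikes: (n,) binary states; dists: (n,) distances to the electrode."""
    w = 1.0 / dists**gamma
    w[dists > cutoff] = 0.0
    return np.sum(spikes * w)

rng = np.random.default_rng(1)
d = rng.uniform(50, 2000, size=1000)          # distances in micrometers
s = (rng.random(1000) < 0.05).astype(float)   # sparse spiking
wide = electrode_signal(s, d, gamma=1.0)      # broad field of view
narrow = electrode_signal(s, d, gamma=2.0)    # narrow field of view
# with gamma = 2, nearby neurons dominate and the signal becomes more spike-like
```

Increasing `gamma` or decreasing `cutoff` both shrink the effective field of view, which is the knob varied in this sensitivity analysis.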
In particular, we checked γ = 2 (Fig B, right column). In this case, avalanche distributions become distinguishable, but the particular shape, cut-off and the amount of overlap between states again depend on the electrode distance and time-bin size.

Fig B: Effect of changing the electrode contribution of a spiking neuron at distance d. a: Biophysically more plausible distance dependence of the LFP, reproduced from [2]. b: Sketch of distance dependencies of the tested effective electrode contributions, motivated by (a). See accompanying text. Large decay exponents correspond to a narrow field of view of the electrodes (right column). When the transition from shallow to large exponents occurs at smaller d, electrodes record fewer units and measurement overlap decreases. Eventually, distributions become distinguishable. As a notable side-effect, the cut-off near S = 64 also starts to vanish.
We also checked contributions with a shallow exponent (γ ≤ 1) near the electrode, which changes into a steep exponent (γ = 2) beyond a certain distance. For shallow exponents that reach far (transition at 1000 µm, Fig B d, h, l), the distributions of all considered states overlap, as in the main manuscript. Notably, the change in shape due to changing the inter-electrode distance is even more severe than for γ = 1. For shallow exponents that do not reach far (transition at 100 µm, Fig B e, i, m), distributions start to be distinguishable. However, in the usually considered range of avalanche sizes, extending up to the number of electrodes (S = 64), reverberating and critical dynamics still tend to overlap (Fig B e, m). Only for short time bins (Δt = 2 ms) together with large inter-electrode distances (d_E = 400 µm) are the two states distinguishable (Fig B i).

Fig C: Results from the same model as in the main paper, but with a hard cut-off at 1 mm (top row) and 10 mm (bottom row); neurons farther away than the cut-off do not contribute to an electrode's signal. The effect of the cut-off stacks with the effect of the electrode decay-exponent γ. When γ = 1, dynamic states remain indistinguishable, independent of the cut-off. (For larger decay exponents, states are distinguishable, cf. Fig B.) Notably, also for γ = 1, the shape of the distributions is altered by the cut-off. As a guide to the eye, dashed lines indicate a power law with exponent −1.5. Parameters: γ = 1 (left half) and γ = 2 (right half), d_E = 200 µm in all panels.
Summarizing Fig B, we find that a main cause of collapsing avalanche distributions is a large electrode field of view. As the field of view becomes narrower, the relative contribution of the neurons closest to the electrode increases, and coarse-sampling becomes more similar to sub-sampling. The cut-off at S ∼ N_E vanishes for steeper decays (large γ), and the different dynamic states become distinguishable.
For completeness, we performed further checks following the same reasoning. First, instead of changing exponents, we implemented a hard cut-off beyond which neurons cannot contribute to an electrode's signal (Fig C). We again found that, for large cut-off values (≥ 1 mm), our main finding, namely that avalanche distributions from different dynamic states are hardly distinguishable, is unaltered. For short-range cut-off values that approach the distance between neurons (d_N = 50 µm), distributions become distinguishable (not shown). This effect is similar to what we saw when increasing the decay exponent to γ = 2, where only individual neurons remain in the field of view of the electrode. In both these cases, coarse-sampling starts to observe single-unit properties and becomes similar to applying spike detection (here, sub-sampling).
Second, as our electrode potential is only an effective description and cannot be directly compared to Ref. [2], we additionally considered a hypothetical decay with exponent γ = 1.5 and repeated the analysis on different topologies (Figs H and I). In all cases, γ ≥ 1.5 causes the cut-off near S = N_E to vanish, and an increase of γ weakens the coarse-sampling effect. Note, however, that even with a relatively narrow field of view, avalanche distributions from coarse-sampling still differ from those of spike analysis.
The above changes of the electrode model indicate that in real electrode recordings both effects are present: sub-sampling and coarse-sampling. In particular, the resulting avalanche distributions change on a continuous scale, with our descriptions of coarse-sampling and sub-sampling as the extremes. Underlying dynamic states are better distinguishable when the sampling is "closer" to sampling single units instead of weighted averages, and when the measurement overlap that is characteristic of coarse-sampling goes to zero.
Lastly, we want to caution about a peculiarity of the 1/d decay. From a geometric point of view, one has to consider how the number of neurons per volume element increases with distance from the electrode. In two dimensions, the number of neurons contained in a thin ring around an electrode scales as ∼ 2πr. If γ = 1, the contributions of far-away neurons could be as strong as the contributions of close-by neurons, especially if activity is correlated.
In more detail, let the ring have inner radius r and outer radius r + dr; its area is then A = 2π dr (r + dr/2). At a constant neuron density ρ, the number of neurons in this ring is n_r = ρA, so that, up to a constant, n_r ∼ 2πr. Let s_i = {0, 1} denote the state of a single neuron. Here, we assume that all neurons are uncorrelated and independent identically distributed random variables, with expectation value ⟨s_i⟩ = μ and the same variance Var[s_i]. Our electrode potential was modeled as V = Σ_i s_i/d_i. For every ring, we assume a constant d_i = r for all neurons in the ring. Then, the expected potential of the ring is ⟨V_ring(r)⟩ = n_r μ/r. Thus, when n_r ∼ 2πr, then ⟨V_ring(r)⟩ ∼ μ, independent of r. Indeed, this implies that the contributed potential of any of these rings is constant and that many neurons in a far-away ring can "contribute as much" as few local ones.
However, the variance of the potential per ring is not constant. For uncorrelated s_i, the variance of their sum is equal to the sum of their variances: Var[V_ring(r)] = Σ_i Var[s_i/r] = n_r Var[s_i]/r². With n_r ∼ 2πr, we see that Var[V_ring(r)] ∼ 2π Var[s_i]/r. Hence, the standard deviation of the rings vanishes as r → ∞; far-away rings do contribute to the electrode signal, but they do not add to its variance. During avalanche detection, the start or end of an avalanche is given by a threshold crossing. In a signal with more variance, more threshold crossings will occur, possibly leading to different avalanche statistics. As we showed, far-away neurons that are uncorrelated increase the mean, but they do not increase the variance and thus do not lead to more (or fewer) threshold crossings. Notwithstanding, this reasoning only holds for uncorrelated neurons. If the s_i are correlated, far-away neurons could also contribute to the variance. Although we think that this is not the main cause of the coarse-sampling effect (cf. Fig B), we want to stress the limited range of electrodes for future work.
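This mean-versus-variance argument can be checked numerically; the following sketch draws uncorrelated Bernoulli spins in rings of increasing radius (density, spike probability and ring width are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
p, rho, dr = 0.05, 10.0, 1.0  # spike probability, neuron density, ring width (arbitrary)

def ring_potential_stats(r, trials=5000):
    """Mean and variance of V_ring(r) = sum_i s_i / r for one ring of
    uncorrelated Bernoulli(p) neurons at distance r (n_r ~ 2*pi*r)."""
    n_r = max(1, int(round(2 * np.pi * r * dr * rho)))
    spikes = rng.random((trials, n_r)) < p
    v = spikes.sum(axis=1) / r
    return v.mean(), v.var()

m1, var1 = ring_potential_stats(r=10)
m2, var2 = ring_potential_stats(r=100)
# the mean stays ~constant (~2*pi*rho*dr*p) while the variance shrinks ~1/r
```

With r increased tenfold, the expected ring potential is unchanged while its variance drops by roughly a factor of ten, as derived above.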
For a population of neurons receiving (un)correlated synaptic input, the distance dependence of the LFP signal is studied in much more detail in Ref. [2]. Together with the above reasoning, this work highlights another possible source of sampling bias (which also affects our model electrodes): because closer-to-critical dynamic states typically feature more correlated activity than subcritical states, the effective distance-dependence of an electrode is also affected by the dynamic state that is being recorded.
To conclude this section, we want to sketch an idealized experimental set-up: in order to determine criticality under coarse-sampling, the set-up should combine i) a large distance between electrodes d_E, ii) a narrow electrode field-of-view (large γ) and, ideally, iii) systems to calibrate with, which feature different dynamic states. This could potentially be used to qualitatively compare the distance to criticality between systems. However, not only is this much more limited than what is possible with spike data [5][6][7], but the cut-off is also a characteristic and ubiquitous feature of experimental data from coarse-sampled recordings [8,9]. This indicates that electrodes typically have a large field-of-view, which motivated our modeling assumption of γ = 1.

Neuron density
As the coarse-sampling effect is sensitive to the field of view of the electrodes, it may similarly depend on the number of neurons an electrode captures and, thus, on the density of the neuron population. Hence, we performed a basic test of the robustness of our results with respect to the density of neurons (Fig D). To that end, we kept most parameters of the model as in the main manuscript: the culture extended over a 4 cm substrate and neurons were hard-wired to their ≈ 1000 closest neighbors. A change in density impacts how far these neighbors are distributed, but we kept the effective projection range constant (≈ 300 µm), so that our "local" topology and the connectivity profile remain unchanged. We kept electrode contributions at 1/d and placed electrodes at a large distance (d_E = 400 µm), so that every electrode samples many neurons and changes in density can become apparent.
Compared to the density ρ = 100/mm² of the main manuscript, we considered a lower (ρ = 25/mm²) and a higher density (ρ = 400/mm²). Surprisingly, a change of neuron density only has a minor impact on the coarse-sampling effect: overall, the overlap of distributions and the cut-off remain for all considered densities (Fig D). Nonetheless, subtle differences are visible. Firstly, the slopes of the distributions seem to change, but this is shadowed by the dependence of the slope on the time-bin size. Secondly, for higher densities (ρ ≥ 100/mm²) distributions appear critical or slightly subcritical, whereas for the low density they also resemble super-critical distributions, with a pronounced peak near the cut-off.
In conclusion, our main result concerning overlapping avalanche distributions for different dynamic states is fairly invariant to changes in neuron density.

Low-pass filtering
A relevant question that we have not addressed in the main manuscript is how mechanisms that come into effect before the sampling hardware could impact avalanche statistics. Many studies are concerned with low-pass frequency filtering and how it affects measurements of neuronal activity. Such temporal filtering may arise from the intrinsic neuronal morphology [2,4,10] or the surrounding extracellular tissue [11][12][13].
As a simple test that mimics natural low-pass filtering, we convolved the raw time series of every electrode with an exponentially decaying kernel (Fig E). Thus, the filtering was applied before the remaining analysis pipeline. The decay of the kernel (and the strength of the filtering) is parametrized through the decay time of the exponential.
We considered decay times τ_f between 2 ms and 128 ms (ranging from 1 to 128 time steps of the simulation). The kernel was created using scipy.signal.exponential with a window size of 1000 time steps. The remaining analysis pipeline was unchanged and, in particular, included the frequency filtering to 0.1 Hz < f < 200 Hz that we assumed as part of the recording hardware.
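The filtering step can be sketched with plain numpy; this uses a causal, normalized one-sided exponential kernel, which mimics (but is not identical to) the scipy.signal.exponential window mentioned above:

```python
import numpy as np

def lowpass_exponential(signal, tau, window=1000):
    """Convolve a raw electrode time series with a causal, normalized
    exponentially decaying kernel (decay time tau, in time steps)."""
    t = np.arange(window)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                    # unit gain at zero frequency
    return np.convolve(signal, kernel)[: len(signal)]  # truncate to input length

rng = np.random.default_rng(3)
raw = rng.normal(size=5000)                   # stand-in for a raw electrode trace
smooth = lowpass_exponential(raw, tau=32)     # tau_f = 32 time steps
# stronger filtering attenuates fast fluctuations, so the signal variance drops
```

Normalizing the kernel to unit sum preserves the mean of the signal, so the filter only suppresses fluctuations around it, which is the effect discussed below.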
Whereas the overlap of distributions largely remains when low-pass filtering is applied, the shape of the distributions depends on the strength of the filter. As a general trend, all distributions tend to form "super-critical" peaks as filtering becomes stronger. We associate these peaks with multiple electrodes picking up the same event (boosting the number of large avalanches, up to the number of electrodes). This goes along with a decreased total number of detected avalanches (lower left corner in all panels). Note that the same raw time series, with the same duration, were used along every row of Fig E. Together, this is consistent with the expected "smoothing" of the raw time series due to low-pass filtering: deflections of a time series around its mean are attenuated, and small excursions (at high frequency) become rare. Because these excursions potentially trigger the start of a new avalanche, fewer avalanches (or events) are detected when filtering becomes stronger. Intriguingly, the change of the distribution shape due to filtering seems to affect critical and reverberating dynamics more severely than subcritical dynamics (especially visible in the bottom row). However, note that the filtering employed here only serves as an example of a low-pass filter. Experimentally, power spectra are often found to show 1/f^β scaling, with 0 < β < 4, which constrains the functional form a more realistic filtering kernel might have [12,[14][15][16]].

Scaling laws may fail under coarse-sampling
The most-used indication of criticality in neuronal dynamics is the avalanche-size distribution p(S). However, at criticality, the avalanche-duration distribution p(D) and the average avalanche size for a given duration, ⟨S⟩(D), should also follow power-laws, each with a respective critical exponent [17]:

p(S) ∼ S^(−τ),  p(D) ∼ D^(−α),  ⟨S⟩(D) ∼ D^γ.  (11)

The exponents are related to one another by the scaling relationship*

(α − 1)/(τ − 1) = γ.  (12)

Lastly, at criticality, avalanches of vastly different duration still have the same average shape: the activity s(t, D) at any given time t (within the avalanche's lifetime D) is described by a universal scaling function ℱ, so that

s(t, D) = D^(γ−1) ℱ(t/D).  (13)

In other words, rescaling s(t, D) → s(t, D)/D^(γ−1) and t → t/D should result in a data collapse of the average avalanche shapes of all durations.
The shape collapse of Eq. (13) is performed following the algorithm described in [20]. Briefly, the avalanche profiles s(t, D) of all avalanches with the same duration D are averaged, and the resulting curve is rescaled to t/D and interpolated on 1000 points in the [0, 1] range. Avalanches with D < 4Δt, or with less than 20 realizations, are removed. The chosen collapse exponent γ is the one that minimizes the error function

E(γ) = ⟨Var_D[s_D(t/D)/D^(γ−1)]⟩_(t/D) / Z²,

where s_D(t/D) is the interpolated average shape of avalanches with duration D, and Z = max_(D, t/D)[s_D(t/D)/D^(γ−1)] − min_(D, t/D)[s_D(t/D)/D^(γ−1)]. The variance Var(·) is calculated over all valid D, and the mean ⟨·⟩ is taken over the scaled duration t/D. For interpolation and minimization we use the scipy [21] functions interpolate.InterpolatedUnivariateSpline and optimize.minimize, respectively. From Eqs. (11)-(13), we have three independent ways to determine the exponent γ. Consistency between the three is a further test of criticality. However, to the best of our knowledge, experimental evidence with the full set of scaling laws is mainly observed under sub-sampling, i.e., from spikes of in vitro recordings† [23,24], but see Ref. [22].

* In this subsection, γ exclusively denotes the scaling exponent; the decay exponent (which is denoted with γ in the rest of the manuscript) equals 1 for all results presented here.
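A minimal sketch of the collapse-error minimization on synthetic profiles (pure numpy, with a grid search instead of scipy's optimizer; the pre-filtering of short or rare durations is omitted):

```python
import numpy as np

def collapse_error(gamma, shapes, npts=1000):
    """Variance across durations of the rescaled profiles s_D(t/D) / D**(gamma-1),
    averaged over scaled time and normalized by the squared span."""
    grid = np.linspace(0, 1, npts)
    rescaled = []
    for D, profile in shapes.items():
        t = np.linspace(0, 1, len(profile))   # scaled time t/D
        rescaled.append(np.interp(grid, t, profile / D**(gamma - 1)))
    r = np.array(rescaled)
    return r.var(axis=0).mean() / (r.max() - r.min())**2

# synthetic profiles obeying s(t, D) = D**(gamma-1) * F(t/D) with gamma = 1.5
true_gamma = 1.5
shapes = {D: D**(true_gamma - 1) * np.sin(np.pi * np.linspace(0, 1, D))
          for D in (8, 16, 32, 64)}
gammas = np.linspace(1.0, 2.0, 201)
best = gammas[int(np.argmin([collapse_error(g, shapes) for g in gammas]))]
# `best` recovers a value close to true_gamma
```

At the correct exponent the rescaled curves lie on top of each other, so the variance across durations (and hence the error) is minimal.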
The absence of scaling laws in coarse-sampled data can be explained by how coarse-sampling biases the average shape: the cut-off in p(S) near the number of electrodes, S = N_E, implies that ⟨S⟩(D) < N_E. From Eq. (11) we then have D < N_E^(1/γ). If γ > 1, the cut-off in p(S) causes a much earlier cut-off in both p(D) and ⟨S⟩(D).
Given that experiments typically have N_E ∼ 10² electrodes, p(D) of a pure branching process (with γ = 2) would span a power-law over less than one order of magnitude (D < N_E^(1/2) = 10). However, the typical standard to reliably fit a power-law is at least two orders of magnitude [25]. While this is problematic under coarse-sampling (Fig 6), we have shown that the hard cut-off is not present under sub-sampling (Fig 7).
When we apply the independent measurements of γ to our model (with critical dynamics) under sub-sampling, we find consistent exponents across all measurements (Fig F). Moreover, the exponents we find under sub-sampling are compatible with experimental values, e.g. γ_exp = 1.3 ± 0.05 reported in Ref. [23] and 1.3 ≤ γ_exp ≤ 1.5 reported in Ref. [24]. Notably, the exponents found in our model and experimentally differ from those expected for a pure branching process (γ = 2). While not the focus here, we believe this deviation derives from topological properties of the network, as was also observed in Ref. [23]; distance-dependent weights of local topologies affect avalanche duration and size and yield exponents different from those of a branching process (which does not face any topological constraints).
When we apply the independent measurements of γ to our model (with critical dynamics) under coarse-sampling, the exponents differ from measurement to measurement: the exponent obtained from the shape collapse (γ ≈ 0.74) greatly differs from the other two (γ ≈ 1.74, γ ≈ 1.62; Fig F). Moreover, the extremely short range available to fit p(D) and ⟨S⟩(D) with power-laws (1 ≤ D ≤ 6) makes the estimated exponents unreliable.
To conclude, the full set of critical exponents revealed criticality only under sub-sampling. Only in this case did we observe both a match between all measurements of the exponent γ and power-law behavior extending over a range large enough to reliably fit them.

Coarse Graining the Ising Model
To demonstrate how general the impact of measurement overlap is, we study the two-dimensional Ising model. The Ising model is well understood and often serves as a textbook example for renormalization group (RG) theory in statistical physics [26]. In this framework, the system is coarse grained by merging multiple parts (spins) into one. An intuitive way to think of it is zooming out of a photograph on a computer screen: a pixel can only show one color, although there might be more details hidden underneath. Coarse graining is also known as real-space block-spin renormalization, and it can be used to assess criticality.

† An exception can be found in Ref. [22], where scaling relations were found to hold in vivo. However, in this study, fluorescence imaging was coupled with very large time bins (Δt ≥ 0.47 s), the effect of which remains to be understood in full.
Please note that coarse graining is different from coarse-sampling: conventionally, coarse graining perfectly tiles the space without any measurement overlap (see Fig G).
The two-dimensional Ising model consists of N = L² spins with states s_i = ±1, arranged on a square lattice of side length L. In its simplest form, it is given by the Hamiltonian H(s⃗) = −Σ_⟨i,j⟩ s_i s_j, where ⟨i, j⟩ denotes all pairs of nearest-neighboring spins. The probability of observing a configuration s⃗ is given by the Boltzmann distribution p(s⃗) = e^(−H(s⃗)/k_B T)/Z, where T is the temperature of the system, k_B is the Boltzmann constant (here, k_B = 1) and Z is the partition function that normalizes the distribution. As the temperature approaches T_c = 2/ln(1 + √2), the system undergoes a second-order phase transition between a disordered spin configuration (T > T_c) and an ordered state of aligned spin orientations (T < T_c). Many observables diverge at T = T_c for N → ∞, such as the correlation length, specific heat and susceptibility [18,26].
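For illustration, m(T) in the two phases can also be estimated with a plain single-spin-flip Metropolis simulation, a far simpler alternative to the multicanonical method used in this supplement; lattice size, temperatures and sweep counts below are illustrative only:

```python
import numpy as np

def ising_metropolis(L, T, sweeps, rng):
    """Single-spin-flip Metropolis for the 2D Ising model (J = 1, k_B = 1,
    periodic boundaries, ordered initial state). Returns the normalized
    absolute magnetization, averaged over the second half of the sweeps."""
    s = np.ones((L, L), dtype=int)
    ms = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nb          # energy change if spin (i, j) is flipped
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        if sweep >= sweeps // 2:
            ms.append(abs(s.sum()) / (L * L))
    return float(np.mean(ms))

rng = np.random.default_rng(4)
m_low = ising_metropolis(L=12, T=1.5, sweeps=300, rng=rng)   # ordered phase
m_high = ising_metropolis(L=12, T=4.0, sweeps=300, rng=rng)  # disordered phase
# well below T_c ~ 2.269 the magnetization stays near 1; well above, it is small
```

Unlike the multicanonical method, such a simulation samples one temperature at a time and suffers from critical slowing down near T_c, which is why the paper's approach is preferable for the quantitative crossing analysis.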
We perform Monte Carlo simulations of the 2D Ising model using the massively parallel multicanonical method [27,28]. The multicanonical method offers numerous advantages over conventional Monte Carlo approaches. For instance, instead of simulating at a single temperature, one simulation covers the whole energy space. High-precision canonical expectation values of observables are recovered for any desired temperature during a post-production step. Thereby, we obtain the normalized absolute magnetization as a function of temperature, m(T) = (1/N) ⟨|Σ_i s_i|⟩.

Block-Spin Transformation
Measurement overlap causes individual sources to contribute multiple times to a signal. For the Ising model, a similar process takes place when coarse graining is applied. In the process, spins are grouped into blocks of size b × b (here b = 4), and every block takes only a single value. The value of each block can be obtained in different ways.
• Most commonly, the majority rule [26] is employed: the block is assigned +1 (−1) if the majority of its spins has value +1 (−1). In this case, the contributions of multiple sources are integrated. Hence, we compare this rule to the effects observed when neuronal systems are coarse-sampled.
• Alternatively, the decimation rule is employed: all except a single spin value within a block are discarded, and the block value is assigned from the single spin that is kept. Hence, we compare this rule to the effects observed when neuronal systems are sub-sampled.
This block-spin transformation rescales the number of spins by a factor of 1/b², effectively reducing the system size (which will cause finite-size effects). It is well known that, when studying the magnetization, the effective sizes of the compared systems (after rescaling) have to match.
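The block-spin transformation with a tunable distance between blocks can be sketched as follows (non-periodic boundaries for simplicity; the convention for majority-rule ties is arbitrary):

```python
import numpy as np

def block_spin(s, b, stride, rule="majority"):
    """Block-spin transformation of a spin lattice s (values +/-1).
    b: block size; stride: distance between block origins (stride < b creates
    measurement overlap, stride > b leaves spins unsampled).
    rule: 'majority' ('coarse') or 'decimation' ('sub', keep one spin per block)."""
    L = s.shape[0]
    origins = range(0, L - b + 1, stride)
    out = np.zeros((len(origins), len(origins)), dtype=int)
    for bi, i in enumerate(origins):
        for bj, j in enumerate(origins):
            block = s[i:i+b, j:j+b]
            if rule == "majority":
                out[bi, bj] = 1 if block.sum() >= 0 else -1  # ties -> +1 (convention)
            else:
                out[bi, bj] = block[0, 0]   # decimation: keep one corner spin
    return out

rng = np.random.default_rng(5)
s = rng.choice([-1, 1], size=(64, 64))
no_overlap = block_spin(s, b=4, stride=4)              # standard blocking
overlap = block_spin(s, b=4, stride=2)                 # blocks share spins
sub = block_spin(s, b=4, stride=4, rule="decimation")  # equals s[::4, ::4]
```

With decimation and matching stride, the transformation reduces to plain lattice decimation, which leaves spin-spin correlations untouched; the majority rule instead integrates all spins of a block, mirroring coarse-sampling.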

Overlap
To mimic the measurement overlap, we now introduce an overlap between the blocks of the Ising-model coarse graining (Fig G). In the native block-spin transformation, blocks do not overlap; in terms of spins, the linear distance d_b between two blocks then matches the block size, d_b = b = 4 (Fig Ga). When the distance between blocks is smaller than the block size, d_b < b (Fig Gb), measurement overlap is created, while for d_b > b parts of the system are not sampled. Clearly, the changes that such an overlap causes on rescaled observables should depend on the rule used to perform the block-spin transformation.
Here, we look at combinations of block size b = 4 with distances between blocks of d_b = 2, d_b = 4 and d_b = 8. In order to preserve the effective system size (L_eff = 16), we thus perform simulations for L = 32, L = 64 and L = 128, respectively.
Using the majority rule and no overlap, which is the default real-space renormalization-group approach, the blocked magnetization deviates from the unblocked m(T) away from the critical point (Fig Gc, d_b = b): T_c can be obtained by finding the crossing of m between an unblocked (L = 16) and a blocked (L = 64, b = 4) system; only at T_c is the measured m invariant under the block-rescaling transformation.

Majority Rule "coarse"
What is the impact of the overlap for the majority rule? For increasing overlap (d_b < b), the crossing occurs at T > T_c (Fig Gc). This is because sharing spins increases the correlations between blocks (pairwise and higher-order), making it more likely that the rescaled spins point in the same direction. In other words, it biases the measurement of m towards order, increasing the estimated critical temperature.
For absent overlap (d_b > b), only every other block is measured. This decorrelates the spins near the borders of each block and, therefore, decreases the correlation between blocks. As a consequence, the spin orientation of the blocked system moves towards disorder, decreasing the measured magnetization m.

Decimation Rule "sub"
If the decimation rule is used instead of the majority rule, the blocking procedure does not alter the correlation between spins before and after the transformation (Fig Gd). As a consequence, the magnetization generally remains unaltered. However, in the disordered phase, we still notice a systematic deviation from the unblocked system (with L = 64). This deviation can be fully attributed to finite-size effects: the distribution of realizable magnetizations in the disordered phase follows a Gaussian with mean zero and variance proportional to the (effective) number of spins. Due to the definition of the magnetization with an absolute value, the expectation value of the magnetization for T → ∞ is determined by the (effective) system size.
As was the case when coarse-sampling neuronal systems, the increase in correlation that ultimately leads to biased observables is caused by integrating weighted contributions from various sources. This is not the case when the decimation rule is applied. Note that the impact of the different block-transformation rules on m(T) does not necessarily carry over to other canonical observables, such as the energy E(T) [26].

No overlap and comparison of rules
Fig G: Coarse graining the Ising model. a: Representation of the standard coarse graining, where the block size matches the distance between blocks (d_b = b = 4); no overlap is created. b: Coarse graining with block size b = 4 and a distance between blocks of d_b = 3; overlapping spins (orange) are shared by two or more blocks. c: With the "coarse" majority rule, overlap impacts the spontaneous magnetization m(T). Only the crossing between the unblocked (L = 16) and the non-overlapping blocked system (d_b = b, L = 64) happens at T = T_c, as would be expected. Intriguingly, overlap (d_b < b, L = 32) pushes the system towards higher magnetization, where spins appear more aligned. On the other hand, the absence of overlap (d_b > b, L = 128) causes smaller magnetization, where spins appear more random. (Note that, in order to avoid finite-size effects, the target size after coarse graining has to match, here L_eff = 16; consequently, depending on the ratio between b and d_b, simulations have different system sizes.) d: Comparison between the fully sampled, unblocked system and blocked systems using the majority rule ("coarse") and the decimation rule ("sub") for d_b = b = 4; all simulations and curves for L = 64. In the ordered, low-temperature phase, the sub curve matches the fully sampled system; only in the high-temperature phase do deviations occur, due to finite-size effects (the magnetization for T → ∞ approaches the value expected for the rescaled L = 16 system). The coarse curve is systematically biased towards more ordered states.

Fig H: Avalanche-size probability p(S) for different topologies. Local corresponds to the topology used in the main paper, Orlandi corresponds to the model described in [1], and Random corresponds to a completely random topology. Increasing γ (decreasing electrode FOV) results in a loss of the cut-off of p(S) near S ∼ N_E as the coarse-sampling becomes more spike-like. Bin size for all distributions is Δt = 4 ms.

Fig I: Same as Fig H, but with bin size Δt = 8 ms for all distributions.