
Sampling effects and measurement overlap can bias the inference of neuronal avalanches

  • Joao Pinheiro Neto ,

    Contributed equally to this work with: Joao Pinheiro Neto, F. Paul Spitzner

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany

  • F. Paul Spitzner ,

    Contributed equally to this work with: Joao Pinheiro Neto, F. Paul Spitzner

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany

  • Viola Priesemann

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    viola.priesemann@ds.mpg.de

    Affiliations Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany, Bernstein Center for Computational Neuroscience, Göttingen, Germany, Georg-August University Göttingen, Göttingen, Germany

Abstract

To date, it is still impossible to sample the entire mammalian brain with single-neuron precision. This forces one to either use spikes (focusing on few neurons) or to use coarse-sampled activity (averaging over many neurons, e.g. LFP). Naturally, the sampling technique impacts inference about collective properties. Here, we emulate both sampling techniques on a simple spiking model to quantify how they alter observed correlations and signatures of criticality. We describe a general effect: when the inter-electrode distance is small, electrodes sample overlapping regions in space, which increases the correlation between the signals. For coarse-sampled activity, this can produce power-law distributions even for non-critical systems. In contrast, spike recordings do not suffer this particular bias and underlying dynamics can be identified. This may resolve why coarse measures and spikes have produced contradicting results in the past.

Author summary

The criticality hypothesis associates functional benefits with neuronal systems that operate in a dynamic state at a critical point. A common way to probe the dynamic state of a neuronal system is to measure characteristics of so-called avalanches—distinct cascades of neuronal activity that are separated in time. For example, the probability distribution of the avalanche size will resemble a power law if a neuronal system is critical. Thus, power-law distributions have become a common indicator of critical dynamics.

Here, we use simple models and numeric simulations to show that the dynamic state of a system is not the only factor that shapes avalanche distributions. Aspects that are related purely to the sampling of the system (such as inter-electrode distance) or to the way avalanches are calculated (such as thresholding and time binning) can shape avalanche distributions as well. On a mechanistic level we find that, if electrodes record spatially overlapping regions, the signals of electrodes may be spuriously correlated: multiple electrodes might pick up activity from the same neuron. Subsequently, when avalanches are inferred, such a measurement overlap can produce power-law distributions even if the underlying system is not critical.

1 Introduction

For more than two decades, it has been argued that the cortex might operate at a critical point [1–7]. The criticality hypothesis states that by operating at a critical point, neuronal networks could benefit from optimal information-processing properties. Properties maximized at criticality include the correlation length [8], the autocorrelation time [6], the dynamic range [9, 10] and the richness of spatio-temporal patterns [11, 12].

Evidence for criticality in the brain often derives from measurements of neuronal avalanches. Neuronal avalanches are cascades of neuronal activity that spread in space and time. If a system is critical, the probability distribution of avalanche size p(S) follows a power law p(S) ∼ S^−α [8, 13]. Such power-law distributions have been observed repeatedly in experiments since they were first reported by Beggs & Plenz in 2003 [1].

However, not all experiments have produced power laws and the criticality hypothesis remains controversial. It turns out that results for cortical recordings in vivo differ systematically:

Studies that use what we here call coarse-sampled activity typically produce power-law distributions [1, 14–23]. In contrast, studies that use sub-sampled activity typically do not [16, 24–28]. Coarse-sampled activity includes LFP, M/EEG, fMRI and potentially calcium imaging, while sub-sampled activity foremost comprises spike recordings. We hypothesize that the apparent contradiction between coarse-sampled (LFP-like) data and sub-sampled (spike) data can be explained by the differences in the recording and analysis procedures.

In general, the analysis of neuronal avalanches is not straightforward. In order to obtain avalanches, one needs to define discrete events. While spikes are discrete events by nature, a coarse-sampled signal has to be converted into a binary form. This conversion hinges on thresholding the signal, which can be problematic [29–32]. Furthermore, events have to be grouped into avalanches, and this grouping is typically not unique [24]. As a result, avalanche-size distributions depend on the choice of the threshold and temporal binning [1, 33].

In this work, we show how thresholding and temporal binning interact with a commonly ignored effect [16, 34]. Under coarse-sampling, neighboring electrodes may share the same field-of-view. This creates a distance-dependent measurement overlap so that the activity that is recorded at different electrodes may show spurious correlations, even if the underlying spiking activity is fully uncorrelated. We show that the inter-electrode distance may therefore impact avalanche-size distributions more severely than the underlying neuronal activity.

In this numeric study, we explore the role of the recording and analysis procedures on a locally-connected network of simple binary neurons. Focusing on avalanche distributions, we compare apparent signs of criticality under sub-sampling versus coarse-sampling. To that end, we vary the distance to criticality of the underlying system over a wide range, from uncorrelated (Poisson) to highly-correlated (critical) dynamics. We then employ a typical analysis pipeline to derive signatures of criticality and study how results depend on electrode distance and temporal binning.

2 Results

The aim of this study is to understand how the sampling of neural activity affects the inference of the underlying collective dynamics. This requires us to be able to precisely set the underlying dynamics. Therefore, we use the established branching model [35], which neglects many biophysical details, but it allows us to precisely tune the dynamics and to set the distance to criticality.

To study sampling effects, we use a two-level setup inspired by [34]: an underlying network model, on which activity is then sampled with a grid of 8 × 8 virtual electrodes. Where possible, parameters of the model, the sampling and the analysis are motivated by values from experiments (see Methods).

In order to evaluate sampling effects, we want to precisely set the underlying dynamics. The branching model meets this requirement and is well understood analytically [11, 27, 34–36]. Inspired by biological neuronal networks, we simulate the branching dynamics on a 2D topology with NN = 160 000 neurons where each neuron is connected to K ≈ 1000 local neighbors. To emphasize the locality, the synaptic strength of connections decays with the distance between neurons. For a detailed comparison with different topologies, see the Supplemental Information (Fig A in S1 Text).

2.1 Avalanches are extracted differently under coarse-sampling and sub-sampling

At each electrode, we sample both the spiking activity of the closest neuron (sub-sampling) and a spatially averaged signal that emulates LFP-like coarse-sampling.

Both coarse-sampling and sub-sampling are sketched in Fig 1A: For coarse-sampling (left), the signal from each electrode channel is composed of varying contributions (orange circles) of all surrounding neurons. The contribution of a particular spike from neuron i to electrode k decays as ∼ 1/dik^γ with the neuron-to-electrode distance dik and electrode contribution γ = 1. In contrast, if spike detection is applied (Fig 1A, right), each electrode signal captures the spiking activity of few individual neurons (highlighted circles).

Fig 1. Sampling affects the assessment of dynamic states from neuronal avalanches.

A: Representation of the sampling process of neurons (black circles) using electrodes (orange squares). Under coarse-sampling (e.g. LFP), activity is measured as a weighted average in the electrode’s vicinity. Under sub-sampling (spikes), activity is measured from few individual neurons. B: Fully sampled population activity of the neuronal network, for states with varying intrinsic timescales τ: Poisson, subcritical, reverberating and critical (cf. Table 1). C: Avalanche-size distribution p(S) for coarse-sampled (left) and sub-sampled (right) activity. Sub-sampling allows for separating the different states, whereas coarse-sampling leads to p(S) ∼ S^−α for all states except Poisson. Parameters: Electrode contribution γ = 1, inter-electrode distance dE = 400 μm and time-bin size Δt = 8 ms.

https://doi.org/10.1371/journal.pcbi.1010678.g001

In order to focus on the key mechanistic differences between the two sampling approaches, we keep the two models as simple as possible. (This also matches the simple underlying dynamics, for which we can precisely set the distance to criticality). However, especially for coarse-sampling, this yields a rather crude approximation: More realistic, biophysically detailed LFP models would yield much more complex distance dependencies, which are an open field of research [37–40]. Our chosen electrode-contribution of γ = 1 assumes a large field of view, which implies the strongest possible measurement overlap to showcase the coarse-sampling effect. As this is an important assumption, we consider electrodes with a smaller field of view in Sec. 2.5 and provide an extended discussion in the Supplemental Information (Fig B in S1 Text).

To test both recording types for criticality, we apply the standard analysis that provides a probability distribution p(S) of the avalanche size S: In theory, an avalanche describes a cascade of activity where individual units—here neurons—are consecutively and causally activated. Each activation is called an event. The avalanche size is then the total number of events in the time between the first and the last activation. A power law in the size distribution of these avalanches is a hallmark of criticality [6]. In practice, the actual size of an avalanche is hard to determine because individual avalanches are not clearly separated in time; the coarse-sampled signal is continuous-valued and describes the local population. In order to extract binary events for the avalanche analysis (Fig 2), the signal has to be thresholded—which is not necessary for spike recordings, where binary events are inherently present as timestamps.

Fig 2. Analysis pipeline for avalanches from sampled data.

I: Under coarse-sampling (LFP-like), the recording is demeaned and thresholded. II: The timestamps of events are extracted. Under sub-sampling (spikes), timestamps are obtained directly. III: Events from all channels are binned with time-bin size Δt and summed. The size S of each neuronal avalanche is calculated. IV: The probability of an avalanche size is given by the (normalized) count of its occurrences throughout the recording.

https://doi.org/10.1371/journal.pcbi.1010678.g002

2.2 The branching parameter m sets the distance to criticality

In order to compare apparent signatures of criticality with the true, underlying dynamics, we first give some intuition about the branching model. The branching parameter m quantifies the probability of postsynaptic activations, or in other words, how many subsequent spikes are caused (on average) by a single spike. With increasing m → 1, a single spike triggers increasingly long cascades of activity. These cascades determine the timescale over which fluctuations occur in the population activity—this intrinsic timescale τ describes the dynamic state of the system and its distance to criticality.

The intrinsic timescale can be analytically related to the branching parameter by τ ∼ −1/ln(m). As m → 1, τ → ∞ and the population activity becomes “bursty”. We illustrate this in Fig 1B and Table 1: For Poisson-like dynamics (m ≈ 0), the intrinsic timescale vanishes and the activity between neurons is uncorrelated. As the distance to criticality becomes smaller (m → 1), the intrinsic timescale becomes larger (see Table 1), fluctuations become stronger, and the spiking activity becomes more and more correlated in space and time. Apart from critical dynamics, of particular interest in the above list is the “reverberating regime”: For practical reasons, we assign a specific value of m (Table 1), which represents typical values observed in vivo [41, 42]. However, this choice is meant as a representation of a regime that is close to critical, but not directly at the critical point. In this regime, many of the benefits of criticality emerge, while the system can maintain a safety margin from instability [41].
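
As a minimal numerical illustration of this relation (a sketch with assumed example values of m; only m = 0.999 and the simulation step δt = 2 ms appear explicitly in the Methods), one can tabulate how quickly τ = −δt/ln(m) grows as m approaches one:

```python
import numpy as np

# Intrinsic timescale tau = -delta_t / ln(m); delta_t = 2 ms as in the Methods.
# The m values below are illustrative examples, not the exact ones of Table 1.
delta_t = 2.0  # ms

for m in [0.9, 0.98, 0.999]:
    tau = -delta_t / np.log(m)
    print(f"m = {m:.3f}  ->  tau = {tau:7.1f} ms")
# m -> 0 corresponds to Poisson-like dynamics with a vanishing timescale,
# m -> 1 lets tau diverge (critical point).
```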

Table 1. Parameters and intrinsic timescales of dynamic states.

All combinations of branching parameter m and per-neuron drive h result in a stationary activity of 1 Hz per neuron. Due to the recurrent topology, it is more appropriate to consider the measured autocorrelation time τ̂ rather than the analytic timescale τ.

https://doi.org/10.1371/journal.pcbi.1010678.t001

2.3 Coarse-sampling can cloud differences between dynamic states

Irrespective of the applied sampling, the inferred avalanche distribution should represent the true dynamic state of the system.

However, under coarse-sampling (Fig 1C, left), the avalanche-size distributions of the subcritical, reverberating and critical state are virtually indistinguishable. Intriguingly, all three show a power law. The observed exponent α = 1.5 is associated with a critical branching process. Only the uncorrelated (Poisson-like) dynamics produce a non-power-law decay of the avalanche-size distribution.

Under sub-sampling (Fig 1C, right), each dynamic state produces a unique avalanche-size distribution. Only the critical state, with the longest intrinsic timescale, produces the characteristic power law. Even the close-to-critical, reverberating regime is clearly distinguishable and features a “subcritical decay” of p(S).

2.4 Measurement overlap causes spurious correlations

Why are the avalanche-size distributions of different dynamic states hard to distinguish under coarse-sampling? The answer is hidden within the cascade of steps involved in the recording and analysis procedure. Here, we separate the impact of the involved processing steps. Most importantly, we discuss the consequences of measurement overlap—which we identify as a key explanation for the ambiguity of the distributions under coarse-sampling.

In order to obtain discrete events from the continuous time series for the avalanche analysis, each electrode signal is filtered and thresholded, binned with a chosen time-bin size Δt and, subsequently, the events from all channels are stacked. This procedure is problematic because (i) electrode proximity adds spatial correlations, (ii) temporal binning adds temporal correlations, and (iii) thresholding adds various types of bias [29–31].

As a result of the involved analysis of coarse-sampled data, spurious correlations are introduced that are not present in sub-sampled data. We showcase this effect in Fig 3, where the Pearson correlation coefficient between two virtual electrodes is compared for both the (thresholded and binned) coarse-sampled and sub-sampled activity. For the same parameters and dynamic state, coarse-sampling leads to larger correlations than sub-sampling.

Fig 3. Coarse-sampling leads to greater correlations than sub-sampling.

Pearson correlation coefficient between the signals of two adjacent electrodes for the different dynamic states. Even for independent (uncorrelated) Poisson activity, measured correlations under coarse-sampling are non-zero. Parameters: Electrode contribution γ = 1, inter-electrode distance dE = 400 μm and time-bin size Δt = 8 ms.

https://doi.org/10.1371/journal.pcbi.1010678.g003

Depending on the sensitivity and distance between electrodes, multiple electrodes might record activity from the same neuron. This measurement overlap (or volume conduction effect) increases the spatial correlations between electrodes—and because the signals from multiple electrode channels are combined in the analysis, correlations can originate from measurement overlap alone.
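
The mechanism can be illustrated with a toy calculation (a sketch with hypothetical one-dimensional geometry and parameters, independent of the actual simulation): independent Poisson neurons produce correlated coarse-sampled signals as soon as two electrodes weight the same neurons, whereas two sub-sampled (single-neuron) signals remain uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy example: independent Poisson neurons on a line, two electrodes whose
# coarse signals are distance-weighted sums of *all* neurons (gamma = 1).
n_neurons, n_steps, p_spike = 200, 20_000, 0.02   # assumed toy parameters
x_neurons = rng.uniform(0, 1000, n_neurons)       # neuron positions (um)
x_e1, x_e2 = 400.0, 600.0                         # electrode positions (um)

spikes = (rng.random((n_steps, n_neurons)) < p_spike).astype(float)

def coarse_signal(x_e, gamma=1.0):
    d = np.abs(x_neurons - x_e) + 10.0            # small offset mimics the dead zone
    return spikes @ (1.0 / d**gamma)              # V(t) = sum_i s_i(t) / d_i**gamma

# Coarse-sampling: both electrodes see (weighted) activity of the same neurons.
v1, v2 = coarse_signal(x_e1), coarse_signal(x_e2)
print("coarse-sampled correlation:", np.corrcoef(v1, v2)[0, 1])

# Sub-sampling: each electrode records only its single closest neuron.
s1 = spikes[:, np.argmin(np.abs(x_neurons - x_e1))]
s2 = spikes[:, np.argmin(np.abs(x_neurons - x_e2))]
print("sub-sampled correlation:   ", np.corrcoef(s1, s2)[0, 1])
```

In such a setup the coarse-sampled correlation comes out systematically positive, although every neuron spikes independently, while the sub-sampled correlation fluctuates around zero—the overlap alone generates the correlation.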

2.5 Measurement overlap depends on electrodes’ field of view

The amount of measurement overlap between electrodes is effectively determined by the electrodes’ field of view, that is, the distance dependence with which a neuron’s activity si contributes to the electrode signal Vk (Fig 4). We consider electrode signals Vk(t) = Σi si(t)/dik^γ, where the exponent γ indicates how narrow (γ = 2) or wide (γ = 1) the field of view is. Note that realistic distance dependencies are more complex and depend on many factors, such as neuron morphology and tissue filtering [37–40].
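
A short sketch of this signal definition (toy positions and a single activity snapshot; the parameters are assumptions, not the simulation values) shows how γ controls the share of the electrode's sensitivity that lies with distant neurons:

```python
import numpy as np

# Sketch of the coarse-sampled electrode signal assumed in the text,
#   Vk(t) = sum_i si(t) / dik**gamma,
# and of how the exponent gamma sets the electrode's field of view.
rng = np.random.default_rng(0)
n_neurons = 1000
pos_n = rng.uniform(0, 2000, size=(n_neurons, 2))   # toy neuron positions (um)
pos_e = np.array([1000.0, 1000.0])                  # one electrode at the center

d = np.linalg.norm(pos_n - pos_e, axis=1) + 10.0    # neuron-to-electrode distances (um);
                                                    # the small offset mimics the dead zone
s = (rng.random(n_neurons) < 0.05).astype(float)    # one snapshot of spiking activity

for gamma in (1.0, 2.0):
    w = 1.0 / d**gamma
    v_k = (w * s).sum()                             # electrode signal for this snapshot
    far = w[d > 400].sum() / w.sum()                # sensitivity to neurons beyond 400 um
    print(f"gamma = {gamma}: Vk = {v_k:.4f}, far-field sensitivity = {far:.2f}")
```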

Fig 4. The signal of an extracellular neuronal recording depends on neuronal morphologies, tissue filtering, and other factors, which all impact the coarse-sampling effect.

In effect, an important factor is the distance of the neuron to the electrode. Here, we show how the distance dependence, with which a neuron’s activity contributes to an electrode, determines the collapse of avalanche distributions. A: Biophysically plausible distance dependence of LFP, reproduced from [38]. B: Sketch of a neuron’s contribution to an electrode at distance dik, as motivated by (A). The decay exponent γ characterizes the field of view. C–F: Avalanche-size distribution p(S) for coarse-sampling with the sketched electrode contributions. C, D: With a wide field of view, distributions are hardly distinguishable between dynamic states. In contrast, for spiking activity the differences are clear (light shades in C). E, F: With a narrower field of view, distributions do not fully collapse on top of each other, but differences between reverberating and critical dynamics remain hard to identify. Parameters: Inter-electrode distance dE = 400 μm and time-bin size Δt = 8 ms. Other parameter combinations in Fig B in S1 Text.

https://doi.org/10.1371/journal.pcbi.1010678.g004

We find that the collapse of avalanche-size distributions from different dynamic states is strongest when the field of view is wide—i.e. if there is stronger measurement overlap. In that case, coarse-sampled distributions are hardly distinguishable (Fig 4C and 4D). For a narrow field of view, distributions are still hard to distinguish but do not fully collapse (Fig 4E and 4F).

In order to study the impact of inter-electrode distance and temporal binning, in the following we focus on the wide field of view (γ = 1) where the avalanche collapse is most pronounced.

2.6 The effect of inter-electrode distance

Similar to the field of view of electrodes, avalanche-size distributions under coarse-sampling depend on the inter-electrode distance dE (Fig 5A). For small inter-electrode distances, the overlap is strong. Thus, the spatial correlations are strong. Strong correlations manifest themselves in larger avalanches. However, under coarse-sampling the maximal observed size S of an avalanche is in general limited by the number of electrodes NE [34] (cf. Fig B in S1 Text). This limit due to NE manifests as a sharp cut-off and—in combination with spurious measurement correlations due to dE—can shape the probability distribution. In the following, we show that these factors can be more dominant than the actual underlying dynamics.

Fig 5. Under coarse-sampling, apparent dynamics depend on the inter-electrode distance dE.

A: For small distances (dE = 100 μm), the avalanche-size distribution p(S) indicates (apparent) supercritical dynamics: p(S) ∼ S^−α with a sharp peak near the electrode number NE = 64. For large distances (dE = 500 μm), p(S) indicates subcritical dynamics: p(S) ∼ S^−α with a pronounced decay already for S < NE. There exists a sweet-spot value (dE = 250 μm) for which p(S) indicates critical dynamics: p(S) ∼ S^−α until the cut-off is reached at S = NE. The particular sweet-spot value of dE depends on time-bin size (here, Δt = 4 ms). As a guide to the eye, dashed lines indicate S^−1.5. B: The branching parameter mav inferred from neuronal avalanches is also biased by dE. Apparent criticality (mav = 1, dotted line) is obtained with dE = 250 μm and Δt = 4 ms but also with dE = 400 μm and Δt = 8 ms. B, Inset: representation of the measurement overlap between neighboring electrodes; when electrodes are placed close to each other, spurious correlations are introduced.

https://doi.org/10.1371/journal.pcbi.1010678.g005

In theory, supercritical dynamics are characterized by a sharp peak in the avalanche distribution at S = NE. Independent of the underlying dynamics, such a peak can originate from small electrode distances (Fig 5A, dE = 100 μm): Avalanches are likely to span the small area covered by the electrode array. Furthermore, due to strong measurement overlap, individual events of the avalanche may contribute strongly to multiple electrodes.

Subcritical dynamics are characterized by a pronounced decay already for S < NE. Independent of the underlying dynamics, such a decay can originate from large electrode distances (Fig 5A, dE = 500 μm): Locally propagating avalanches are unlikely to span the large area covered by the electrode array. Furthermore, due to the weaker measurement overlap, individual events of the avalanche may contribute strongly to one electrode (or to multiple electrodes but only weakly).

Consequently, there exists a sweet-spot value of the inter-electrode distance dE for which p(S) appears convincingly critical (Fig 5A, dE = 250 μm): a power law p(S) ∼ S^−α spans all sizes up to the cut-off at S = NE. However, the dependence on the underlying dynamic state is minimal.

Independently of the apparent dynamics, we observe the discussed cut-off at S = NE, which is also often seen in experiments (Fig 6). Note, however, that this cut-off only occurs under coarse-sampling (see again Fig 1C). When spikes are used instead (Fig 7), the same avalanche can reach an electrode repeatedly in quick succession—whereas such double-events are circumvented when thresholding at the population level. For more details see Fig B in S1 Text.

Fig 6. In vivo and in vitro avalanche-size distributions p(S) from LFP depend on time-bin size Δt.

Experimental LFP results are reproduced by many dynamic states of coarse-sampled simulations. A: Experimental in vivo results (LFP, human) from an array of 60 electrodes, adapted from [43]. B: Experimental in vitro results (LFP, culture) from an array with 60 electrodes, adapted from [1]. C–F: Simulation results from an array of 64 virtual electrodes and varying dynamic states, with time-bin sizes between 2 ms ≤ Δt ≤ 16 ms, γ = 1 and dE = 400 μm. Subcritical, reverberating and critical dynamics produce approximate power-law distributions with bin-size-dependent exponents α. Insets: Log-log plot, distributions are fitted to p(S) ∼ S^−α, fit range S ≤ 50. The magnitude of α decreases as ∼ Δt^−β, with −β indicated next to the insets, cf. Table 2.

https://doi.org/10.1371/journal.pcbi.1010678.g006

Fig 7. In vivo avalanche-size distributions p(S) from spikes depend on time-bin size Δt.

In vivo results from spikes are reproduced by sub-sampled simulations of subcritical to reverberating dynamics. Neither spike experiments nor sub-sampled simulations show the cut-off that is characteristic under coarse-sampling. A: Experimental in vivo results (spikes, awake monkey) from an array of 16 electrodes, adapted from [24]. The pronounced decay and the dependence on bin size indicate subcritical dynamics. B: Experimental in vitro results (spikes, culture DIV 34) from an array with 59 electrodes, adapted from [44]. Avalanche-size distributions are largely independent of time-bin size and resemble a power law over four orders of magnitude. In combination, this indicates a separation of timescales and critical dynamics (or even supercritical dynamics [45]). B, Inset: Log-lin plot of fitted α, fit range s/N ≤ 5. C–F: Simulation for sub-sampling, analogous to Fig 6. Subcritical dynamics do not produce power-law distributions and are clearly distinguishable from critical dynamics. F: Only the (close-to) critical simulation produces power-law distributions. F, Inset: Log-log plot of fitted α, fit range S ≤ 50. In contrast to the in vitro culture (in B), the simulation does not feature a separation of timescales (due to external drive and stationary activity), and therefore the slope shows a systematic bin-size dependence here.

https://doi.org/10.1371/journal.pcbi.1010678.g007

A further signature of criticality is obtained by inferring the branching parameter. If the inference is unbiased, the inferred value matches the underlying branching parameter m. We have developed a sub-sampling-invariant estimator (based on the population activity inferred from spikes [27]), but traditionally the branching parameter is inferred from avalanches. This avalanche-based estimate mav is defined as the average ratio of events between subsequent time bins in an avalanche, i.e. during non-zero activity [1, 33].
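
A minimal sketch of such an avalanche-based estimate (one plausible reading of the definition above; experimental pipelines differ in details such as how the last bin of an avalanche is treated) could look as follows:

```python
import numpy as np

def m_av(binned_events):
    """Average ratio of event counts in subsequent, non-empty time bins.

    binned_events: 1d array of event counts per time bin (all electrodes summed).
    Here both bins are required to be non-empty, i.e. to lie within an avalanche.
    """
    a = np.asarray(binned_events, dtype=float)
    ratios = [a[t + 1] / a[t] for t in range(len(a) - 1) if a[t] > 0 and a[t + 1] > 0]
    return np.mean(ratios) if ratios else np.nan

# toy example: two avalanches separated by empty bins
print(m_av([0, 2, 4, 3, 0, 0, 1, 1, 0]))   # -> mean of (4/2, 3/4, 1/1) = 1.25
```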

Obtaining mav for different electrode distances results in a picture consistent with the one from avalanche-size distributions (Fig 5B). In general, the dependence on the electrode distance is stronger than the dependence on the underlying state. At the particular value of the inter-electrode distance where mav = 1, the distributions appear critical. If mav < 1 (mav > 1), the distributions appear subcritical (supercritical). Notably, a supercritical m > 1 corresponds to dynamics where activity increases indefinitely, which is not possible for systems of finite size and exposes mav > 1 as an inference effect. More precisely, in the case of our simulations, mav suffers from two sources of bias: firstly, the coarse-sampling bias that is rooted in the preceding avalanche analysis, and secondly, the estimator assumes a pure branching process without specific topology or coalescence effects [36].

Concluding, because the probability distributions and the inferred branching parameter share the dependence on electrode distance, a wide range of dynamic states would be consistently misclassified—solely as a function of the inter-electrode distance.

2.7 Temporal binning determines scaling exponents

Apart from the inter-electrode distance, the choice of temporal discretization that underlies the analysis may alter avalanche-size distributions. This time-bin size Δt varies from study to study and it can severely impact the observed distributions [1, 24, 43, 44]. With smaller bin sizes, avalanches tend to be separated into small clusters, whereas larger bin sizes tend to “glue” subsequent avalanches together [24]. Interestingly, this not only leads to larger avalanches, but specifically to p(S) ∼ S^−α, where the exponent α changes systematically with bin size [1, 43]. Such a changing exponent is not expected for conventional systems that self-organize to criticality: Avalanches would be separated in time, and α should be fairly bin-size invariant for a large range of Δt [24, 44, 46].
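
The “gluing” effect can be reproduced with a few lines of code (a sketch on synthetic, uniformly scattered event times; the numbers are assumptions and unrelated to the simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0, 10_000, 400))   # toy event times (ms), 10 s recording

def avalanche_sizes(event_times, dt):
    """Avalanche sizes for a given time-bin size dt: empty bins separate avalanches."""
    bins = np.arange(0, event_times.max() + dt, dt)
    counts, _ = np.histogram(event_times, bins=bins)
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c              # avalanche continues
        elif current > 0:
            sizes.append(current)     # empty bin ends the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

for dt in (2, 4, 8, 16):  # ms
    s = avalanche_sizes(events, dt)
    print(f"dt = {dt:2d} ms: {len(s):3d} avalanches, mean size {s.mean():.2f}, max {s.max()}")
```

Coarser binning merges what were separate avalanches, yielding fewer but larger ones—even though the event times themselves never change.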

Our coarse-sampled model reproduces these characteristic experimental results (Fig 6). It also reproduces the previously reported scaling [1] of the exponent with bin size, α ∼ Δt^−β (cf. Fig 6 insets and Table 2). Except for the Poisson dynamics, all the model distributions show power laws. Moreover, the distributions are strikingly similar, not just to the experimental results, but also to each other. This emphasizes how sensitive signs of criticality are to analysis parameters: All the shown dynamic states are consistent with the ubiquitous avalanche-size distributions that are observed in coarse-sampled experiments [45] (cf. Table A in S1 Text).

When spikes are used instead, power-law distributions only arise from critical dynamics. For comparison with the coarse-sampled results in Fig 6, we show avalanche-size distributions from experimental spike recordings and sub-sampled simulations in Fig 7.

In vivo spike recordings of awake animals produce distributions that feature a pronounced decay instead of power laws (Fig 7A). Interestingly, spike recordings of in vitro cultures often show power laws and, here, even little-to-no bin-size dependence, which indicates a fairly good separation of timescales (Fig 7B). In this example, the power law extends over several orders of magnitude, and the slope does not decrease systematically with the bin size. This indicates close-to-critical dynamics; the slight bump that represents an excess of very large avalanches, however, might also point to slight supercriticality [44, 45].

Considering our simulations of sub-sampling (Fig 7C–7F), we only observe approximate power laws if the model is (close-to) critical (Fig 7F). Note that in critical systems, the avalanche distribution should not change with bin size, and that here the bin-size dependence of the slope is caused by the finite system size and by the non-zero spike rate, which impede a proper separation of timescales. Nonetheless, in contrast to coarse-sampling, the avalanche distributions that stem from sub-sampled measures (spikes) allow us to clearly tell apart the underlying dynamic states from one another.

Overall, as our results on coarse-sampling have shown, different sources of bias—here the measurement overlap and the bin size—can perfectly offset each other. For instance, smaller electrode distances (which increase correlations) can be compensated by making the time-bin size smaller (which decreases correlations again). This was particularly evident in Fig 5B, where increasing dE could be compensated by increasing Δt in order to obtain a particular value of the branching parameter mav. The same relationship was again visible in Fig 6C–6F: For the shown dE = 400 μm (see also S1 Text for dE = 200 μm), only Δt = 8 ms results in α = 1.5—the correct exponent for the underlying dynamics. Since the electrode distance cannot be varied in most experiments, selecting anything but the one “lucky” Δt will cause a bias.

3 Discussion

When inferring collective network dynamics from partially sampled systems, it is crucial to understand how the sampling biases the measured observables. Without this understanding, an elaborate analysis procedure—such as the one needed to study neuronal avalanches from coarse-sampled data—can result in a misclassification of the underlying dynamics.

We have shown that the analysis of neuronal avalanches based on (LFP-like) coarse-sampled data can cloud differences of avalanche distributions from systems with different spatio-temporal signatures. These signatures derive from underlying dynamic states that, in this work, range from subcritical to critical—a range over which the intrinsic timescale undergoes a hundred-fold increase. And yet, the resulting avalanche-size distributions can be ambiguous (Fig 1).

The ambiguity of neuronal avalanches partially originates from spurious correlations. We have demonstrated the generation of spurious correlations from two sampling- and processing mechanisms: measurement overlap (due to volume conduction) and temporal binning. Other studies found further mechanisms that can generate apparent power-law distributions by (purposely or accidentally) introducing correlations into the observed system. For instance, correlated input introduces temporal correlations already into the underlying system [47, 48]. Along with thresholding and low-pass frequency filtering—which add temporal correlations to the observed system [25, 49]—this creates a large space of variables that either depend on the system, sampling and processing, or a combination of both.

As our results focus on sampling and processing, we believe that the observed impact on avalanche-size distributions is general and model independent. We deliberately used simple models and confirmed that our results are robust to parameter and model changes: First, our model for coarse-sampling prioritizes simplicity over biophysical details—in order to be consistent with our simplified but well-controlled neuronal dynamics—but we checked that our results are consistent with different distance-dependencies or adding a cut-off (Figs B and C in S1 Text). Second, employing a more realistic topology causes no qualitative difference (Fig A in S1 Text). Third, as a proof of concept, we investigated the impact of measurement overlap in the 2D Ising model (Fig G in S1 Text). Even in such a fundamental model a measurement overlap can bias the assessment of criticality. Lastly, we investigated scaling relations (of avalanche size- and duration distributions) and found that under coarse-sampling, the inference is severely hindered (Fig F in S1 Text). Under sub-sampling, scaling relations hold but with a different collapse exponent than expected for our model. This is consistent with other recent work showing that sampling can affect the collapse exponent [50].

Despite these efforts, our work remains a mechanistic modeling study and we want to stress its limitations: Our virtual sampling did not account for neuron morphology nor the individual neuron’s connectivity profiles. As spikes are non-local events, both these aspects impact the sampling range of an electrode and the decay of e.g. an LFP signal [38, 40]. Sampling also depends on effects that occur prior to recording, such as possible filtering due to extracellular tissue [25, 51] or filtering due to neuron morphology [40, 52]. In particular, low-pass filtering can arise from synaptic dynamics or the propagation within dendrites [53]. Clearly, as high frequencies get stripped from the signal, this could attenuate deflections of the recorded time series. Because these deflections are central to the avalanche detection, low-pass filtering could, in principle, affect avalanche statistics. However, preliminary tests showed that our main result of overlapping distributions for different dynamics states remains intact when the raw time series are low-pass filtered (Fig E in S1 Text).

Our results seemingly contradict experimental studies that demonstrate that the avalanche analysis is sensitive to pharmacological manipulations such as anesthesia [18, 54–57]. Following a sufficient manipulation, a system’s dynamic state will change—which should be reflected by a visible difference of avalanche distributions. We showed that under coarse-sampling, the precise dynamic state could be misclassified. Whereas subtle differences between the avalanche distributions from different dynamic states are indeed visible (Fig 5), in general, they are clouded under coarse-sampling due to the measurement overlap. However, the smaller the measurement overlap becomes (e.g. through increasing the electrode distance), the clearer the differences between dynamic states become (Fig B in S1 Text). In experiments the measurement overlap is unknown; it is also a priori unknown how strong a pharmacological perturbation is (relative to the equally unknown initial dynamic state) and how much coarse-sampling affects its inference. In modeling studies such as ours, these circumstances are well controlled—providing an explanation on a mechanistic level that can now be taken into consideration (and accounted for) when analyzing experimental data.

With our results on sampling effects, we can revisit the previous literature on neuronal avalanches. In Ref. [26] Ribeiro and colleagues show that “undersampling” biases avalanche distributions near criticality. In this case, undersampling was modeled by electrodes picking up a variable number of closest neurons. Here, we separated the effect of sub-sampling (electrodes cannot record all neurons) from coarse-sampling (electrodes record multiple neurons with distance-dependent contributions) and can add to previous results: In our model, we found that coarse-sampling clouds the differences between subcritical, reverberating, and critical dynamics; for γ = 1, the avalanche distributions always resemble power laws (Fig 4). Because of this ambiguity, the power-law distributions obtained ubiquitously from LFP, EEG, MEG and BOLD activity should be taken as evidence of neuronal activity with spatio-temporal correlations—but not necessarily of criticality proper; the coarse-sampling might hinder such a precise classification. In this regard, the interpretation of results from calcium imaging (which has a lower temporal resolution than electrode recordings) remains open (cf. Table A in S1 Text for an overview).

In contrast, a more precise classification seems possible when using spikes. If power-law distributions are observed from (sub-sampled) spiking activity, they do point to critical dynamics. For spiking activity, we even have mathematical tools to infer the precise underlying state in a sub-sampling-invariant manner that does not rely on avalanche distributions [27, 58]. However, not all spike recordings point to critical dynamics: Whereas in vitro recordings typically do produce power-law distributions [44, 59–61], extracellular spike recordings from awake animals typically do not [16, 18, 24, 62].

Lastly, our results might offer a solution to resolve an inconsistency between avalanche distributions that derive from spikes vs. LFP-like sampling: For experiments on awake animals, spike-based studies typically indicate subcritical dynamics. Although coarse measures typically produce power laws that indicate criticality, in this work we showed that they might cloud the difference between critical and subcritical dynamics. Consistent with both, a brain that operates in a near-critical regime—as opposed to a fixed dynamic state—could harness benefits associated with criticality while flexibly tuning its response properties [43, 63–69].

4 Methods

4.1 Model details

Our model comprises a two-level configuration, where a 2D network of NN = 160 000 spiking neurons is sampled by a square array of NE = 8 × 8 virtual electrodes. Neurons are distributed randomly in space (with periodic boundary conditions) and, on average, nearest neighbors are dN = 50 μm apart. While the model is inherently unit-less, it is more intuitive to assign a length scale—in our case the inter-neuron distance dN: all other size-dependent quantities can then be expressed in terms of the chosen dN. For instance, the linear system size L can be derived by realizing that the random placement of neurons corresponds to an ideal gas. It follows that L = 2 dN √NN for uniformly distributed neurons. (For comparison, on a square lattice, the packing ratio would be higher and it is easy to see that the system size would be L = dN √NN.) Given the system size and neuron number, the overall neuronal density is ρ = 100/mm². With our choice of parameters, the model matches typical experimental conditions in terms of inter-neuron distance and system size (see Table 3 for details). Whereas the apparent neuron density of ρ = 100/mm² is on the lower end of literature values [70, 71], this parameter choice avoids boundary effects that can be particularly dominant near criticality due to the long spatial correlation. The implementation of the model in C++, and the python code used to analyze the data and generate the figures, are available online at https://github.com/Priesemann-Group/criticalavalanches.
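
The geometry can be checked with a few lines (a sketch that only re-derives the numbers quoted above, assuming the ideal-gas relation between nearest-neighbor distance and density):

```python
import numpy as np

N_N = 160_000   # number of neurons
d_N = 0.05      # mean nearest-neighbour distance (mm)

# random (ideal-gas) placement: d_N = 1 / (2 * sqrt(rho))  =>  L = 2 * d_N * sqrt(N_N)
L = 2 * d_N * np.sqrt(N_N)
rho = N_N / L**2
print(f"random placement: L = {L:.0f} mm, density = {rho:.0f} neurons/mm^2")  # 40 mm, 100/mm^2

# for comparison: a square lattice with the same spacing packs neurons more densely
L_lattice = d_N * np.sqrt(N_N)
print(f"square lattice:   L = {L_lattice:.0f} mm, density = {N_N / L_lattice**2:.0f} neurons/mm^2")
```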

Table 3. Values and descriptions of the model parameters.

https://doi.org/10.1371/journal.pcbi.1010678.t003

4.2 Topology

We consider a topology that enforces local spreading dynamics. Every neuron is connected to all of its neighbors within a threshold distance dmax. The threshold is chosen so that on average K = 10³ outgoing connections are established per neuron. We thus seek the radius dmax of a disk whose area contains K neurons. Using the already known neuron density, we find dmax = √(K/(πρ)). For every established connection, the probability of a recurrent activation decreases with increasing neuron distance. Depending on the particular distance dij between the two neurons i and j, the connection has a normalized, distance-dependent weight wij (with the normalization chosen such that the weights of each neuron’s outgoing connections sum to one). Our weight definition approximates the distance dependence of average synaptic strength. The parameter σ sets the effective distance over which connections can form (dmax is an upper limit for σ and mainly speeds up computation). In the limit σ → ∞, the network is all-to-all connected. In the limit σ → 0, the network is completely disconnected. Therefore, the effective connection length σ enables us to fine-tune how local the dynamic spreading of activity is. In our simulations, we choose σ = 6 dN = 300 μm. Thus, the overall reach is much shorter than dmax (σ ≈ 0.16 dmax).
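
As a sketch of this construction (the square-root relation follows from the density argument above; the exponential decay profile used below is only an assumed placeholder, since the exact functional form of wij is given in the code repository rather than here):

```python
import numpy as np

K = 1000       # outgoing connections per neuron
rho = 100.0    # neurons per mm^2
sigma = 0.3    # effective connection length (mm), sigma = 6 * d_N

# radius of a disk that contains, on average, K neurons
d_max = np.sqrt(K / (np.pi * rho))
print(f"d_max = {d_max * 1000:.0f} um")

# distance-dependent weights for one example neuron; the exponential decay on
# the scale sigma is an assumption standing in for the model's weight profile
rng = np.random.default_rng(0)
d_ij = np.sort(rng.uniform(0, d_max, K))   # toy distances to the K targets (mm)
w_ij = np.exp(-d_ij / sigma)
w_ij /= w_ij.sum()                         # outgoing weights normalized to one
print("weights sum to", w_ij.sum())
```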

4.3 Dynamics

To model the dynamic spreading of activity, time is discretized to a chosen simulation time step, here δt = 2 ms, which is comparable to experimental evidence on synaptic transmission [72]. Our simulations run for 106 time steps on an ensemble of 50 networks for each configuration (combination of parameters and dynamic state). This corresponds to ∼ 277 hours of recordings for each dynamic state.

The activity spreading is modeled using the dynamics of a branching process with external drive [27, 35]. At every time step t, each neuron i has a state si(t) = 1 (spiking) or 0 (quiescent). If a neuron is spiking, it tries to activate its connected neighbors—so that they will spike in the next time step. All of these recurrent activations depend on the branching parameter m: Every attempted activation has a probability pij = m wij to succeed. (Note that the distance-dependent weights are normalized to 1 but the activation probabilities are normalized to m.) In addition to the possibility of being activated by its neighbors, each neuron has a probability h to spike spontaneously in the next time step. After spiking, a neuron is reset to quiescence in the next time step if it is not activated again.
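
A minimal sketch of one such update step (dense toy network with assumed parameters; the actual model uses the sparse, distance-dependent topology and the coalescence fix described below):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                              # toy network size
m = 0.98                             # branching parameter (example value)
h = 1e-3                             # spontaneous-activation probability, exaggerated
                                     # here so the small toy network stays active

w = rng.random((n, n))
w /= w.sum(axis=1, keepdims=True)    # outgoing weights of each neuron sum to 1

def step(s):
    """One time step: s is a boolean array of current states (True = spiking)."""
    # each active neuron i tries to activate target j with probability m * w_ij,
    # so it causes m postsynaptic activations on average
    attempts = rng.random((n, n)) < m * w
    activated = (attempts & s[:, None]).any(axis=0)
    spontaneous = rng.random(n) < h  # external drive
    return activated | spontaneous

s = rng.random(n) < 0.01             # initial condition
for _ in range(100):
    s = step(s)
print("active neurons after 100 steps:", int(s.sum()))
```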

Our model gives us full control over the dynamic state of the system—and its distance to criticality. The dynamic state is described by the intrinsic timescale τ. We can analytically calculate the intrinsic timescale τ = −δt/ln (m), where δt is the duration of each simulated time step. Note that m—the control parameter that tunes the system—is set on the neuron level while τ is a (collective) network property (that in turn allows us to deduce an effective m). As the system is pushed more towards criticality (by setting m → 1), the intrinsic timescale diverges τ → ∞.

For consistency, we measure the intrinsic timescale during simulations. To that end, the (fully sampled) population activity at each time step is given by the number of active neurons A(t) = Σi si(t). A linear least-squares fit of the autoregressive relation A(t + 1) = e^−δt/τ A(t) + NN h over the full simulated time series yields an estimate τ̂ that describes each particular realization.
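
A sketch of this fit on synthetic data (an AR(1) process with a known timescale; all parameters are illustrative) recovers the timescale from the fitted slope:

```python
import numpy as np

def estimate_tau(activity, delta_t=2.0):
    """Fit A(t+1) = b * A(t) + c by least squares and return tau = -delta_t / ln(b)."""
    a_now, a_next = activity[:-1], activity[1:]
    X = np.column_stack([a_now, np.ones_like(a_now)])
    (b, c), *_ = np.linalg.lstsq(X, a_next, rcond=None)
    return -delta_t / np.log(b)

# synthetic population activity with a known timescale of 100 ms (delta_t = 2 ms)
rng = np.random.default_rng(3)
b_true = np.exp(-2.0 / 100.0)
a = np.zeros(100_000)
for t in range(1, len(a)):
    a[t] = b_true * a[t - 1] + 10.0 + rng.normal()   # decay + drive + noise
print(f"estimated tau: {estimate_tau(a):.1f} ms (true: 100 ms)")
```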

By adjusting the branching parameter m (setting the dynamic state) and the probability for spontaneous activations h (setting the drive), we control the distance to criticality and the average stationary activity. The activity is given by the average spike rate r = h/(δt(1 − m)) of the network. For all simulations, we fix the rate to r = 1 Hz in order to avoid rate effects when comparing different states (see Table 1 for the list of parameter combinations). Note that, due to the non-zero drive h and the desired stationary activity, the model cannot be perfectly critical (see Table 1).

4.4 Coalescence compensation

With our probability-based update rules, it may happen that target neurons are simultaneously activated by multiple sources. This results in so-called coalescence effects that are particularly strong in our model due to the local activity spreading [36]. For instance, naively setting m = 1 (with σ = 300 μm) would result in a considerably smaller effective (measured) branching parameter with considerably different properties: compared to e.g. m = 0.999, this corresponds to a roughly 20-fold decrease in τ.

In order to compensate these coalescence effects, we apply a simple but effective fix: If an activation attempt is successful but the target neuron is already marked to spike in the next time step, another (quiescent) target is chosen. Because our implementation stores all the connected target neurons as a list sorted by their distance to the source, it is easy to activate the next neuron in that list. Thereby, the equivalent probability of the performed activation is as close to the originally attempted one as possible.
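
In code, the redirection amounts to walking down the distance-sorted neighbor list until a quiescent target is found (a toy sketch with ad-hoc data structures, not the C++ implementation):

```python
def activate_with_redirect(target, neighbours_by_distance, will_spike):
    """Activate `target`; if it is already marked to spike, take the next-closest
    quiescent neighbour instead. Returns the id of the neuron actually activated."""
    start = neighbours_by_distance.index(target)
    for j in neighbours_by_distance[start:]:
        if not will_spike[j]:
            will_spike[j] = True
            return j
    return None   # all remaining neighbours are already marked to spike

will_spike = {0: False, 1: True, 2: False, 3: True}
print(activate_with_redirect(1, [1, 3, 2, 0], will_spike))   # -> 2 (next quiescent target)
```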

4.5 Virtual electrode recordings

Our simulations are designed to mimic sampling effects of electrodes in experimental approaches. To simulate sampling, we use the readout of NE = 64 virtual electrodes that are placed in an 8 × 8 grid. Electrodes are separated by an inter-electrode distance that we specify in multiples of the inter-neuron distance dN. It is kept constant for each simulation, and we study the impact of the inter-electrode distance by repeated simulations spanning electrode distances between 1dN = 50 μm and 10dN = 500 μm. The electrodes are modeled as point-like objects in space that have a small dead-zone around their origin. Within the dead-zone, no signal can be recorded (in fact, we implement this by placing the electrodes first and the neurons second—and forbidding neuron placements too close to electrodes).

Using this setup, we can apply sampling that emulates either the detection of spike times or LFP-like recordings. To model the detection of spike times, each electrode only observes the single neuron that is closest to it. Whenever this particular neuron spikes, the timestamp of the spike is recorded. All other neurons are neglected—and the dominant sampling effect is sub-sampling. On the other hand, to model LFP-like recordings, each electrode integrates the spiking of all neurons in the system. Contributions are strictly positive, matching the underlying branching dynamics (for more biophysically detailed LFP models, contributions would depend on neuron types and other factors). The contribution of a single spike, e.g. from neuron i to electrode k, decays as 1/dik with the neuron-to-electrode distance. (See Fig B in S1 Text for a detailed discussion of the qualitative impact of changing the distance dependence, e.g. to 1/dik².) The total signal of electrode k at time t is then Vk(t) = Σi si(t)/dik. Diverging electrode signals are prevented by the forbidden zone around the electrodes. For such coarse-sampled activity, all neurons contribute to the signal and the contribution is weighted by their distance.

4.6 Avalanches

Taking into account all 64 electrodes, a new avalanche starts (by definition [1]) when there is at least one event (spike) in a time bin—given there was no event in the previous time bin (see Fig 2). An avalanche ends whenever an empty bin is observed (no event over the duration of the time bin). Hence, an avalanche persists for as long as every consecutive time bin contains at least one event—which is called the avalanche duration D. From here, it is easy to count the total number of events that were recorded across all electrodes and included time bins—which is called the avalanche size S. The number of occurrences of each avalanche size (or duration) is sorted into a histogram that describes the avalanche distribution.

4.7 Analysis of avalanches under coarse and sub-sampling

We analyze avalanche size distributions in a way that is as close to experimental practice as possible (see Fig 2). From the simulations described above, we obtain two outputs from each electrode: a) a list containing spike times of the single closest neuron and b) a time series of the integrated signal to which all neurons contributed.

In case of the (sub-sampled) spike times a), the spiking events are already present in binary form. Thus, to define a neural avalanche, the only required parameter is the size of the time bin Δt (for instance, we may choose Δt = 4 ms).

In case of the (coarse-sampled) time series b), binary events need to be extracted from the continuous electrode signal. The extraction of spike times from the continuous signal relies on a criterion to differentiate whether the set of observed neurons is spiking or not—which is commonly realized by applying a threshold. (Note that now thresholding takes place on the electrode level, whereas previously, an event belonged to a single neuron.) Here, we obtain avalanches by thresholding as follows: First, all time series are frequency filtered to 0.1 Hz < f < 200 Hz. This demeans and smoothes the signal (and reflects common hardware-implemented filters of LFP recordings). Second, the mean and standard deviation of the full time series are computed for each electrode. The mean is virtually zero due to cutting low frequencies when band-pass filtering. Each electrode’s threshold is set to three standard deviations above the mean. Third, for every positive excursion of the time series (i.e. Vk(t) > 0), we record the timestamp t = tmax of the maximum value of the excursion. An event is defined when Vk(tmax) is larger than the threshold Θk of three standard deviations of the (electrode-specific) time series. (Whenever the signal passes the threshold, the timestamps of all local maxima become candidates for the event; however, only the one largest maximum between two crossings of the mean assigns the final event-time.) Once the continuous signal of each electrode has been mapped to binary events with timestamps, the remaining analysis steps are the same for coarse-sampled and sub-sampled data. Last, avalanche size and duration distributions are fitted to power laws using the powerlaw package [73].
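
A condensed sketch of this event extraction for a single channel (demeaning stands in for the band-pass filter of the first step; the threshold and peak selection follow the description above; the test signal is synthetic):

```python
import numpy as np

def extract_events(v):
    """Return event timestamps (sample indices) of a continuous channel signal."""
    v = v - v.mean()                       # demean (the full pipeline band-pass filters)
    theta = 3 * v.std()                    # electrode-specific threshold
    events = []
    positive = v > 0
    # boundaries of contiguous positive excursions (crossings of the mean)
    change = np.flatnonzero(np.diff(positive.astype(int)))
    bounds = np.concatenate(([0], change + 1, [len(v)]))
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if positive[lo] and v[lo:hi].max() > theta:
            events.append(lo + int(np.argmax(v[lo:hi])))   # timestamp of the excursion's peak
    return np.array(events)

rng = np.random.default_rng(4)
signal = rng.normal(size=5000)
signal[1000] += 10.0
signal[3000] += 10.0                       # two artificial deflections
print(extract_events(signal))              # -> includes events near samples 1000 and 3000
```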

Supporting information

S1 Text. Supplementary text, figures and extended modeling.

We provide additional computations, numerical simulations, and an extended discussion of the model and its parametrizations.

https://doi.org/10.1371/journal.pcbi.1010678.s001

(PDF)

Acknowledgments

We thank Jordi Soriano, Johannes Zierenberg and all members of our group, for valuable input. We thank Johannes Zierenberg and Bettina Royen for careful proofreading of the manuscript.

References

  1. 1. Beggs JM, Plenz D. Neuronal Avalanches in Neocortical Circuits. Journal of Neuroscience. 2003;23(35):11167–11177. pmid:14657176
  2. 2. Dunkelmann S, Radons G. Neural Networks and Abelian Sandpile Models of Self-Organized Criticality. In: Marinaro M, Morasso PG, editors. Proceedings of International Conference Artificial Neural Networks. Springer-Verlag; 1994. p. 867–870.
  3. 3. Beggs JM. The criticality hypothesis: how local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2008;366(1864):329–343. pmid:17673410
  4. 4. Muñoz MA. Colloquium: Criticality and dynamical scaling in living systems. Reviews of Modern Physics. 2018;90(3):031001.
  5. 5. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Progress in Neurobiology. 2017;158:132–152. pmid:28734836
  6. 6. Plenz D, Niebur E, editors. Criticality in Neural Systems. vol. 9783527411. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA; 2014. Available from: http://doi.wiley.com/10.1002/9783527651009.
  7. 7. Zeraati R, Priesemann V, Levina A. Self-Organization Toward Criticality by Synaptic Plasticity. Front Phys. 2021;9.
  8. 8. Sethna JP. Statistical Mechanics: Entropy, Order Parameters, and Complexity. 1st ed. New York: Oxford University Press; 2006.
  9. 9. Kinouchi O, Copelli M. Optimal dynamical range of excitable networks at criticality. Nature Physics. 2006;2(5):348–351.
  10. 10. Zierenberg J, Wilting J, Priesemann V, Levina A. Tailored ensembles of neural networks optimize sensitivity to stimulus statistics. Physical Review Research. 2020;2(1):013115.
  11. 11. Haldeman C, Beggs JM. Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States. Physical Review Letters. 2005;94(5):058101. pmid:15783702
  12. 12. Tkačik G, Mora T, Marre O, Amodei D, Palmer SE, Berry MJ, et al. Thermodynamics and signatures of criticality in a network of neurons. Proceedings of the National Academy of Sciences. 2015;112(37):11508–11513.
13. Sethna JP, Dahmen KA, Myers CR. Crackling noise. Nature. 2001;410(6825):242–250. pmid:11258379
14. Gireesh ED, Plenz D. Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3. Proceedings of the National Academy of Sciences. 2008;105(21):7576–7581. pmid:18499802
15. Petermann T, Thiagarajan TC, Lebedev MA, Nicolelis MAL, Chialvo DR, Plenz D. Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences. 2009;106(37):15921–15926. pmid:19717463
16. Dehghani N, Hatsopoulos NG, Haga ZD, Parker RA, Greger B, Halgren E, et al. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep. Frontiers in Physiology. 2012;3(August):1–18. pmid:22934053
17. Clawson WP, Wright NC, Wessel R, Shew WL. Adaptation towards scale-free dynamics improves cortical stimulus discrimination at the cost of reduced detection. PLOS Computational Biology. 2017;13(5):e1005574. pmid:28557985
18. Ribeiro TL, Copelli M, Caixeta F, Belchior H, Chialvo DR, Nicolelis MAL, et al. Spike Avalanches Exhibit Universal Dynamics across the Sleep-Wake Cycle. PLoS ONE. 2010;5(11):e14129. pmid:21152422
19. Shriki O, Alstott J, Carver F, Holroyd T, Henson RNA, Smith ML, et al. Neuronal Avalanches in the Resting MEG of the Human Brain. Journal of Neuroscience. 2013;33(16):7079–7090. pmid:23595765
20. Arviv O, Goldstein A, Shriki O. Near-Critical Dynamics in Stimulus-Evoked Activity of the Human Brain and Its Relation to Spontaneous Resting-State Activity. Journal of Neuroscience. 2015;35(41):13927–13942. pmid:26468194
21. Palva JM, Zhigalov A, Hirvonen J, Korhonen O, Linkenkaer-Hansen K, Palva S. Neuronal long-range temporal correlations and avalanche dynamics are correlated with behavioral scaling laws. Proceedings of the National Academy of Sciences. 2013;110(9):3585–3590. pmid:23401536
22. Tagliazucchi E, Balenzuela P, Fraiman D, Chialvo DR. Criticality in Large-Scale Brain fMRI Dynamics Unveiled by a Novel Point Process Analysis. Frontiers in Physiology. 2012;3(February):1–12. pmid:22347863
23. Ponce-Alvarez A, Jouary A, Privat M, Deco G, Sumbre G. Whole-Brain Neuronal Activity Displays Crackling Noise Dynamics. Neuron. 2018;100(6):1446–1459. pmid:30449656
24. Priesemann V, Wibral M, Valderrama M, Pröpper R, Le Van Quyen M, Geisel T, et al. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Frontiers in Systems Neuroscience. 2014;8(June):108. pmid:25009473
25. Bédard C, Kröger H, Destexhe A. Does the 1/f Frequency Scaling of Brain Signals Reflect Self-Organized Critical States? Physical Review Letters. 2006;97(11):118102. pmid:17025932
26. Ribeiro TL, Ribeiro S, Belchior H, Caixeta F, Copelli M. Undersampled critical branching processes on small-world and random networks fail to reproduce the statistics of spike avalanches. PLoS ONE. 2014;9(4). pmid:24751599
27. Wilting J, Priesemann V. Inferring collective dynamical states from widely unobserved systems. Nature Communications. 2018;9(1):2325. pmid:29899335
28. Wilting J, Dehning J, Pinheiro Neto J, Rudelt L, Wibral M, Zierenberg J, et al. Operating in a Reverberating Regime Enables Rapid Tuning of Network States to Task Requirements. Frontiers in Systems Neuroscience. 2018;12(November). pmid:30459567
29. Font-Clos F, Pruessner G, Moloney NR, Deluca A. The perils of thresholding. New Journal of Physics. 2015;17(4):043066.
30. Laurson L, Illa X, Alava MJ. The effect of thresholding on temporal avalanche statistics. Journal of Statistical Mechanics: Theory and Experiment. 2009;2009(01):P01019.
31. Villegas P, di Santo S, Burioni R, Muñoz MA. Time-series thresholding and the definition of avalanche size. Physical Review E. 2019;100(1):012133. pmid:31499802
32. Dalla Porta L, Copelli M. Modeling neuronal avalanches and long-range temporal correlations at the emergence of collective oscillations: Continuously varying exponents mimic M/EEG results. PLOS Computational Biology. 2019;15(4):e1006924. pmid:30951525
33. Klaus A, Yu S, Plenz D. Statistical analyses support power law distributions found in neuronal avalanches. PLoS ONE. 2011;6(5). pmid:21720544
34. Yu S, Klaus A, Yang H, Plenz D. Scale-Invariant Neuronal Avalanche Dynamics and the Cut-Off in Size Distributions. PLoS ONE. 2014;9(6):e99761. pmid:24927158
35. Harris TE. The Theory of Branching Processes. Berlin: Springer-Verlag; 1963.
36. Zierenberg J, Wilting J, Priesemann V, Levina A. Description of spreading dynamics by microscopic network models and macroscopic branching processes can differ due to coalescence. Physical Review E. 2020;101(2):022301. pmid:32168601
37. Pettersen KH, Einevoll GT. Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes. Biophysical Journal. 2008;94(3):784–802. pmid:17921225
38. Lindén H, Tetzlaff T, Potjans TC, Pettersen KH, Grün S, Diesmann M, et al. Modeling the spatial reach of the LFP. Neuron. 2011;72(5):859–872. pmid:22153380
39. Riera JJ, Ogawa T, Goto T, Sumiyoshi A, Nonaka H, Evans A, et al. Pitfalls in the Dipolar Model for the Neocortical EEG Sources. Journal of Neurophysiology. 2012;108:956–975. pmid:22539822
40. Einevoll GT, Kayser C, Logothetis NK, Panzeri S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nature Reviews Neuroscience. 2013;14(11):770–785. pmid:24135696
41. Wilting J, Priesemann V. 25 Years of Criticality in Neuroscience—Established Results, Open Controversies, Novel Concepts. Current Opinion in Neurobiology. 2019;58:105–111.
42. Ma Z, Turrigiano GG, Wessel R, Hengen KB. Cortical Circuit Dynamics Are Homeostatically Tuned to Criticality In Vivo. Neuron. 2019;104:655–664.e4. pmid:31601510
43. Priesemann V, Valderrama M, Wibral M, Le Van Quyen M. Neuronal Avalanches Differ from Wakefulness to Deep Sleep—Evidence from Intracranial Depth Recordings in Humans. PLoS Computational Biology. 2013;9(3):e1002985. pmid:23555220
44. Levina A, Priesemann V. Subsampling scaling. Nature Communications. 2017;8(1):15140. pmid:28469176
45. Plenz D, Ribeiro TL, Miller SR, Kells PA, Vakili A, Capek EL. Self-Organized Criticality in the Brain. Frontiers in Physics. 2021;9.
46. Bak P, Tang C, Wiesenfeld K. Self-organized criticality: An explanation of the 1/f noise. Physical Review Letters. 1987;59(4):381–384. pmid:10035754
47. Priesemann V, Shriki O. Can a time varying external drive give rise to apparent criticality in neural systems? PLOS Computational Biology. 2018;14(5):e1006081. pmid:29813052
48. Touboul J, Destexhe A. Power-law statistics and universal scaling in the absence of criticality. Physical Review E. 2017;95(1):012413.
49. Touboul J, Destexhe A. Can Power-Law Scaling and Neuronal Avalanches Arise from Stochastic Dynamics? PLoS ONE. 2010;5(2):e8982. pmid:20161798
50. Carvalho TTA, Fontenele AJ, Girardi-Schappo M, Feliciano T, Aguiar LAA, Silva TPL, et al. Subsampled Directed-Percolation Models Explain Scaling Relations Experimentally Observed in the Brain. Frontiers in Neural Circuits. 2021;14:576727. pmid:33519388
51. Gabriel S, Lau RW, Gabriel C. The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Physics in Medicine and Biology. 1996;41(11):2251–2269. pmid:8938025
52. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience. 2012;13(6):407–420. pmid:22595786
53. Lindén H, Pettersen KH, Einevoll GT. Intrinsic Dendritic Filtering Gives Low-Pass Power Spectra of Local Field Potentials. Journal of Computational Neuroscience. 2010;29:423–444. pmid:20502952
54. Scott G, Fagerholm ED, Mutoh H, Leech R, Sharp DJ, Shew WL, et al. Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics. Journal of Neuroscience. 2014;34(50):16611–16620. pmid:25505314
55. Bellay T, Klaus A, Seshadri S, Plenz D. Irregular spiking of pyramidal neurons organizes as scale-invariant neuronal avalanches in the awake state. eLife. 2015;4:1–25. pmid:26151674
56. Fagerholm ED, Scott G, Shew WL, Song C, Leech R, Knöpfel T, et al. Cortical Entropy, Mutual Information and Scale-Free Dynamics in Waking Mice. Cerebral Cortex. 2016;26(10):3945–3952. pmid:27384059
57. Fekete T, Omer DB, O’Hashi K, Grinvald A, van Leeuwen C, Shriki O. Critical Dynamics, Anesthesia and Information Integration: Lessons from Multi-Scale Criticality Analysis of Voltage Imaging Data. NeuroImage. 2018;183:919–933. pmid:30120988
58. Wilting J, Priesemann V. Between Perfectly Critical and Fully Irregular: A Reverberating Model Captures and Predicts Cortical Spike Propagation. Cerebral Cortex. 2019;29(6):2759–2770. pmid:31008508
59. Tetzlaff C, Okujeni S, Egert U, Wörgötter F, Butz M. Self-Organized Criticality in Developing Neuronal Networks. PLoS Computational Biology. 2010;6(12):e1001013. pmid:21152008
60. Friedman N, Ito S, Brinkman BAW, Shimono M, Deville REL, Dahmen KA, et al. Universal critical dynamics in high resolution neuronal avalanche data. Physical Review Letters. 2012;108(20):1–5. pmid:23003192
61. Pasquale V, Massobrio P, Bologna LL, Chiappalone M, Martinoia S. Self-organization and neuronal avalanches in networks of dissociated cortical neurons. Neuroscience. 2008;153(4):1354–1369. pmid:18448256
62. Hahn G, Petermann T, Havenith MN, Yu S, Singer W, Plenz D, et al. Neuronal avalanches in spontaneous activity in vivo. Journal of Neurophysiology. 2010;104(6):3312–3322. pmid:20631221
63. Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences. 2005;102(27):9673–9678. pmid:15976020
64. Hellyer PJ, Jachs B, Clopath C, Leech R. Local inhibitory plasticity tunes macroscopic brain dynamics and allows the emergence of functional brain networks. NeuroImage. 2016;124:85–95. pmid:26348562
65. Shew WL, Clawson WP, Pobst J, Karimipanah Y, Wright NC, Wessel R. Adaptation to sensory input tunes visual cortex to criticality. Nature Physics. 2015;11(8):659–663.
66. Simola J, Zhigalov A, Morales-Muñoz I, Palva JM, Palva S. Critical dynamics of endogenous fluctuations predict cognitive flexibility in the Go/NoGo task. Scientific Reports. 2017;7(1):2909. pmid:28588303
67. Deco G, Jirsa VK, McIntosh AR. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience. 2011;12(1):43–56. pmid:21170073
68. Hahn G, Ponce-Alvarez A, Monier C, Benvenuti G, Kumar A, Chavane F, et al. Spontaneous cortical activity is transiently poised close to criticality. PLOS Computational Biology. 2017;13(5):e1005543. pmid:28542191
69. Tomen N, Rotermund D, Ernst U. Marginally subcritical dynamics explain enhanced stimulus discriminability under attention. Frontiers in Systems Neuroscience. 2014;8(August):1–15. pmid:25202240
70. Wagenaar DA, Pine J, Potter SM. An Extremely Rich Repertoire of Bursting Patterns during the Development of Cortical Cultures. BMC Neuroscience. 2006;7:11. pmid:16464257
71. Ivenshitz M, Segal M. Neuronal Density Determines Network Connectivity and Spontaneous Activity in Cultured Hippocampus. Journal of Neurophysiology. 2010;104(2):1052–1060. pmid:20554850
72. Sabatini BL, Regehr WG. Timing of Synaptic Transmission. Annual Review of Physiology. 1999;61(1):521–542.
73. Alstott J, Bullmore E, Plenz D. powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions. PLoS ONE. 2014;9(1):e85777. pmid:24489671