Abstract
Neural activity at the population level is commonly studied experimentally through measurements of electric brain signals like local field potentials (LFPs), or electroencephalography (EEG) signals. To allow for comparison between observed and simulated neural activity it is therefore important that simulations of neural activity can accurately predict these brain signals. Simulations of neural activity at the population level often rely on point-neuron network models or firing-rate models. While these simplified representations of neural activity are computationally efficient, they lack the explicit spatial information needed for calculating LFP/EEG signals. Different heuristic approaches have been suggested for overcoming this limitation, but the accuracy of these approaches has not been fully assessed. One such heuristic approach, the so-called kernel method, has previously been applied with promising results and has the additional advantage of being well-grounded in the biophysics underlying electric brain signal generation. It is based on calculating rate-to-LFP/EEG kernels for each synaptic pathway in a network model, after which LFP/EEG signals can be obtained directly from population firing rates. This amounts to a massive reduction in the computational effort of calculating brain signals because the brain signals are calculated for each population instead of for each neuron. Here, we investigate how and when the kernel method can be expected to work, and present a theoretical framework for predicting its accuracy. We show that the relative error of the brain signal predictions is a function of the single-cell kernel heterogeneity and the spike-train correlations. Finally, we demonstrate that the kernel method is most accurate for the contributions that also dominate the brain signals: spatially clustered and correlated synaptic input to large populations of pyramidal cells.
We thereby further establish the kernel method as a promising approach for calculating electric brain signals from large-scale neural simulations.
Citation: Ness TV, Tetzlaff T, Einevoll GT, Dahmen D (2025) On the validity of electric brain signal predictions based on population firing rates. PLoS Comput Biol 21(4): e1012303. https://doi.org/10.1371/journal.pcbi.1012303
Editor: Matthias Helge Hennig
Received: July 5, 2024; Accepted: March 6, 2025; Published: April 14, 2025
Copyright: © 2025 Ness et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Simulation code to reproduce all figures in this paper, as well as all the simulated kernels are freely available from https://github.com/torbjone/kernel_validity_paper.git.
Funding: This work received funding from the European Union Horizon 2020 Research and Innovation Programme under Grant Agreement No. 945539 [Human Brain Project (HBP) SGA3], No. 101147319 [EBRAINS 2.0] (to TVN, TT, GTE, and DD), the Norwegian Research Council (NFR) through NOTUR (No. NN4661K) (to TVN, GTE), the Helmholtz Association Initiative and Networking Fund (SO-092, Advanced Computing Architectures (to TT)), and the Helmholtz Metadata Collaboration (HMC; ZT-I-PF-3-026 (to TT)). Open-access publication is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491111487 (to TT, DD). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Science is at its most productive when models can make experimental predictions so that experimental results can inform and improve the models. Measurable brain signals should therefore be available from simulations of neural activity. The brain is studied at many different scales, from the molecular scale to behavior, and the different scales rely on models at different levels of abstraction. It is therefore important to have well-founded methods for calculating different types of brain signals from neural simulations at different levels of abstraction (Fig 1) [1].
To study neural activity at the level of neural populations, it is common to rely on measurements of local field potentials (LFPs), the low-frequency part of the extracellular potential measured inside the brain, or on electroencephalography (EEG) signals, the extracellular potential measured outside the head. The most accurate way to calculate LFP and EEG signals from simulated neural network activity is to use biophysically detailed multicompartment neuron models coupled with volume conductor theory [2–5]. For single neurons or small populations, this is in principle straightforward [6,7], and this approach has also been pursued for large recurrently connected networks by a handful of studies [6,8–15].
However, biophysically detailed modeling of neural activity at the population level is extremely computationally demanding, and often not viable in practice [1]. Therefore, when studying neural network activity, it is more common to rely on simplified representations of neurons and neural activity, through for example point-neuron network models [16–18] or firing-rate models [16,19,20]. These simplified representations are more computationally tractable and typically orders of magnitude faster than biophysically detailed simulations [18], but many brain signals, like the LFP and EEG signals, are generated by spatially distributed neural membrane currents, which are not available from the simplified schemes (since the spatial structure of individual neurons is not explicitly modeled) [5]. An important question is therefore how to best calculate approximations of different brain signals from activity simulated with point-neuron network models or firing-rate models.
Neural circuits, here represented by a putative cortical column (panel A), are studied at different levels of biological detail, depending on the scientific question (panel B). By using a forward model (panel C) one can calculate measurable signals (panel D) from neural activity simulated at different levels of abstraction. In general, calculations of such brain signals are only biophysically well-founded when using biophysically detailed cell models, while simplified representations of neurons will require “heuristic” approaches where it can be hard to estimate the accuracy of the resulting brain signal predictions.
Several different approaches to calculate LFP/EEG/MEG signals from point-neuron or firing-rate models have been suggested [19–28], but quantitative evaluations of the accuracy of such approaches have often been hard to come by, due to the lack of “ground truth” signals to compare the approximations to. It has therefore often been unclear how well these approximations work, although there are important exceptions that we discuss later.
A common approach with a long history for obtaining approximate LFP/EEG signals from firing-rate models is simply to assume that the signal is proportional to the firing rate [19]. For EEG/MEG signals, it is sometimes instead assumed that an equivalent current dipole is proportional to the firing rate, and the dipole can be inserted into a head model to obtain the EEG/MEG signal [20]. Although this approach can certainly be useful, it neglects some basic principles of how these signals are generated, and as a result, some error will be introduced in the time domain of the predicted signals [5,22,29].
Hagen et al. [2016] [23] presented the so-called “hybrid scheme”, where the neural network activity is first simulated in a point-neuron network, and saved to file. Afterward, in a separate step, the spiking activity is replayed onto biophysically detailed cell models, from which the resulting LFP signals can be calculated. The hybrid scheme is a computationally expensive approach because it relies on representing all neurons that are within the reach of the recording electrode [30,31] with a high level of morphological and electrophysiological detail. On the other hand, it is well grounded in the biophysics of extracellular signal generation.
Hagen et al. [2016] [23] also used the hybrid scheme to test a “kernel approach”, where they calculated LFP kernels for each synaptic pathway in the model. Each population kernel represented the average postsynaptic LFP contribution given an action potential in the presynaptic population, and the LFP signal could then be approximated by convolving the firing rate of each presynaptic population with the corresponding population kernel and summing the LFP contributions for each synaptic pathway in the model. This kernel approach was confirmed to give accurate approximations to the LFP, at a very low computational cost once the kernels were known because the LFP could be predicted directly from the firing rate of each population, instead of from the transmembrane currents of each individual neuron. A major drawback of this approach was that the calculation of the population kernels was still very computationally demanding.
Mazzoni et al. [2015] [22] tested so-called “proxy” methods for calculating LFP signals (later also extended to EEG signals [25]) directly from point-neuron network simulations, and found that a weighted sum of synaptic currents, which are available from point-neuron network simulations, could be used to predict the LFP calculated by a more comprehensive approach, similar to the “hybrid scheme” discussed above. The proxies were demonstrated to be quite accurate and provided excellent LFP predictions for the use-case considered. On the other hand, they are in a sense phenomenological and typically poorly grounded in the underlying biophysics of extracellular potentials, which can in some cases be a drawback.
Teleńczuk et al. [2020] [24] used experimentally measured LFP kernels from spike-triggered-averaged LFP recordings, and used these kernels to approximate LFP signals by convolving them with firing rates from point-neuron network simulations. This approach has the advantage of being independent of the modeling choices that are required when simulating LFP kernels [24,32]. This approach was later expanded upon by Tesler et al. [2022] [27] to also enable MEG signal predictions from point-neuron network models or firing-rate models. However, kernels measured from spike-triggered averages are potentially contaminated by correlations, and Hagen et al. [2016] [23] obtained different results when calculating kernels directly versus from spike-triggered averages, even within the same model. This contamination can also be observed directly, as the measured kernels are not always causal, which we would expect them to be given that they represent the postsynaptic contribution from a presynaptic spike. Further, the measured excitatory LFP kernels were proposed to be disynaptic inhibitory kernels [24,33], illustrating a problem with interpreting results based on LFP kernels from spike-triggered averages. Note that the degree to which measured spike-triggered LFP kernels are contaminated by correlations will depend on the scenario. For example, for the monosynaptic thalamic activation of cortical postsynaptic target cells considered by Swadlow et al. [2002] [34], the contamination was very small.
The earlier attempts to model LFP kernels have required a large number of single-cell simulations [23,32,35] to represent the postsynaptic population. However, a very efficient yet highly biophysically detailed framework for calculating population kernels was recently proposed by Hagen et al. [2022] [29]. In this framework, a single biophysically detailed cell simulation was sufficient to accurately predict a population kernel by first obtaining the membrane currents of the single postsynaptic neuron in response to conductance-based synaptic input, and letting this represent the population-averaged membrane currents following synaptic activation. All other effects, including the spatial extent of the population and the variability of synaptic parameters, were then accounted for by a series of linear convolutions in the spatial and temporal domains. This approach greatly increases the applicability of the kernel approach, since LFP/EEG kernels can be calculated accurately and efficiently, even by common laptop computers. The LFP calculated from the kernel approach by Hagen et al. [2022] [29] was tested against the “ground truth” LFP calculated from a multicompartment, biophysically detailed neural network simulation, and the kernel approach was found to be quite accurate in most scenarios.
As reviewed above, several recent projects have used the kernel approach to estimate LFP, EEG, or MEG signals directly from firing rates [23,24,27–29,32,36], and it has proved a promising tool for future studies of neural activity at the population level. Therefore, it is important to have a good qualitative understanding of how the kernel approach works, and good quantitative measures of how accurate it is under different circumstances.
In this study, we start by building a better understanding of how and when the kernel method can be expected to work, and when caution is advised. We then develop a theoretical framework for predicting the accuracy of the kernel approach and show that the relative error is a function of the single-cell kernel heterogeneity and spike-train correlations. Finally, we demonstrate that the kernel approach is most accurate for the LFP contributions that can be expected to dominate the LFP signal, like highly concentrated and correlated synaptic input to large populations of pyramidal neurons.
2. Results
Many measurable brain signals, like cortical LFPs, ECoGs, EEGs, and MEGs are expected to share the same biophysical origin, namely the membrane currents following large numbers of synaptic inputs to populations of geometrically aligned pyramidal neurons [4,5]. To accurately calculate these signals from simulated neural activity, we therefore need to take into account all synaptic events.
Since volume conduction is linear [37], the compound extracellular potential V(r, t) generated by a population of neurons is a linear superposition of the individual cell contributions V_i(r, t),

V(r, t) = Σ_i V_i(r, t).    (1)

Therefore, calculating the extracellular potential of a population of N neurons is typically done by focusing on the synaptic input to each cell, calculating single-cell contributions (see Methods), and finally summing over all cells (Fig 2A), here referred to as the postsynaptic perspective.
2.1. Single-cell spike-LFP kernels
In principle, we can also switch the perspective to each presynaptic cell: Each action potential from a given cell leads to an activation of the outgoing synapses, causing a distributed “extracellular potential flash” from all postsynaptic target cells (Fig 2B), referred to as the single-cell spike-LFP kernel. For simplicity, we will here refer to this as the single-cell kernel. If we convolve the single-cell kernel with the spike train of the presynaptic neuron, we get the extracellular potential including all postsynaptic effects from this neuron. If we know all single-cell kernels and corresponding spike trains, we can then calculate the extracellular potential as the sum of all single-cell postsynaptic contributions. If this is also done for external input, we have accounted for all synaptic events.
A: The postsynaptic perspective, where all incoming synaptic input to a postsynaptic cell is taken into account, and the time-dependent LFP contribution of the postsynaptic cell is calculated. The blue and red colors are used to illustrate positive and negative regions, respectively, of the LFP at a moment in time. The total population LFP V(r, t) is then the sum of all such single-cell contributions V_i(r, t). This is the standard way of calculating LFP signals from neural simulations. B: The presynaptic perspective, where all outgoing synapses from a single cell are considered. For passive cells with static (no plasticity), current-based synapses, every action potential of a presynaptic neuron j will evoke the same postsynaptic currents, and hence, each action potential has a fixed LFP response k_ij(r, t) in each postsynaptic cell i. By taking into account all postsynaptic targets, the single-cell kernel k_j(r, t) can be calculated, and the single-cell LFP contribution can be found by convolving the single-cell kernel with the corresponding spike train of the presynaptic cell. The population LFP is again the sum of all single-cell contributions, and if this is done for all cells, and all external incoming synapses, the LFP calculated by these two approaches will be identical, under the assumptions listed above.
The above argument is based on the assumption that the single-cell kernel is similar each time a neuron spikes. This holds if we ignore synaptic plasticity and assume that extracellular potential contributions caused by individual synaptic activations superimpose linearly for each cell, meaning that both the cell models and the synaptic inputs are fully linear. In reality, the membrane currents of a cell can depend on the joint effect of all its spiking inputs, for example, active dendritic channels or voltage-dependent synaptic currents cause nonlinear interference of inputs. However, previous work has shown that LFPs can be well predicted with quasi-linear approximations of ion channels [38,39], and that kernel-based approaches can give accurate LFP predictions also for conductance-based synapses [29,40]. In this investigation, we therefore assume fully linear cell models and synaptic input. In this case, the above assumption holds and we obtain
V_i(r, t) = Σ_j k_ij(r, t) ∗ ν_j(t),    (2)

where k_ij(r, t) is the LFP response of postsynaptic neuron i to an individual spike of presynaptic neuron j, and ν_j(t) is the spike train of presynaptic neuron j. Here ∗ denotes a temporal convolution. If we combine equation (1) and equation (2) and rearrange summands, then we get what we refer to as the presynaptic perspective (Fig 2B),

V(r, t) = Σ_j k_j(r, t) ∗ ν_j(t),    (3)

with the single-cell kernel

k_j(r, t) = Σ_i k_ij(r, t).    (4)

This prediction of the population LFP from single-cell kernels is in the following denoted as the "ground truth" against which we test approximations.
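The sum of convolutions in equation (3) can be sketched in a few lines of NumPy. All parameters below (kernel shape, population size, firing rate) are illustrative assumptions for a toy setting, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre = 50      # number of presynaptic neurons (illustrative)
n_t = 20000     # time bins of dt = 0.1 ms, i.e. 2 s
n_tau = 200     # kernel duration in bins (20 ms)

# Illustrative single-cell kernels k_j(t): a fixed double-exponential
# time course with heterogeneous, lognormally distributed amplitudes
tau = np.arange(n_tau) * 0.1                       # time in ms
shape = np.exp(-tau / 5.0) - np.exp(-tau / 1.0)    # double exponential
amps = rng.lognormal(sigma=0.5, size=n_pre)
kernels = amps[:, None] * shape[None, :]           # one kernel per cell

# Binary spike trains nu_j(t): independent Poisson, ~10 Hz at dt = 0.1 ms
spikes = (rng.random((n_pre, n_t)) < 0.001).astype(float)

# Ground truth of equation (3): V(t) = sum_j (k_j * nu_j)(t)
V = np.zeros(n_t)
for j in range(n_pre):
    V += np.convolve(spikes[j], kernels[j])[:n_t]
```

Here each presynaptic neuron contributes with its own kernel; the population-kernel approximation discussed next replaces all k_j by their average.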
2.2. Population rate-LFP kernels
Neurons in neural circuits often share statistical properties in terms of morphology, electrophysiology, connections, and spiking activity. Based on such similarities they can be grouped into neuronal populations. In the classical view, a population is a group of neurons with similar input statistics as well as similar internal properties and dynamics, such that they have similar spiking statistics. For the generation of LFP contributions, however, not only the spiking statistics should be similar for cells within a population, but also their translation into LFPs as measured by the single-cell kernels.
If all single-cell kernels of a population of N presynaptic neurons were identical, then they would in particular be identical to the population-averaged kernel

k̄(r, t) = (1/N) Σ_j k_j(r, t),    (5)

such that the compound LFP of the population could be perfectly predicted by the population rate ν(t) = (1/N) Σ_j ν_j(t),

Ṽ(r, t) = N k̄(r, t) ∗ ν(t),    (6)

without the need to consider the detailed information of individual neuronal spike trains. The population-averaged kernel k̄(r, t) can therefore be interpreted as a population rate-LFP kernel. For simplicity, we will here refer to this as the population kernel.

In general, however, the properties and projections of neurons are only statistically similar rather than identical, such that the single-cell kernels differ from k̄(r, t). As a consequence, Ṽ(r, t) is only an approximation of the true compound LFP V(r, t). In the following, we study the error of this approximation and how it depends on the neuronal and the network properties.
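As a minimal numerical illustration of this approximation and its error, the following sketch compares the ground-truth superposition of heterogeneous single-cell kernels with the population-kernel prediction; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_t, n_tau = 50, 20000, 200        # illustrative sizes, dt = 0.1 ms
tau = np.arange(n_tau) * 0.1
shape = np.exp(-tau / 5.0) - np.exp(-tau / 1.0)
amps = rng.lognormal(sigma=0.5, size=n_pre)   # kernel heterogeneity
kernels = amps[:, None] * shape[None, :]
spikes = (rng.random((n_pre, n_t)) < 0.001).astype(float)

# Ground truth: each cell's spike train convolved with its own kernel
V = sum(np.convolve(spikes[j], kernels[j])[:n_t] for j in range(n_pre))

# Population-kernel prediction: the average kernel convolved with the
# summed spike train (equivalent to N * k_bar convolved with the rate)
k_bar = kernels.mean(axis=0)
V_tilde = np.convolve(spikes.sum(axis=0), k_bar)[:n_t]

# Relative error: std of the deviation, normalized by ground-truth std
E_rel = np.std(V - V_tilde) / np.std(V)
```

For uncorrelated spike trains and this level of amplitude heterogeneity, E_rel stays substantially above zero; it vanishes if the kernels become identical or the spike trains become fully correlated.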
Single-cell kernels depend on a multitude of neuron and network features including network connectivity, neuronal morphology, synapse positions, electrode position and electrical properties of cells, leading to potentially complicated spatio-temporal profiles. Yet they are by definition causal and their time course is determined by synaptic dynamics and dendritic filtering properties [41]. As with LFP responses to individual synaptic inputs, the amplitude and polarity of single-cell kernels is expected to strongly depend on the relative position of cells with respect to the recording electrodes.
The precise shape of single-cell kernels from biophysically detailed models will depend on a large number of different parameters, including synaptic locations, strengths, and time constants. However, before calculating the precise shape of single-cell kernels from biophysically detailed models, we first show some key aspects of the population kernel approximation using a simple illustrative model, where single-cell kernels are defined as double-exponential functions which only differ in amplitudes (see Methods, Fig 3A).
A: Two toy single-cell kernels (blue and orange), and their mean, that is, the population kernel (black). B: Raster plot of the two corresponding spike trains, with the same color code as in panel A. Each colored marker corresponds to a spike, and the individual spike trains are plotted at different heights along the y-axis. C: The population rate (average number of spikes per time bin, Δt = 0.1 ms), that is, the mean firing rate of the spike trains in panel B. D: The gray line shows the ground truth toy LFP signal calculated as the sum of the single-cell contributions, which are in turn calculated by convolving the single-cell kernels with the corresponding spike trains. The black line shows the LFP calculated by convolving the population kernel with the population rate. The red line shows the difference between the ground truth LFP and the population kernel LFP.
Each single-cell kernel (Fig 3A) is convolved with a different spike train (Fig 3B), and the resulting extracellular potential (Fig 3D) is compared to the prediction obtained by convolving the population kernel (black line in Fig 3A) with the population rate (Fig 3C). The population kernel prediction generally resembles the ground truth. It is, however, different in detail due to the heterogeneity in single-cell spike kernels. The approximation improves at times where multiple neurons spike synchronously (Fig 3D, t = 50 ms). This hints at a more general aspect: if all spike trains in equation (3) are identical, then the population kernel prediction becomes exact even though the single-cell kernels are different. In conclusion, this simple toy model illustrates the two main features that determine the quality of the population kernel prediction: spike-kernel heterogeneity and spike-train correlations. Predictions become poor when spike-train correlations are low and spike-kernel heterogeneity is large, whereas large spike-train correlations and low spike-kernel heterogeneity lead to low errors (Fig 4).
Each column shows 1000 single-cell kernels with different amplitude standard deviations (top), and different levels of spike-train correlations (middle). Spike trains with varying levels of correlations were generated through Multiple Interaction Processes (MIP) [42], controlled by the parameter f, where f = 0 corresponds to uncorrelated homogeneous Poisson processes, while f = 1 corresponds to fully correlated (identical) spike trains (see Methods). The mean firing rate is shown in black, and the standard deviation in gray. The toy LFP is calculated (bottom). Relative error Erel, quantified by the normalized standard deviation of the difference between the ground truth signal and the population kernel signal (see Methods), vanishes for identical kernels, regardless of correlation (first column). For variable kernels with some correlation, the kernel approach will result in some relative error (second column). For variable kernels and zero correlation, the relative error will be large (third column). For perfect correlation, the relative error vanishes regardless of kernel variability (fourth column).
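A minimal implementation of such MIP spike trains might look as follows; the function name and the handling of f = 0 as the independent-Poisson limit are our assumptions for this sketch:

```python
import numpy as np

def mip_spike_trains(n_trains, rate, f, t_max, dt, rng):
    """Multiple Interaction Process (MIP): each child train copies the
    spikes of a common mother Poisson process with probability f, so
    f = 1 yields identical trains; f = 0 is treated as the limit of
    independent Poisson processes. `rate` is the target rate per train."""
    n_bins = int(round(t_max / dt))
    if f == 0.0:
        return (rng.random((n_trains, n_bins)) < rate * dt).astype(float)
    # Mother process with rate/f, so each child keeps the target rate
    mother = rng.random(n_bins) < (rate / f) * dt
    copies = rng.random((n_trains, n_bins)) < f
    return (mother[None, :] & copies).astype(float)

rng = np.random.default_rng(2)
# 100 trains at ~10 Hz with intermediate correlation (f = 0.5)
trains = mip_spike_trains(100, rate=10.0, f=0.5, t_max=10.0, dt=1e-3, rng=rng)
```

Note that rate/f · dt must stay well below 1 for the binned Poisson approximation to hold.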
This behavior of the prediction error can be derived analytically by employing a statistical description of the setup. As mentioned above, a population of neurons is defined via statistical similarities between neuronal spike trains and spike kernels. In the following, we assume that both quantities, appearing as a product (convolution) in equation (3), are drawn from distributions with known means and covariances. A natural first choice for the definition of the prediction error would be the mean deviation ⟨V(r, t) − Ṽ(r, t)⟩_t of the population kernel prediction Ṽ(r, t) from the ground truth V(r, t), where ⟨·⟩_t denotes the average across time. We could then ask what this quantity is on expectation across different realizations of kernels. In fact, it is zero, because each individual single-cell kernel on expectation coincides with the expectation of the population-averaged kernel. This measure is therefore not informative about the prediction error of the population kernel method for a single realization of single-cell kernels. The error is better assessed by the standard deviation of the discrepancy of the population kernel prediction from the ground truth. The squared error then is E² = Var_t[V(r, t) − Ṽ(r, t)], where Var_t[·] denotes the variance across time. The expectation of this quantity can be computed analytically (see Methods),
⟨E²⟩ = (N − 1) ∫∫ dτ dτ′ [c_k(r, τ, τ′) − c_k^×(r, τ, τ′)] [a(τ − τ′) − c(τ − τ′)],    (7)

with ⟨·⟩ denoting the expectation across realizations of the kernels. We further introduced the population-averaged spike-train autocovariances a(τ), the population-averaged spike-train cross-covariances c(τ), and the autocorrelation and cross-correlation of single-cell kernels, c_k(r, τ, τ′) and c_k^×(r, τ, τ′). The expression for E² shows that, as expected, the error vanishes if the population of neurons spikes in a fully correlated manner (c = a) or if all neurons have the same spike-LFP kernels (c_k^× = c_k). For low average cross-covariances c ≈ 0, as observed in cortex, the error is primarily determined by the size N of the presynaptic population, i.e., the number of single-cell kernels, the correlations in spike-LFP kernels, and the spike-train autocovariances. To assess the overall performance of the population-based prediction, it is useful to also consider the relative error E_rel, defined as the expected error E normalized by the standard deviation of the ground-truth signal (see Methods). In the uncorrelated case, this normalized error E_rel is approximately independent of the size N of the presynaptic population. For finite correlations, E_rel decays as 1/√N.
. For our toy model, the analytical predictions for the absolute and the relative error perfectly match the results of numerical simulations (Fig 5). Theory and simulation confirm the anticipated trend that the error grows with increasing kernel heterogeneity and decreasing spike-train correlations (Fig 5B and 5D). The effect of spike-train correlations is, however, much more pronounced in the relative error (Fig 5C and 5E), as can be explained by the theory (see Appendix B). Note that the relative error is low in regions where the signal amplitude is large (Fig 5A).
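The predicted scaling with population size can be checked with a quick Monte-Carlo experiment on a toy model; this is a sketch under assumed parameters (lognormal kernel amplitudes, MIP-correlated spike trains), not the simulation setup of this study:

```python
import numpy as np

def relative_error(n_pre, f, rng, n_t=20000, dt=1e-4, rate=10.0):
    """Relative error of the population-kernel prediction in a toy model
    with lognormal kernel amplitudes and MIP-correlated spike trains."""
    tau = np.arange(200) * 0.1
    shape = np.exp(-tau / 5.0) - np.exp(-tau / 1.0)
    amps = rng.lognormal(sigma=0.5, size=n_pre)
    if f == 0.0:
        # Independent Poisson limit of the MIP process
        spikes = (rng.random((n_pre, n_t)) < rate * dt).astype(float)
    else:
        # MIP: common mother process, spikes copied with probability f
        mother = rng.random(n_t) < (rate / f) * dt
        spikes = (mother[None, :] & (rng.random((n_pre, n_t)) < f)).astype(float)
    # Ground truth vs. population-kernel prediction, as before
    V = sum(np.convolve(spikes[j], amps[j] * shape)[:n_t] for j in range(n_pre))
    V_tilde = np.convolve(spikes.sum(axis=0), amps.mean() * shape)[:n_t]
    return np.std(V - V_tilde) / np.std(V)

rng = np.random.default_rng(4)
# With correlated input (f = 0.5), the relative error should shrink
# roughly as 1/sqrt(N) as the presynaptic population grows
e_small = relative_error(10, 0.5, rng)
e_large = relative_error(1000, 0.5, rng)
```

Running the same experiment with f = 0 instead shows a relative error that is roughly independent of n_pre, in line with the theory.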
A: The LFP amplitude (quantified by its standard deviation across time) for different levels of amplitude variability in single-cell kernels, and different levels of correlations between spike trains. B: Simulated absolute error, quantified by the standard deviation of the difference between the ground truth signal and the population kernel signal. C: Simulated relative error, quantified by the standard deviation of the difference between the ground truth signal and the population kernel signal (panel B), normalized by the ground truth signal amplitude (panel A). D, E: Same as in panels B and C, but predicted from theory (equation (7)). F: Difference map between results from simulations (panel B) and theory (panel D), for the absolute error. G: Difference map between results from simulations (panel C) and theory (panel E), for the relative error. Correlated spike trains were generated using MIP processes (see Methods).
2.3. Sources and effect of kernel heterogeneities
As we have seen, the error depends on single-cell kernel heterogeneities. After having derived the general dependence of the population kernel prediction on the statistics of single-cell kernels and spike-train correlations, we next investigate more systematically where heterogeneity in single-cell kernels stems from. To this end, we need to go beyond the toy model of the previous section and employ a mechanistic model of extracellular potential generation based on the spatial distribution of cells, connectivity specifications and biophysically detailed cell models.
We consider LFP and EEG signals from cortical populations. The major contribution to these signals stems from synaptic inputs onto pyramidal neurons [5,23]. In the following, we therefore investigate the LFP and EEG kernels of a population of layer 5 pyramidal neurons, positioned around a linear multi-contact electrode that records the LFP at different depths, while the EEG is recorded outside the scalp (Fig 6A). Synaptic inputs from a single presynaptic neuron are modeled as spikes delivered to a random subset of K neurons (the so-called out-degree) in the considered postsynaptic population. To account for the natural heterogeneity in cortical connectivity, parameters such as synapse locations, synaptic strengths, time constants, and delays are randomly drawn from predefined distributions. The calculations of postsynaptic membrane currents and resulting extracellular potentials are based on a morphologically reconstructed pyramidal neuron from Hay et al. [2011] [43]. Note that a similar approach to calculating kernels, referred to as "unitary fields", was previously presented for hippocampus by Teleńczuk et al. [2020] [32]. A single-cell kernel represents the postsynaptic LFP (EEG) response to the firing of a single presynaptic neuron. The population kernel corresponds to the average of the single-cell kernels obtained for different presynaptic neurons, each targeting a subset of K neurons in the postsynaptic population. Each population kernel represents one specific synaptic pathway from a given presynaptic population to a given postsynaptic population. Details of the setup outlined here are described in Fig 6 and Methods. In the following, we assess the sources of kernel heterogeneities by systematically varying the different features of this setup.
A: A population of cortical pyramidal neurons (morphologies depicted in shades of light gray and soma locations as black dots) receives synaptic input from a presynaptic population. Each incoming axon forms, in total, K connections with different postsynaptic neurons. The strength J of each synapse is randomly drawn from a lognormal distribution. The synaptic time constant and the synaptic delay are drawn from normal distributions (graphs to the left). The vertical position of each synapse is drawn from the segment locations of the cells weighted by a normal distribution (green curve to the right). Some exemplary synapse positions are plotted on the postsynaptic population as green dots. Vertical soma positions are drawn from a capped normal distribution (black curve to the right). Horizontal soma positions are uniformly distributed on a disc of radius R_pop. The LFP response to an activation of all K synapses of a single incoming axon is calculated for different cortical depths (dark red dots). The EEG response outside the head, directly above the population, is calculated using a simple spherical head model. For each parameter configuration, we generate 100 single-cell kernels resulting from different random realizations of neuron and synapse parameters. Each of these kernels describes the postsynaptic LFP (EEG) response to the firing of a different presynaptic neuron. B–D: LFP and EEG responses for different synaptic target zones (B: apical; C: basal; D: uniform). Gray: single-cell kernels. Black: population kernel. The "basal input" case is used as the "default case" throughout this study. E: Mean (solid curves) and standard deviation (bands) of the maximum LFP deflection at different cortical depths for different synaptic target zones (see legend). See Methods for details on the model and parameter values.
It is well known that the LFP/EEG response of individual cells to synaptic input strongly depends on the location of the synapses [4,7,30,41]. Since the single-cell kernel is the superposition of such signals from all target cells of a given spike, we expect that this dependence translates into a strong influence of synaptic locations on the shape of the single-cell kernels. Different synaptic pathways to pyramidal cell populations are often associated with different synaptic target regions. For example, Hagen et al. [2016] [35] reported (their Table 8), based on experimental data from Binzegger et al. [2004] [44], that the largest subfamily of layer 5 pyramidal cells received the most synapses in layer 5, which corresponds to basal input, and the majority of these synapses were excitatory. We therefore in the following consider this as our "default" case. Another smaller subfamily of layer 5 pyramidal cells received the most synapses in layer 1, where the sources were again predominantly excitatory. We will refer to this as the "apical" case. We also consider a "uniform" case where excitatory synaptic input is uniformly distributed over the entire morphologies, as this has also been used in the literature [22,29,32]. We find that the single-cell LFP/EEG kernels look very different when stimulating cells in the population only apically, only basally, or uniformly (Fig 6B-6E).
Note that we here only consider excitatory synaptic input; for current-based synapses as used here, the only difference between excitatory and inhibitory currents is a change of sign. This transfers directly to the LFP kernels: the inhibitory equivalents of the excitatory LFP kernels shown here can be obtained through a sign reversal. Since this sign change does not affect kernel heterogeneity, the error from using the kernel method is unaffected by it.
We notice substantial variability in the single-cell spike kernels (light gray). However, for apical or basal input, the different single-cell kernels have a similar overall shape, and therefore a pronounced population kernel. For uniform input, there is more diversity among the single-cell LFP kernels, such that the population kernel has very low amplitude at all depths. The reason is that individual apical or basal inputs lead to rather stereotyped (but opposite) LFP/EEG responses, irrespective of the exact location of the synapse on the dendrite. In contrast, when considering all possible input locations (uniform), the diversity in the LFP/EEG responses to individual synaptic inputs is larger, leading to substantial cancellation. Furthermore, the variability is highest close to the input region and decreases with distance from it. As a result, there is generally less kernel heterogeneity in the EEG kernels than in the LFP kernels (Fig 6B-6D).
By choosing a set of kernels, first from the basal input case, which we treat as the "default case" (Fig 6C), and combining them with spike trains (see Methods), we can calculate the LFP signal by convolving each individual kernel (Fig 7A, gray curves) with its corresponding spike train (Fig 7B, individual spike trains in gray) and summing the results over all single-cell contributions (Fig 7C, gray curves). This is what we treat as ground truth in the following analysis. Further, we convolve the population kernel (Fig 7A, black curves) with the population rate (Fig 7B, black line) to obtain the population kernel LFP (Fig 7C, black curves). For brevity, we first focus on the LFP signal, but the general results also apply to EEG signals, to which we return later.
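The two constructions compared here can be sketched in a few lines of NumPy. The exponential kernels and Bernoulli spike trains below are illustrative stand-ins, not the biophysical kernels of Fig 7:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_t, n_k = 100, 2000, 200

# Toy single-cell kernels: a common exponential shape scaled by a
# heterogeneous per-cell amplitude (illustrative assumption).
amps = rng.normal(1.0, 0.3, size=(n_cells, 1))
kernels = amps * np.exp(-np.arange(n_k) / 20.0)

# Independent Poisson-like spike trains, one row per presynaptic neuron.
spikes = (rng.uniform(size=(n_cells, n_t)) < 0.01).astype(float)

# Ground truth: convolve each cell's kernel with its own spike train, then sum.
ground_truth = np.sum(
    [np.convolve(spikes[i], kernels[i])[:n_t] for i in range(n_cells)], axis=0)

# Kernel method: one convolution of the averaged kernel with the summed rate.
pop_kernel = kernels.mean(axis=0)
pop_rate = spikes.sum(axis=0)
prediction = np.convolve(pop_rate, pop_kernel)[:n_t]
```

The kernel-method line replaces one convolution per neuron with a single convolution per population, which is the source of the computational savings discussed in the Abstract.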
To evaluate the accuracy of the population kernel approach in approximating the ground truth, we compare the two LFP signals (Fig 7C, black versus gray curves). We calculate the observed relative error (see Methods), compare it to the relative error predicted from theory, and find them to be almost indistinguishable, demonstrating that the theory is well suited to predict the error (Fig 7D).
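For concreteness, a simple observed relative error can be computed as the standard deviation of the residual normalized by that of the ground truth. This is one common choice; the exact metric used in this study is defined in Methods:

```python
import numpy as np

def relative_error(ground_truth, prediction):
    """Relative error as the standard deviation of the residual normalized by
    the standard deviation of the ground truth (one common definition; see
    Methods for the metric used in the paper)."""
    residual = prediction - ground_truth
    return residual.std() / ground_truth.std()
```

A perfect prediction gives 0, and a prediction scaled by 1.1 gives a relative error of 0.1 for a zero-mean signal.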
We can now evaluate the error of the kernel approach for different parameters of the kernels. To evaluate the relative importance of different factors, we compare different parameter configurations to the “default case” shown in Figs 6C and 7. We start with uncorrelated Poisson spike trains. In the following analysis, we will only show LFP amplitudes and errors but kernels from all tested parameter combinations (see Methods, Table 1) and resulting LFPs are shown in S1 Fig.
A: The LFP kernels at different depths (see Fig 6A) with each single-cell kernel in gray and the population kernel in black. The kernels shown here are from the "default" case, corresponding to Fig 6C. B: Raster plot of uncorrelated spike trains (see Methods) with a firing rate of 10 s⁻¹. Below the spikes, the population firing rate (constructed by summing all individual spike trains) is shown in black. C: The ground truth LFP signal (gray), the population kernel LFP signal (black), and the difference between them (red), at different depths. D: The relative error at different depths (see Methods), either observed from simulations (solid curve) or predicted from theory (dotted curve).
For basal or apical synaptic input (Fig 8A1, black or brown curves), the ground truth and the population kernel LFP give indistinguishable predictions for the signal amplitude at different depths (the signal amplitude is here represented by the signal standard deviation). This is not the case for the uniformly distributed synaptic input (Fig 8A1, purple curves), which has a much lower amplitude, and a pronounced difference between the ground truth and the population kernel LFP. This is reflected in the error (Fig 8A2) and the relative error (Fig 8A3), where we observe very high relative errors at all depths for the uniform input, and substantially lower errors for apical or basal input. Furthermore, for the latter two cases, the error decreases with distance from the input site. This is in agreement with our earlier observations regarding the kernels (Fig 6B-6E). Notice also that the observed error (Fig 8A2-8A3, solid curves) and the error predicted from theory (Fig 8A2-8A3, dotted curves) closely overlap, illustrating again that the theory accurately predicts the error.
For uncorrelated Poisson input with a rate of 10 s⁻¹ (see Methods), the figure shows the standard deviation of the LFP at different depths (column 1), and the absolute (column 2) and relative error (column 3) from using the population kernel, for different modifications of the original parameter set ("default"). Each row corresponds to varying a certain feature; see Methods and Table 1 for a full description of the different parameter configurations. A: Three different synaptic input regions, that is, apical dendrites, basal dendrites ("default"), or uniformly distributed over the whole cell. B: Three different numbers of postsynaptic targets Kout (outdegree), including half and double of the default value of 500. C: Three different levels of variability of synaptic parameters, including half and double the parameter values used for controlling the variability of the synaptic time constants, the synaptic delays, and the synaptic weights. D: Three different standard deviations of the normal distribution used when drawing synaptic locations, including half and double the default value of 100 μm. E: Three different population radii, including half and double the default radius of 250 μm.
Intuitively, we would expect the number of postsynaptic targets per neuron, Kout, to strongly affect the signal amplitude and the error, since more postsynaptic targets can be expected to increase the amplitude and decrease the variability of the kernels. The reason for this low variability is that each single-cell kernel corresponds to a sum of many extracellular potential responses. These are all "activated" simultaneously by the incoming spike, such that differences between the individual responses to some degree average out. As a consequence, we would expect the population kernel prediction to become significantly worse if neuronal out-degrees are small. This is indeed the case if we reduce the out-degree Kout towards lower values (Fig 8B1-8B3). A theoretical analysis confirms that the relative error decreases with increasing Kout (see Methods).
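The averaging argument can be illustrated numerically: when each single-cell kernel is the sum of Kout independent single-synapse responses, the relative spread of kernel amplitudes shrinks as Kout grows. Lognormal response amplitudes are an illustrative assumption here, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_amplitude_cv(k_out, n_kernels=2000):
    """Coefficient of variation of single-cell kernel amplitudes when each
    kernel is the sum of k_out independent single-synapse responses
    (lognormal response amplitudes are an illustrative assumption)."""
    responses = rng.lognormal(mean=0.0, sigma=0.5, size=(n_kernels, k_out))
    amplitudes = responses.sum(axis=1)
    return amplitudes.std() / amplitudes.mean()

# Quadrupling the out-degree roughly halves the spread (~1/sqrt(k_out)).
cvs = [kernel_amplitude_cv(k) for k in (125, 500, 2000)]
```

For independent responses the coefficient of variation of the sum scales as 1/sqrt(Kout), consistent with the trend in Fig 8B.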
The synaptic parameters we consider are the synaptic weight, the synaptic time constant, and the synaptic delay. The synaptic weights are lognormally distributed in analogy to Hagen et al. [2016; 2022] [23,29], while the synaptic time constants and delays are normally distributed. As predicted by the theory (see Methods), decreasing or increasing the standard deviations of these distributions by a factor of two has a negligible effect on signal amplitudes (Fig 8C1), while the error increases with increasing variability (Fig 8C2-8C3).
The spatial spread of the synaptic input is seen to have an important effect on both the signal amplitudes (Fig 8D1) and the errors (Fig 8D2-8D3), where a broader region of input gives a much weaker signal and much larger relative errors, similarly to what we saw for the uniformly distributed synaptic input (which can be seen as an extreme case of a broad input region, Fig 6D and 6E).
When the postsynaptic cells are more spatially concentrated, within half the original radius, we find a larger LFP amplitude in the center of the population as expected (Fig 8E1). The relative error is however only weakly affected (Fig 8E3).
2.4. Sources and effect of spike correlations
To evaluate the error of the kernel approach, we also need to consider the effect of different types of spiking statistics, with different levels of correlation. To this end, we employ the same setup described in the previous subsection but replace the uncorrelated Poissonian input spikes with spike trains generated by two different methods. In a first approach, we create spike trains as realizations of a Multiple Interaction Process (MIP; [42]) with firing rate ν and fraction f of shared spikes, which determines the pairwise correlation coefficient. With this model, the spike-train auto- and cross-covariances can easily be controlled, but they are delta-shaped and thus rather artificial (see Appendix B). As an alternative approach, we employ a recurrent point-neuron network model of excitatory and inhibitory neurons ("Brunel network"; [45]) that can operate in different dynamical regimes and thereby produce spike trains with a more natural correlation structure. Here, we use the same parameters and corresponding network states described in Brunel [2000] [45], and extract spikes from the asynchronous irregular (AI; Fig 9C) and the slow synchronous irregular regime (SI slow; Fig 9D).
A, B: Spiking activity generated by Multiple Interaction Processes (MIP; [42]) with fixed firing rate and correlation coefficients (A) and 0.01 (B). C, D: Spiking activity generated by a recurrent network of point neurons [45] operating in the asynchronous irregular ("AI"; C) and in the slow synchronous irregular regime ("SI slow"; D). Top panels: Raster plots for 100 exemplary neurons. Bottom panels: Normalized spike-train auto- (black) and cross-covariance (gray) functions. The depicted curves represent population-averaged correlations obtained from binned spike trains of an ensemble of 100 neurons, with an observation time of 15.5 s and a bin size of 2⁻⁴ ms. See Methods for details on the spike-generation models and parameter values.
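A minimal MIP generator following the standard construction (a mother Poisson process thinned independently into child trains; the function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def mip_spike_trains(n_trains, rate, f, t_max):
    """Multiple Interaction Process (standard construction, cf. [42]): spikes
    of a mother Poisson process with rate rate/f are copied independently into
    each child train with probability f, so that each child train has firing
    rate `rate` and shares a fraction f of its spikes with any other train."""
    n_mother = rng.poisson(rate / f * t_max)
    mother = np.sort(rng.uniform(0.0, t_max, size=n_mother))
    return [mother[rng.uniform(size=n_mother) < f] for _ in range(n_trains)]

trains = mip_spike_trains(n_trains=100, rate=10.0, f=0.1, t_max=50.0)
```

Because shared spikes are exact copies of mother spikes, the resulting cross-covariances are delta-shaped, as noted in the text.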
In the parameter configurations discussed above, we used uncorrelated spike trains. However, as discussed earlier, the spike-train correlation will also affect the error (equation (7)). We therefore combine the kernels from the default case used above with spike trains exhibiting different levels of correlation, including those illustrated in Fig 9. The amplitude of the LFP is highly dependent on the spike trains, and for the MIP spike trains the amplitude increases with both firing rate and correlation (Fig 10A).
The absolute errors from the MIP spike trains appear roughly independent of the correlation but dependent on the firing rate (Fig 10B and 10D), while the relative errors are instead independent of the firing rate but dependent on the correlation (Fig 10C and 10E). The relative error is independent of the firing rate ν because the spike-train auto- and cross-covariances both scale linearly with ν. In consequence, both the absolute squared error (equation (7)) and the ground-truth variance (Eq (A.3)) are proportional to ν. The correlation dependence is also confirmed by theory (see Appendix B) and in line with earlier observations in Fig 5B, where we saw in a toy model that the absolute error only depends on the correlation for very high levels of correlation (f > 0.1).
The lowest relative error is obtained for the Brunel SI slow state, as expected from its highly correlated spiking activity.
A: Dependence of the LFP amplitude on the recording depth for various presynaptic spike-train ensembles generated by the MIP and by the Brunel network model (see legend and main text), for the kernels corresponding to the default case. MIP spike trains are characterized by the firing rate ν and the fraction f of shared spikes, which sets the spike-train correlation coefficient. Brunel spike trains are obtained from the "AI" and the "SI slow" regime. Solid and dashed lines refer to the ground-truth signal and the LFP predicted by the kernel method, respectively. B: Same as panel A, but showing the absolute prediction error. Solid and dotted lines represent results obtained from simulations and theory, respectively. C: Same as panel B, but showing the relative prediction error. D: The maximum error across depth for MIP spike trains with different firing rates ν and correlations f. E: Same as panel D but showing the maximum relative error across depth.
2.5. Combined effect of kernel heterogeneity and spike-train correlations
We summarize the results in Fig 11A and 11B, which combine different kernel parameters with different types of spiking activity. If we start by focusing on the kernel parameters (rows), we see that in all cases, uniform synaptic input gives low signal amplitudes and large relative errors. The next highest relative errors are for the case with the broader synaptic input region, which together with the uniform input case demonstrates the importance of the spatial spread of the synaptic input. The lowest relative errors are observed for the large out-degree (large Kout), followed by the narrow input region. If we instead focus on the different types of spiking activity (columns), we see that the lowest relative error is for Brunel SI slow, while the highest relative error is from the uncorrelated MIP processes.
A convenient rule-of-thumb emerges from the results discussed above: The relative error associated with applying the population kernel method is in general inversely proportional to the signal amplitude (Fig 11C). This is an important insight because it means that we can expect the population kernel approach to work best for the synaptic pathways that dominate the LFP signal, and worst for the synaptic pathways that have a weak LFP contribution. Note that this relationship also holds for the EEG signal, where the error is also substantially lower (Fig 11C, gray dots). As an illustrative example, it has been argued that the LFP and EEG signals are mainly driven by perisomatic inhibitory input to pyramidal cells [23,24,29,32,33], while excitatory input to pyramidal cells is less important, as it is more uniformly distributed across the postsynaptic pyramidal cells and therefore gives a relatively weak contribution to the LFP/EEG signal. In this case, we would also expect a large relative error for the excitatory-to-excitatory pathway, but since this synaptic pathway is in this case only associated with a minor LFP/EEG contribution, the high relative error might be acceptable.
A: Maximum ground truth LFP amplitude across depths for different combinations of kernel parameters (rows) and spiking activity (columns). B: Maximum simulated relative errors across depths, for different combinations of kernel parameters (rows) and spiking activity (columns). The rows and columns are sorted so the largest relative errors are in the bottom left, while the lowest relative errors are in the top right. C: The relative error as a function of the signal amplitude for the LFP signal (black dots), and for the EEG signal (gray dots) for all parameter combinations shown in panels A and B. Since the EEG signal intrinsically has a much lower signal amplitude, the LFP and EEG signals are normalized by the maximum observed signal amplitude seen in either of the two signals, so they are easier to visually compare. The dashed line is a visual guideline corresponding to a perfect inverse correlation.
2.6. Application to firing rate models
The kernels considered so far correspond to the kernels from a single synaptic pathway (see Appendix D for a discussion on multiple pathways). Given some prior knowledge or reasonable estimation of synaptic parameters, and how synapses are distributed on postsynaptic neurons, approximate kernels can be derived and used also for firing rate models.
To illustrate its applicability, we here choose a simple population rate model of the form [46,47]

τ ṙ = Δ/(πτ) + 2 r v,
τ v̇ = v² + η + J τ r + I(t) − (π τ r)²,    (8)

where r and v are the firing rate and membrane potential, respectively, and τ is the membrane time constant. The model is particularly interesting in the context of multi-scale modeling as it has been shown to be an exact macroscopic description of the average dynamics of a population of all-to-all coupled excitatory quadratic integrate-and-fire (QIF) neurons [46]. The other parameters J, η and Δ are derived from the microscopic definition of the QIF network and describe the synaptic weight, and the center and half-width of a Lorentzian distribution of heterogeneous, quenched external inputs, respectively. This population rate model and its dynamic repertoire have been analyzed extensively over the past years with multiple extensions. These include the incorporation of multiple populations to model working memory [47], inhibitory coupling to produce theta-nested gamma oscillations [48], and sparse coupling and external fluctuations [49,50]. The basic model in equation (8) has been shown to produce a non-trivial transient oscillatory behavior upon stimulus-induced (I(t)) switching between two steady-state attractors (Fig 12A). Using the population kernel prediction such behavior can be modeled in terms of LFP and EEG (Fig 12C), providing the basis for comparisons of population rate model dynamics with experimentally obtained LFPs and EEGs.
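The QIF mean-field model of Montbrió et al. [46] can be integrated with a simple forward-Euler scheme; the parameter values below are illustrative, not those used for Fig 12:

```python
import numpy as np

def simulate_rate_model(j=15.0, eta=-5.0, delta=1.0, tau=0.02,
                        t_max=2.0, dt=1e-4, stim=None):
    """Forward-Euler integration of the QIF mean-field (rate) model of
    Montbrio et al. [46]. Parameter values here are illustrative only;
    `stim` is an optional function t -> I(t)."""
    n = int(t_max / dt)
    r, v = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        i_ext = stim(k * dt) if stim is not None else 0.0
        drdt = delta / (np.pi * tau) + 2.0 * r[k] * v[k]
        dvdt = v[k] ** 2 + eta + j * tau * r[k] + i_ext - (np.pi * tau * r[k]) ** 2
        r[k + 1] = r[k] + dt * drdt / tau
        v[k + 1] = v[k] + dt * dvdt / tau
    return r, v

rates, potentials = simulate_rate_model()
```

Passing a square-pulse `stim` function reproduces the stimulus-induced switching behavior discussed in the text.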
For firing-rate models with different populations we can combine different kernels for different synaptic pathways (see Appendix D). As an example, for an inhibitory-to-excitatory pathway, we could choose a sign-inverted (to change from excitatory to inhibitory input currents) version of the "default" case kernel, to represent perisomatic inhibitory input. Likewise, for an excitatory-to-excitatory pathway, we could use the kernel from uniform synaptic input. All kernels constructed in this study are available online (see Methods), and can in principle easily be modified to accommodate different scenarios.
A: Stimulus-induced switching dynamics of the rate model described by equation (8), with Δ = 2, η = −10, τ = 100 ms, and remaining parameters as in [47]. The stimulus I(t) is a square pulse with an amplitude of 4, a delay of 1 s, and a duration of 3 s, resulting in switching dynamics similar to what was observed by Montbrió et al. [2015, Fig 2(a)] [46]. B: Population kernel for the "default" case (basal input) of the setup introduced in Sect 2.3 (see Fig 6C). C: Transient behavior as observed in the population kernel LFP and EEG signals, calculated by convolving the population rate in panel A with the kernels in panel B. Before the convolution with the LFP/EEG kernel, the population rate is transformed from units of hertz to units of spikes/Δt, and scaled by the considered size of the presynaptic population, which was in this case 10,000 [46].
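The unit conversion described in this caption (hertz to expected spikes per time step, scaled by the presynaptic population size, then convolved with the kernel) can be sketched as follows; the function name is ours:

```python
import numpy as np

def rate_to_lfp(rate_hz, kernel, dt, n_presyn):
    """Convert a population firing rate in hertz to the expected population
    spike count per time step of width dt (seconds), scale by the presynaptic
    population size, and convolve with a rate-to-LFP kernel."""
    spikes_per_bin = rate_hz * dt * n_presyn  # hertz -> spikes per bin, times population size
    return np.convolve(spikes_per_bin, kernel)[:len(rate_hz)]
```

For a constant 10 Hz rate, dt = 1 ms, and 10,000 presynaptic neurons, each bin carries 100 expected spikes, and the steady-state LFP equals 100 times the kernel's integral.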
3. Discussion
3.1. Summary
In this study, we have attempted to illustrate what the kernel approach is (Figs 2 and 3), and built an intuition for when we can expect it to be applicable (Figs 4 and 5). We further developed a mathematical framework to analyze the expected error of the kernel approach and showed that it was capable of accurately predicting the observed errors (Fig 5). From equation (7) we saw that the error was dependent on both the single-cell kernel heterogeneity and the level of correlation between spike trains.
Since LFP, EEG, and MEG signals are, at least in the cortex and in the hippocampus, expected to primarily originate from synaptic input to populations of pyramidal cells, we built a biophysically detailed model population receiving different types of synaptic input, where the individual parameters could be easily adjusted (Fig 6). We then combined these kernels with different types of spiking activity with varying levels of firing rates and correlations (Fig 9). This allowed us to assess how the error introduced by the population kernel approach was affected by different parameter choices for the kernels (Fig 8) and spiking activity (Fig 10).
The results show that the relative error of using the kernel approach will be lowest for the strong signal contributions (e.g., spatially clustered synaptic input and high levels of correlations), and highest for the weak signal contributions (e.g., uniformly distributed synaptic input and low levels of correlations; Fig 11). This implies that those scenarios where the population kernel prediction breaks down are less relevant when considering the total LFP/EEG signal: For cortical scenarios, the LFP/EEG is dominated by apical and basal inputs for which the population kernel prediction only yields a small relative error. Conveniently, this makes the kernel method particularly suited to study certain types of pathological neural activity, like epileptic seizures, which are characterized by highly correlated large-amplitude oscillations [28]. Note also that the same holds for LFP signals created by other morphological types of neurons: stellate cells and interneurons lack the asymmetry introduced by the apical dendrites in pyramidal cells. Unless asymmetry is introduced by synapse positions, their LFP contribution can therefore be assumed to resemble the uniform input scenario shown above. The population kernel prediction would break down for populations with symmetric morphologies and synapse distributions. However, their overall contribution to the measured LFP can be expected to be negligible in the presence of pyramidal-neuron LFP contributions. Furthermore, we show that the population kernel prediction becomes particularly accurate in cases where neurons have a large number of postsynaptic targets, as is for example the case in cortex [51].
In summary, these results demonstrate that the kernel approach is a promising method for calculating LFP, EEG, or MEG signals directly from firing rates, as also demonstrated in Fig 12.
3.2. Limitations
An important caveat of the present study is that we considered a fully linear scenario, with passive postsynaptic neurons and current-based synapses. This allowed us to treat the case, where each single-cell kernel was coupled to its corresponding spike train, as “ground truth”. We could then quantify the error of approximating the LFP/EEG directly from the population kernel and population firing rate. In assuming linearity, we are however ignoring several potentially important factors that may contribute to LFP and EEG signals.
Firstly, we ignored the extracellular action potentials (EAPs) that precede each single-cell kernel. Note that we could in principle easily have included these EAPs in the single-cell kernels by choosing a location for each presynaptic neuron and calculating the EAP at the recording electrodes from an action potential in the presynaptic neuron. EAPs can have amplitudes of several hundred microvolts if the soma is very close to a recording electrode, but the amplitude falls off rapidly with distance [5,52,53], and we would therefore expect a very high single-cell kernel heterogeneity in these EAP contributions. We therefore do not expect that the population kernel would give accurate predictions of EAP contributions to LFP/EEG signals. However, at least for large cortical populations, we do not expect EAPs to be a major contributor to LFP and EEG signals [4,29,54], though the reader should keep in mind that any putative EAP contribution is neglected in this analysis.
Secondly, in assuming passive postsynaptic neurons, we neglected effects from subthreshold active conductances. It has been demonstrated in modeling studies that subthreshold active conductances can in certain cases be important in shaping the LFP [38,39], however, this effect can be taken into account also in linear models through linearization [29,38,39,55]. The effect of other types of non-linearities, such as dendritic action potentials, on the validity of the kernel method LFP estimates should be assessed in future studies.
Thirdly, we relied on current-based instead of conductance-based synapses. Since conductance-based synapses depend on the membrane potential and change the effective membrane conductance of the postsynaptic neurons, the LFP response to synaptic input will, for conductance-based synapses, depend on the ongoing synaptic input to the postsynaptic population. It was previously demonstrated by Hagen et al. [2022] [29], and further expanded upon by Meneghetti et al. [2024] [40], that the kernel approach can make accurate LFP predictions also for conductance-based synaptic input, by taking into account the postsynaptic membrane potential and the "background level" of synaptic input that each population receives. This is particularly important when considering large co-fluctuations in firing rates, which can lead to substantial non-linear effects that must be accounted for [40]. However, while using conductance-based synapses had an important effect on kernel amplitudes [29], it is not expected to have a strong effect on single-cell kernel heterogeneity. Therefore, the error analysis presented here is equally relevant to models using either current-based or conductance-based synapses for calculating kernels.
Also, our analysis here focuses on cortical networks where the LFP/EEG is dominated by inputs onto pyramidal neurons and other contributions are negligible. We show that the relative error of the population kernel method is in general small for large current dipoles, but sizable for overall small signals. It is therefore plausible that the kernel method will work less reliably in other brain regions such as basal ganglia, where there are no pyramidal neurons.
For simplicity, we here demonstrated the applicability of the population-kernel approach in the context of multi-scale modeling (Fig 12) by choosing a firing rate model that captures the macroscopic dynamics of fully connected excitatory networks of quadratic integrate-and-fire neurons [46,47]. Such firing rate models have been extended in various directions over the past years [48–50], in particular to account for multiple neuronal populations. The here presented approach can be readily extended to such multi-population networks to obtain predictions of their extracellular potential signatures.
3.3. Inference and approximation of population kernels
Population kernels were here constructed as the average of all single-cell kernels of a population of neurons. Single-cell kernels can be measured in experiments [24,33,34,56] and simulations [32,35], for example using microstimulation of individual neurons. This is, however, not experimentally feasible for a large number of neurons, and in simulations it is computationally expensive. Since the variability in single-cell LFP kernels is low in some scenarios, we can expect that an approximation of the population kernel based on single-cell kernels of a small subpopulation is still valid, and indeed Hagen et al. [2022] [29] demonstrated that population kernels could be accurately estimated based on a single biophysically detailed cell simulation.
A direct way to obtain population kernels is via simultaneous stimulation of the whole population of neurons, for example using optogenetic techniques. Also, population kernels can be inferred via deconvolution techniques [57] from given compound extracellular potentials and population rates. This procedure, however, relies on the fact that those neurons from which spike trains are recorded are those with the dominant single-cell kernels. If other populations of neurons from which no spikes are recorded contribute significantly to the extracellular potential, then the inferred population kernel is invalid. In the case of spike recordings from multiple populations, one can use the MIMO (multiple input - multiple output [58]) scheme for deconvolutions of the individual population kernels.
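A bare-bones sketch of such a kernel inference, assuming noise-free signals and using Wiener-style regularized Fourier deconvolution (the cited deconvolution techniques [57] are more elaborate; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def deconvolve_kernel(pop_rate, lfp, n_kernel, eps=1e-6):
    """Estimate a rate-to-LFP kernel from a population rate and the compound
    LFP via regularized Fourier deconvolution (simplified sketch)."""
    n = len(lfp)
    rate_f = np.fft.rfft(pop_rate, n)
    lfp_f = np.fft.rfft(lfp, n)
    # Wiener-style regularization avoids blow-up where the rate spectrum is small.
    kernel_f = lfp_f * np.conj(rate_f) / (np.abs(rate_f) ** 2 + eps)
    return np.fft.irfft(kernel_f, n)[:n_kernel]

# Sanity check on synthetic, noise-free data with a known kernel.
true_kernel = np.exp(-np.arange(100) / 20.0)
rate = rng.poisson(5.0, size=4000).astype(float)
rate[-100:] = 0.0  # keep the linear convolution free of circular wrap-around
lfp = np.convolve(rate, true_kernel)[:4000]
estimate = deconvolve_kernel(rate, lfp, n_kernel=100)
```

With recorded (noisy) signals, the regularization constant trades bias against noise amplification, and the caveat in the text applies: contributions from unrecorded populations end up absorbed into the inferred kernel.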
3.4. Definition of population
Typically, a population is defined via common input statistics and physiological parameters between neurons, such that output spiking statistics are similar. Here, we additionally require that the single-cell spike kernels of neurons in a population are similar. This includes similar postsynaptic targets and projection patterns, as well as similar passive properties of the postsynaptic targets. The definition of a population is thus based not only on incoming but also on outgoing connection statistics. What defines a population might also change dynamically: if the spike-train correlations between two populations are large, merging them into one population can still yield a good population-kernel prediction, even if they have different kernels.
3.5. Conclusion
As reviewed in the Introduction, several different approaches to calculate LFP/EEG/MEG signals from point-neuron or firing rate models have been suggested [19,20,22,24,26–28], but quantitative evaluations of the accuracy of these approaches have often been lacking. Here, we have presented a thorough analysis of how the kernel method works, and when we can expect it to be a good approximation. Our results further establish the kernel approach as a promising method for calculating brain signals from large-scale neural simulations, and we hope that the kernel approach can therefore be used with more confidence.
4. Methods
4.1. Forward modeling
The calculation of the extracellular potential from neural simulations was done using a well-established forward-modeling scheme based on electrostatics with current sources computed via solving the membrane potential dynamics of each cell given all its inputs [5]. The neural simulations were controlled through LFPy 2.3 [6], running on top of NEURON 8.2 [59]. The calculation of extracellular potentials relied on the built-in functionality in LFPy, where the line-source approximation [2,5] for an infinite, homogeneous volume conductor was used. The extracellular potential was calculated at 16 different depths, with an inter-electrode distance of 100 μm. The extracellular conductivity was set to 0.3 S/m [60].
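For orientation, the forward model can be illustrated with its point-source version. The paper uses the more accurate line-source approximation, which distributes each segment's current along a line, but the point-source formula φ = I/(4πσr) shows the same 1/r scaling with the same conductivity:

```python
import numpy as np

SIGMA = 0.3  # extracellular conductivity in S/m, as used in the simulations

def point_source_potential(source_pos, source_current, electrode_pos):
    """Extracellular potential phi = I / (4*pi*sigma*r) summed over point
    current sources in an infinite homogeneous volume conductor
    (point-source stand-in for the line-source approximation)."""
    # Distances from every electrode to every source, shape (n_elec, n_src).
    diff = electrode_pos[:, None, :] - source_pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (source_current[None, :] / (4.0 * np.pi * SIGMA * r)).sum(axis=-1)

# 16 electrodes with 100 um inter-electrode distance along the depth axis (meters).
electrodes = np.array([[0.0, 0.0, -100e-6 * k] for k in range(16)])
phi = point_source_potential(np.array([[0.0, 0.0, -550e-6]]),
                             np.array([1e-9]), electrodes)  # one 1 nA source
```

Summing over all transmembrane currents of all compartments, with the constraint that each neuron's currents sum to zero, yields the compound LFP.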
4.1.1. Calculating EEG signals.
Current dipole kernels were calculated from the neural simulations using built-in functionality in LFPy [6]. Such current dipoles could in principle be used with arbitrarily simple or detailed head models [5,7] to calculate EEG signals. Here, we used the simple four-sphere head model [61] implemented in LFPy.
4.2. Toy model for spike-LFP kernel
The spike-LFP kernels from the toy model (Figs 3, 4 and 5) were double-exponential functions with fixed rise and decay time constants, which varied only in amplitude. The implementation was equivalent to the "Exp2Syn" mechanism in NEURON and given by

k(t) = A (e^(−t/τ_decay) − e^(−t/τ_rise)) / max_t (e^(−t/τ_decay) − e^(−t/τ_rise)),

i.e., the double exponential is normalized such that its peak value equals the amplitude A. The mean amplitude was always 1.0 μV, while the standard deviation of the amplitude was varied as detailed in the individual figures. The time resolution of the simulations was 0.1 ms.
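A sketch of such a peak-normalized double-exponential kernel; the normalization mirrors NEURON's Exp2Syn mechanism, and the rise/decay values below are illustrative:

```python
import numpy as np

def double_exp_kernel(t, amp, tau_rise, tau_decay):
    """Peak-normalized double exponential: `amp` is the maximum of the
    kernel, mirroring the normalization of NEURON's Exp2Syn mechanism."""
    shape = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    # Analytic peak time of the difference of two exponentials.
    t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)) * np.log(tau_decay / tau_rise)
    peak = np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise)
    return amp * shape / peak

t = np.arange(0.0, 50.0, 0.1)  # ms, matching the 0.1 ms simulation resolution
kernel = double_exp_kernel(t, amp=1.0, tau_rise=1.0, tau_decay=5.0)
```

Scaling `amp` with random draws (mean 1.0, varying standard deviation) reproduces the amplitude-only heterogeneity of the toy-model kernels.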
4.3. Biophysically detailed simulations
Biophysically detailed neural simulations are highly parameter dependent, and we aimed to keep our results generic rather than focusing on a particular system. To test ranges of values, we chose to compare default values (see below) with half and twice the default value, as this results in a fairly broad range independent of the units or magnitude of the original parameter. The default values are meant to be plausible, but are also chosen to introduce substantial kernel heterogeneity.
We used the rat cortical layer 5 pyramidal cell model from Hay et al. [2011] [43], where all active conductances were removed to make the cell passive [38,39]. We used current-based synaptic input with exponential decay. For calculating single-cell spike-LFP kernels, we generated for each presynaptic neuron j a population of N (default value: 500) postsynaptic instances of the pyramidal cell model. The cells were aligned with and randomly rotated around the z-axis, and the z-positions of the somas were drawn from a capped normal distribution (mean: −1270 μm, SD: 100 μm). The cap was introduced to avoid neurons protruding out of the cortex, and the standard deviation of 100 μm was chosen to roughly reflect that layer 5 in rats is a few hundred micrometers thick [62]. The somas were uniformly distributed in the xy-plane within a radius R, with a default value of 250 μm, similar to the cortical column radius used by Markram et al. [2015] [62]. Each postsynaptic neuron i had a single synapse with weight J_ij drawn from a lognormal distribution, calculated through scipy.stats.lognorm (mean: 0.1 nA, default s-value: 0.4; see the scipy.stats.lognorm documentation). The spatial distribution of the synapses in the depth direction was drawn from all available locations on the cells, but weighted by a normal distribution in the depth direction, with a default mean of −1270 μm and a default standard deviation of 100 μm to represent synaptic input to layer 5. The synaptic delays were drawn from a normal distribution (mean: 1 ms, default SD: 0.2 ms), similar to values used by Hagen et al. [2016] [23]. The same default parameters were used for the synaptic time constants.
The default values of the parameters as well as the different variations tested in this study are listed in Table 1.
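Interpreting the s-value above as the dimensionless shape parameter of scipy.stats.lognorm, synaptic weights with a given mean can be drawn with the Python standard library alone. This is a sketch, not the paper's code; the scale is chosen so that the distribution mean equals the target:

```python
import math
import random

def synaptic_weights(n, mean=0.1, s=0.4, seed=1):
    """Draw n lognormal synaptic weights (nA) with the given mean and shape s.

    Intended to match scipy.stats.lognorm(s=s, scale=mean * exp(-s**2 / 2)):
    a lognormal with underlying normal parameters (mu, s) has mean
    exp(mu + s**2 / 2), so mu = log(mean) - s**2 / 2 gives E[w] = mean.
    """
    mu = math.log(mean) - 0.5 * s * s
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, s) for _ in range(n)]
```

Note that the shape parameter s is dimensionless; only the mean carries the unit (nA).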
4.4. Synthetic spike-trains with correlations
Synthetic spike trains with varying levels of correlation and firing rates were generated through the Multiple Interaction Process (MIP) [42]. Here, a “mother spike train” was first generated with the same firing rate as the target spike trains; its spike times were generated through a homogeneous Poisson process using Elephant [63]. For each “child spike train”, a fraction f of the spikes were randomly selected from the mother spike train, while the remaining spikes were generated through independent homogeneous Poisson processes. The parameter f thus varies between 0 and 1: f = 0 corresponds to uncorrelated homogeneous Poisson processes, while f = 1 corresponds to fully correlated (identical) spike trains. Since each mother spike is copied with probability f² into any given pair of child spike trains, the correlation coefficient of the child spike trains is c = f² (for details see Appendix B).
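A minimal discrete-time sketch of this construction (our own illustration; the bin width, rate, and duration below are arbitrary) reproduces the expected pairwise correlation c = f²:

```python
import random

def mip_spike_trains(n_children, f, rate, dt, n_steps, seed=2):
    """Discrete-time sketch of the Multiple Interaction Process (MIP).

    In every time bin the mother train fires with probability rate*dt; each
    child copies a mother spike with probability f, and otherwise receives
    independent Poisson spikes at rate (1 - f)*rate, so every child has the
    same mean rate as the mother.
    """
    rng = random.Random(seed)
    p = rate * dt
    children = [[0] * n_steps for _ in range(n_children)]
    for step in range(n_steps):
        mother = rng.random() < p
        for c in children:
            if (mother and rng.random() < f) or rng.random() < (1 - f) * p:
                c[step] = 1
    return children

def corr(x, y):
    """Pearson correlation coefficient of two equal-length count sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5
```

For f = 0.5, for example, the count correlation between two child trains comes out near 0.25.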
4.5. Error measures
We define the absolute squared error of the population kernel signal Ṽ(r, t) in approximating the ground truth signal V(r, t) as

ε²(r) = Var_t[V(r, t) − Ṽ(r, t)],

where Var_t denotes the variance across time, computed separately for each electrode position r. The relative squared error is defined as

ε²_rel(r) = ε²(r) / max_r′ Var_t[V(r′, t)],

that is, the absolute squared error at each electrode, normalized by the maximum (over the electrodes) variance of the ground truth signal. The error is normalized by the largest value of the ground truth signal variance because the ground truth signal will often have electrodes with very near-zero signal amplitudes, and therefore very high, but irrelevant, relative errors. In the case of the toy model (Fig 10), which is agnostic to spatial positions, the normalization does not involve a maximum over the electrodes.
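In code, both measures reduce to a few lines (a sketch with our own function names; signals are arrays of shape (n_electrodes, n_times)):

```python
import numpy as np

def squared_errors(V, V_approx):
    """Absolute and relative squared errors of the population-kernel signal.

    V, V_approx : arrays of shape (n_electrodes, n_times)
    Returns     : per-electrode absolute and relative squared errors
    """
    # variance across time of the difference signal, per electrode
    abs_err = np.var(V - V_approx, axis=1)
    # normalize by the largest ground-truth variance over electrodes
    rel_err = abs_err / np.var(V, axis=1).max()
    return abs_err, rel_err
```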
4.6. Firing rates
Firing rates are constructed from spike trains by counting the number of spikes within time steps of length Δt, and normalizing by Δt. For kernels generated with the toy model (Figs 3-5), the time step duration is Δt = 0.1 ms, while for the simulations with biophysically detailed kernels (Figs 6-12), it is .
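This binned rate estimate is a one-liner with numpy (function name ours):

```python
import numpy as np

def firing_rate(spike_times, dt, t_max):
    """Population firing rate: spike counts per bin of width dt, divided by dt."""
    n_bins = int(round(t_max / dt))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    return counts / dt
```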
4.7. Point-neuron network simulation
The point-neuron network model was a random balanced network with delta synapses [45], based on the brunel_delta_nest.py example that comes with NEST. We used NEST 3.6 [64], with the same network parameters and network states as Brunel [2000] [45], that is, we extracted spikes from an asynchronous irregular (AI) regime (g = 5, η = 2.0, J = 0.1) [45, Fig 8C], and a slow synchronous irregular (SI) regime (g = 4.5, η = 0.9, J = 0.1) [45, Fig 8D]. The time resolution was 0.1 ms.
4.8. Mathematical derivation of error estimate
In simulations, we measure the squared error ε² of the population kernel method as the variance of the difference signal V(r, t) − Ṽ(r, t) across time. By definition, due to the time average ⟨·⟩_t, the error depends on the statistics of the spike trains s. In addition, it in principle depends on all details of the single-cell spike-LFP kernels k. Yet, for networks of biologically realistic size, the LFP is made up of many contributions, such that the squared error ε² will not vary much between different, statistically equivalent realizations of single-cell kernels. Therefore, the expectation ⟨ε²⟩_k across different realizations of single-cell kernels k can be assumed to be informative about the error ε² for one particular realization.
The expected squared error is ⟨ε²(r)⟩_k = ⟨Var_t[V(r, t) − Ṽ(r, t)]⟩_k. Inserting the definition of the ground truth LFP (equation (3)) and the population-kernel approximation (equation (6)) yields the error expression (equation (7)) of the main text (for details see Appendix A).
For the prediction of the relative squared error, we divide the error by the variance of the ground truth LFP, Var_t[V(r, t)]. On expectation, this time average can be calculated analogously to the error (for details see Appendix A). This allows us to obtain some intuition on the expected relative squared error
in relation to features of the single-cell kernels. To do so, we employ equation (4) and write the single-cell kernel in terms of individual extracellular potential responses, where we explicitly split the synaptic strength J_ij and the adjacency values from the impulse response h_ij(r, t). The latter characterizes the LFP response, measured at time t and location r, to a unit input arriving at the synapse between neurons i and j. The single-cell spike-LFP correlations can then be expressed in terms of Mean(J) and Var(J), denoting the mean and variance of the synaptic weights, the corresponding impulse-response statistics (products of impulse responses at the same synapse and at different synapses), and the out-degree K of the presynaptic neurons (see Appendix A). Interestingly, the error (square root of the numerator in equation (13)) scales, due to cancellations between the ground truth signal and the population-kernel approximation, as √K (Fig 8B2), while the signal standard deviation (square root of the denominator in equation (13)) scales as K (Fig 8B1), such that the relative error decreases with the out-degree K as 1/√K (Fig 8B3). Furthermore, the signal standard deviation is roughly independent of the variability in synaptic strengths (Fig 8C1). This variability Var(J) only enters the term of the signal variance that is proportional to K, and is thus subleading compared to the other terms, which are proportional to K². In the error, the terms proportional to K² exactly cancel, such that the error increases with larger variability in synaptic strengths (Fig 8C2). The closer the different synaptic locations k and l are (see narrow vs broad input region, or apical/default vs uniform), the larger the product of different impulse responses h_k(r, t) h_l(r, t + τ). Therefore, the signal standard deviation, which contains products of different impulse responses (see Appendix A), grows when synaptic locations become more similar (Fig 8A1, D1). In contrast, the error only depends on products of identical impulse responses (see Appendix A), and is therefore less sensitive to the width of the input region (Fig 8D2). Still, both types of impulse-response statistics depend strongly on the type of input region, leading to a strong dependence of both the signal standard deviation and the error on the input region (Fig 8A2). Also, both terms increase for smaller population radii, because the LFP-generating sources are then closer to the recording electrode. Therefore, both the signal standard deviation (Fig 8E1) and the error (Fig 8E2) increase with decreasing population radius.
Supporting information
S1 Appendix. Mathematical derivations.
Appendix A: Details for the mathematical derivation of the expected error between the ground truth LFP and the population kernel approximation. Appendix B: Details regarding the generation of MIP spike trains and their correlations. Appendix C: Extension to multiple populations.
https://doi.org/10.1371/journal.pcbi.1012303.s001
(PDF)
S1 Fig. All tested LFP kernels and the resulting LFP signals.
The LFP kernels for all tested parameter combinations in Table 1, and the resulting LFP signals (bottom row). The spike trains were from an uncorrelated Poisson process with a firing rate of .
https://doi.org/10.1371/journal.pcbi.1012303.s002
(PDF)
Acknowledgments
We are grateful to our colleagues in the NEST developer community for continuous collaboration. All network simulations were carried out with NEST (http://www.nest-simulator.org).
References
- 1. Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, et al. The scientific case for brain simulations. Neuron 2019;102(4):735–44. pmid:31121126
- 2. Holt GR, Koch C. Electrical interactions via the extracellular potential near cell bodies. J Comput Neurosci 1999;6(2):169–84. pmid:10333161
- 3. Einevoll GT, Kayser C, Logothetis NK, Panzeri S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 2013;14(11):770–85. pmid:24135696
- 4. Ness TV, Halnes G, Næss S, Pettersen KH, Einevoll GT. Computing extracellular electric potentials from neuronal simulations. Springer Cham; 2022. p. 179–99.
- 5. Halnes G, Ness TV, Næss S, Hagen E, Pettersen KH, Einevoll GT. Electric brain signals: Foundations and applications of biophysical modeling. Cambridge University Press; 2024.
- 6. Hagen E, Næss S, Ness TV, Einevoll GT. Multimodal modeling of neural network activity: computing LFP, ECoG, EEG, and MEG signals with LFPy 2.0. Front Neuroinform. 2018;12:92. pmid:30618697
- 7. Næss S, Halnes G, Hagen E, Hagler DJ Jr, Dale AM, Einevoll GT, et al. Biophysically detailed forward modeling of the neural origin of EEG and MEG signals. Neuroimage. 2021;225:117467. pmid:33075556
- 8. Reimann MW, Anastassiou CA, Perin R, Hill SL, Markram H, Koch C. A Biophysically detailed model of neocortical local field potentials predicts the critical role of active membrane currents. Neuron 2013;79(2):375–90. pmid:23889937
- 9. Tomsett RJ, Ainsworth M, Thiele A, Sanayei M, Chen X, Gieselmann MA, et al. Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): Comparing multi-electrode recordings from simulated and biological mammalian cortical tissue. Brain Struct Funct 2015;220(4):2333–53. pmid:24863422
- 10. Dai K, Gratiy SL, Billeh YN, Xu R, Cai B, Cain N, et al. Brain Modeling ToolKit: An open source software suite for multiscale modeling of brain circuits. PLoS Comput Biol 2020;16(11):e1008386. pmid:33253147
- 11. Baratham VL, Dougherty ME, Hermiz J, Ledochowitsch P, Maharbiz MM, Bouchard KE. Columnar localization and laminar origin of cortical surface electrical potentials. J Neurosci 2022;42(18):3733–48. pmid:35332084
- 12. Borges FS, Moreira JVS, Takarabe LM, Lytton WW, Dura-Bernal S. Large-scale biophysically detailed model of somatosensory thalamocortical circuits in NetPyNE. Front Neuroinform. 2022;16:884245. pmid:36213546
- 13. Rimehaug AE, Stasik AJ, Hagen E, Billeh YN, Siegle JH, Dai K, et al. Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex. Elife. 2023;12:e87169. pmid:37486105
- 14. Romani A, Antonietti A, Bella D, Budd J, Giacalone E, Kurban K, et al. Community-based reconstruction and simulation of a full-scale model of region CA1 of Rat Hippocampus. 2023.
- 15. Tharayil J, Isbister JB, Neufeld E, Reimann M. Computational modeling reveals biological mechanisms underlying the whisker-flick EEG. 2024.
- 16. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press; 2014. Available from: https://books.google.no/books?id=D4j2AwAAQBAJ
- 17. Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb Cortex 2014;24(3):785–806. pmid:23203991
- 18. Billeh YN, Cai B, Gratiy SL, Dai K, Iyer R, Gouwens NW, et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron. 2020;106:1–16.
- 19. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol 2008;4(8):e1000092. pmid:18769680
- 20. Sanz-Leon P, Knock SA, Spiegler A, Jirsa VK. Mathematical framework for large-scale brain network modeling in The Virtual Brain. Neuroimage. 2015;111:385–430. pmid:25592995
- 21. Destexhe A. Spike-and-wave oscillations based on the properties of GABAB receptors. J Neurosci 1998;18(21):9099–111. pmid:9787013
- 22. Mazzoni A, Lindén H, Cuntz H, Lansner A, Panzeri S, Einevoll GT. Computing the Local Field Potential (LFP) from integrate-and-fire network models. PLoS Comput Biol 2015;11(12):e1004584. pmid:26657024
- 23. Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, et al. Hybrid scheme for modeling local field potentials from point-neuron networks. Cereb Cortex 2016;26(12):4461–96. pmid:27797828
- 24. Telenczuk B, Telenczuk M, Destexhe A. A kernel-based method to calculate local field potentials from networks of spiking neurons. J Neurosci Method. 2020;344:108871. pmid:32687850
- 25. Martínez-Cañada P, Ness TV, Einevoll GT, Fellin T, Panzeri S. Computation of the electroencephalogram (EEG) from network models of point neurons. PLoS Comput Biol 2021;17(4):e1008893. pmid:33798190
- 26. Glomb K, Cabral J, Cattani A, Mazzoni A, Raj A, Franceschiello B. Computational models in electroencephalography. Brain Topogr 2022;35(1):142–61. pmid:33779888
- 27. Tesler F, Tort-Colet N, Depannemaecker D, Carlu M, Destexhe A. Mean-field based framework for forward modeling of LFP and MEG signals. Front Comput Neurosci. 2022;16:968278. pmid:36313811
- 28. Stenroos P, Guillemain I, Tesler F, Montigon O, Collomb N, Stupar V, et al. EEG-fMRI in awake rat and whole-brain simulations show decreased brain responsiveness to sensory stimulations during absence seizures. Elife. 2024;12:RP90318. pmid:38976325
- 29. Hagen E, Magnusson SH, Ness TV, Halnes G, Babu PN, Linssen C, et al. Brain signal predictions from multi-scale networks using a linearized framework. PLoS Comput Biol 2022;18(8):e1010353. pmid:35960767
- 30. Lindén H, Tetzlaff T, Potjans TC, Pettersen KH, Grün S, Diesmann M, et al. Modeling the spatial reach of the LFP. Neuron 2011;72(5):859–72. pmid:22153380
- 31. Kajikawa Y, Schroeder CE. How local is the local field potential?. Neuron 2011;72(5):847–58. pmid:22153379
- 32. Teleńczuk M, Teleńczuk B, Destexhe A. Modelling unitary fields and the single-neuron contribution to local field potentials in the hippocampus. J Physiol 2020;598(18):3957–72. pmid:32598027
- 33. Teleńczuk B, Quyen MLV, Cash SS, Hatsopoulos NG, Destexhe A, Neurosciences D, et al. Local field potentials primarily reflect inhibitory neuron activity in human and monkey cortex. Sci Rep 2017;7(40211):1–16.
- 34. Swadlow HA, Gusev AG, Bezdudnaya T. Activation of a cortical column by a thalamocortical impulse. J Neurosci 2002;22(17):7766–73. pmid:12196600
- 35. Hagen E, Fossum JC, Pettersen KH, Alonso J-M, Swadlow HA, Einevoll GT. Focal local field potential signature of the single-Axon Monosynaptic Thalamocortical Connection. J Neurosci 2017;37(20):5123–43. pmid:28432143
- 36. Skaar JEW, Stasik AJ, Hagen E, Ness TV, Einevoll GT. Estimation of neural network model parameters from local field potentials (LFPs). PLoS Comput Biol 2020;16(3):e1007725.
- 37. Miceli S, Ness V, Einevoll G, Schubert D. Impedance spectrum in cortical tissue: Implications for propagation of LFP signals on the microscopic level. eNeuro. 2017;4(1):1–15.
- 38. Ness TV, Remme MWH, Einevoll GT. Active subthreshold dendritic conductances shape the local field potential. J Physiol 2016;594(13):3809–25. pmid:27079755
- 39. Ness TV, Remme MWH, Einevoll GT. h-Type membrane current shapes the local field potential from populations of pyramidal neurons. J Neurosci 2018;38(26):6011–24. pmid:29875266
- 40. Meneghetti N, Rimehaug AE, Einevoll GT, Mazzoni A, Ness TV. Kernel-based LFP estimation in detailed large-scale spiking network model of mouse visual cortex. bioRxiv. 2024. https://doi.org/10.1101/2024.11.29.626029
- 41. Lindén H, Pettersen KH, Einevoll GT. Intrinsic dendritic filtering gives low-pass power spectra of local field potentials. Springer Science+Business Media; 2010. p. 29.
- 42. Kuhn A, Aertsen A, Rotter S. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput 2003;15(1):67–101. pmid:12590820
- 43. Hay E, Hill S, Schürmann F, Markram H, Segev I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput Biol 2011;7(7):e1002107. pmid:21829333
- 44. Binzegger T, Douglas RJ, Martin KAC. A quantitative map of the circuit of cat primary visual cortex. J Neurosci 2004;24(39):8441–53. pmid:15456817
- 45. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 2000;8(3):183–208. pmid:10809012
- 46. Montbrió E, Pazó D, Roxin A. Macroscopic description for networks of spiking neurons. Phys Rev X. 2015;5(2).
- 47. Schmidt H, Avitabile D, Montbrió E, Roxin A. Network mechanisms underlying the role of oscillations in cognitive tasks. PLoS Comput Biol 2018;14(9):e1006430. pmid:30188889
- 48. Segneri M, Bi H, Olmi S, Torcini A. Theta-nested gamma oscillations in next generation neural mass models. Front Comput Neurosci. 2020;14:47. pmid:32547379
- 49. Goldobin DS, di Volo M, Torcini A. Reduction methodology for fluctuation driven population dynamics. Phys Rev Lett 2021;127(3):038301. pmid:34328756
- 50. di Volo M, Segneri M, Goldobin DS, Politi A, Torcini A. Coherent oscillations in balanced neural networks driven by endogenous fluctuations. Chaos 2022;32(2):023120. pmid:35232059
- 51. Abeles M. Corticonics: Neural circuits of the cerebral cortex. Cambridge University Press; 1991.
- 52. Pettersen KH, Einevoll GT. Amplitude variability and extracellular low-pass filtering of neuronal spikes. Biophys J 2008;94(3):784–802. pmid:17921225
- 53. Hagen E, Ness TV, Khosrowshahi A, Sørensen C, Fyhn M, Hafting T, et al. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms. J Neurosci Methods. 2015;245:182–204. pmid:25662445
- 54. Pettersen KH, Hagen E, Einevoll GT. Estimation of population firing rates and current source densities from laminar electrode recordings. J Comput Neurosci 2008;24(3):291–313. pmid:17926125
- 55. Remme MWH, Rinzel J. Role of active dendritic conductances in subthreshold input integration. J Comput Neurosci 2011;31(1):13–30. pmid:21127955
- 56. Bereshpolova Y, Stoelzel CR, Su C, Alonso J-M, Swadlow HA. Activation of a visual cortical column by a directionally selective thalamocortical neuron. Cell Rep. 2019;27(13):3733-3740.e3. pmid:31242407
- 57. Mukamel R, Gelbard H, Arieli A, Hasson U, Fried I, Malach R. Coupling between neuronal firing, field potentials, and FMRI in human auditory cortex. Science 2005;309(5736):951–4. pmid:16081741
- 58. Perreault EJ, Kirsch RF, Acosta AM. Multiple-input, multiple-output system identification for characterization of limb stiffness dynamics. Biol Cybern 1999;80(5):327–37. pmid:10365425
- 59. Carnevale NT, Hines ML. The NEURON book. Cambridge: Cambridge University Press; 2006.
- 60. Goto T, Hatanaka R, Ogawa T, Sumiyoshi A, Riera J, Kawashima R. An evaluation of the conductivity profile in the somatosensory barrel cortex of Wistar rats. J Neurophysiol 2010;104(6):3388–412. pmid:20810682
- 61. Næss S, Chintaluri C, Ness TV, Dale AM, Einevoll GT, Wójcik D. Four-sphere head model for EEG signals revisited. Front Human Neurosci. 2017.
- 62. Markram H, Muller E, Ramaswamy S, Reimann MW, Abdellah M, Sanchez CA, et al. Reconstruction and simulation of neocortical microcircuitry. Cell 2015;163(2):456–92. pmid:26451489
- 63. Denker M, Yegenoglu A, Grün S. Collaborative HPC-enabled workflows on the HBP Collaboratory using the Elephant framework. Neuroinformatics 2018; 2018. p. P19. Available from: https://abstracts.g-node.org/conference/NI2018/abstracts/uuid/023bec4e-0c35-4563-81ce-2c6fac282abd
- 64. Villamar J, Vogelsang J, Linssen C, Kunkel S, Kurth A, Schöfmann CM, et al. NEST 3.6; 2023.