Abstract
Even when driven by the same stimulus, neuronal responses are well known to exhibit a striking level of spiking variability. In-vivo electrophysiological recordings also reveal a surprisingly large degree of variability at the subthreshold level. In prior work, we considered biophysically relevant neuronal models to account for the observed magnitude of membrane voltage fluctuations. We found that accounting for these fluctuations requires weak but nonzero synchrony in the spiking activity, in amounts that are consistent with experimentally measured spiking correlations. Here we investigate whether such synchrony can explain additional statistical features of the measured neural activity, including neuronal voltage covariability and voltage skewness. Addressing this question involves conducting a generalized moment analysis of conductance-based neurons in response to input drives modeled as correlated jump processes. Technically, we perform such an analysis using fixed-point techniques from queuing theory that are applicable in the stationary regime of activity. We find that weak but nonzero synchrony can consistently explain the experimentally reported voltage covariance and skewness. This confirms the role of synchrony as a primary driver of cortical variability and supports the view that physiological neural activity emerges as a population-level phenomenon, especially in the spontaneous regime.
Author summary
Owing to the sheer complexity of biological networks, identifying the design principles for neural computations will only be possible via the simplifying lens of theory. However, to be accepted as valid explanations, theories need to be implemented in idealized neuronal models that can reproduce key aspects of the measured neural activity. Only then can these theories be subjected to experimental validation. In this manuscript, we address this requirement by asking: under which conditions can biophysically relevant neuronal models reproduce physiologically realistic subthreshold activity? We answer this question by focusing on the membrane voltage correlation and skewness, two key statistical signatures of the variable neuronal responses that have been well characterized in behaving mammals. As our core result, we show that the presence of weak but nonzero spiking synchrony is necessary to elicit physiological neuronal responses. The identification of synchrony as a primary driver of neural activity runs counter to the currently prevailing asynchronous state hypothesis, which serves as the basis for many leading neural network theories. Recognizing a central role for synchrony supports the view that neural computations fundamentally emerge at the collective level rather than as the result of independent parallel processing in neural circuits.
Citation: Becker LA, Baccelli F, Taillefumier T (2025) Subthreshold moment analysis of neuronal populations driven by synchronous synaptic inputs. PLoS Comput Biol 21(10): e1013645. https://doi.org/10.1371/journal.pcbi.1013645
Editor: Renaud Blaise Jolivet, Maastricht University (Universiteit Maastricht), Netherlands
Received: March 19, 2025; Accepted: October 22, 2025; Published: October 31, 2025
Copyright: © 2025 Becker et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The codes used in this article were deposited in https://github.com/MathNeuro/AONCB_moments.
Funding: LB, FB, and TT were supported by the CRCNS program of the National Science Foundation under Award No. DMS-2113213. LB and TT were also supported by the Vision Research program of the National Institutes of Health under Award No. R01EY024071 and by the Division of Mathematical Science of the National Science Foundation under CAREER Award No. NSF-DMS 2239679. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Cortical electrophysiological recordings have revealed that subthreshold neuronal responses exhibit a surprising level of variability in behaving rodents and primates [1]. Even when driven by the same stimulus or when performing the same motor actions, the membrane voltage of neurons in cortical visual or motor areas typically displays large, skewed fluctuations [2–5]. We recently argued via mathematical and computational analyses that the observed magnitude of these voltage fluctuations is inconsistent with a purely asynchronous regime of activity [6]. In a purely asynchronous regime, neurons fire independently from one another, so that the probability that a neuron receives synchronous inputs is exceedingly low. To support this argument, we characterized the subthreshold response of a biophysically relevant, conductance-based neuronal model subjected to synaptic drives with various degrees of synchrony [6]. Given realistic values for synaptic efficacies and input numbers, we identified input synchrony as the main driver of cortical subthreshold variability. This result was derived analytically for a perfect form of synchrony whereby synaptic inputs can activate simultaneously. It was also confirmed numerically for more realistic forms of synchrony resulting from the waxing and waning of input rates [7,8]. Importantly, the proposed leading role of synchrony in shaping cortical variability is consistent with two further experimental observations: First, the statistical analysis of large-scale in-vivo population recordings indicates the reliable presence of weak but nonzero spiking correlations, the statistical signature of synchrony [7,9,10]. Second, in-vivo voltage-clamped measurements at the soma reveal large conductance fluctuations that can only be explained by the nearly simultaneous activation of many synaptic inputs [11].
Because of the apparent weakness of the spiking correlations measured in vivo [9], the role played by synchrony in cortical variability has typically been overlooked in prevailing modeling approaches [12–14]. However, in keeping with past studies [15,16], our work has shown that, when passed through the large number of synaptic inputs, even weak synchrony can be the leading determinant of cortical variability. In this regard, we stress that the levels of synchrony required to explain cortical variability are consistent with the amounts of spiking correlation reported in [9]. Here, we ask whether input synchrony can consistently and quantitatively explain other features of subthreshold cortical variability in addition to the observed magnitude of the membrane voltage fluctuations. Specifically, the primary focus of this work is to show that input synchrony can account for the membrane voltage covariability measured across pairs of jointly recorded neurons. Intracellular recordings of pairs of neurons in both anesthetized and awake animals reveal a high degree of membrane voltage correlations [17–21]. These simultaneous recordings also reveal that excitatory and inhibitory conductance inputs are highly correlated with each other across neurons and thus, most likely, within the same neuron [19,21]. Aside from explaining these forms of covariability, a secondary focus of our work is to show that input synchrony is also responsible for the observed skewness of the membrane voltage distribution. In-vivo voltage measurements typically exhibit large upward depolarizations during spontaneous activity, leading to a baseline level of activity with a positive skewness, which substantially decreases during evoked activity [3,4,22].
To answer our guiding question, we derive exact analytical expressions for the stationary mixed voltage moments of a feedforward pool of neurons driven by synchronous input drives [23,24]. We develop our subthreshold analysis for a variant of classically considered neuronal models, called the all-or-none-conductance-based (AONCB) model, which was introduced in [6]. The hallmark of AONCB neurons is that their synaptic activation mechanism occurs as an all-or-none process rather than as an exponentially relaxing process. The benefit of considering such a mechanism is that it yields dynamics equivalent to those of classical conductance-based models in the limit of instantaneous synapses, while being amenable to exact probabilistic analysis [25,26]. Given a feedforward pool of AONCB neurons, we model their conductance drives as correlated shot noises [23,24]. A benefit of shot-noise-based models compared to classical diffusion-based models is that they allow synchronous synaptic activation events to be temporally separated into distinct impulses [27–29]. Each of these impulses elicits a transient positive conductance fluctuation, whose amplitude is determined by the number and size of the synchronous inputs. We can formalize this picture by modeling conductance drives with varying degrees of synchrony as multidimensional jump processes, specifically via compound Poisson processes [30,31]. In this approach, the degree of input synchrony is entirely captured by the joint distribution of the conductance jumps, which can be quantitatively related to spiking correlations between pairs of inputs.
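This jump-process picture can be sketched in a few lines of Python. In the snippet below, the event rate and the toy joint jump distribution are illustrative assumptions of ours, not values from this study; the point is only that input synchrony enters entirely through the joint law of the excitatory and inhibitory jump components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the conductance drive as a compound-Poisson jump process. The
# event rate b and the toy joint jump distribution are illustrative
# assumptions; synchrony is encoded in the joint law of the jumps.
b, T = 50.0, 1.0                                 # event rate (Hz), window (s)

n_events = rng.poisson(b * T)                    # number of synaptic events
event_times = np.sort(rng.uniform(0.0, T, n_events))

# Each event recruits a random number of excitatory and inhibitory inputs;
# drawing the inhibitory count from the excitatory one makes them co-vary.
k_e = 1 + rng.poisson(2.0, n_events)             # excitatory inputs per event
k_i = rng.binomial(k_e, 0.5)                     # correlated inhibitory counts
w_e, w_i = 0.01, 0.04                            # dimensionless synaptic weights
jumps = np.column_stack([w_e * k_e, w_i * k_i])  # (W_e, W_i) for each event
```

Under this construction, large excitatory jumps tend to come with large inhibitory jumps, mimicking the correlated excitatory and inhibitory drives discussed above.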
Our exact analysis, which relies on techniques from queuing theory [32,33], extends our previous results obtained in [6] along two directions: First, it considers synchronous input activity with heterogeneous rates and heterogeneous correlations as opposed to homogeneous populations of exchangeable inputs. Second, it applies to an arbitrary number of feedforward neurons to characterize voltage moments of any order as opposed to being restricted to the mean and variance of the voltage response. Considering biophysically relevant parameters, we leverage these exact results to derive interpretable approximate expressions for the voltage correlation between synchronously driven neurons, as well as for their skewness. Utilizing these expressions in combination with simulations, our first main result is to show that weak but nonzero synchrony, in amounts that are consistent with physiologically observed spiking correlations, can explain the surprisingly large degree of voltage correlations observed in simultaneous pair recordings. This result is obtained by contrast with pairs of neurons operating in the asynchronous regime, which would need to share an unrealistically large number of inputs. Our second main result is to show that the same amount of input synchrony also explains the large voltage skewness observed during spontaneous activity. These results challenge the prevailing view that neural networks operate in the asynchronous regime and argue for weak but nonzero synchrony as a primary driver of neural variability, at least in conductance-based neurons. In practice, persistent synchrony may spontaneously emerge in large but finite neural networks, as finite-dimensional interacting dynamics generally exhibit nonzero correlations. However, most theoretical approaches to analyzing the impact of these finite-size correlations have been inspired by mean-field techniques, which are derived for infinite-size networks under some Gaussian assumptions [34–40]. It remains unclear if these approaches can account for the stable emergence of synchrony in large-but-finite networks of conductance-based models.
Technically, to perform our analysis of feedforward AONCB models, we exploit our ability to derive the exact update rule governing shot-noise-driven AONCB dynamics in the limit of instantaneous synapses. Such a limit is obtained by considering that synaptic activation occurs instantaneously while maintaining the cross-membrane transfer of charge constant. In the limit of instantaneous synapses, the AONCB update rule specifies the evolution of the voltages of a set of K neurons in between two consecutive synaptic input events, as measured at the population level. Because the only source of stochasticity is due to the synchronous shot-noise drive, this evolution is deterministic between the times of these consecutive events, marked $T_0$ and $T_1$ for simplicity. In other words, there is a function F such that

$V(T_1^-) = F\big(V(T_0^-), J_0, T_1 - T_0\big),$

where $J_0$ is the random jump that represents the synaptic event occurring at time $T_0$. Equipped with the above relation, one can derive conservation equations from the time invariance of the stationary shot-noise drive, which states that the distribution of $V$ at times $T_0^-$ and $T_1^-$ must be identical. In turn, these conservation equations fully characterize the mixed stationary moments of $V$ due to the memoryless properties of compound Poisson processes.
As a price for its mathematical tractability, our approach presents key modeling limitations. Specifically, our stationary treatment operates in the limit of instantaneous synapses and, more importantly, assumes an instantaneous form of synchrony, whereby synapses are allowed to activate at the exact same time. Numerical simulations show that when these assumptions are relaxed, our results can still capture the stationary response of synchronously driven AONCB neurons for input models with jittered synchrony [6]. However, it remains unclear whether our results can account for the stationary variability of the neuronal response for more realistic models of synchrony, which exhibit characteristic timescales that vary according to the regime of activity [41–43].
Methods
All-or-none-conductance-based neurons
We adopt the all-or-none-conductance-based (AONCB) model introduced in [6] for the subthreshold dynamics of a neuron’s membrane voltage. In this model, the membrane voltage of a neuron, denoted by V, obeys the first-order stochastic differential equation

$C \dot{V} = G(V_L - V) + g_e(t)(V_e - V) + g_i(t)(V_i - V) + I, \qquad (1)$

where randomness arises from the stochastically activated excitatory and inhibitory conductances, respectively denoted by $g_e(t)$ and $g_i(t)$ (see Fig 1a). The time-dependent conductances $g_e(t)$ and $g_i(t)$ result from the action of $K_e$ excitatory and $K_i$ inhibitory synapses, respectively: $g_e(t) = \sum_{k=1}^{K_e} g_{e,k}(t)$ and $g_i(t) = \sum_{l=1}^{K_i} g_{i,l}(t)$. In the absence of synaptic input, i.e., when $g_e(t) = g_i(t) = 0$, and for zero external current I, the voltage relaxes exponentially toward its leak reversal potential $V_L$ with a time constant $\tau = C/G$, where C denotes the capacitance of the cell membrane and G denotes the passive conductance of the cell [44]. In the presence of synaptic inputs, the membrane voltage fluctuates in response to the transient synaptic currents $g_e(t)(V_e - V)$ and $g_i(t)(V_i - V)$, where $V_e$ and $V_i$ denote the excitatory and inhibitory reversal potentials, respectively. Without loss of generality, we assume in the following that $V_L = 0$, so that voltages are measured relative to the leak reversal potential with $V_i < 0 < V_e$.
(a) Electrical diagram of the conductance-based model, for which the neuronal voltage V evolves in response to fluctuations of the excitatory and inhibitory conductances $g_e(t)$ and $g_i(t)$. (b-c) The voltage trace and the empirical voltage distribution are only marginally altered by taking the limit of instantaneous synapses, $\epsilon \to 0$, for the short synaptic time constants considered in (b) and (c). In both (b) and (c), we consider the same compound Poisson process drive, and the resulting fluctuating voltage V is simulated via a standard Euler discretization scheme. The corresponding empirical conductance and voltage distributions are shown on the right. The latter voltage distribution asymptotically determines the stationary moments of V.
In the AONCB model, the input spiking activity of the upstream neurons is specified via a K-dimensional stochastic point process [30,31]. Let us denote the excitatory components and the inhibitory components of this point process by $N_{e,k}(t)$ and $N_{i,l}(t)$, respectively, where k and l are upstream neurons’ indices. For all upstream neurons $1 \le k \le K_e$ and $1 \le l \le K_i$, $N_{e,k}(t)$ and $N_{i,l}(t)$ are defined as the counting processes that register the spiking occurrences of neurons k and l, respectively, up to time t. For instance, denoting by $\{T_{e,k,m}\}_m$ the increasing sequence of the spiking times of excitatory neuron k, we have

$N_{e,k}(t) = \sum_m \mathbb{1}_{\{0 < T_{e,k,m} \le t\}},$

where $\mathbb{1}_A$ denotes the indicator function of set A ($\mathbb{1}_A(x) = 1$ if x is in A and $\mathbb{1}_A(x) = 0$ if x is not in A). Note that by convention, we label spikes so that $T_{e,k,0} \le 0 < T_{e,k,1}$. Similar definitions hold for inhibitory synapses in terms of the counting processes $N_{i,l}(t)$, $1 \le l \le K_i$. The hallmark of AONCB models is that their synaptic conductances operate in an all-or-none fashion with a common activation time $\tau_s$. Given the point process $N_{e,k}(t)$, this amounts to considering that the conductance process $g_{e,k}(t)$ follows

$g_{e,k}(t) = g_{e,k} \sum_m \mathbb{1}_{\{T_{e,k,m} \le t < T_{e,k,m} + \tau_s\}}, \qquad (2)$

where $g_{e,k}$ is the synaptic conductance of the excitatory input k. The above equation prescribes that at each spike delivery to synapse k, the conductance $g_{e,k}(t)$ instantaneously increases by an amount $g_{e,k}$ for a period $\tau_s$, after which it decreases by the same amount (see Fig 1b). Thus, the synaptic response prescribed by Eq (2) is all-or-none as opposed to being graded. This all-or-none behavior was introduced in [6] because it allows one to derive integral expressions for the stationary mean and variance of the voltage, even when the neuron is driven by synchronous synaptic inputs. Again, similar definitions hold for inhibitory synapses in terms of the counting processes $N_{i,l}(t)$ and the inhibitory synaptic conductances $g_{i,l}$, $1 \le l \le K_i$.
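To make the all-or-none activation rule concrete, the following minimal Python sketch (the function name, spike times, and parameter values are ours, for illustration only) builds a conductance trace in which each spike contributes a rectangular pulse of height w and duration tau_s, with overlapping pulses summing.

```python
import numpy as np

# Minimal sketch of an all-or-none conductance in the spirit of Eq (2):
# each spike contributes a rectangular pulse of height w and duration tau_s,
# and pulses from rapid spiking sum. All values below are illustrative.
def aoncb_conductance(spike_times, w, tau_s, t_grid):
    g = np.zeros_like(t_grid)
    for s in spike_times:
        g += w * ((t_grid >= s) & (t_grid < s + tau_s))
    return g

t_grid = np.linspace(0.0, 0.1, 10001)       # 0-100 ms at 0.01 ms resolution
g = aoncb_conductance([0.020, 0.021, 0.060], w=1.0, tau_s=0.002, t_grid=t_grid)
```

Integrating the trace recovers one unit of w times tau_s of conductance-time per spike, which is the quantity preserved later in the limit of instantaneous synapses.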
Jump-process model for synchronous synaptic inputs
To model input synchrony, we consider that distinct synapses can activate (and therefore deactivate) at exactly the same time. In other words, the counting processes associated to distinct synaptic inputs, say $N_{e,k}(t)$ and $N_{e,l}(t)$, are allowed to share points, meaning that it may be that $T_{e,k,m} = T_{e,l,n}$ for some spike indices m and n. Because synaptic activations can be simultaneous, it is convenient to distinguish between synaptic-event times and synaptic-event sizes. Synaptic-event times mark all those instants when at least one synaptic input activates, whereas synaptic-event sizes capture the total conductance increase at a synaptic event. Accordingly, we define the increasing sequence of synaptic-event times $\{T_n\}_n$ by temporally ordering the set of all synaptic spiking times without multiple counts and so that, by convention, $T_0 \le 0 < T_1$. Denoting the counting process that registers synaptic events by N(t), observe that in general, we have

$N(t) \le \sum_{k=1}^{K_e} N_{e,k}(t) + \sum_{l=1}^{K_i} N_{i,l}(t).$

This inequality becomes strict whenever the process N(t) counts a single event when many synapses activate at the same time, say $T_n$. To specify the synaptic-event size at time $T_n$, let us define the $\{0,1\}$-valued binary variables $X_{e,k,n}$ and $X_{i,l,n}$ such that $X_{e,k,n} = 1$ if and only if excitatory synapse k activates at time $T_n$ and $X_{i,l,n} = 1$ if and only if inhibitory synapse l activates at time $T_n$. The synaptic-event size at time $T_n$ is then defined as the two-dimensional jump

$J_n = \big(J_{e,n}, J_{i,n}\big) = \Big(\sum_{k=1}^{K_e} g_{e,k} X_{e,k,n}, \; \sum_{l=1}^{K_i} g_{i,l} X_{i,l,n}\Big). \qquad (3)$

With these notations, one can then write the time-dependent conductances that drive an AONCB neuron as a jump process

$\big(g_e(t), g_i(t)\big) = \sum_n J_n \, \mathbb{1}_{\{T_n \le t < T_n + \tau_s\}},$

where N(t) is the point process governing the synaptic-event times $T_n$ and where the jumps $J_n$ specify the corresponding synaptic-event sizes.
To fully define our jump-process-based model for synchrony, it only remains to specify the behaviors of N(t) and $\{J_n\}_n$ as random processes. Here, as in [6], we make the simplifying assumptions that: (i) N(t) is a Poisson process with constant event rate b and (ii) the K-dimensional vectors of synaptic activation variables $X_n = (X_{e,1,n}, \ldots, X_{e,K_e,n}, X_{i,1,n}, \ldots, X_{i,K_i,n})$ are independently and identically distributed. Note that assumption (ii) implies that the conductance jumps $J_n$ are independently and identically distributed on the positive orthant, with some joint distribution denoted by $p_J$. These assumptions correspond to neglecting any form of temporal dependencies in the inputs. Although this restricts our modeling to an unrealistically precise form of synchrony, we justified in [6] that this approach is predictive of the response of AONCB neurons to more realistic, jittered, synchronous inputs. Observe that the above approach generalizes the framework proposed in [6], as we consider an arbitrary distribution of $X_n$ over $\{0,1\}^K$. In particular, we do not assume that the inputs are exchangeable.
Input synchrony and spiking correlations
In our jump-process-based framework, input synchrony follows from the simultaneous activation of inputs at the exact same time. This notion of synchrony, which is defined in continuous time, can be related to the more familiar notion of spiking correlations, which measures synchrony between inputs in discrete time. Specifically, we show in [6] that within our jump-process-based framework, the spiking correlation between any two inputs k and l is defined as

$p_{kl} = \frac{\mathbb{E}\big[X_k X_l\big]}{\sqrt{\mathbb{E}\big[X_k\big]\,\mathbb{E}\big[X_l\big]}}, \qquad (4)$

where $\mathbb{E}[\cdot]$ denotes the expectation with respect to the distribution of synaptic activation variables $X = (X_1, \ldots, X_K)$. Note that we may omit referencing whether the inputs are excitatory or inhibitory for notational simplicity, as will be the case in the following.

The definition of spiking correlations given in Eq (4) allows one to establish a direct link between spiking correlation and input synchrony within our jump-process-based framework (see Fig 2). When synapses operate asynchronously, distinct inputs k and l activate in isolation, so that $X_k X_l = 0$ with probability one over synaptic events, which implies $p_{kl} = 0$. By contrast, in the presence of synchrony, inputs k and l coactivate reliably, so that $X_k X_l = 1$ with nonzero probability over synaptic events, which implies $p_{kl} > 0$. In the extreme case of full synchrony, $X_k = X_l$ with probability one over synaptic events and $p_{kl} = 1$. Incidentally, this reveals that our jump-process-based framework can only account for positive spiking correlations, a key limitation of our approach. Note that, as the activation variables $X_k$ are $\{0,1\}$-valued, higher-order correlation coefficients can also be defined via

$p_{k_1 \ldots k_n} = \frac{\mathbb{E}\big[X_{k_1} \cdots X_{k_n}\big]}{\big(\mathbb{E}\big[X_{k_1}\big] \cdots \mathbb{E}\big[X_{k_n}\big]\big)^{1/n}}. \qquad (5)$
Excitatory (red) and inhibitory (blue) spikes are drawn from a jump process with $p_{kl} > 0$, with total conductances $g_{e}$ and $g_{i}$ given by their weighted sums. Inputs for the two example neurons are correlated (or shared) within and across both the excitatory and inhibitory populations, and between both neurons, as expressed through their jump-count distributions.
For all n > 0, the above coefficient satisfies $0 \le p_{k_1 \ldots k_n} \le 1$ and is nonzero as soon as all n inputs coactivate reliably across synaptic events. Fully specifying the distribution of the vector of activation variables $X = (X_1, \ldots, X_K)$ amounts to knowing all the activation probabilities $\mathbb{E}[X_k]$ and all the correlation coefficients $p_{k_1 \ldots k_n}$, $2 \le n \le K$, up to order K.

Within our modeling framework, synchrony only impacts the dynamics of AONCB neurons via the conductance jumps $J_n$, which are linear combinations of the activation variables $X_{e,k,n}$ and $X_{i,l,n}$ by virtue of Eq (3). Thus, assuming all the synaptic input conductances $g_{e,k}$ and $g_{i,l}$ are known, one can parametrize the jump distribution $p_J$ in terms of all the spiking correlation coefficients up to order K. However, such a parametrization is excessively cumbersome, and for the purpose of obtaining exact results, we will always consider $p_J$ as a known, arbitrary quantity. In turn, we will see that under the biophysically relevant assumption of small synaptic weights, approximate results can be directly stated in terms of the spiking correlation coefficients.
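The contrast between the asynchronous and synchronous regimes can be checked empirically. The toy construction below is entirely ours: in the asynchronous regime exactly one input activates per synaptic event, while in the synchronous regime the whole input group coactivates with probability q; the statistic computed is our reading of the pairwise coefficient of Eq (4).

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy activation model (our own construction, for illustration): asynchronous
# events recruit exactly one input; synchronous events recruit the whole
# group with probability q, and a single input otherwise.
K, n_events, q = 5, 50_000, 0.3

X_async = np.zeros((n_events, K))
X_async[np.arange(n_events), rng.integers(0, K, n_events)] = 1.0

X_sync = np.zeros((n_events, K))
X_sync[np.arange(n_events), rng.integers(0, K, n_events)] = 1.0
X_sync[rng.random(n_events) < q] = 1.0        # group-wide coactivation

def pairwise_corr(X, k, l):
    # empirical E[Xk Xl] / sqrt(E[Xk] E[Xl]) over synaptic events
    return (X[:, k] * X[:, l]).mean() / np.sqrt(X[:, k].mean() * X[:, l].mean())

r_async = pairwise_corr(X_async, 0, 1)        # zero: inputs never coactivate
r_sync = pairwise_corr(X_sync, 0, 1)          # strictly positive
```

As expected, the estimated coefficient vanishes exactly in the asynchronous regime and is bounded away from zero whenever a group reliably coactivates.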
Marcus dynamics in the limit of instantaneous synapses
We obtain our exact results about the variability of synchronously driven AONCB neurons in the limit of instantaneous synapses. Informally, this corresponds to considering that synaptic inputs act instantly to transfer charge within a neuron. In order to define this limit regime formally, let us introduce the dimensionless synaptic weights $w_{e,k} = \tau_s g_{e,k}/C$ and $w_{i,l} = \tau_s g_{i,l}/C$ and consider the associated dimensionless conductance jumps $W_{e,n} = \tau_s J_{e,n}/C$ and $W_{i,n} = \tau_s J_{i,n}/C$, $n \ge 0$. Denoting by $\epsilon = \tau_s/\tau$ the ratio of the duration of synaptic activation relative to the passive membrane time constant, one can write the conductance jump process in terms of the dimensionless jumps $W_n = (W_{e,n}, W_{i,n})$ as

$\big(g_e(t), g_i(t)\big) = \frac{G}{\epsilon} \sum_n W_n \, \mathbb{1}_{\{T_n \le t < T_n + \epsilon \tau\}}, \qquad (6)$

thereby exhibiting a natural scaling with respect to the parameter ϵ. The limit of instantaneous synapses corresponds to taking $\epsilon \to 0$ while holding the dimensionless synaptic weights constant (see Fig 1c). Such a scaling maintains the charge transfer during a synaptic event, thereby preserving the impact of synaptic activations on AONCB dynamics as $\epsilon \to 0$. In the following, we denote the common distribution of the independent variables $W_n$ by $p_W$ and, just as for $p_J$, we consider the latter distribution as a known quantity in our calculations.
Assuming that the jump distribution $p_W$ is known, one can exploit the ϵ-scaling in Eq (6) to develop a simplified analytical treatment of the AONCB neuron dynamics in response to synchronous inputs. This simplified treatment follows from the fact that when $\epsilon \to 0$, the driving conductance process becomes a two-dimensional shot noise [6], i.e., the temporal derivative of a two-dimensional compound Poisson process $Z(t)$ [30,31]. Specifically, we have

$\frac{1}{G}\big(g_e(t), g_i(t)\big) \xrightarrow{\epsilon \to 0} \tau \frac{\mathrm{d}Z}{\mathrm{d}t}(t), \quad \text{with} \quad Z(t) = \sum_{n : \, 0 < T_n \le t} W_n,$

where the notion of convergence can be made precise but is irrelevant for practical purposes. Shot-noise-driven models are amenable to exact analysis, albeit with some caveats, as one can generally define several notions of solution [27,45]. To identify the physical solution, one considers dynamics subjected to a regularized version of the shot noise, whose regularity is controlled by a positive parameter, say $\eta$ [25,26]. Then, shot-noise-driven dynamics are recovered in the limit of vanishing regularization, typically in the asymptotic regime $\eta \to 0$. Our previously introduced parameter ϵ precisely plays the role of such a regularizing parameter. Correspondingly, one can derive the shot-noise-driven dynamics of AONCB neurons by considering solutions to Eq (1) for $\epsilon > 0$ and then taking $\epsilon \to 0$.
The above approach yields the so-called Marcus dynamics for AONCB neurons, which can be stated concisely for constant current I as follows: the membrane voltage V relaxes exponentially toward the resting potential I/G with time constant τ, except when subjected to synaptic impulses at times $\{T_n\}_n$. At these times, the voltage V updates discontinuously according to $V(T_n) = V(T_n^-) + J_n$, where the jumps $J_n$ are given via the Marcus rule:

$J_n = \bigg(\frac{W_{e,n} V_e + W_{i,n} V_i}{W_{e,n} + W_{i,n}} - V(T_n^-)\bigg)\Big(1 - e^{-(W_{e,n} + W_{i,n})}\Big). \qquad (7)$

Observe that the above rule implies that the voltage must remain within $[V_i, V_e]$, the allowed range of variation for V. Note that such a formulation also specifies an exact event-driven simulation scheme given knowledge of the synaptic activation times and sizes $\{(T_n, W_n)\}_n$ [46]. We adopt this Marcus-type numerical scheme, which differs from classical Euler-type discretization schemes, in all the simulations that involve instantaneous synapses. More generally, the above Marcus formulation of AONCB dynamics, which combines exponential relaxation and random jump discontinuities, is at the root of all the exact results that we derive in the following.
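The event-driven scheme lends itself to a compact simulation. In the Python sketch below, the exponential-averaging jump is our reading of the Marcus rule of Eq (7), and the parameter values and jump-size statistics are illustrative assumptions of ours; voltages are measured in units with the leak reversal potential at zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Event-driven sketch of shot-noise-driven AONCB dynamics with instantaneous
# synapses. The exponential-averaging jump is our reading of the Marcus rule
# of Eq (7); all parameter values and jump-size statistics are illustrative.
tau = 0.015                       # passive membrane time constant (s)
V_i, V_L, V_e = -1.0, 0.0, 1.0    # reversal potentials (dimensionless voltage)
b, T = 200.0, 5.0                 # synaptic-event rate (Hz) and duration (s)

def marcus_jump(v, w_e, w_i):
    w = w_e + w_i
    if w == 0.0:
        return v
    v_star = (w_e * V_e + w_i * V_i) / w      # instantaneous drive equilibrium
    return v * np.exp(-w) + v_star * (1.0 - np.exp(-w))

v, t, trace = V_L, 0.0, []
while t < T:
    dt = rng.exponential(1.0 / b)             # exponential inter-event interval
    v = V_L + (v - V_L) * np.exp(-dt / tau)   # leak relaxation between events
    w_e = 0.01 * (1 + rng.poisson(2.0))       # toy synchronous excitatory jump
    w_i = 0.04 * rng.poisson(1.0)             # toy synchronous inhibitory jump
    v = marcus_jump(v, w_e, w_i)
    t += dt
    trace.append(v)
trace = np.array(trace)
```

Because each jump is a convex combination of the pre-jump voltage and a drive equilibrium lying between the reversal potentials, the simulated trace remains confined to the interval from V_i to V_e, as the Marcus rule requires.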
Feedforward population models
It is mostly a matter of notations to generalize the Marcus jump dynamics given in Eq (7) to a population of feedforward neurons. To see this, let us consider a set of neurons A with cardinality denoted by . Each neuron
may receive inputs from a set of
excitatory inputs and
inhibitory inputs. Accordingly, each neuron
experiences jumps whose sizes depend on the input activation variables
and
via their own dimensionless excitatory and inhibitory weights
and
. This corresponds to considering
shot-noise-driven AONCB neuron dynamics with
-dimensional jumps
Thus, specifying a shot-noise population model requires assuming knowledge of a population-level -dimensional distribution of the jumps
, denoted by
, instead of a two-dimensional distribution
.
With the above notations, one can generalize Marcus dynamics to neural feedforward populations, which do not connect recurrently to one another (see Fig 3). Such a population of neurons still collectively experiences synaptic events at times $\{T_n\}_n$, which follow a Poisson process with overall rate b. Although the rate b depends on the set of neurons A, we will not refer to this dependence explicitly unless required. The key property to observe is that given two sets of neurons A and B such that $B \subset A$, we have

$b_B = b_A \, \mathbb{E}\Big[\mathbb{1}\Big\{\sum_{a \in B} \big(W_{e,a,n} + W_{i,a,n}\big) > 0\Big\}\Big],$

where the expectation is with respect to the jumps $W_n$. This simply means that the rate associated to the smaller population is obtained by subsampling the rate of the larger population. Incidentally, observe that one can recover the individual input rate, which we denote by $b_{\mu,k}$ for the k-th synapse of type $\mu \in \{e, i\}$, via

$b_{\mu,k} = b \, \mathbb{E}\big[X_{\mu,k}\big]. \qquad (9)$
AONCB neurons receive excitatory (red) and inhibitory (blue) feedforward inputs from a near-infinite pool of possible inputs. Neurons one (black), two (green), and three (purple) are driven by different, but correlated, spiking inputs resulting in correlated subthreshold activity.
With these remarks in mind, note that in between two consecutive synaptic events, each neuron $a \in A$ relaxes independently toward its resting potential $I_a/G_a$, with membrane time constant $\tau_a$. At synaptic-event times $T_n$, the population voltages update discontinuously as

$V_a(T_n) = V_a(T_n^-) + J_{a,n}, \quad a \in A, \qquad (10)$

where for all $a \in A$, the individual jumps $J_{a,n}$ are still given via the Marcus update rule of Eq (7), applied to neuron a's dimensionless jump sizes $(W_{e,a,n}, W_{i,a,n})$. Thus, the only additional complication relative to the single-neuron case follows from the multidimensionality of the collective jump update in Eq (10).
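As a toy illustration of these population dynamics (the input-sharing scheme and all parameter values are our own assumptions, and the jump form is our reading of Eq (7)), the following simulates two neurons receiving partially shared synaptic events and measures the resulting voltage correlation, which comes out clearly positive.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two feedforward neurons driven by partially shared synaptic events, in the
# spirit of the population jump update of Eq (10). The jump form follows our
# reading of Eq (7); the sharing scheme and parameters are toy choices.
tau, b, T = 0.015, 200.0, 20.0
V_i, V_L, V_e = -1.0, 0.0, 1.0

def marcus_jump(v, w_e, w_i):
    w = w_e + w_i
    if w == 0.0:
        return v
    v_star = (w_e * V_e + w_i * V_i) / w
    return v * np.exp(-w) + v_star * (1.0 - np.exp(-w))

v = np.array([V_L, V_L])
t, samples = 0.0, []
while t < T:
    dt = rng.exponential(1.0 / b)
    v = V_L + (v - V_L) * np.exp(-dt / tau)   # common leak relaxation
    shared = rng.random() < 0.5               # event recruits both neurons?
    for a in range(2):
        if shared or rng.random() < 0.5:      # otherwise recruit at random
            v[a] = marcus_jump(v[a], w_e=0.02, w_i=0.0)
    samples.append(v.copy())
    t += dt
samples = np.array(samples)
r = np.corrcoef(samples.T)[0, 1]              # positive voltage correlation
```

Raising the sharing probability toward one drives the voltage correlation toward unity, mirroring the link between input synchrony and subthreshold covariability discussed in this work.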
In the following, we consider the stationary version of the process governed by (10). Intuitively, one obtains the stationary version of a process by assuming that the initial condition has been pushed infinitely far in the past. As a result, the initial condition has no bearing on the value of the process at, say, time t = 0, which can be viewed as a “typical” time for an otherwise time-shift invariant process. The latter property of time-shift invariance formally defines stationary processes, which can be analyzed via a variety of mathematical results. In this work, we utilize one such result, the so-called PASTA principle from queuing theory, which stands for “Poisson arrivals see time averages” [32,33]. Within the context of AONCB neurons with instantaneous synapses, these arrivals refer to synaptic input activations, assumed to follow a Poisson process.
PASTA (Poisson Arrivals See Time Averages) principle
In a nutshell, the PASTA principle states that sampling a stationary process driven by Poisson input arrivals at a typical time, say V(0) at t = 0, is equivalent to sampling the same process just before an input arrival, say $V(T_1^-)$, where $T_1$ denotes the first input arrival time after t = 0. Making this point rigorous requires the introduction of the concept of Palm distribution, which considers stationary point processes at a typical point, i.e., at an input spike, rather than at a typical time, i.e., in between input spikes [32]. A defining property of Poisson processes is that their Palm distribution is the same as their stationary distribution. As a result, stationary expectations, i.e., expectations at a typical time, can be evaluated as expectations at a typical point, justifying the PASTA principle (see Fig 4).
(a) Example of distinct but correlated excitatory (red) and inhibitory (blue) inputs for two neurons with voltage activities V1 and V2 (b). (c-d) Zoomed-in example voltage activity for both neurons, with corresponding joint voltage distributions sampled at the dashed lines. Joint voltage densities are equivalent whether sampling just before an input arrival (c; orange and purple lines) or at some arbitrary time T (d; green and purple lines).
The PASTA principle is valid for all processes V that are driven by compound Poisson processes Z, including those with multidimensional jumps [33]. Here, “driven” means that the future history of arrivals is independent from the past history of the process
, whereas the future history of the process
generally depends on the past history of arrivals {Z(s)}s<t. In particular, and most importantly for this work, the PASTA principle applies to shot-noise-driven AONCB neurons. As a result, one can evaluate the stationary moments of membrane voltages as expectations with respect to the associated Palm distribution. Specifically, denoting by
a multiset of elements in A, which allows for multiple instances of its elements (i.e., for repeated neuronal indices in our case), let us define the corresponding n-th order, shifted, stationary moment as
Then, the PASTA principle implies that we must have
where T0 and T1, denote the two consecutive synaptic-event times framing t = 0: . The main point of considering two consecutive times T0 and T1 is that: (i)
follows from
via application of a single Marcus update rule and (ii) the inter-event interval
is an exponentially distributed variable that is independent of
. More precisely, given
, we have
where Ja,0 is a random jump specified by Eq (7) for neuron a and where is an exponential waiting time with mean 1/b. Then, one can leverage Eq (12) in combination with Eq (11) to find fixed-point equations for the shifted moments
. It turns out that these equations define an exactly solvable triangular system of equations for the shifted moments
. We derive and solve such a system in S1 Text, S1 Appendix, ultimately yielding generic expressions for the centered moments
, where
denotes the stationary mean of the voltage
.
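The PASTA principle can be illustrated numerically. The following sketch simulates a shot-noise-driven leaky integrator (a simplified stand-in for the AONCB dynamics, with illustrative parameters not taken from the text) and checks that the voltage mean sampled just before arrivals, i.e., under the Palm distribution, matches the time-averaged stationary mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration of the PASTA principle on a shot-noise-driven leaky
# integrator: dV/dt = -V/tau between Poisson arrivals of rate b, with
# jumps V -> V + J at arrivals.  All parameters are illustrative and not
# taken from the AONCB model.
tau, b, T = 0.02, 500.0, 200.0    # time constant (s), rate (Hz), duration (s)
mean_jump = 0.5                   # exponential jump sizes (mV)

t, v = 0.0, 0.0
pre_jump = []                     # V just before each arrival (Palm samples)
times, values = [0.0], [0.0]      # post-jump values at event times
while t < T:
    dt = rng.exponential(1.0 / b)        # exponential inter-arrival time
    v *= np.exp(-dt / tau)               # exact decay over the interval
    pre_jump.append(v)
    v += rng.exponential(mean_jump)      # synaptic jump
    t += dt
    times.append(t)
    values.append(v)

times, values = np.array(times), np.array(values)
dts = np.diff(times)
# Stationary (typical-time) mean via exact integration of the decay:
time_avg = np.sum(values[:-1] * tau * (1 - np.exp(-dts / tau))) / times[-1]
palm_avg = np.mean(pre_jump)             # typical-point (Palm) mean
# PASTA: both estimate the same stationary mean, b * E[J] * tau = 5 mV
# by Campbell's theorem.
print(time_avg, palm_avg)
```

Sampling instead just after each jump would bias the Palm average upward, which is why the pre-jump convention matters in the fixed-point equations above.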
Biophysical parameters
To investigate the biophysical relevance of our analysis, we use realistic estimates for the various parameters featured in the AONCB neurons. Specifically, we assume the common values for the passive membrane time constant and
for reversal potentials. Given these common assumptions, determining the dynamics of a synchronously driven AONCB neuron still requires specifying realistic values for the dimensionless synaptic weights
and for the spiking correlation coefficients
across synaptic inputs.
To estimate synaptic weights, we consider that when delivered to a neuron close to its resting state, unitary excitatory inputs cause peak membrane fluctuations of up to at the soma, attained after a peak time of
. Such fluctuations correspond to typically large in-vivo synaptic activations of thalamo-cortical projections in rats [47]. Although activations of similar amplitudes have been reported for cortico-cortical connections [48,49], recent large-scale in vivo studies have revealed that cortico-cortical excitatory connections are typically much weaker [50,51]. At the same time, these studies have shown that inhibitory synaptic conductances are about fourfold larger than excitatory ones, but with similar timescales. Fitting these values within the framework of AONCB neurons for
reveals that the largest possible synaptic inputs correspond to dimensionless weights
and
. Following on [50,51], we consider that the comparatively moderate cortico-cortical recurrent connections are an order of magnitude weaker than typical thalamo-cortical projections, i.e.,
and
. Such a range is in keeping with estimates used in [52,53].
With respect to input synchrony, physiological estimates of the spiking correlations are typically reported to be weak, with coefficients ranging from 0.01 to 0.04 [9,14,42]. It is important to note that such weak values do not warrant the neglect of correlations as the number of synaptic connections can be large in cortex. For instance, as we will see for AONCB neurons, synchrony significantly impacts voltage variability as soon as is larger than or about 1. Assuming the lower estimate of
, this criterion is achieved for
inputs, which is well below the typical number of excitatory synapses for cortical neurons
. Although the relevance of weak correlations to neural variability has been noted before [15,16], there can be competing explanatory factors. In the following, we argue via biophysical considerations that the leading competing explanation, synaptic heterogeneity, can only account for a small fraction of the observed subthreshold variability.
By considering typical values for synaptic weights, we overlook the role of synaptic heterogeneity in shaping neuronal covariability [54,55]. This is an important caveat since, for any synchronous input model, one can design an equivalent asynchronous input model but with heterogeneous synapses [6]. As we will later justify in the discussion, in asynchronous models, the impact of synaptic heterogeneity on neuronal variability is mediated by the squared coefficient of variation of the synaptic weights, CV2[w]. Experimental measurements of CV[w] have consistently been reported to be below one [56,57], implying that synaptic heterogeneity can induce up to a two-fold increase in the observed subthreshold variability compared to asynchronous, homogeneous models. Our previous work [6] showed that this is inconsistent with the high level of observed subthreshold variability in cortical neurons with moderate synaptic weights, which would require CV2[w] to be at least an order of magnitude larger. This supports that synchrony is the main determinant of neuronal variability. For this reason, although we will derive all our results assuming synaptic heterogeneity, we will primarily discuss their practical consequences assuming typical moderate weights.
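The two-fold bound can be made concrete with a generic shot-noise computation: by Campbell's theorem, the variance of an asynchronous shot-noise drive is proportional to the second moment of the jump sizes, E[w^2] = E[w]^2 (1 + CV2[w]), so heterogeneity inflates the variance of a mean-matched homogeneous model by the factor 1 + CV2[w]. A minimal sketch, with an illustrative log-normal weight distribution (not a fit to data):

```python
import numpy as np

rng = np.random.default_rng(1)

# For an asynchronous shot-noise drive, Campbell's theorem gives a voltage
# variance proportional to E[w^2] = E[w]^2 * (1 + CV^2[w]).  Heterogeneity
# therefore inflates the variance of a homogeneous model with the same
# mean weight by the factor 1 + CV^2[w].
def variance_inflation(weights):
    w = np.asarray(weights, dtype=float)
    cv2 = w.var() / w.mean() ** 2     # squared coefficient of variation
    return 1.0 + cv2

# Illustrative log-normal weights with CV[w] ~ 0.8, i.e., below one as
# reported experimentally:
sigma = np.sqrt(np.log(1 + 0.8 ** 2))
w = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
factor = variance_inflation(w)
print(factor)   # near 1 + 0.8^2 = 1.64, i.e., below a two-fold increase
```

Since CV[w] < 1 caps this factor below two, heterogeneity alone cannot supply the order-of-magnitude variance increase discussed above.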
Numerical simulation methods
Several techniques have been proposed to model and simulate synchronous spiking activity besides our jump-process-based modeling framework. For instance, some of these techniques generate synchrony by leveraging the superposition and thinning properties of Poisson processes [58], by considering Poisson processes with multidimensional stochastic intensities [59], or by considering doubly-stochastic Cox processes [60,61]. Although our modeling framework relies on compound Poisson processes, its practical numerical implementation can be understood within the latter doubly-stochastic approach. In this approach, synchrony arises from governing the spiking mechanisms of distinct neurons with correlated rate processes. Given a block-structured synaptic-input correlation matrix
,
, each homogeneous subpopulation (block) of inputs is assigned a given rate process
,
. For compound Poisson processes, these rate processes are actually formally defined as completely random measures, which can be interpreted as directing, infinite-dimensional, de Finetti processes [6].
In practice, however, we only simulate the rate processes ,
, with finite temporal resolution in bins of durations
. As a result, the random measure associated with each population is replaced by a bin-indexed i.i.d. process:
, where j is the bin index. In the following, we drop the dependence on the time index as the vectors
are i.i.d. over time. Then the crux of the method is to sample vectors
so that these can be used as mixing variables to generate spiking counts
with the desired synchrony structure. Note that at finite resolution, these spike counts are obtained via mixing of the binomial approximations for Poisson processes. This means that the probability law of the counts
is given by
where the expectation is with respect to the law of the mixing variables .
We specify the law of via Gaussian copulas, for which many sampling methods are available. Specifically, we consider the L-dimensional Gaussian copula defined as
where denotes the L-dimensional cumulative distribution of the centered Gaussian with correlation matrix Σ. The matrix Σ will be later chosen so that given a choice of marginals for the mixing variables
,
, the desired correlation structure is numerically achieved. Given that we consider the binomial mixing model (13), we choose the cumulative marginal distribution Bj of
to follow a beta distribution
with parameters
As shown in [6], such parameter choices ensure that the mean spike count is exactly in each bin, whereas the subpopulation spiking correlation is
in each bin, so that the model becomes exact in the limit
. Then, we sample
according to the copula cumulative distribution
where the covariance matrix Σ has unit diagonal and off-diagonal terms chosen so that the resulting distribution satisfies
This process is fast: we numerically found that, with all parameters other than the correlations held fixed, the coefficients
each appear to be related to
via the same affine function.
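The sampling scheme above can be sketched as follows. The beta parametrization shown (mean p and intraclass correlation rho per pool) is the standard beta-binomial choice and stands in for the exact parameters given in [6]; the copula correlation matrix Sigma and all numerical values are illustrative, and scipy supplies the normal CDF and beta quantile function:

```python
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(2)

def sample_counts(n_inputs, p_mean, rho, Sigma, n_bins):
    """Correlated spike counts via a Gaussian copula with beta marginals.

    n_inputs[i]: inputs in subpopulation i; p_mean[i]: per-bin spike
    probability (rate * bin duration); rho[i]: within-pool spiking
    correlation; Sigma: copula correlation matrix (unit diagonal).
    Standard beta-binomial parametrization (assumed, not the paper's
    exact one): a_i = p_i (1 - rho_i) / rho_i and
    b_i = (1 - p_i)(1 - rho_i) / rho_i, which yields mean p_i and
    within-pool count correlation rho_i.
    """
    L = len(n_inputs)
    a = np.array([p * (1 - r) / r for p, r in zip(p_mean, rho)])
    b_par = np.array([(1 - p) * (1 - r) / r for p, r in zip(p_mean, rho)])
    # Gaussian copula: correlated normals -> uniforms -> beta quantiles.
    z = rng.multivariate_normal(np.zeros(L), Sigma, size=n_bins)
    u = norm.cdf(z)
    theta = beta.ppf(u, a, b_par)          # mixing variables, one row per bin
    # Binomial mixing: counts are conditionally independent given theta.
    return rng.binomial(np.array(n_inputs), theta)

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed copula correlation
counts = sample_counts([100, 25], [0.01, 0.01], [0.03, 0.03], Sigma, 10_000)
```

Because the counts are conditionally independent given the mixing variables, only the marginals and Sigma need tuning, which is what makes the affine calibration noted above practical.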
We use the synchronous spike trains generated via the above copula method to drive AONCB neurons. The resulting simulated traces are then used to empirically estimate the various stationary moments considered in this work. Empirically estimating the stationary moments of a population of neurons A only requires simulating voltages on the synaptic event times Tn, , when at least one neuron from A receives a synaptic input. These times are given as
where we have defined
Remember that in the above definitions, ki,j denotes the spike count of the subpopulation i in time bin j. At the cost of defining a larger number of subpopulations L, we can always define the L homogeneous input subpopulations so that they each correspond to a single typical synaptic weight wi,a for all and all
. Then, the Marcus update rule (10) applies at Tn,
, with jumps Ja,n,
, computed for the dimensionless conductance jumps
In turn, the sequences of voltage values ,
, are computed sequentially via
Finally, consider a simulation period T for which there are N synaptic event times Tn, , and with the convention T0 = 0 and TN + 1 = T. Given Am, a multiset of elements of A such that
, the corresponding m-th order empirical stationary moment is evaluated as
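A hedged numerical sketch of such an event-time moment estimator follows, under the simplifying assumption that each voltage relaxes exponentially toward a zero resting potential between events; a product of m such voltages then decays at rate m/tau, so each inter-event integral has a closed form (the variable names are illustrative):

```python
import numpy as np

def empirical_moment(times, V, members, tau, T):
    """m-th order empirical stationary moment from event-time samples.

    times:   (N,) increasing synaptic event times in (0, T)
    V:       (N, n_neurons) voltages just after each event
    members: multiset (list) of neuron indices, of length m
    tau:     membrane time constant; T: total simulation period

    Between events each voltage is assumed to decay as exp(-t/tau) toward
    a 0 mV rest, so the product of m voltages decays at rate m/tau and its
    integral over an interval of length dt is prod * (tau/m)(1 - e^{-m dt/tau}).
    """
    prod = np.ones(len(times))
    for a in members:
        prod = prod * V[:, a]
    m = len(members)
    dts = np.diff(np.append(times, T))     # interval following each event
    # The interval [0, times[0]] contributes 0 for a 0 mV initial condition.
    return np.sum(prod * (tau / m) * (1.0 - np.exp(-m * dts / tau))) / T

# Example: second moment of a single trace (members = [0, 0]).
times = np.array([0.1, 0.5, 1.2])
V = np.array([[1.0], [0.8], [1.5]])
print(empirical_moment(times, V, [0, 0], tau=0.2, T=2.0))
```

Nonzero resting or reversal potentials would add lower-order terms to each interval integral, but the event-time bookkeeping is identical.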
Results
Cortical activity typically exhibits a high degree of variability in response to identical stimuli [62,63], with individual neuronal spiking exhibiting Poissonian characteristics [64,65]. Such variability is striking because neurons are thought to receive large numbers (10^4) of synaptic contacts [66]. Given such large numbers, in the absence of synchrony, neuronal variability should average out, leading to quasi-deterministic neuronal voltage dynamics [67]. However, contrary to this prediction, electrophysiological in-vivo measurements reveal that neuronal membrane voltage exhibits fluctuations with typical variance values of
[3,4]. Note that these variance estimates are obtained from voltage traces where action potentials have been excluded, so that the measured variability can be primarily attributed to synaptic inputs. In our previous work [6], we argued that achieving physiological cortical variability requires input synchrony within the AONCB modeling framework. We summarize these results in Fig 5, where we examine three distinct conditions of input synchrony: asynchronous inputs, separately synchronous excitatory and inhibitory inputs, and jointly synchronous excitatory and inhibitory inputs.
(a) Example Monte Carlo simulations of AONCB neurons with large synaptic weights at various input firing rates (blue, 1Hz; orange, 10 Hz; red, 30 Hz) for different correlation conditions (top, asynchronous; middle, within-pool correlations; bottom, across-pool correlations). Voltage variance as a function of rate can be seen on the right with same correlation conditions. (b) Same as (a) but for moderately sized synaptic weights.
In this work, we extend our prior results derived in [6] to the more general setting of neuronal feedforward populations driven by heterogeneously synchronous inputs, but still under the assumption of instantaneous synapses. In this more general setting, we then ask whether realistic levels of synchrony impact other important aspects of the subthreshold variability, specifically voltage covariability across neurons and voltage skewness in individual neurons.
Voltage covariance
In order to specify the voltage covariance of synchronously driven AONCB neurons, one must first evaluate their stationary voltage mean. Applying the PASTA principle to a single AONCB neuron labeled by 1 in S1 Text, S2 Appendix, we find this mean to be
where the auxiliary random variable Y1 is defined as . Note that in the expression above, b1 is the rate of the compound Poisson process modeling the synaptic inputs to neuron 1 and that the expectation is with respect to the jump distribution of
. The expression of the stationary mean voltage can be conveniently recast as
where the rate coefficients ,
are given by
Eq (14) has superficially the same form as for deterministic rate dynamics with constant conductances, in the sense that the mean voltage is a weighted sum of the reversal potentials ,
and
. One can check that for such deterministic rate dynamics, the synaptic efficacies involved in the stationary mean simply read
. By contrast, modeling synaptic conductances as asynchronous shot noise leads to synaptic efficacies under exponential form:
. In turn, accounting for input synchrony leads to synaptic efficacies expressed as expectation of these exponential forms, as in Eq (15), consistent with the fact that our approach models synchrony via the stochastic nature of the conductance jumps
.
Applying the PASTA principle to a pair of AONCB neurons labeled by 1 and 2, we find in S1 Text, S5 Appendix, that the stationary covariance of their voltages is given as
where the auxiliary random variables Ya, , satisfy
. The above expression calls for a few observations: First, note that b12 denotes the rate of the compound Poisson process modeling the synaptic inputs to the pair of neurons. In particular, we have
. Second, note that the expectation is evaluated with respect to the jump probability of
. Third, note that Eq (16) is consistent with the expression for the voltage variance of an AONCB neuron derived in [6], which is recovered by setting
. Similarly as for the stationary mean, the expression of the stationary voltage covariance can be conveniently recast as
where the rate coefficients ,
are given by
Before discussing the implications of input synchrony across neurons, we discuss the implication of the results presented above for the voltage variance.
Impact of synchrony on individual voltage variability
In the framework considered here, the voltage variance for neuron 1 is obtained by considering Eqs (16) and (17) under the assumption that neurons 1 and 2 are exchangeable and share the same inputs with identical weights. In particular, this corresponds to
,
, and
. We devote the next three subsections to discussing the variance formula for
under various assumptions of synchrony.
Fully asynchronous inputs.
Let us first consider the purely asynchronous case for which each synaptic input behaves independently. In the context of our jump-process-based model for the synaptic drive, this means that the rate of the governing Poisson process is the sum of the individual synapses’ input rates, i.e.,
Independence of the synaptic inputs also means that every synaptic event comprises a single active synapse and that the probability that a particular input is active is determined by its input rate. For instance, the probability that the k-th synapse of type is the active one is
. Next, by independence between excitation and inhibition, we have
whenever
. We also have
since for all synaptic events, either
or
. Then evaluating the expectation in Eq (18) with respect to the monosynaptic activation probability
leads to
. As a result, Eq (17) reads
The above result coincides with the expression obtained in the effective-time-constant approximation [53], except for the exponential form of the synaptic efficacies. As stated earlier, this exponential form is due to the shot-noise nature of the drive, and one can recover the classically derived expression by making the additional assumption of small synaptic weight, i.e., , for which we have:
Pool-specific synchrony.
Suppose that neuron 1 receives inputs that only exhibit synchrony when considered separately as a pool of excitatory or inhibitory inputs. We still have but in the presence of within-pool synchrony, the rates
are only subadditive functions of the synaptic rates because inputs can coactivate within the pool:
Further specifying a functional form for requires adopting a parametric model for synchrony, e.g., derived from the beta-binomial statistical model as in [6]. However, even without choosing a parametric model, one can always assume that b1 is known as it is entirely specified by Eqs (8) and (9). The presence of pool-specific synchrony means that the number of coactivating inputs may vary across synaptic events. Consequently, the rate coefficients given by Eq (18) must be evaluated as true expectations
where the randomness of the jump is the hallmark of pool-specific synchrony. Although
can be much larger than any individual synaptic weights, for small enough synaptic weights and weak enough synchrony, the small-weight approximation still holds with large probability:
. Under the small-weight approximation, one can express the rate coefficients
directly in terms of the synaptic activation variables
In turn, remembering that by definition, we have , the variance with pool-specific synchrony reads
As the diagonal correlation coefficients have unit value, i.e., , one can see that pool-specific synchrony can only increase voltage variability via the remaining off-diagonal terms
,
. Moreover, assuming uniform spiking correlations
,
, uniform synaptic weights
, and uniform input rates
, one obtains
where is the number of synaptic inputs received by neuron 1 and
is the spiking correlation between synaptic inputs of type α impinging on neuron 1. This shows that synchrony significantly impacts voltage variability as soon as
, which generally holds in cortex [66].
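This criterion can be illustrated with a generic variance computation for K exchangeable inputs with pairwise spiking correlation rho: the variance of their summed activity is amplified by 1 + (K - 1) rho relative to the independent case. This is the standard correlated-sum factor, used here only as an illustration of the K*rho criterion, not as a restatement of the paper's variance formula:

```python
# Generic amplification of the summed-input variance for K exchangeable
# inputs with pairwise correlation rho, relative to K independent inputs.
# Not the paper's formula -- only the standard correlated-sum factor used
# to illustrate when K * rho becomes order one.
def amplification(K, rho):
    return 1.0 + (K - 1) * rho

rho = 0.01                       # lower physiological estimate of synchrony
for K in (10, 100, 1000, 10_000):
    print(K, amplification(K, rho))
# Even at rho = 0.01, the amplification is near 2 for K = 100 inputs and
# grows to ~100 for the ~10^4 synapses typical of cortical neurons.
```

The takeaway matches the text: weak correlations cannot be neglected once the connectivity number K makes K*rho order one or larger.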
Synchrony between excitation and inhibition.
Let us finally consider the impact of having synchrony between excitation and inhibition on the variance , for which excitatory and inhibitory synapses may coactivate. As a result of such synchrony, the rate b1 is no longer additive:
. Similarly, it no longer holds that
and one cannot simplify the fractional form of Eqs (15) and (18). This fractional form can be understood as a mitigation of the antagonistic forces exerted by excitation and inhibition during a mixed synaptic event and closely mirrors the form of the Marcus rule update in Eq (7). Thus, synchrony between excitation and inhibition alters the expression of all coefficients
,
, and in particular, it causes the cross-coefficient
to be positive. Accordingly, in the small-weight approximation, one obtains
The above expression only differs from Eq (21) by the presence of cross terms involving . The latter quantity is necessarily negative as the voltage V is bounded above and below by
and
, respectively. Thus, the impact of synchrony between excitation and inhibition is to reduce variability in the membrane voltage, as intuition suggests [68]. Under assumptions of uniformity about spiking correlation coefficients, synaptic weights, and input rates, we further have
Observe that the uniform spiking correlation between excitatory and inhibitory inputs to neuron 1, denoted above by , necessarily satisfies
, so that as expected, the variance remains a nonnegative number.
Impact of synchrony on voltage covariability
Simultaneous whole-cell measurements in pairs of neighboring cells have revealed highly synchronized membrane voltage recordings, with crosscorrelation coefficients as large as [17]. Because these correlation measurements are essentially independent of the occurrence of spikes, their high levels of synchrony are almost entirely attributable to subthreshold activity. Simultaneous pair recordings in the voltage-clamp configuration further showed that similarly high levels of synchrony hold among excitatory and inhibitory inputs, as well as across excitatory and inhibitory inputs [19]. Although the nature of synchrony may be altered by stimulus drive, which can shift synchrony to higher-frequency voltage fluctuations, the overall degree of synchrony across nearby cells has consistently been measured to be high in anesthetized animals [20]. These findings were confirmed in awake behaving animals, where simultaneous pair recordings yield crosscorrelation coefficients
during spontaneous, restful activity, and comparatively weaker albeit still large values
during sensory drive or motor activity [18,21].
In this work, we consider whether the level of spiking synchrony necessary to explain voltage variability in single neurons can also explain the high degree of covariability observed between neighboring cells. To address this question within our modeling framework, we now discuss Eq (16) in the presence of synchrony between inputs to neuron 1 and neuron 2, which generally implies nonzero voltage covariance: .
Synchrony between inputs to distinct neurons.
By contrast with the variance case, evaluating via Eq (16) requires knowledge of the input rate to the neuronal pair b12 as opposed to the individual neuronal input rates b1 and b2. In S1 Text, S6 Appendix, we show that the pair-specific input rate satisfies
where we have denoted by the aggregate jump experienced by neuron a. The term q12 is the probability that neurons 1 and 2 receive synchronous inputs given that at least one neuron of the pair receives an input. Specifying a functional form for q12 requires choosing a parametric model for synchrony, i.e., for the jump distribution
. Given such a model, one can use Eq (22) to relate the value of b12 to b1 and b2, which in turn, can be deduced from the individual synaptic rates (see S1 Text, S6 Appendix). When inputs to neurons 1 and 2 are independent, we have q12 = 0 so that
, whereas when neurons are optimally synchronous, we have
, so that
. For intermediate levels of synchrony between inputs to neurons 1 and 2, one generally has
. At the same time, it no longer necessarily holds that
, or equivalently that
for all
. As a result, all four cross-coefficients
can be positive in Eq (18), depending on whether there is synchrony between the various neuron-specific pools of excitatory and inhibitory inputs. The fact that
, as well as the fact that
, are hallmarks of the presence of synchrony between inputs to neurons 1 and 2.
As in the case of the voltage variance, considering the biophysically relevant small-weight approximation yields directly interpretable formulas. These formulas are interpretable inasmuch as they only involve the input spiking rates and the spiking correlation coefficients
. Specifically, in the small-weight approximation, one obtains the generic covariance formula
where denotes the spiking correlation coefficients between inputs of type α and inputs of type β to distinct neurons. Under assumptions of uniformity about spiking correlation coefficients, synaptic weights, and input rates, we further have
which shows that although spiking correlation coefficients must be nonnegative within our framework, one can have if, e.g., one assumes
and
. As for the voltage variance, the small-weight approximation of the covariance
takes a simple interpretable quotient form. Specifically, the numerator expression has the same quadratic form as the one obtained for the voltage covariance of current-based models, which is recovered by neglecting the rate-dependent component of the denominator. Moreover, just as for the voltage variance, the covariance for conductance-based models follows from normalization by an effective time constant, which is obtained as the harmonic mean of the effective neuron-specific time constants. Finally, observe that although the covariance depends nonlinearly on the input rates
and the synaptic weights
and
, it is a linear function of the spiking correlation coefficients
.
Shared asynchronous inputs.
We first consider under which conditions realistic levels of voltage variability and covariability may emerge in the absence of input synchrony. Our previous work supports that achieving realistic voltage variability in the absence of input synchrony is possible when neurons are driven by a relatively small number () of strong synaptic inputs (
). Given this asynchronous setting, neuronal voltages may still covary across pairs of neurons if they share a fraction of their synaptic inputs. The case of shared inputs corresponds to considering distinct inputs of the same type α to neuron 1 and 2, say input k to 1 and input l to 2, but with identical rates and unit pairwise spiking correlation coefficient:
and
. For simplicity, we consider the symmetric case for which two identical neurons 1 and 2 receive the same number
and
of synaptic inputs via uniform synaptic weights
and
and uniform firing rate
and
, while sharing a fraction
of the excitatory inputs and a fraction
of inhibitory inputs. In this symmetric case, the voltage correlation between neurons is given by
where q measures the relative share of variability due to excitation as opposed to inhibition
When excitation alone is considered, we simply have that as intuition suggests. Given that
, q is defined as a decreasing function of the mean voltage m in the presence of both excitation and inhibition. To estimate q, we consider biophysically realistic parameter values for which
and
and we assume that excitatory and inhibitory rates are similar:
. In this setting, one can check that excitation is responsible for 90% of the variability at resting potential m = 0, and that this share declines to about 45% for
depolarization. Thus, to achieve
during spontaneous activity and
in the driven regime, one must assume that both neurons share between
of their excitatory inputs and between
of their inhibitory inputs. Both proportions of shared inputs, especially for the excitatory inputs, are larger than those typically observed experimentally, with estimates ranging from as low as 5% [9] to at most 30% shared inputs [69,70]. This suggests that neurons driven by a relatively low number of large asynchronous synapses can exhibit realistic levels of voltage variance but fail to produce realistic levels of voltage covariability.
We illustrate these results for asynchronous input drives in Fig 6. In Fig 6a–6b, we emphasize that the voltage covariability mostly depends (quasi-linearly) on the fraction of shared excitatory inputs as opposed to the number of excitatory inputs
. This is by contrast with Fig 6c–6d which shows that voltage covariability only marginally depends on the fraction of shared inhibitory inputs
, and even less on the number of inhibitory inputs
. However, in both cases, at fixed
and
, increasing the synaptic connectivity numbers
and
impacts the individual neuron statistics and, in particular, leads to voltage variance increases as observed in-vivo [3,4,19]. The primary dependence of voltage covariability on the shared fraction of excitatory inputs follows from the fact that for biophysically relevant parameters, excitatory inputs are responsible for most of the voltage variability, yielding near unit values for q. This is because at physiological levels of depolarization, the membrane voltage sits much further from the excitatory reversal potential
than from the inhibitory reversal potential
, leading to a comparatively larger excitatory driving force. Finally, in Fig 6e–6f, we confirm that albeit unrealistic, assuming shared fractions
and
leads to physiologically plausible individual and joint neural responses, with voltage correlations
at resting state and
in the driven regime.
(a-b) Changes in neural statistics in response to a varying excitatory drive with Hz, while inhibition is held constant at
,
Hz, and
. (a) Voltage correlation as a function of excitatory inputs
and the percent of
shared between the two neurons
. (b) Changes in voltage statistics (mean, variance, and voltage correlation) while holding either a constant percent shared
at 75% (top, green) or a constant
of 100 (bottom, purple). (c-d) Changes in neural statistics in response to a varying inhibitory drive with
Hz, while excitation is held constant at
,
Hz, and
. (c) Voltage correlation as a function of inhibitory inputs
and the percent of
shared between the two neurons
. (d) Changes in voltage statistics (mean, variance, and voltage correlation) while holding either a constant percent shared
at 75% (top, green) or a constant
of 25 (bottom, purple). (e-f) Changes in neural statistics in response to increasing drive with
,
,
and
. (e) Voltage correlation as a function of input rate
and
in Hz. (f) Changes in voltage statistics (mean, variance, and voltage correlation) while jointly increasing both the excitatory and inhibitory drive re/i. In all cases the synaptic weights are large with
.
Synchronous inputs across neurons.
We next consider under which conditions realistic levels of voltage variability and covariability may emerge in the presence of physiological input synchrony. This corresponds to spiking correlation coefficients within the range . Our previous work supports that achieving realistic voltage variability in the presence of input synchrony is possible when neurons are driven by a relatively large number (
) of moderate synaptic inputs (
). Given this synchronous setting, we consider the case of two identical neurons driven by synchronous synaptic inputs with a symmetric, uniform spiking correlation structure. Moreover, we consider that, taken separately, the inputs to each neuron exhibit a similar degree of spiking correlation, whereas inputs to distinct neurons are allowed to exhibit a different degree of spiking correlation. Specifically, we consider two such correlation structures: In case (i), for ease of comparison with the case of shared independent inputs, we consider that excitatory and inhibitory inputs are independent so that
and the correlations are parametrized by
for all
. In case (ii), we consider the more physiologically relevant case of uniformly correlated excitatory and inhibitory inputs for which
for all
and such that for all
with
,
. For both cases (i) and (ii), the individual voltage variance can be written as the weighted average of the asynchronous variance
and the fully synchronous variance
:
. Similarly, the co-variance depends linearly on the spiking correlation
so that
where the fully synchronous covariance is such that
. Therefore, in the symmetric case, the expression of the voltage correlations between neuron 1 and 2 simplifies to
where the dimensionless parameter measures the ratio of the fully synchronous variance to the asynchronous variance:
When excitation alone is considered, i.e., for q = 1, the above expression reduces to and voltage correlations exhibit two regimes: for small connectivity number
,
is a linear function of
with
; for large connectivity number
,
saturates to
. Thus, realistic levels of correlation are achieved for relatively large connectivity numbers if one assumes weaker spiking correlations across neuronal inputs than within neuron-specific inputs, so that
. Assuming that excitatory and inhibitory rates are similar, i.e.,
, the parameter
, and thus the coefficient
, become nonmonotonic functions of the mean depolarization m, which are however both decreasing over the physiological range for which
. For these values, one can check that including inhibition reduces the value of
to 60% of its value at resting potential m = 0 (i.e.
) and that this value further declines to about 8% for
depolarization (i.e.
). Such a reduction is due to the fact that including synchronous inhibition cancels out part of the variability due to synchronous excitatory inputs, a cancellation effect that is magnified for larger depolarizations, as the inhibitory driving force increases. In turn, given these estimates for
with
, choosing the biophysically realistic values
yields
during spontaneous activity and
in the driven regime, as observed physiologically. This suggests that including the weak level of synchrony observed in the input drive is enough to jointly account for the voltage variability and covariability of the subthreshold dynamics of neuronal pairs.
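The two regimes described above can be sketched with assumed closed forms: writing the variance as the rho-weighted average of the asynchronous variance V_a and the fully synchronous variance V_s, with beta = V_s/V_a scaling like the connectivity number K for exchangeable inputs (excitation only, q = 1), and a covariance linear in the cross-neuron correlation rho_12, gives rho_V(K) = rho_12 * K / (1 + rho * (K - 1)). These expressions are illustrative assumptions standing in for the paper's formulas, chosen to reproduce the linear-then-saturating behavior described in the text:

```python
# Hedged sketch of the two regimes of voltage correlation versus
# connectivity number K (excitation only, q = 1).  The closed form below
# is an assumed stand-in for the paper's formula: linear in K for small
# rho * K, saturating toward rho_12 / rho for large K.
def voltage_corr(K, rho, rho_12):
    """rho: within-neuron spiking correlation; rho_12: across-neuron
    spiking correlation (assumed weaker, rho_12 < rho)."""
    return rho_12 * K / (1.0 + rho * (K - 1))

rho, rho_12 = 0.03, 0.02           # illustrative within/across correlations
for K in (10, 100, 1000, 10_000):
    print(K, voltage_corr(K, rho, rho_12))
# Small K: roughly linear growth in K; large K: saturation near
# rho_12 / rho = 2/3.
```

The saturation value depends only on the ratio of across- to within-neuron spiking correlations, which is why large connectivity numbers with weak synchrony suffice to produce strong voltage correlations.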
We illustrate the impact of synchrony on voltage covariability discussed above in Fig 7. To compare this impact with that of shared independent inputs, we first examine case (i), for which synchronous excitation and synchronous inhibition act independently. Similarly to the case of shared inputs, Fig 7a–7b, emphasizes that the voltage covariability primarily depends (linearly) on the cross excitatory correlations, , as opposed to the number of excitatory inputs
. This is by contrast with Fig 7c–7d, which shows that voltage covariability only marginally depends on the cross inhibitory correlations,
, and even less on the number of inhibitory inputs
.
(a-b) Changes in neural statistics in response to a varying excitatory drive with Hz and
, while inhibition is held constant at
,
Hz,
,
, and
. (a) Voltage correlation as a function of excitatory inputs
and cross excitatory correlation
. (b) Changes in voltage statistics (mean, variance, and voltage correlation) while holding either a constant cross excitatory correlation of
(top, green), or a constant
of 10⁴ (bottom, purple). (c-d) Changes in neural statistics in response to a varying inhibitory drive with
Hz and
, while excitation is held constant at
,
Hz,
,
, and
. (c) Voltage correlation as a function of inhibitory inputs
and cross inhibitory correlations
. (d) Changes in voltage statistics (mean, variance, and voltage correlation) while holding either a constant cross inhibitory correlation of
(top, green), or a constant
of 250 (bottom, purple). (e-f) Changes in neural statistics in response to increasing drive with
,
,
and
. (e) Voltage correlation as a function of input rate
and
in Hz. (f) Changes in voltage statistics (mean, variance, and voltage correlation) while jointly increasing both the excitatory and inhibitory drive re/i. In all cases the synaptic weights are moderately sized with
.
Finally, assuming , we confirm in Fig 7f that including physiological levels of input synchrony yields realistic voltage correlations at resting state (
for
) as well as in the driven regime (
for
). These results indicate that assuming a broad level of input synchrony, compatible with the weak level of spiking correlations observed in vivo, is enough to explain the high degree of voltage correlations observed in simultaneous pair recordings.
Voltage skewness
In principle, one can exhibit the higher-order cross-correlation structure of the subthreshold voltage dynamics by simultaneously recording more than two neurons. However, these recordings are exceedingly challenging to perform, and we are not aware of any reports of such measurements. That said, higher-order correlation statistics can be straightforwardly explored in single (or pair) recordings by investigating higher-order moments of the voltage traces. In particular, one can measure the asymmetry of the voltage distribution via its skewness, which is defined as the scaling-invariant quantity , where
and
denote the second and third centered moments of the stationary voltage V, respectively. In-vivo electrophysiological measurements have revealed that voltages are generally right-skewed, i.e.,
, with unimodal voltage distributions exhibiting comparatively heavier right tails than left tails [4,22]. Furthermore, it has been shown that this positive skewness varies with physiological synaptic drive. Specifically, spontaneous activity exhibits considerable positive skewness, with values as large as
, whereas evoked activity displays significantly reduced skewness, with values as low as
, compatible with a near-symmetric, Gaussian voltage distribution [4,22].
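For illustration, the centered-moment definition of skewness translates directly into an estimator for sampled voltage traces. The following minimal Python sketch (synthetic samples, not the AONCB model) checks the estimator on a right-skewed and a symmetric distribution.

```python
import numpy as np

def voltage_skewness(v):
    """Skewness s[V] = E[(V - m)^3] / E[(V - m)^2]^(3/2):
    a scale- and shift-invariant measure of distributional asymmetry."""
    dv = np.asarray(v, dtype=float) - np.mean(v)
    return np.mean(dv**3) / np.mean(dv**2) ** 1.5

rng = np.random.default_rng(0)
s_right = voltage_skewness(rng.exponential(size=100_000))  # right-skewed sample
s_sym = voltage_skewness(rng.normal(size=100_000))         # symmetric sample
print(s_right, s_sym)   # roughly 2 and 0, respectively
```

A heavier right tail yields a positive skewness, as for the in-vivo voltage distributions discussed above.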
The skewness measurements discussed above are essentially independent from the spiking regime of the recorded neuron, which supports that voltage skewness originates from subthreshold neuronal mechanisms. Consistent with this observation, one can provide an elementary, heuristic explanation for the emergence and activity-modulation of the voltage skewness by recognizing the antagonistic role of excitatory and inhibitory driving forces [27–29]. During spontaneous activity, the voltage sits close to and far away from
on average, leading to a vastly larger excitatory driving force than the inhibitory driving force. As a result, resting equilibrium must be achieved by balancing many small hyperpolarization events by comparatively fewer and larger depolarization events, leading to a substantial right skew. During the evoked regime, the voltage moves closer to
and away from
, leading to more commensurate excitatory and inhibitory driving forces. Therefore, evoked equilibrium can now be achieved by balancing comparable hyperpolarization and depolarization events, leading to near-symmetric voltage fluctuations. Albeit simple, this explanation involves the tuning of many possible contributing factors, including connectivity numbers, synaptic weights, and driving rates, but also input synchrony. In the following, we leverage our jump-process-based framework to show that explaining the observed skewness actually requires an alternative explanation primarily involving the antagonistic role of relaxation and excitation alone. Furthermore, we also show that including physiological levels of input synchrony is necessary to quantitatively explain this skewness.
Analytical predictions for excitation alone.
Applying the PASTA principle within our jump-process-based framework allows one to derive generic explicit formulas for any centered moments of an AONCB neuron’s voltage. Although these formulas quickly become unwieldy for higher orders, one can express the third stationary, centered moment under the compact form
where denotes the stationary voltage variance (see Eq (16)) and where the auxiliary random variables Y and R are defined as
As usual, the expectation in Eq (26) is with respect to the jump variable , whose distribution parametrizes input synchrony. In the small-weight approximation, one can further exhibit the dependence of
on the input synchrony structure in terms of the generalized spiking correlation coefficients defined in Eq (5). In general, this approach yields expressions for
in terms of the second-order and the third-order spiking-correlation coefficients
and
,
. For simplicity, we only give this expression for the case of excitation alone with uniform synaptic weights and rates. For this case, we have
where denotes the uniform third-order spiking correlation coefficient. The above expression shows that even in the absence of inhibition,
results from the contribution of two terms with opposite signs, suggesting that zero skewness may be achieved even without the antagonistic action of inhibition. To check this point, we adopt the beta-binomial statistical model from [6], which involves a single correlation coefficient via
. Then, assuming that
as suggested by biophysical considerations, one has the approximate skewness expression
In the expression above, the quantity 𝕤[V] is the voltage skewness of the asynchronous, current-based model with identical parameters and driving inputs, which corresponds to considering a constant driving force instead
in the dynamics given by Eq (1) without inhibition. Thus, even in the absence of a voltage-dependent driving force, the voltage is right-skewed in the spontaneous regime with, e.g., 𝕤[V] ≃ 0.25 for
and
, whereas this skew decreases with the input drive with, e.g.,
for
and
. For large input numbers
, the characteristic time between transient depolarizations
remains smaller than the relaxation time τ, even in the spontaneous regime for which
. As a result, the voltage trace is nearly piecewise linear so that its skewness is directly inherited from the Poissonian fluctuations of the input spike count over the timescale τ. These become nearly Gaussian distributed—and thus unskewed—for large rate
, explaining the overall decreasing trend of the voltage skewness with input drive.
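The inheritance of Poissonian count fluctuations described above can be checked numerically for the asynchronous current-based model: fixed-size jumps filtered by exponential relaxation. The sketch below uses hypothetical parameter values (tau = 20 ms, total rates of 250 and 5000 Hz) and verifies that the estimated skewness falls roughly like 1/sqrt(rate * tau), as predicted by Campbell's theorem for exponentially filtered Poisson shot noise.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, dt, T = 0.020, 0.001, 300.0                 # time constant, step, duration (s)
n = int(T / dt)
decay = np.exp(-dt / tau)
kernel = decay ** np.arange(int(15 * tau / dt))  # truncated exponential filter

def skewness(x):
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

def shot_noise_skew(rate, w=0.1):
    """Current-based neuron: each input spike adds a fixed jump w,
    followed by exponential relaxation (no driving-force modulation)."""
    counts = rng.poisson(rate * dt, size=n)      # Poisson input spikes per bin
    v = w * np.convolve(counts, kernel)[:n]      # causal filtered shot noise
    return skewness(v[len(kernel):])             # drop initial transient

low, high = shot_noise_skew(250.0), shot_noise_skew(5000.0)
print(low, high)   # skewness shrinks roughly as 1/sqrt(rate * tau)
```

For this toy model, Campbell's theorem gives a skewness of (2 sqrt(2) / 3) / sqrt(rate * tau), matching the decreasing trend with input drive discussed in the text.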
However, the right-skew exhibited by the asynchronous current-based model is generally too small in the spontaneous regime and too large in the driven regime compared to the values reported experimentally. To remedy these discrepancies, one can increase the spontaneous-regime skewness by including input synchrony. At the same time, one can decrease the driven-regime skewness by considering a voltage-dependent driving force, as in the AONCB models. Indeed, for all synchrony conditions, Eq (27) shows that the voltage of excitatory-driven AONCB neurons remains right-skewed in the driven regime up to and maintains a small negative skewness for larger drive. The emergence of this small negative skew follows from the modulations of the Poissonian input fluctuations enacted by the voltage-dependent driving force, which amplifies downward input count fluctuations (larger driving forces) compared to upward ones (smaller driving forces). That said, this modulation is negligible at low driving rates, so that increasing the magnitude of the skewness requires acting on the extra factor capturing the impact of synchrony. One can check that for
and
, we have
for
but
for
. Altogether, the above discussion shows that realistic levels of voltage skewness can be explained by excitation alone in conductance-based neurons if these are driven by synchronous inputs.
Numerical results for excitation and inhibition.
A principled treatment of the general expression for the skewness of an AONCB neuron is burdensome (see Eq (26)) and, perhaps more importantly, unnecessary to understand skewness under physiological conditions, as we will see. For this reason, we resort to numerics to investigate skewness in the presence of excitatory and inhibitory driving inputs. We present our main results in Fig 8, where we show that for biophysically relevant parameters, the voltage skewness is almost exclusively shaped by excitatory inputs and that input synchrony remains a requirement to achieve realistic levels of skewness in the presence of excitation and inhibition. In Fig 8a, we show two representative traces for the subthreshold membrane voltage of an AONCB neuron driven by asynchronous and synchronous inputs, as well as the corresponding voltage distributions. For moderate depolarization, the impact of synchrony is to substantially enhance excitation-driven depolarization, leading to right-skewed distributions. In Fig 8b, we confirm that these asynchronous and synchronous skews can be accurately estimated via Eq (26). Specifically, we show that the PASTA analysis developed in S1 Text, S3 Appendix, produces accurate estimates of the higher-order voltage moments.
(a) Example Monte-Carlo simulations of AONCB neurons, and voltage distributions, with asynchronous (green) and synchronous (purple) inputs. (b) Comparison of analytically derived moment expressions using Eq (26) with numerical estimates obtained via Monte-Carlo simulations for the asynchronous (green) and synchronous (purple) conditions considered in (a). (c) Effects of increasing excitatory and/or inhibitory rate on skewness for excitatory inputs only (red), inhibitory inputs only (blue), or joint excitation and inhibition (purple). (d) Effects of jointly increasing excitatory and inhibitory input rate on skewness for various types of synchrony-based input correlations: uncorrelated (uncorr, green), within correlation
,
and
(within corr, purple), within and across correlation
(across corr, blue).
Next, we show that realistic levels of skewness, i.e., in the spontaneous regime and
in the driven regime, can be achieved in the presence of both excitation and inhibition. In Fig 8c, we represent
as a function of the driving rate r in three driving conditions: excitation alone (
,
), inhibition alone (
,
), and joint excitation and inhibition (
). Consistent with intuition, at low input drive, the voltage fluctuations are dominated by transient hyperpolarizing or depolarizing inputs when driven by inhibition or excitation alone, respectively. This leads to right-skewed voltage distributions when
,
and left-skewed distributions when
,
. Both skews vanish with increasing drive, as the mean voltage increases or decreases away from the resting potential but with values that remain away from the reversal potentials
and
. In both cases, the same mechanism of inheritance of the Poissonian input fluctuations explains this reduction in skewness. However, considering excitation and inhibition together for physiological conditions reveals that the excitation-related mechanism is overwhelmingly responsible for the voltage skewness. This is due to the large difference between the magnitudes of the driving forces
, which gets nonlinearly magnified in the second and third moments involved in the skewness definition. For simplicity, we only consider within-group synchrony in Fig 8c, which supports that voltage skewness does not arise from the antagonistic actions of excitation and inhibition but is almost entirely due to shot-noise excitatory drive. In Fig 8d, we show that including synchrony between excitation and inhibition does not alter skewness, consistent with the negligible role played by inhibition. Note that the predicted skewness values and trends are in agreement with observations reported in vivo when synchrony is included, whereas erasing synchrony altogether yields unrealistically low levels of skewness [4].
Discussion
Synchrony modeling and variability analysis
In this work, we have generalized the analytical approach proposed in [6] to quantify the impact of synaptic input synchrony on the membrane voltage variability of the so-called all-or-none-conductance-based (AONCB) neuronal model. Our generalization proceeds in two directions: First, we have extended our jump-process-based framework to model the synchronous drive to AONCB neurons resulting from inputs with heterogeneous rates and correlation structures. Such an extension allows for the consideration of more realistic input models than the ones considered in [6], which relied on the restrictive hypothesis of input exchangeability. This involves the introduction of generalized higher-order spiking correlation coefficients, which may vary according to the set of inputs considered. Second, we have adapted results from queueing theory to analyze the shot-noise-driven dynamics of AONCB neurons. These results hold in the limit of instantaneous synapses, for which one assumes that the synaptic integration timescale is much smaller than the membrane time constant of the neuron. Such an approach allows one to extend the results of [6] to compute the arbitrary-order, centered, voltage moments of any feedforward assembly of neurons driven by inputs with heterogeneous synchrony structure. The obtained formulas are derived recursively from the fixed-point relation that the moments obey in the stationary regime. These formulas involve expectations with respect to the distribution of the jumps of the driving process, which models synchronous inputs. Prior to this work, physics-inspired techniques have been used to compute the time-dependent moments for the subthreshold dynamics of conductance-based neuronal models driven by shot noise [71,72]. However, these techniques were applied in the asynchronous regime and produced moment formulas in integral forms where parametric dependencies can be hard to capture.
Notably, these techniques were recently expanded to include the impact of spiking activity on the time-dependent mean and auto-correlation structure of the neuronal response [73,74].
Our previous work supports that accounting for the surprisingly large variance of the membrane voltage measured in vivo necessitates weak but nonzero input synchrony, consistent with spiking correlation measurements in large spiking neuronal populations [6]. Here, we have utilized our newly derived results to investigate whether weak but nonzero synchrony is consistent with other statistics of the measured subthreshold neuronal activity, including voltage covariability across pairs of neurons [17–21] and voltage skewness in single recordings [3,4,22]. We find that weak but nonzero input synchrony consistently explains the large degree of voltage correlations observed in vivo (0.4–0.8). This is by contrast with asynchronous inputs, for which the emergence of similarly strong voltage correlations would imply that recorded neurons share at least 65% of their excitatory inputs, inconsistent with anatomical studies [66]. Moreover, we find that excitation is the primary driver of voltage covariability owing to the generally stronger driving force associated with excitatory synaptic events. Inspecting the detailed impact of the input correlation structure confirms this primary role of excitation, as shown in Fig 9, where excitation-specific correlations are the main determinant of the voltage covariability. At the same time, while it plays a secondary role, inhibition can modulate voltage covariability in the evoked regime, when a relative increase in the inhibitory driving force partially cancels the excitation-induced correlations. However, such a modulation can also lead to an overall reduction in variability, akin to the variability quenching reported for non-primate mammals [1], but contrary to in-vivo observations in primates [4]. Similarly, we find that excitation is also the primary driver of the subthreshold voltage skewness measured in vivo and that explaining the modulation of skewness by the drive does not require inhibition to first approximation.
However, we found that explaining the high degree of skewness observed in the spontaneous regime requires including weak but nonzero input synchrony, consistent with physiological observations in primates [4]. We devote the remainder of this work to discussing potential limitations of our analysis and future implications of our results.
(a-b) Changes in cross-neuron voltage correlations in response to varying excitatory and inhibitory cross-neuron correlations and
, with
,
,
Hz,
, and
. (a) Voltage correlations with large synaptic weights (left) and moderate synaptic weights (right). (b) Changes in voltage correlations while holding either a constant cross-neuron excitatory correlation of
(top, red) or cross-neuron inhibitory correlation of
(bottom, blue) for both large (dashed) and moderately sized (solid) synaptic weights. (c-d) Changes in cross-neuron voltage correlations in response to varying same-pool cross-neuron correlations
and across-pool cross-neuron correlations
, with
,
,
Hz,
, and max
. Changes in voltage correlations while holding either a constant same-pool cross-neuron correlation of
(top, purple) or cross-pool correlation of
(bottom, green) for both large (dashed) and moderately sized (solid) synaptic weights.
Interpretation in the small-weight approximation
A caveat of our exact moment calculations is that the obtained formulas can be difficult to interpret in relation to measurable quantities (see, e.g., Eq (16)). This difficulty stems from the fact that when the moments under consideration involve an assembly of (more than one) neurons, the corresponding formulas feature the rates of spiking events experienced by the assembly of neurons collectively (see S1 Text, S1 Appendix). In the presence of synchrony, these collective rates do not behave additively as several neurons can experience input activations at the exact same time. One can gain insight about these collective rates by considering a parametric model for input synchrony. For simplicity, let us consider the homogeneously synchronous beta-binomial model introduced in [6]. This model considers K exchangeable inputs whose synchrony is parametrized by a single parameter such that the pairwise spiking correlation is given by
. Given identical individual spiking rate r and
, the collective rate of spiking events experienced by the K inputs can be shown to be
where ψ denotes the digamma function. Thus, in the presence of synchrony, the collective rate grows strictly sublinearly (logarithmically) as a function of the number of inputs. Such a logarithmic growth is characteristic of rich-gets-richer clustering processes, a view that can be adopted to describe the beta-binomial model for synchrony [75,76]. In this view, one builds synchronous input activity iteratively by specifying how to add a
-th extra input to an existing population of K inputs while maintaining overall homogeneous synchrony. To do so over a period T, one allocates extra-input spikes to already existing spiking events—or clusters—of size
with probability
, while the extra input spikes on its own to start
new clusters, where
is a Poisson random variable with parameter
. Thus, the number of new spiking events is inversely proportional to the number of inputs on average, leading to a logarithmic growth of the average total number of spiking events, and therefore of the collective rate
. We expect such logarithmic growth to be a generic feature of the collective drive exerted by synchronous inputs, but exactly computing such drives for heterogeneously synchronous inputs is generally a challenging task (see S1 Text, S6 Appendix).
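The logarithmic growth of the collective rate can be illustrated with a toy simulation of the rich-gets-richer construction described above, in which the k-th input added starts new clusters at a mean rate inversely proportional to k. The rate constant lam below is hypothetical; the point is only the sublinear scaling of the total number of distinct spiking events.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 20.0   # hypothetical expected number of events started by the first input

def n_events(K):
    """Rich-gets-richer sketch: the k-th input added starts new spiking
    events (clusters) at a mean rate proportional to 1/k, so the expected
    total number of distinct events grows like lam * H_K ~ lam * log(K)."""
    new_clusters = rng.poisson(lam / np.arange(1, K + 1))
    return int(new_clusters.sum())

counts = {K: n_events(K) for K in (10, 100, 1000)}
print(counts)   # sublinear (logarithmic) growth with the number of inputs K
```

With a hundredfold increase in the number of inputs, the number of distinct events grows only by a modest additive amount, in line with the digamma expression for the collective rate.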
Fortunately, the small-weight approximation allows one to sidestep the need to specify the collective rates of heterogeneous input assemblies, while making it possible to interpret our moment calculations in relation to directly measurable quantities. The small-weight approximation assumes that with high probability, the total jump size remains small, i.e., . Such an assumption, which is justified by numerical simulations, is natural given the weak level of observed spiking correlations, i.e.,
, and given the constraint that
, even for large connectivity numbers. Under this assumption, the small-weight approximations are obtained by Taylor expanding the exponential terms featured in our exact moment formulas. Importantly, the resulting approximate expressions can then be expressed in terms of the individual input spiking rates rk (as opposed to the collective rates) and of the generalized correlation coefficients
. These coefficients, defined in Eq (5), measure the propensity of the set of inputs
to activate simultaneously and are expected to decay with n, the number of considered inputs. For the beta-binomial model considered above with a single type of inputs, one can check that
where Γ denotes the standard gamma function and where we recall that ρ denotes the pairwise spiking correlation. Thus, vanishes as a power law with exponent
when
. This is consistent with intuition, which suggests that higher degree of synchrony, i.e., smaller
, corresponds to slower decay of the high-order correlations.
More realistic forms of synchrony
A possible limitation of our analysis stems from our modeling of spiking synchrony via jump processes, which enacts a perfect form of synchrony with exactly simultaneous synaptic activations. Considering instantaneous synchrony overlooks the fact that empirical spiking-correlation estimates such as
depend on the timescale over which the processes
and Nj count the spiking events of inputs i and j. Experimental measurements have revealed that
decreases with smaller timescale
and vanishes in the limit
[7,8]. One can conveniently reproduce these experimental observations in more realistic models of synchrony obtained by jittering instantaneously synchronous spikes. This is because moving individual spiking times with independently and identically distributed time shifts, e.g., with centered Gaussian variables with standard deviation
, erases temporal correlations at timescale shorter than
. Such an erasure implements the desired decrease of spiking correlations
with smaller timescale
, while causing an overall decrease of
over all timescales
compared with the instantaneous, unjittered spiking correlations. In fact, one can produce empirical correlations with biophysically relevant temporal profile by jittering instantaneously synchronous spikes with
with
.
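This jittering procedure is straightforward to emulate numerically. The sketch below (hypothetical rates and jitter scale, unrelated to the specific values used in our figures) builds two spike trains sharing a common Poisson component, jitters every spike with an independent Gaussian shift, and verifies that the empirical spike-count correlation decays as the counting window shrinks below the jitter scale.

```python
import numpy as np

rng = np.random.default_rng(3)
T, sigma = 2000.0, 0.050               # duration (s) and jitter scale (s)
rate_shared, rate_private = 2.0, 8.0   # shared vs private Poisson rates (Hz)

def count_corr(t1, t2, window):
    """Pearson correlation of spike counts binned at the given timescale."""
    edges = np.arange(0.0, T, window)
    c1, _ = np.histogram(t1, edges)
    c2, _ = np.histogram(t2, edges)
    return np.corrcoef(c1, c2)[0, 1]

shared = rng.uniform(0, T, rng.poisson(rate_shared * T))
t1 = np.concatenate([shared, rng.uniform(0, T, rng.poisson(rate_private * T))])
t2 = np.concatenate([shared, rng.uniform(0, T, rng.poisson(rate_private * T))])
# Jitter every spike independently: correlations at timescales << sigma are erased.
t1j = t1 + rng.normal(0.0, sigma, t1.size)
t2j = t2 + rng.normal(0.0, sigma, t2.size)

corrs = {w: count_corr(t1j, t2j, w) for w in (0.005, 0.025, 0.2)}
print(corrs)   # correlation decays as the counting window shrinks below sigma
```

Without jitter, the count correlation would sit near the shared-event fraction at every timescale; jittering reproduces the timescale dependence reported experimentally.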
It turns out that considering the above, more realistic forms of jittered synchrony reveals that, although unrealistic, perfect input synchrony still yields biologically relevant estimates of the stationary voltage statistics. Indeed, as noticed in [6], we find that the stationary voltage statistics of AONCB neurons, even for realistic forms of synchrony, are essentially determined by the input synchrony measured over a single timescale. We find this relevant timescale to be around , about twice the membrane time constant over which inputs are integrated by the neurons. Accordingly, the biophysically relevant range of spiking correlations that we consider in this work, i.e.,
, corresponds to measurements performed over the timescale
[7,8]. To confirm these observations, we validate our synchrony modeling approach in Fig 10, where we compare the response of the same AONCB neurons to instantaneously synchronous inputs and to jittered synchronous inputs. Specifically, in Fig 10a, we show representative voltage responses for an AONCB neuron driven by matched unjittered and jittered synchronous inputs. Although these traces exhibit distinct temporal dynamics, Fig 10b reveals that both forms of synchronous inputs yield nearly identical stationary voltage distributions, and thus nearly identical skewness coefficients as well. Similarly, in Fig 10c, we show representative voltage responses for a pair of AONCB neurons driven by matched unjittered and jittered synchronous inputs. We then check in Fig 10d that these temporally distinct dynamics yield nearly identical stationary joint voltage distributions, and thus nearly identical voltage correlations as well. Thus, although our approach produces distinct transient responses depending on the form of the synchronous drives, it yields consistent stationary statistics.
(a) Example Monte Carlo simulations of mean-subtracted AONCB neurons under both instantaneous synchrony (thin line) and jittered input synchrony (thick line), with increasing excitatory input rates (warming colors). (b) Changes in skewness with increasing excitatory input rates for both instantaneous synchrony (purple) and jittered synchrony (green). Insets show voltage histograms corresponding to simulations in (a), color-coded to match. (c) Example Monte Carlo simulations of mean-subtracted AONCB neurons for two neurons (V1, top and V2, bottom), with cross-neuron input correlations under both instantaneous synchrony (thin line) and jittered input synchrony (thick line), with increasing excitatory input rates (warming colors). (d) Changes in voltage correlations with increasing excitatory input rates for both instantaneous synchrony (purple) and jittered synchrony (green). All simulations use
with moderate synaptic weight sizes. For instantaneous input synchrony, the input correlation is
, while for jittered input synchrony, the instantaneous correlation is
with a spike jitter of 50 ms, resulting in a spiking correlation of 0.03 when observed within a 25 ms window.
Obscuring role of synaptic heterogeneity
As noted when introducing biophysical parameters, an apparent limitation of our analysis is the consideration of typical synaptic weights. This assumption implies that conductance variability exclusively stems from the variable numbers of coactivating synapses, and thus from synchrony. This is certainly an oversimplification, as synapses exhibit heterogeneity in weights [55], which must play a role in shaping neural variability [54]. Distinguishing between heterogeneity and synchrony contributions, however, is a fundamentally ambiguous task [77]. For instance, consider K inputs with identical weight w and identical spiking rate r. Such inputs are instantaneously synchronous if p(k), the probability that k of them coactivate, is not concentrated on k = 1 but also distributed on values k > 1. It is not too hard to realize that such a synchronous model is indistinguishable from considering K independent inputs with heterogeneous weights kw and associated activation rates Krp(k), . This ambiguity mirrors that of interpreting whether an EPSP results from a single or from multiple coincidental synaptic activations. That said, recent experimental evidence supports that cortical response selectivity derives from strength in numbers of synapses, rather than differences in synaptic weights [78], consistent with a dominant role for synchrony.
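To make this ambiguity concrete, note that at the level of the aggregate conductance drive, both descriptions produce jumps of size kw at the same per-size event rates (written below as b times p(k), which matches the Krp(k) of the text up to the normalization relating the event rate b to the individual rate r). The following Monte Carlo sketch uses a hypothetical event rate and a uniform p(k); the equivalence itself does not depend on these choices.

```python
import numpy as np

rng = np.random.default_rng(4)
K, w, b, T = 10, 0.5, 50.0, 1.0
p = np.ones(K) / K            # hypothetical coactivation distribution p(k), k=1..K

def total_input_sync(trials):
    """Synchronous view: events at rate b, each coactivating k ~ p(k)
    of the K identical weight-w synapses (aggregate jump k*w)."""
    out = np.empty(trials)
    for i in range(trials):
        n = rng.poisson(b * T)
        ks = rng.choice(np.arange(1, K + 1), size=n, p=p)
        out[i] = w * ks.sum()
    return out

def total_input_hetero(trials):
    """Heterogeneous view: K independent 'inputs' with weights k*w,
    the k-th firing as an independent Poisson process with rate b*p(k)."""
    out = np.empty(trials)
    for i in range(trials):
        ns = rng.poisson(b * p * T)              # spike counts per effective input
        out[i] = w * (np.arange(1, K + 1) * ns).sum()
    return out

a, h = total_input_sync(4000), total_input_hetero(4000)
print(a.mean(), h.mean())   # same compound-Poisson law: matching first moments
print(a.var(), h.var())     # ... and matching second moments
```

Both constructions realize the same compound Poisson process for the total drive, so no statistic of the aggregate input can tell them apart.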
In any case, our framework is general enough to account for the impact of heterogeneities. Indeed, all our analytical results are derived assuming heterogeneous synaptic weights, and these results can be used to gauge the respective contributions of synaptic heterogeneity and input synchrony to neuronal variability. Performing such an analysis reveals that in the absence of synchrony, the heterogeneity contribution is primarily controlled by the coefficient of variation of the synaptic weights, denoted by . To see why, let us consider the case of asynchronous heterogeneous excitatory inputs but with fixed rate r for simplicity. In the small-weight approximation, the subthreshold variance (21) simplifies to:
Identifying the typical value of the synaptic weights with the mean synaptic weight , we then have:
where the angled brackets denote averages over synapses. This shows that the contribution of heterogeneity to variability is mediated by the squared coefficient of variation of the weights
to be compared with the corresponding variance for the homogeneous, synchronous case:
Note that in the equations above, the term formally plays the same role as the term
, where ρ denotes the input spiking correlation.
The coefficient has been inferred from EPSP amplitude measurements in several experimental studies, including [56,57]. These measurements indicate that
is smaller than 1 (about 0.4 in most studies). This supports that synaptic weight heterogeneity can at most double subthreshold variability, which is not enough to achieve realistic voltage variance with moderate synaptic weights. This is by contrast with the synchrony contribution [6]. Indeed, considering 10³ inputs with weak spiking correlation
(lower end of the measurement range) already yields
, which corresponds to increasing variability by an order of magnitude. This supports our hypothesis that synchrony is the main driver of variability, at least compared with the contribution of synaptic heterogeneity.
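The order-of-magnitude comparison underlying this conclusion can be spelled out with the numbers quoted above (treating 0.4 as the squared weight CV, as suggested by the "at most double" statement; this attribution is our reading, not a measurement):

```python
# Heterogeneity scales the asynchronous shot-noise variance by (1 + CV_w^2),
# whereas synchrony scales it by a factor of order (1 + K * rho).
cv_w_sq = 0.4          # squared CV of synaptic weights (upper range of estimates)
K, rho = 1000, 0.01    # input number and weak pairwise spiking correlation
print(1 + cv_w_sq)     # heterogeneity: at most a ~2x variance boost
print(1 + K * rho)     # synchrony: ~11x, an order of magnitude
```

Even at the lower end of the measured correlation range, the synchrony factor dominates the heterogeneity factor by nearly an order of magnitude.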
Effect of faulty synaptic transmission
Another possible limitation of our approach is that it neglects synaptic faultiness as a source of variability. Synaptic faultiness refers to the observation that chemical synapses release neurotransmitters stochastically in response to action potentials, so that the response of a post-synaptic neuron to a presynaptic spiking event may vary or fail altogether [79,80]. Because it amounts to considering additional sources of independent noise, we expect that including faulty synaptic transmission in our modeling framework will yield a decrease in the impact of synchrony, potentially challenging our results. To explore the impact of faulty synapses, let us model faulty synaptic activation variables as , where
denotes independent
-valued Bernoulli variables with parameters
. This corresponds to considering the case of all-or-none faultiness for which synaptic transmission fails with probability
. In this setting, and consistent with intuition, faulty transmission causes the effective synaptic rates and the effective spiking correlation coefficients to decrease according to
In order to maintain a realistic mean voltage response range, one must compensate for the effective reduction in driving rates caused by synaptic faultiness. In principle, such compensation can be achieved by inversely scaling the synaptic weights according to . However, such a scaling requires introducing unrealistically large synaptic weights for high degrees of synaptic failure (
) [50,51]. It is thus more natural to compensate for the effect of synaptic failure by scaling the number of synaptic inputs instead. Assuming homogeneous weights
and failure rates
, this amounts to setting
, while holding
. One can then check using Eqs (23) and (27) that such a scaling of the input numbers also preserves the second- and third-order statistics of the voltage response in the small-weight approximation. Therefore, and perhaps counterintuitively, our results hold in the presence of faulty synaptic transmission at the mere cost of increasing the number of synaptic inputs. Such a cost is inconsequential, as assuming drastic faultiness such as p = 0.1 involves considering input numbers
, which remain compatible with anatomical observations [66].
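The rate-compensation argument rests on the Poisson thinning property: Bernoulli failures with success probability p turn Poisson inputs of rate r into Poisson inputs of rate pr, so scaling the input number by 1/p restores the delivered spike statistics. A quick Monte Carlo check (hypothetical parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
r, p, T, trials = 5.0, 0.1, 1.0, 20_000   # rate (Hz), success prob., window (s)
K = 100                                    # reference number of reliable inputs

def delivered_counts(K_inputs):
    """Spikes surviving all-or-none synaptic failure (success probability p)
    across K_inputs independent Poisson inputs of rate r, per window T."""
    sent = rng.poisson(K_inputs * r * T, size=trials)
    return rng.binomial(sent, p)

base = rng.poisson(K * r * T, size=trials)   # reliable synapses, K inputs
comp = delivered_counts(round(K / p))        # faulty synapses, K/p inputs
print(base.mean(), comp.mean())   # both close to K*r*T = 500
print(base.var(), comp.var())     # thinned Poisson is again Poisson
```

The delivered counts match in both mean and variance, illustrating why input-number scaling preserves the voltage statistics to leading order (the synchrony-dependent corrections require the full correlation scaling given above).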
Effect of including gap junctions
One more limitation of our approach is that we have not investigated how gap junctions may impact subthreshold variability. Gap junctions establish instantaneous, bidirectional electrical couplings between neighboring cells [81]. In cortex, these gap junctions are a distinctive feature of inhibitory spiking cells, which form connected networks of electrically coupled cells [82,83]. Our framework can be extended to study the impact of such electrical couplings, which must clearly promote positive correlations across cells. For simplicity, consider an electrically coupled pair of neurons that only receive excitatory inputs. The voltages of the pair of neurons obey the following system of first-order stochastic equations
where G12 denotes the electrical coupling induced by gap junctions between neuron 1 and 2. Because the system of Eq (28) is linear, applying the PASTA principle in the stationary regime still yields analytically tractable conservation equations. The main complication stems from the fact that in between consecutive spiking arrivals, the deterministic evolution of V1 and V2 is coupled via the constant term g = G12/G. By contrast, one can check that the Marcus update rules Eq (10) still apply unchanged.
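As a rough numerical illustration of this setup, the following sketch simulates such an electrically coupled pair driven by independent excitatory shot noise, with the deterministic evolution coupled through the constant term g = G12/G. All parameter values are hypothetical, and the jump update is a simplified Marcus-type relaxation toward the excitatory reversal potential rather than the full rules of Eq (10):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: passive time constant tau (s), coupling ratio
# g = G12/G, excitatory reversal Ve (leak reversal at 0), input rate (Hz),
# and dimensionless synaptic weight w.
tau, g, Ve = 0.015, 0.2, 1.0
rate, w = 1000.0, 0.01
dt, T = 1e-4, 20.0

V = np.zeros(2)
samples = []
for _ in range(int(T / dt)):
    # Deterministic relaxation: leak plus electrical coupling g * (V_j - V_i).
    V = V + (-V + g * (V[::-1] - V)) / tau * dt
    # Independent excitatory Poisson arrivals with a Marcus-type jump toward Ve.
    spikes = rng.random(2) < rate * dt
    V = np.where(spikes, V + (1 - np.exp(-w)) * (Ve - V), V)
    samples.append(V.copy())

samples = np.array(samples)
corr = np.corrcoef(samples.T)[0, 1]
# Even though the two input trains are independent, the gap-junction term
# mixes the voltages and induces a positive cross-neuron correlation.
```

Consistent with the discussion, the estimated correlation is positive despite the complete absence of shared or synchronous inputs, and it grows with the coupling ratio g.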
Although our analysis extends straightforwardly to include gap junctions, obtaining compact interpretable expressions for the resulting moments is a practical challenge. For conciseness, we only give here formulas for the first moments of the neuronal voltages. These can be shown to be weighted averages of the uncoupled mean voltages of neurons 1 and 2, where the averaging weights are specified in terms of rate coefficients ca that derive from the coefficients defined by (15) in the absence of gap junctions.
As intuition suggests, Eq (29) shows that the effect of electrical coupling between neurons 1 and 2 is to bring their voltages together: in the limit of large coupling, their mean voltages asymptotically coincide. Perhaps also not surprisingly, tedious but straightforward calculations show that electrical coupling between symmetrical cells increases voltage correlations across neurons, while marginally decreasing the voltage variance by averaging the impact of synaptic inputs across neurons. It would be interesting to generalize such calculations to higher orders to better understand how electrically coupled networks of cells may play a special role in promoting synchrony.
Neglect of the spike-generating mechanism
Yet another limitation of our approach is that mathematical tractability was obtained at the cost of neglecting the impact of the cell’s own spiking on its membrane voltage fluctuations. This neglect, however, is justified given the specific goal of our approach, which is to quantify the impact of input synchrony on the subthreshold variability observed in vivo. Consistent with this approach, the estimates of voltage variance, skewness, and covariance that we used (some of which were also used in [6]) were measured from voltage traces that exclude action potentials in primates [3,4] or were collected in the rodent somatosensory or visual cortex, where activity is typically low [18–21]. In other words, the experimental measurements that we use for comparison in our analysis are unlikely to be influenced by the cell’s own spiking. It is also worth noting that the authors of [17] specifically tested whether the voltage correlations observed in the visual cortex of cats can be attributed to synchronous spiking and concluded that these correlations were independent of the occurrence of action potentials. Indeed, in-vivo subthreshold fluctuations in neurons whose spiking is precluded by hyperpolarizing current injections are similar to those of spiking cells [84].
That said, we expect the cell’s own spiking to impact membrane fluctuations in high-activity areas, even if one only considers voltage traces that exclude action potentials. In principle, this impact can be studied by supplementing AONCB neurons with an integrate-and-fire spiking mechanism. A PASTA treatment is still possible in this augmented setting, but it only yields a fixed-point characterization of the moments in the form of integral equations that must be solved numerically. The study of these equations is beyond the scope of this work. Preliminary simulations indicate that the inclusion of a hard reset close to the leak reversal potential is problematic when modeling subthreshold variability. Indeed, by causing neurons to reset far away from their spiking threshold, such a reset rule generally yields unrealistic negative skewness in the driven regime. By contrast, we empirically find that one can achieve realistic nonnegative skewness by implementing a reset to the stationary mean drive, which depends on the driving condition. The effect of such a rule is to attenuate the impact of the cell’s own spiking on membrane voltage fluctuations. It is also consistent with the experimental observation that neurons reset near their mean voltage trend in vivo.
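This preliminary observation can be illustrated with a toy integrate-and-fire simulation. The sketch below uses hypothetical parameters (it is not the authors' augmented AONCB model) and compares the voltage skewness produced by a hard reset at the leak reversal with that produced by a reset to the empirical mean drive:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: leak time constant tau (s), excitatory reversal Ve,
# spiking threshold theta (leak reversal at 0), input rate (Hz), and weight w.
tau, Ve, theta = 0.015, 1.0, 0.3
rate, w = 1000.0, 0.02
dt, T = 1e-4, 20.0

def skewness(x):
    x = np.asarray(x)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

def run(reset):
    V, trace = 0.0, []
    for _ in range(int(T / dt)):
        V += -V / tau * dt                    # leak relaxation
        if rng.random() < rate * dt:          # excitatory Poisson arrival
            V += (1 - np.exp(-w)) * (Ve - V)  # Marcus-type jump toward Ve
        if V >= theta:
            V = reset                         # candidate reset rule
        trace.append(V)
    return np.array(trace)

mean_drive = run(0.0).mean()           # crude estimate of the mean voltage trend
skew_hard = skewness(run(0.0))         # hard reset at the leak reversal
skew_mean = skewness(run(mean_drive))  # reset to the estimated mean drive
```

In line with the text, the hard reset drops the voltage far below its mean trend after each spike, producing a pronounced left tail and thus a markedly more negative skewness than the reset-to-mean rule.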
Biophysical relevance and modeling implications
Spike-count correlations have been experimentally shown to be weak in cortical circuits [9,14,42,85]. For this reason, a putative role for synchrony in neural computations remains a matter of debate [86–88], while most theoretical approaches have been developed in the asynchronous regime of activity [34–40]. However, when distributed over large networks, weak spiking correlations can still give rise to impactful synchrony once information is pooled from a large enough number of synaptic inputs [15,16]. After identifying input synchrony as the primary driver of subthreshold variability in [6], we have argued in this work that a weakly synchronous regime of activity is also compatible with the large degree of voltage correlations observed in in-vivo paired recordings. At one extreme, assuming that voltage correlations emerge from shared asynchronous inputs implies that neurons share an unrealistically large fraction of inputs. At the other extreme, weakly synchronous inputs can account for biophysically relevant voltage correlations, even without assuming any shared inputs. The latter explanation is compatible with the view that a strong, synchronous afferent drive dominating weak, recurrent network interactions is key to achieving realistic regimes of activity in conductance-based neurons [52,89]. That view also suggests an alternative explanation for the decline of voltage correlations in the evoked regime. Our analysis suggests that such a decline directly follows from the increase in the inhibitory driving force induced by gradual depolarization, in agreement with the proposal that inhibition controls the regime of correlations in cortex [90]. However, in the presence of synchrony, this mechanism can lead to a form of variability quenching, which has been widely reported [1] but is inconsistent with in-vivo measurements in primates [4].
Alternatively, under the assumption that the synchrony regime is governed by the external drive, one can jointly achieve a reduction in voltage correlations and an increase in voltage variability by merely assuming that the external drive is less synchronous than the spontaneously synchronous activity. As illustrated in Fig 11, our analytical framework still applies to this case and produces the desired neuronal response. In this case, two neurons receive both within- and cross-neuron correlated inputs, with fixed input numbers, synaptic weights, driving rates, and within- and cross-neuron correlation coefficients. Additionally, each neuron is driven by large, asynchronous excitatory inputs of fixed number and weight but varying rates, representing thalamic inputs driven by external stimuli.
(a) Example Monte Carlo simulations of two AONCB neurons (top) with increasing uncorrelated excitatory drive (bottom). Each neuron is driven by inputs with fixed within-neuron and cross-neuron spiking correlations. (b) Voltage histograms corresponding to the three levels of uncorrelated input drive shown in (a). (c) Change in voltage variance as the uncorrelated input drive rate increases. (d) Same as (c) but for voltage correlations.
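For concreteness, the kind of co-activation model of synchrony invoked here can be sketched in a few lines. In this illustration, r and p are hypothetical target values for the marginal synapse rate and the pairwise spiking correlation; a single shared event train drives two synapses, each of which participates in an event independently with probability p:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters: target marginal rate r (Hz) and pairwise spiking
# correlation p. Shared events occur at rate r / p; at each event, every
# synapse participates independently with probability p, so each synapse has
# marginal rate r and any pair has spiking correlation close to p.
r, p = 10.0, 0.05
dt, T = 1e-4, 500.0
n = int(T / dt)

events = rng.random(n) < (r / p) * dt  # shared synchrony-inducing event train
s1 = events & (rng.random(n) < p)      # synapse 1 participation
s2 = events & (rng.random(n) < p)      # synapse 2 participation

rate_hat = s1.mean() / dt              # empirical rate, close to r
corr_hat = np.corrcoef(s1, s2)[0, 1]   # empirical correlation, close to p
```

Extending this generator with separate participation probabilities for synapses within versus across neurons yields the within- and cross-neuron correlated drives of Fig 11, while an independent Poisson train per neuron provides the uncorrelated thalamic component.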
That said, recurrent connections likely contribute to shaping voltage synchrony, at least during spontaneous activity when external drive is absent [13,35,91,92]. Addressing this question requires quantitatively understanding how activity synchrony emerges in neural networks with a prescribed degree of shared inputs. This involves a careful correlation analysis of large-but-finite networks of spiking conductance-based neurons, beyond classical mean-field techniques [52,53]. To date, most of these mean-field approaches have been conducted in balanced networks, whereby recurrent inhibition promotes an asynchronous regime of activity by counteracting correlation-inducing excitation [14]. Although a tight balance between excitation and inhibition implies asynchronous activity [12,13,93], a loosely balanced regime is compatible with the establishment of strong neuronal correlations [94–96]. However, a theoretical analysis of the emergence of these correlations is still lacking. In our previous work [6], we argued that such an analysis is hindered by the fact that correlations are a finite-size phenomenon that cannot be studied via scaling limits in conductance-based neurons; in particular, we showed that the balanced scaling limit, in which synaptic weights vanish as input numbers diverge, is only variability-preserving for current-based models. Another hurdle to characterizing the emergence of network synchrony is that spiking correlations exhibit characteristic timescales that may vary according to the regime of activity [97–99]. Because it formally relies on an instantaneous model of synchrony, our tractable framework cannot account for the emergence and modulation of these characteristic timescales. These characteristic timescales can have several biological origins: they may represent the typical dwelling time of metastable network dynamics (e.g., up-and-down state dynamics), they may result from chemical neuromodulatory mechanisms [100,101], or they may be inherited from complex brain rhythm patterns or from the top-down influence of other brain regions [102,103]. All these origins suggest analyzing the emergence of synchrony beyond our proposed framework, in a multiscale dynamical setting.
Acknowledgments
We would like to thank Nicholas Priebe, David Hansel, and Eyal Seidemann for insightful discussions.
References
- 1. Churchland MM, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, Corrado GS, et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat Neurosci. 2010;13(3):369–78. pmid:20173745
- 2. Haider B, Häusser M, Carandini M. Inhibition dominates sensory responses in the awake cortex. Nature. 2013;493(7430):97–100. pmid:23172139
- 3. Tan AYY, Andoni S, Priebe NJ. A spontaneous state of weakly correlated synaptic excitation and inhibition in visual cortex. Neuroscience. 2013;247:364–75. pmid:23727451
- 4. Tan AYY, Chen Y, Scholl B, Seidemann E, Priebe NJ. Sensory stimulation shifts visual cortex from synchronous to asynchronous states. Nature. 2014;509(7499):226–9. pmid:24695217
- 5. Okun M, Steinmetz N, Cossell L, Iacaruso MF, Ko H, Barthó P, et al. Diverse coupling of neurons to populations in sensory cortex. Nature. 2015;521(7553):511–5. pmid:25849776
- 6. Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. Phys Rev X. 2024;14(1):011021. pmid:38911939
- 7. Smith MA, Kohn A. Spatial and temporal scales of neuronal correlation in primary visual cortex. J Neurosci. 2008;28(48):12591–603. pmid:19036953
- 8. Smith MA, Sommer MA. Spatial and temporal scales of neuronal correlation in visual area V4. J Neurosci. 2013;33(12):5422–32. pmid:23516307
- 9. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS. Decorrelated neuronal firing in cortical microcircuits. Science. 2010;327(5965):584–7. pmid:20110506
- 10. Tchumatchenko T, Geisel T, Volgushev M, Wolf F. Signatures of synchrony in pairwise count correlations. Front Comput Neurosci. 2010;4:1. pmid:20422044
- 11. Pattadkal JJ, O’Shea RT, Hansel D, Taillefumier T, Brager D, Priebe NJ. Synchrony dynamics underlie irregular neocortical spiking. bioRxiv. 2024;:2024.10.15.618398. pmid:39464165
- 12. Sompolinsky H, Crisanti A, Sommers H. Chaos in random neural networks. Phys Rev Lett. 1988;61(3):259–62. pmid:10039285
- 13. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274(5293):1724–6. pmid:8939866
- 14. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A. The asynchronous state in cortical circuits. Science. 2010;327(5965):587.
- 15. Chen Y, Geisler WS, Seidemann E. Optimal decoding of correlated neural population responses in the primate visual cortex. Nat Neurosci. 2006;9(11):1412–20. pmid:17057706
- 16. Polk A, Litwin-Kumar A, Doiron B. Correlated neural variability in persistent state networks. Proc Natl Acad Sci U S A. 2012;109(16):6295–300. pmid:22474377
- 17. Lampl I, Reichova I, Ferster D. Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron. 1999;22(2):361–74. pmid:10069341
- 18. Poulet JFA, Petersen CCH. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature. 2008;454(7206):881–5. pmid:18633351
- 19. Okun M, Lampl I. Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci. 2008;11(5):535–7. pmid:18376400
- 20. Yu J, Ferster D. Membrane potential synchrony in primary visual cortex during sensory stimulation. Neuron. 2010;68(6):1187–201. pmid:21172618
- 21. Arroyo S, Bennett C, Hestrin S. Correlation of synaptic inputs in the visual cortex of awake, behaving mice. Neuron. 2018;99(6):1289-1301.e2. pmid:30174117
- 22. Amsalem O, Inagaki H, Yu J, Svoboda K, Darshan R. Sub-threshold neuronal activity and the dynamical regime of cerebral cortex. Nat Commun. 2024;15(1):7958. pmid:39261492
- 23. Stein RB. A theoretical analysis of neuronal variability. Biophys J. 1965;5(2):173–94. pmid:14268952
- 24. Tuckwell HC. Introduction to theoretical neurobiology: linear cable theory and dendritic structure. Cambridge University Press; 1988.
- 25. Marcus S. Modeling and analysis of stochastic differential equations driven by point processes. IEEE Trans Inform Theory. 1978;24(2):164–72.
- 26. Marcus SI. Modeling and approximation of stochastic differential equations driven by semimartingales. Stochastics. 1981;4(3):223–45.
- 27. Richardson MJE. Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Phys Rev E Stat Nonlin Soft Matter Phys. 2004;69(5 Pt 1):051918. pmid:15244858
- 28. Richardson MJE, Gerstner W. Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance. Neural Comput. 2005;17(4):923–47. pmid:15829095
- 29. Richardson MJE, Gerstner W. Statistics of subthreshold neuronal voltage fluctuations due to conductance-based synaptic shot noise. Chaos. 2006;16(2):026106. pmid:16822038
- 30. Daley DJ, Vere-Jones D. An introduction to the theory of point processes. New York: Springer-Verlag; 2003.
- 31. Daley DJ, Vere-Jones D. An introduction to the theory of point processes: volume II: general theory and structure. Springer Science & Business Media; 2007.
- 32. Baccelli F, Brémaud P. The palm calculus of point processes. In: Elements of queueing theory. Berlin Heidelberg: Springer; 2003. p. 1–74. https://doi.org/10.1007/978-3-662-11657-9_1
- 33. Asmussen S. Applied probability and queues. Springer; 2003.
- 34. Abbott L, van Vreeswijk C. Asynchronous states in networks of pulse-coupled oscillators. Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics. 1993;48(2):1483–90. pmid:9960738
- 35. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000;8(3):183–208. pmid:10809012
- 36. Baladron J, Fasoli D, Faugeras O, Touboul J. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J Math Neurosci. 2012;2(1):10. pmid:22657695
- 37. Touboul J, Hermann G, Faugeras O. Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst. 2012;11(1):49–81.
- 38. Robert P, Touboul J. On the dynamics of random neuronal networks. J Stat Phys. 2016;165(3):545–84.
- 39. Kadmon J, Sompolinsky H. Transition to chaos in random neuronal networks. Phys Rev X. 2015;5(4):041030.
- 40. Clark DG, Abbott LF, Litwin-Kumar A. Dimension of activity in random neural networks. Phys Rev Lett. 2023;131(11):118401. pmid:37774280
- 41. Kohn A, Smith MA. Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. J Neurosci. 2005;25(14):3661–73. pmid:15814797
- 42. Cohen MR, Kohn A. Measuring and interpreting neuronal correlations. Nat Neurosci. 2011;14(7):811–9. pmid:21709677
- 43. Goris RLT, Movshon JA, Simoncelli EP. Partitioning neuronal variability. Nat Neurosci. 2014;17(6):858–65. pmid:24777419
- 44. Rall W. Time constants and electrotonic length of membrane cylinders and neurons. Biophys J. 1969;9(12):1483–508. pmid:5352228
- 45. Chechkin A, Pavlyukevich I. Marcus versus Stratonovich for systems with jump noise. J Phys A: Math Theor. 2014;47(34):342001.
- 46. Matthes K. Zur Theorie der Bedienungsprozesse [On the theory of service processes]. In: Trans. Third Prague Conf. Information Theory, Statist. Decision Functions, Random Processes. Prague: Publ. House Czech. Acad. Sci.; 1964. p. 513–28.
- 47. Bruno RM, Sakmann B. Cortex is driven by weak but synchronously active thalamocortical synapses. Science. 2006;312(5780):1622–7. pmid:16778049
- 48. Jouhanneau J-S, Kremkow J, Dorrn AL, Poulet JFA. In vivo monosynaptic excitatory transmission between layer 2 cortical pyramidal neurons. Cell Rep. 2015;13(10):2098–106. pmid:26670044
- 49. Pala A, Petersen CCH. In vivo measurement of cell-type-specific synaptic connectivity and synaptic transmission in layer 2/3 mouse barrel cortex. Neuron. 2015;85(1):68–75. pmid:25543458
- 50. Seeman SC, Campagnola L, Davoudian PA, Hoggarth A, Hage TA, Bosma-Moody A, et al. Sparse recurrent excitatory connectivity in the microcircuit of the adult mouse and human cortex. Elife. 2018;7:e37349. pmid:30256194
- 51. Campagnola L, Seeman SC, Chartrand T, Kim L, Hoggarth A, Gamlin C, et al. Local connectivity and synaptic dynamics in mouse and human neocortex. Science. 2022;375(6585).
- 52. Zerlaut Y, Zucca S, Panzeri S, Fellin T. The spectrum of asynchronous dynamics in spiking networks as a model for the diversity of non-rhythmic waking states in the neocortex. Cell Rep. 2019;27(4):1119-1132.e7. pmid:31018128
- 53. Sanzeni A, Histed MH, Brunel N. Emergence of irregular activity in networks of strongly coupled conductance-based neurons. Phys Rev X. 2022;12(1):011044. pmid:35923858
- 54. Iyer R, Menon V, Buice M, Koch C, Mihalas S. The influence of synaptic weight distribution on neuronal population dynamics. PLoS Comput Biol. 2013;9(10):e1003248. pmid:24204219
- 55. Buzsáki G, Mizuseki K. The log-dynamic brain: how skewed distributions affect network operations. Nat Rev Neurosci. 2014;15(4):264–78. pmid:24569488
- 56. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005;3(3):e68. pmid:15737062
- 57. Lefort S, Tomm C, Floyd Sarria J-C, Petersen CCH. The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron. 2009;61(2):301–16. pmid:19186171
- 58. Kuhn A, Aertsen A, Rotter S. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput. 2003;15(1):67–101. pmid:12590820
- 59. Trousdale J, Hu Y, Shea-Brown E, Josić K. A generative spike train model with time-structured higher order correlations. Front Comput Neurosci. 2013;7:84. pmid:23908626
- 60. Krumin M, Shoham S. Generation of spike trains with controlled auto- and cross-correlation functions. Neural Comput. 2009;21(6):1642–64. pmid:19191596
- 61. Macke JH, Berens P, Ecker AS, Tolias AS, Bethge M. Generating spike trains with specified correlation coefficients. Neural Comput. 2009;21(2):397–423. pmid:19196233
- 62. Arieli A, Sterkin A, Grinvald A, Aertsen A. Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science. 1996;273(5283):1868–71. pmid:8791593
- 63. Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268(5216):1503–6. pmid:7770778
- 64. Tolhurst DJ, Movshon JA, Dean AF. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res. 1983;23(8):775–85. pmid:6623937
- 65. Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. Spikes. Cambridge, MA: MIT Press; 1999.
- 66. Braitenberg V, Schüz A. Cortex: statistics and geometry of neuronal connectivity. Springer Science & Business Media; 2013.
- 67. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci. 1993;13(1):334–50. pmid:8423479
- 68. Salinas E, Sejnowski TJ. Impact of correlated synaptic input on output firing rate and variability in simple neuronal models. J Neurosci. 2000;20(16):6193–209. pmid:10934269
- 69. Hellwig B, Schüz A, Aertsen A. Synapses on axon collaterals of pyramidal cells are spaced at random intervals: a Golgi study in the mouse cerebral cortex. Biol Cybern. 1994;71(1):1–12. pmid:7519886
- 70. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 1998;18(10):3870–96. pmid:9570816
- 71. Wolff L, Lindner B. Method to calculate the moments of the membrane voltage in a model neuron driven by multiplicative filtered shot noise. Phys Rev E. 2008;77(4).
- 72. Wolff L, Lindner B. Mean, variance, and autocorrelation of subthreshold potential fluctuations driven by filtered conductance shot noise. Neural Comput. 2010;22(1):94–120. pmid:19764869
- 73. Stubenrauch J, Lindner B. Furutsu-Novikov–like cross-correlation–response relations for systems driven by shot noise. Phys Rev X. 2024;14(4).
- 74. Lindner B. Fluctuation-dissipation relations for spiking neurons. Phys Rev Lett. 2022;129(19):198101. pmid:36399734
- 75. Thibaux R, Jordan MI. Hierarchical beta processes and the Indian buffet process. In: Artificial intelligence and statistics. 2007. p. 564–71.
- 76. Broderick T, Jordan MI, Pitman J. Beta processes, stick-breaking and power laws. Bayesian Anal. 2012;7(2):439–76.
- 77. Amarasingham A, Geman S, Harrison MT. Ambiguity and nonidentifiability in the statistical analysis of neural codes. Proc Natl Acad Sci U S A. 2015;112(20):6455–60. pmid:25934918
- 78. Scholl B, Thomas CI, Ryan MA, Kamasawa N, Fitzpatrick D. Cortical response selectivity derives from strength in numbers of synapses. Nature. 2021;590(7844):111–4. pmid:33328635
- 79. Allen C, Stevens CF. An evaluation of causes for unreliability of synaptic transmission. Proc Natl Acad Sci U S A. 1994;91(22):10380–3. pmid:7937958
- 80. Dobrunz LE, Stevens CF. Heterogeneity of release probability, facilitation, and depletion at central synapses. Neuron. 1997;18(6):995–1008. pmid:9208866
- 81. Pereda AE. Electrical synapses and their functional interactions with chemical synapses. Nat Rev Neurosci. 2014;15(4):250–63. pmid:24619342
- 82. Galarreta M, Hestrin S. A network of fast-spiking cells in the neocortex connected by electrical synapses. Nature. 1999;402(6757):72–5. pmid:10573418
- 83. Gibson JR, Beierlein M, Connors BW. Two networks of electrically coupled inhibitory neurons in neocortex. Nature. 1999;402(6757):75–9. pmid:10573419
- 84. Li B, Routh BN, Johnston D, Seidemann E, Priebe NJ. Voltage-gated intrinsic conductances shape the input-output relationship of cortical neurons in behaving primate V1. Neuron. 2020;107(1):185-196.e4. pmid:32348717
- 85. London M, Roth A, Beeren L, Häusser M, Latham PE. Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature. 2010;466(7302):123–7. pmid:20596024
- 86. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7(5):358–66. pmid:16760916
- 87. Ecker A, Berens P, Tolias A, Bethge M. The effect of noise correlations in populations of diversely tuned neurons. Nat Prec. 2011.
- 88. Hu Y, Zylberberg J, Shea-Brown E. The sign rule and beyond: boundary effects, flexibility, and noise correlations in neural population codes. PLoS Comput Biol. 2014;10(2):e1003469. pmid:24586128
- 89. Wang H-P, Spencer D, Fellous J-M, Sejnowski TJ. Synchrony of thalamocortical inputs maximizes cortical reliability. Science. 2010;328(5974):106–9. pmid:20360111
- 90. Stringer C, Pachitariu M, Steinmetz NA, Okun M, Bartho P, Harris KD, et al. Inhibitory control of correlated intrinsic variability in cortical networks. Elife. 2016;5:e19695. pmid:27926356
- 91. Luhmann HJ, Sinning A, Yang J-W, Reyes-Puerta V, Stüttgen MC, Kirischuk S, et al. Spontaneous neuronal activity in developing neocortical networks: from single cells to large-scale interactions. Front Neural Circuits. 2016;10.
- 92. Davis ZW, Benigno GB, Fletterman C, Desbordes T, Steward C, Sejnowski TJ, et al. Spontaneous traveling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states. Nat Commun. 2021;12(1):6057. pmid:34663796
- 93. van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Comput. 1998;10(6):1321–71. pmid:9698348
- 94. Ahmadian Y, Rubin DB, Miller KD. Analysis of the stabilized supralinear network. Neural Comput. 2013;25(8):1994–2037. pmid:23663149
- 95. Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron. 2015;85(2):402–17. pmid:25611511
- 96. Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The dynamical regime of sensory cortex: stable dynamics around a single stimulus-tuned attractor account for patterns of noise variability. Neuron. 2018;98(4):846-860.e5. pmid:29772203
- 97. Bair W, Zohary E, Newsome WT. Correlated firing in macaque visual area MT: time scales and relationship to behavior. J Neurosci. 2001;21(5):1676–97. pmid:11222658
- 98. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448(7155):802–6. pmid:17700699
- 99. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15(11):1498–505. pmid:23001062
- 100. Marder E, Thirumalai V. Cellular, synaptic and network effects of neuromodulation. Neural Netw. 2002;15(4–6):479–93. pmid:12371506
- 101. Marder E. Neuromodulation of neuronal circuits: back to the future. Neuron. 2012;76(1):1–11. pmid:23040802
- 102. Murray JD, Bernacchia A, Freedman DJ, Romo R, Wallis JD, Cai X, et al. A hierarchy of intrinsic timescales across primate cortex. Nat Neurosci. 2014;17(12):1661–3. pmid:25383900
- 103. Gao R, van den Brink RL, Pfeffer T, Voytek B. Neuronal timescales are functionally dynamic and shaped by cortical microarchitecture. Elife. 2020;9:e61277. pmid:33226336