Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States

Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and its input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire (LIF) neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be consumed by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.


Fundamental Relations of Renewal Theory
Here we briefly present some basic equations from the field of renewal theory, which relate two quantities that are key to the contents of this paper: the stochastic intensity (or hazard h) and the interspike interval (ISI) distribution p.
Intuitively, the hazard can be regarded as a conditional instantaneous firing rate (i.e. h(t)dt gives the probability of firing a spike in an infinitesimally small interval dt around t), where the conditioning is on the time of the last spike. In other words, h(t) is a time-dependent rate profile that is not influenced in any way by spiking activity prior to the last spike.

Fig 1. a) ISI-based sampling: Each spike in a train is interpreted as carrying an analog label, whose value corresponds to the length of the ISI preceding the spike, i.e. to the difference in spike times between the considered spike and its predecessor (see numbers above spikes in arbitrary units). These analog values are therefore samples from the ISI distribution p(t) underlying the spike train. Depending on the computational context, p(t) may be stationary or not. (Figure adapted with permission from [25]) b) EIF neuron as ISI sampler and probability transducer: A 'user-defined' target ISI distribution p_in(t), given by a small, multiplicative modulation (left black trace) of an exponential distribution (blue/red dotted trace), is specified as input current to the EIF neuron. Subject to noise, the neuron responds with a membrane potential fluctuating around some baseline value and thus with the stochastic firing of output spikes, whose ISI distribution also follows a modulated exponential distribution p_EIF(t) (right black trace). Our theory shows that modulations at the output may approximate those at the input, provided the latter stay within sufficiently small margins (blue/red shaded areas). Approximations are improved upon increasing the baseline level of the membrane potential towards the firing threshold.

Assuming the last spike to have happened at time t_0 = 0, a standard result from renewal theory relates h(t) to p(t) in the following way [26; 27]:

p(t) = S(t)h(t),  S(t) = exp(−∫_0^t h(t′)dt′),  (2.1)

where S(t) gives the probability of not firing ('surviving') until time t and is hence called the survivor function. Eq 2.1 has a simple, intuitive interpretation: the probability of an ISI of length t is equal to the probability of not firing until t (S(t)), times the probability of a spike exactly at t, given that the neuron has not fired so far (h(t)). Conversely, Eq 2.1 may be solved for h(t), thereby expressing the hazard in terms of p(t) and the survivor function:

h(t) = p(t)/S(t),  (2.2)

S(t) = 1 − ∫_0^t p(t′)dt′.  (2.3)

Differentiating the logarithm of Eq 2.1 and using dS/dt = −h(t)S(t) yields

(d/dt) ln h(t) − h(t) = (d/dt) ln p(t).  (2.4)

It is differential Eq 2.4 that is followed by what we define as an ideal ISI sampler: if the term on the right-hand side is interpreted as a time-dependent input to the sampler (e.g. a current injected into a neuron) and the sampler's hazard dynamics are guaranteed to follow Eq 2.4, then its output ISI distribution will be directly determined by the input. In other words, the input-specified ISI distribution will be transduced to the sampler's output without distortion (cf. Fig 1b). As we show, Eq 2.4 can be approximated by an EIF neuron and we will thus call it the hazard equation of the ideal ISI-sampler.
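These renewal relations can be checked numerically. The following sketch (function and variable names are ours, values illustrative) computes the survivor function from a hazard on a time grid and verifies that a constant hazard h0 yields the exponential ISI density p(t) = h0 exp(−h0 t):

```python
import numpy as np

def isi_density_from_hazard(h, dt):
    """Eq 2.1 on a grid: S(t) = exp(-int_0^t h(t')dt'), p(t) = h(t)*S(t)."""
    H = np.concatenate(([0.0], np.cumsum(h) * dt))[:-1]  # running integral of h
    S = np.exp(-H)                                       # survivor function
    return h * S, S

# A constant hazard h0 must yield the exponential density p(t) = h0*exp(-h0*t).
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
h0 = 10.0
p, S = isi_density_from_hazard(np.full_like(t, h0), dt)
```

On this grid, p matches h0 exp(−h0 t) and integrates to approximately one over the simulated window.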

The Exponential Integrate-and-Fire Model Combined with Stochastic Firing
In the EIF model, a neuron's membrane potential dynamics are given by the following differential equation [21]:

C_m dV(t)/dt = −g_L(V(t) − E_L) + g_L Δ_T exp((V(t) − V_T)/Δ_T) + I(t),  (2.5)

or, after division by C_m,

dV(t)/dt = (−(V(t) − E_L) + Δ_T exp((V(t) − V_T)/Δ_T))/τ_m + I*(t),  (2.6)

where V is the membrane potential, τ_m ≔ C_m/g_L the membrane time constant based on leak conductance g_L and membrane capacitance C_m, I* ≔ I/C_m the total input current of the neuron divided by the membrane capacitance, E_L the leak reversal potential, V_T the threshold voltage and Δ_T the so-called slope factor. The slope factor determines the effectiveness of the exponential term in Eq 2.6, which mimics the continuously increasing number of open sodium channels when V approaches the threshold V_T and consequently leads to a strong increase of current into the neuron. Once V_T is crossed, the positive feedback induced by the exponential term renders the voltage dynamics self-sustained. That is, even in the absence of any driving current I, V diverges to infinity in finite time. Therefore, a spike is said to have occurred once V crosses a peak potential V_p ≥ V_T, after which V is reset to some reset potential V_r (see methods section 2.5 for a list of parameter values that we have used for the simulations presented in the results section). Note that the thus defined dynamics of the EIF neuron contain those of the more conventional LIF neuron as a special case, namely if V_p = V_T and Δ_T → 0.
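A minimal Euler-integration sketch of the EIF dynamics with spike-and-reset may look as follows. Δ_T = 3 mV and V_T = −50.4 mV are taken from later sections of the text; the remaining parameter values are illustrative stand-ins, not the values of Table 1:

```python
import numpy as np

def simulate_eif(I, dt, tau_m=5e-3, E_L=-65.0, V_T=-50.4, Delta_T=3.0,
                 V_peak=-30.0, V_reset=-65.0):
    """Euler integration of the EIF voltage equation (mV, seconds).

    Spike-and-reset: once V crosses V_peak, a spike time is registered and
    V is set back to V_reset.  I is the current divided by C_m (I* in the
    text), in mV/s.
    """
    V = np.empty(len(I))
    V[0] = E_L
    spikes = []
    for k in range(1, len(I)):
        dV = (-(V[k-1] - E_L) + Delta_T * np.exp((V[k-1] - V_T) / Delta_T)) / tau_m
        V[k] = V[k-1] + dt * (dV + I[k-1])
        if V[k] >= V_peak:
            spikes.append(k * dt)
            V[k] = V_reset
    return V, spikes
```

With zero input the voltage stays near E_L (the exponential term is negligible far below threshold), whereas a strong depolarizing current drives the self-sustained runaway and repetitive spike-and-reset cycles.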
For the EIF neuron, the voltage trajectory between the times at which V_T and V_p are respectively crossed is supposed to qualitatively match the spike waveform of biological neurons. Importantly however, the opening of sodium channels influences voltage dynamics even below the threshold, which, as we explain in the results section, is crucial for a neural approximation of the hazard equation of the ideal ISI-sampler (Eq 2.4). For the same reason, we make use of the high-conductance regime, consisting of a large g_L and, consequently, a small τ_m. Interestingly, it is also this regime that is characteristic of UP-states [28; 29; 30]. (More specifically, it is the 3-5 fold increase in synaptic, not leak, conductance that is characteristic of UP-states. However, based on the Ansatz in [31], the two types of synaptic conductances (exc./inh.) can be rearranged such that their (constant) average plays the role of a leak conductance. This way, Eq 2.5 is reobtained with a higher leak conductance and new V_T, I(t), both of which were redefined through the addition of constants.)
In addition, for the approach put forward in this paper, the EIF/LIF voltage dynamics of Eq 2.6 are combined with a stochastic firing criterion, such that spikes may be fired even below the threshold. More specifically, we introduce a firing hazard, which depends exponentially on the membrane potential trajectory via some convolution kernel K(t) that is normalized to one:

h(t) = 1/(Kτ_m) exp(((K ∗ V)(t) − V_T)/Δ_T),  (2.7)

where K is a dimensionless scaling factor for adjusting the maximum firing rate 1/(Kτ_m) of the neuron at threshold (which we have chosen to be 10 Hz, based on the reported low values of the firing rate during UP-states, see e.g. [3; 32]). Note that in Eq 2.7 the spiking determinism parameter is assumed to be identical to the slope factor Δ_T = 3 mV of the EIF neuron, both for the EIF and the LIF neuron. In other words, throughout the manuscript we have exclusively used in Eq 2.7 a spiking determinism parameter equal to 3 mV, although in case of the LIF neuron the slope factor controlling membrane potential dynamics was zero. This way both types of neurons may be compared on a fair basis, with all factors being equal except the slope factor. The hazard h(t) in Eq 2.7 is an abstraction of what is called diffusive noise [27], i.e. the strong voltage fluctuations measured in biological neurons due to their random bombardment by a large number of balanced inhibitory and excitatory inputs. In particular, such a network effect is thought to be a characterizing feature of UP-states [33; 3; 34; 5; 4; 8]. Whereas the hazard Eq 2.7 involves a deterministic membrane potential trajectory combined with stochastic firing, diffusive noise is based on a noisy membrane potential combined with deterministic firing (a spike is fired only if a fixed threshold is crossed). Although intuitively the two noise models may seem to be equivalent, in general they are not, but they may well approximate each other phenomenologically in case of the more widespread LIF neuron [35].
For the EIF neuron, however, no such phenomenological model exists so far, which is why for this study we have used the hazard Eq 2.7 as a proxy for the biologically more realistic case of diffusive noise. Note, however, that Eq 2.7 only assumes the logarithm of the hazard h(t) to depend on the membrane potential trajectory V(t) via some linear filter (with normalized filter kernel), an assumption that contains the frequently used exponential escape noise model [27] as a special case, i.e. when K is set to K(t) = δ(t), such that for any time t the hazard depends only on the momentary distance between V(t) and V_T. If K(t) is not normalized to 1, the normalization constant may be absorbed by Δ_T, provided it is positive.
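For the special case K(t) = δ(t) (exponential escape noise), stochastic spike generation from a given voltage trace can be sketched as follows. r_max = 10 Hz is the threshold firing rate mentioned above; V_T and Δ_T match the illustrative values used earlier, and the per-bin spiking rule is our own discretization:

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_noise_spikes(V, dt, V_T=-50.4, Delta_T=3.0, r_max=10.0):
    """Stochastic spiking from a voltage trace via exponential escape noise.

    Hazard h(t) = r_max * exp((V(t) - V_T) / Delta_T), i.e. the K(t) =
    delta(t) special case of Eq 2.7, with r_max the firing rate at
    threshold.  Returns a boolean spike train.
    """
    h = r_max * np.exp((V - V_T) / Delta_T)
    p_spike = 1.0 - np.exp(-h * dt)          # exact per-bin spike probability
    return rng.random(len(V)) < p_spike

# Sanity check: clamping V at V_T should give a firing rate near r_max.
dt = 1e-3
spikes = escape_noise_spikes(np.full(200_000, -50.4), dt)
rate = spikes.sum() / (len(spikes) * dt)
```

For the clamped trace, the empirical rate comes out close to 10 Hz, as intended.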

Construction and Performance Evaluation of Example ISI Distributions
We evaluated the representational capabilities of the EIF and LIF neuron based on two types of target ISI distributions p in (t) = p 0 (t)Δp in (t), where p 0 (t) ≔ h 0 exp(−h 0 t) is an exponential 'baseline' distribution with hazard h 0 and Δp in (t) a modulation function 'around' p 0 (t) (e.g. multiplicative noise).
For the first type of target distribution (low-frequency noise), ln Δp_in(t) was chosen as a superposition of 60 sinusoids with random phases and unit amplitudes. Frequencies of the sinusoids were taken equidistantly from the interval [10, 100] Hz. The thus obtained ln Δp_in(t) was then centered to a mean of zero and scaled such that the min/max range of Δp_in(t) was 1 ± r_Δp. The parameter 0 ≤ r_Δp ≤ 1 controls the degree by which the (unnormalized) p_in(t) deviates from p_0(t), and we will refer to it as the probability fluctuation ratio. The second type of modulation function (high-frequency noise) was constructed in an identical manner, but with frequencies chosen from [100, 200] Hz. Finally, both types of modulation functions were multiplied by a constant, such that the resulting p_in(t) was normalized to one.
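The construction of the low-frequency target modulation can be sketched as below. The scaling step admits several readings; the one implemented here (the peak of |ln Δp_in| is set to ln(1 + r_Δp)) is an assumption, and the final normalization of p_in is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_log_modulation(t, f_lo=10.0, f_hi=100.0, n_sin=60, r_dp=0.3):
    """ln(dp_in) as a sum of 60 unit-amplitude, random-phase sinusoids on
    equidistant frequencies, centered to zero mean, then rescaled so that
    the peak of |ln dp_in| equals ln(1 + r_dp) (one plausible reading of
    the 1 +/- r_dp range)."""
    freqs = np.linspace(f_lo, f_hi, n_sin)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_sin)
    x = sum(np.sin(2.0 * np.pi * f * t + ph) for f, ph in zip(freqs, phases))
    x -= x.mean()                            # center ln(dp_in) at zero
    return x * (np.log(1.0 + r_dp) / np.abs(x).max())

t = np.arange(0.0, 0.5, 1e-4)
ln_dp = make_log_modulation(t)               # high-frequency variant: f_lo=100, f_hi=200
```

The high-frequency modulation differs only in the frequency interval passed in.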
In both cases, we systematically varied h_0 and r_Δp and, for each such combination, evaluated how well the EIF and LIF neuron approximate Δp_in(t). For that, the output ISI distributions p_EIF,LIF(t) = p_0(t)Δp_EIF,LIF(t) of the EIF and LIF neuron were obtained by numerical integration of Eq 2.1, based on the neuron's membrane potential V(t) and Eq 2.7. V(t) was given in response to some input current I(t), which was suitably chosen in order for the neuron to approximate the target probability modulation Δp_in(t) (see results). To produce a 'clean' membrane potential trajectory suitable for numerical integration, the neuron was prevented from firing in the subthreshold region, but could fire and reset once the threshold (LIF case) or the peak potential (EIF case) had been crossed.
The neurons' performance in approximating ln Δp_in(t) was evaluated using a normalized L1-distance between ln Δp_in and ln Δp_EIF,LIF, denoted L1norm(ln Δp_in, ln Δp_EIF,LIF).
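One plausible reading of such a normalized L1-distance, with normalization by the L1 norm of the target (the exact normalizer is an assumption on our part), can be sketched numerically:

```python
import numpy as np

def l1_distance(f, g):
    """Normalized L1 distance between two sampled functions f and g.

    Normalizing by the L1 norm of the target f is an assumption; the
    time step cancels out of the ratio.
    """
    return np.sum(np.abs(f - g)) / np.sum(np.abs(f))

t = np.arange(0.0, 1.0, 1e-3)
f = np.sin(2.0 * np.pi * 5.0 * t)
```

A perfect match gives a distance of 0; a uniform 10% amplitude mismatch gives 0.1.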

Analytic Derivation of the Probability Modulation Transfer Function
Here we give a full account of how the probability modulation transfer function (Eq 3.20) is derived, as it is central to our spectral analysis. In the following, we restrict ourselves to the EIF neuron to avoid cluttered notation; the LIF case may be derived analogously. Assume the multiplicative probability modulation Δp_EIF(t) to be caused by some additive, small hazard modulation Δh(t). Table 1 lists all parameters and values we have used for our numerical simulations of the EIF and LIF neuron and for obtaining the results of the spectral analysis of section 3.2.

Mapping the EIF Voltage Dynamics onto the Hazard Equation of the Ideal ISI-Sampler
Our first result is a mathematical description of how EIF voltage dynamics (Eq 2.6), together with the hazard model of Eq 2.7, enable the neuron to fire with an ISI distribution p_EIF(t) ≔ p_0(t)Δp_EIF(t) that is given by the product of a baseline exponential distribution p_0(t) ≔ h_0 exp(−h_0 t) and some controllable modulatory term Δp_EIF(t) > 0. To achieve controllability, as we will show, the neuron's input current must be chosen as a suitable function of the target distribution p_in(t) = p_0(t)Δp_in(t), such that Δp_EIF(t) ≈ Δp_in(t) (cf. Fig 1b). For the LIF neuron, however, the analogously defined p_LIF(t) ≔ p_0(t)Δp_LIF(t) leads to an approximation Δp_LIF(t) ≈ Δp_in(t) that is consistently worse when compared to the EIF neuron.
To mimic the voltage situation encountered during UP-states (Fig 2), the neuron's membrane potential V(t) = V_0 + ΔV(t) is assumed to fluctuate with some time-dependent voltage ΔV(t) around a constant baseline value V_0 < V_T, causing the hazard h(t) = h_0 + Δh(t) to fluctuate as well around h_0, as follows from Eq 2.7. We then assume the fluctuations ΔV(t) to be sufficiently small for a first-order Taylor expansion of the exponential function around ΔV = 0 to hold (Eqs 3.1-3.8). Inserting Eqs 3.1 and 3.8 into Eq 3.4 yields Eq 3.13; in other words, the (K-filtered) input current is set linear to (d/dt) ln p_in(t). Note that the assumed form of p_in(t) imposes a constraint on I*(t); in particular, if there are no modulations and thus p_in(t) = p_0(t), the resulting current is constant and equal to the current needed to establish the constant baseline. Eq 3.14 is the main result of this paper and its significance is as follows: suppose the term containing Δh(t) on the right-hand side could be neglected. Then, for small modulations, the EIF neuron would implement the hazard equation of the ideal ISI sampler with (d/dt) ln p_in(t) as input (cf. Eq 2.4). Consequently, p_EIF(t) would be given as the product of the exponential baseline p_0(t) and the undistorted modulatory term Δp_EIF(t) = Δp_in(t), and hence p_EIF(t) = p_in(t). Because of the derivative, this condition is certainly fulfilled for the high spectral components of ln Δp_in(t), regardless of the baseline voltage V_0 that determines c_0. In other words, high frequencies are transduced without distortion from the user-provided input to the ISI output of the EIF neuron.
For low frequencies, in contrast, the Δh(t) term cannot be neglected, but when V_0 → V_T its weight |−Kc_0 + K − 1| decreases, converging exponentially to 1. Decreasing this weight term becomes particularly relevant when it is strong, i.e. for large K, which is the predominant regime during UP-states, due to the simultaneous presence of high conductance (i.e. small τ_m) and low firing rates of about 10 Hz (cf. Eq 2.7). This is exactly the proposed mechanism by which UP-states with baseline levels close to the firing threshold facilitate transduction of the low-frequency components of ln Δp_in(t).
For the LIF neuron, steps Eq 3.1 to Eq 3.14 may be repeated in an analogous fashion, upon neglecting the exponential term in Eq 3.3 and substituting the r.h.s. of Eq 3.13 by h_0 + (d/dt) ln p_in(t). Thus, in the LIF case, the input current is also linearly dependent on (d/dt) ln Δp_in(t), but with a different y-intercept than for the EIF. The LIF analogue of Eq 3.14 has the same structure, but importantly, for V_0 → V_T its weight term |−Kc_0 − 1| converges to K + 1, not 1. Hence, unlike the EIF case, Δh(t) cannot be neglected for large K, even when the voltage baseline is high. This means that low-frequency input modulations ln Δp_in(t) are severely distorted at the output ln Δp_LIF(t).
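Although the full form of Eq 3.13 is not reproduced here, the drive term distinguishing the two models can be sketched numerically. The function below (our naming) assembles −(K − 1)h_0 + (d/dt) ln p_in(t) for the EIF and h_0 + (d/dt) ln p_in(t) for the LIF from a sampled log-modulation; the further mapping from this term to the actual current I*(t) involves constants of Eq 3.13 that are omitted:

```python
import numpy as np

def drive_term(ln_dp_in, h0, dt, model="EIF", K=1.0):
    """Drive term of the input-current equation (Eq 3.13, partial sketch).

    For p_in(t) = h0*exp(-h0*t)*dp_in(t):
        d/dt ln p_in(t) = -h0 + d/dt ln dp_in(t)
    EIF: -(K - 1)*h0 + d/dt ln p_in(t);  LIF: h0 + d/dt ln p_in(t).
    """
    dlnp = -h0 + np.gradient(ln_dp_in, dt)   # d/dt ln p_in on the grid
    offset = -(K - 1.0) * h0 if model == "EIF" else h0
    return offset + dlnp
```

With no modulation (ln Δp_in ≡ 0) the drive is constant, reflecting the constant current that establishes the baseline.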
We will now quantify these arguments in a more rigorous fashion.

Spectral Analysis of the Probability Modulation Transfer Function for the EIF and LIF Neuron
In this section we investigate the degree of distortion when a log-probability modulation ln Δp_in(t) is conveyed from the input to the ISI output ln Δp_EIF(t), ln Δp_LIF(t) of the EIF and LIF neuron, respectively. For that, a spectral analysis of the transfer functions from ln Δp_in to ln Δp_EIF,LIF is conducted.
From the definitions of p_EIF(t) and p_LIF(t), the probability modulation transfer function (Eq 3.20) follows. As expected, when transduced to ln Δp_EIF, the distortion of ln Δp_in continuously decreases as V_0 → V_T. This is because the higher the baseline V_0, the lower the phase shift (for f ≳ 6 Hz) and the closer the amplitude gain gets to unity. In case of the LIF neuron, however, the amplitude gain is virtually unaffected by V_0, whereas the phase shift even increases slightly for V_0 → V_T. Importantly, for both measures, phase shift and amplitude gain, performance of the LIF neuron is at most as good as that of the EIF neuron and corresponds to EIF performance in case of low voltage baselines V_0. This is to be expected, since both neuron models become identical in this voltage regime. Conversely, when V_0 is close to V_T (i.e. when the spiking nonlinearity kicks in), EIF performance continuously improves, unlike that of the LIF neuron.
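Independently of the analytic transfer function, amplitude gain and phase shift can also be estimated empirically from sampled input and output log-modulations. The following sketch (our naming) reads both off the discrete Fourier components at a single frequency:

```python
import numpy as np

def gain_and_phase(x_in, x_out, dt, freq):
    """Empirical amplitude gain and phase shift of x_out relative to x_in
    at one frequency, read off the discrete Fourier components."""
    f = np.fft.rfftfreq(len(x_in), dt)
    k = int(np.argmin(np.abs(f - freq)))     # bin closest to target frequency
    X_in = np.fft.rfft(x_in)[k]
    X_out = np.fft.rfft(x_out)[k]
    return np.abs(X_out) / np.abs(X_in), float(np.angle(X_out / X_in))

# Sanity check with an attenuated, delayed sinusoid.
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
gain, phase = gain_and_phase(np.sin(2 * np.pi * 50 * t),
                             0.5 * np.sin(2 * np.pi * 50 * t - np.pi / 4),
                             dt, 50.0)
```

For the test signal above, the estimate recovers a gain of 0.5 and a phase shift of −π/4.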
Recalling our discussion at the end of the previous chapter, one moreover expects an increased voltage baseline to become maximally effective in the high-conductance regime, where K is high as well. A comparison between Figs 3 and 4 shows that this is indeed the case. Although overall performance is better in the low-conductance regime, its dependence on V_0 is weaker there in the EIF case, both for the phase shift and the amplitude gain. Because the high-conductance regime is a necessary byproduct of diffusive noise (due to the large fraction of open synaptic ion channels), our theory thus shows how its deteriorating effects may be overcome by an increased voltage baseline.
To examine the extent to which these theoretical results from small-signal analysis hold true in practice, we will now have a look at numerical simulations of both types of neurons for various settings of V_0 and the probability fluctuation ratio r_Δp.

Performance of the Simulated EIF- and LIF-Neurons for Approximating Predefined ISI Distributions for Various Voltage Baselines and Values of the Probability Fluctuation Ratio
In this section we evaluate the capability of the EIF and LIF neuron to perform random sampling of predefined ISI distributions p_in(t) during high-conductance states. Evaluation is done based on numerical simulations of the two types of neurons, subject to log-modulation functions defined by sums of sinusoids (see methods). First, the neurons' approximation performances for broadband log-modulation using specific instances of V_0, h_0 and r_Δp are shown. Then the corresponding summary results are given for sweeps across V_0 and r_Δp.
In order to approximate p_in(t) by the EIF neuron's spike output, the input current I*(t) was computed according to Eq 3.13 (where, in case of the LIF neuron, −(K − 1)h_0 + (d/dt) ln p_in(t) was substituted by h_0 + (d/dt) ln p_in(t)). For the sake of simplicity we used a simple escape-rate model for stochastic spike generation, by setting the hazard filter kernel K equal to K(t) = δ(t). Fig 5 shows an example ISI distribution produced by broadband modulation. As predicted by our theory, there is a better overall match between p_EIF and p_in when compared to the LIF neuron.
Does this finding also hold for a larger range of values for V_0 and r_Δp? To answer that question, V_0 was swept across 20 equidistant values between −65 mV and V_T − 1 mV = −51.4 mV, together with a sweep across values of r_Δp. For each such combination of V_0 and r_Δp, the L1norm(ln Δp_in, ln Δp_EIF,LIF) performance measure was evaluated. Note that the sweep covered fairly large values of r_Δp, where results from small-signal analysis are not guaranteed to hold. Figs 6 and 7 show the corresponding results for low- and high-frequency modulation, respectively. As expected, performance is generally better for high frequencies, both for the EIF and the LIF neuron. In the EIF case, however, there is a dependence of L1norm(ln Δp_in, ln Δp_EIF) on baseline voltage, such that performance increases with increasing V_0, both for low- and high-frequency modulations. It is only for large values of r_Δp that this effect is attenuated, or even reversed if, in addition, V_0 stays close to the firing threshold. This sharp decrease in performance (which is most obvious in Fig 7) is rooted in the deterministic membrane potential trajectory of the EIF neuron which, for high values of r_Δp, deviates more strongly from the ideal trajectory, as it would manifest itself if EIF dynamics followed exactly the hazard equation of the ideal ISI sampler. Therefore, there is an inherent danger of erroneously crossing the threshold when situated close to it, which causes strong positive feedback to kick in. Such feedback may then easily amplify any deviation from the ideal trajectory (see e.g. Fig 7, for V_0 = −51.4 mV, r_Δp = 0.64, t ≈ 0.065 s). Although such a phenomenon cannot be considered an artifact, it is clearly caused by the mechanics of the EIF neuron.
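The sweep just described can be organized as a simple grid loop. `evaluate` below is a hypothetical placeholder for the full simulate-and-compare pipeline, and the r_Δp grid is illustrative (of its values, only 0.64 appears in the text):

```python
import numpy as np

V_T = -50.4                                      # mV, from V_T - 1 mV = -51.4 mV
V0_values = np.linspace(-65.0, V_T - 1.0, 20)    # 20 equidistant baselines
r_dp_values = np.array([0.1, 0.28, 0.46, 0.64])  # illustrative grid

def evaluate(V0, r_dp):
    """Hypothetical placeholder: build p_in, compute I*(t) via Eq 3.13,
    integrate the neuron, and return L1norm(ln dp_in, ln dp_EIF)."""
    return 0.0

results = np.array([[evaluate(V0, r) for r in r_dp_values] for V0 in V0_values])
```

The resulting matrix corresponds to one panel of the summary figures, one entry per (V_0, r_Δp) pair.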
In case of the LIF neuron, erroneous threshold crossings may also occur for high V_0 and r_Δp, but LIF behavior on the remaining parameter space is more stereotypical compared to the EIF neuron: there is virtually no influence of baseline voltage on performance, as the latter hardly changes across the (V_0, r_Δp)-plane, particularly for high-frequency modulations. For low-frequency modulations, there is even a slight decrease in performance when V_0 → V_T. Most importantly, however, there is no point on the plane where the LIF neuron performs better than the EIF neuron: both are identical for low V_0 (when the spiking nonlinearity has no influence on the membrane potential), but whereas EIF performance increases with increasing V_0, LIF performance stays constant or even decreases.
Overall, the empirical results presented in this chapter confirm those obtained theoretically from small-signal analysis. They are also robust even against large probability modulations, particularly for low frequencies.

Range of Realizable ISI Distributions
Whereas in the previous sections the range of distributions our ISI sampling model may sample from was analyzed in the frequency domain, here we estimate this range in terms of the amplitudes of the probability modulation function. This is necessary because the model is not based on general ISI distributions, but rather on fluctuations around some exponential baseline.
In the following we assume Δp_EIF,LIF(t) ~ lnN(μ, σ²) to be log-normally distributed for all t, i.e. that ln Δp_EIF,LIF(t) follows some stationary Gaussian process with mean μ and variance σ². This assumption covers many practically relevant types of probability modulation functions, e.g. all instances of colored Gaussian noise as considered in the previous section. μ is set equal to −σ²/2, such that the mean E[Δp] = 1 (to avoid cluttered notation, the subscripts EIF,LIF of p(t) are now dropped). At any time t, we refer by p_p(t) ≔ lnN(μ + ln h_0 − h_0 t, σ²) to the distribution of values of p(t) = p_0(t)Δp(t), p_0(t) = h_0 exp(−h_0 t). When the probability fluctuation ratio is redefined as the coefficient of variation of Δp(t), then, according to basic properties of the log-normal distribution, the variance σ² may be expressed in terms of r_Δp by σ² = ln(r_Δp² + 1). Thus, apart from h_0 of the baseline distribution, r_Δp remains the only parameter determining the range of realizable ISI distributions when log-normal Δp(t) are considered. Because by definition the term Δp(t) contains an implicit normalization constant (such that p(t) is a distribution), it is not straightforward to construct a corresponding Gaussian process for ln Δp(t) without the use of future information about the process. Fortunately, however, the normalization constant is not needed in a realistic, biological setup, due to the derivative employed for computing the neuron's input current (cf. Eq 3.13). Hence any stationary Gaussian process with μ = −σ²/2 may be considered in our scenario. Fig 8 shows example p_p(t) resulting from the log-normal model. As expected, p_p(t) becomes broader for increasing values of r_Δp, i.e. there is a corresponding increase in the size of the set of realizable ISI distributions p(t). The mean and standard deviation of p_p(t), however, decrease monotonically with t, indicating a reduced range of p(t)-values for long ISIs.
This behavior is a consequence of the product p(t) = p_0(t)Δp(t): although Δp(t) is stationary, the exponentially decreasing factor p_0(t) causes the mean and standard deviation of p_p(t) to approach zero exponentially fast. For the same reason, distributions p_p(t) which correspond to different hazard baselines h_0 differ only by some scaling factor, with no other qualitative differences in shape, modes etc.
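The stated identity σ² = ln(r_Δp² + 1), together with μ = −σ²/2, can be verified by sampling; all names below are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-normal modulation model: with sigma^2 = ln(r_dp^2 + 1) and
# mu = -sigma^2/2, dp = exp(N(mu, sigma^2)) has mean 1 and coefficient
# of variation r_dp.
r_dp = 0.4
sigma2 = np.log(r_dp**2 + 1.0)
mu = -sigma2 / 2.0
dp = np.exp(rng.normal(mu, np.sqrt(sigma2), size=1_000_000))
mean, cv = dp.mean(), dp.std() / dp.mean()
```

The sample mean comes out close to 1 and the sample coefficient of variation close to r_Δp = 0.4, as claimed.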

Summary
Using the exponential integrate-and-fire (EIF) neuron, we have explicitly shown in this paper how the nonlinear sodium current that is triggered when the membrane potential is close to the firing threshold may facilitate the neuron's ability to perform random sampling based on interspike intervals (ISIs). This was shown theoretically, by deriving, in approximation, the differential equation for the ideal firing hazard from the EIF voltage equation; we termed the former the 'hazard equation of the ideal ISI-sampler' (Eqs 3.14 and 2.4, respectively).
The solution to this approximating equation assumes a particularly simple form, which linearly relates the differentiated logarithm of the ISI density to the convolved and integrated input current (Eq 3.13). This way, the neuron may be regarded as a probability transducer, which receives as input a current-encoded ISI distribution and returns ISI samples from an approximating distribution at the output. Approximation quality depends strongly on the spectral composition of the input distribution: whereas high-frequency components appear undistorted at the output, low frequencies are distorted in terms of phase shift and amplitude gain. In the high-conductance regime, however, both measures can be improved by increasing the neuron's membrane potential baseline towards the firing threshold. This effect was derived theoretically and confirmed empirically by concomitant simulations.
Because high conductance, along with a noisy membrane potential balanced close to the firing threshold, is a defining feature of UP-states [28; 30], our results demonstrate a clear benefit of UP-states for ISI-based random sampling. We have also shown that the improved probability transduction may be attributed directly to the nonlinear sodium current that distinguishes the EIF from the ordinary leaky integrate-and-fire (LIF) neuron, because LIF performance never surpassed that of the EIF neuron in any analytic or simulated scenario. Finally, because the described effects hold true for multiplicatively modulated exponential distributions at the input of the EIF probability transducer, there is a restriction on the range of ISI distributions the neuron may possibly sample from, and we have estimated this range analytically.

Plausibility of the Proposed Model
It may be argued that the range of realizable ISI distributions is too limited for ISI sampling to play a role in neural processing. However, it is possible that during UP states the computational procedures needed, e.g. by the cortex, are restricted to the set of realizable ISI distributions. Moreover, the quasi-exponential envelope of this set (Fig 8) might be a reflection of sampling from log(ISI) rather than ISI distributions [25], such that sampling is possible from a more flexible range of log(ISI) distributions that manifest themselves in comparably stereotyped ISI distributions. Interestingly, quasi-exponential ISI distributions are quite common in the cortex (see e.g. [36; 37; 38]) and there is evidence that this might indeed be a reflection of such log(ISI)-coding [16; 20]. In particular, they were shown to be most prominent during high-conductance states in a detailed Hodgkin-Huxley type model neuron [39]. ISI distributions may also become more stereotyped (i.e. quasi-exponential) through the averaging of nonstationary distributions: suppose, as in section 3.4 for example, a probability modulation function Δp(t) fluctuating around 1. If these modulations are not influenced in any way by their induced spikes (i.e. if the resulting spike train is not a renewal process), then the (momentary) ISI distribution p(t) = p_0(t)Δp(t), p_0(t) = h_0 exp(−h_0 t), will be slightly different after each such spike, due to the fluctuations of Δp(t). Reflecting a situation typically encountered in experiments, the ISI distribution as measured from a complete train of spikes will thus be given as the stereotyped average distribution p_0(t) and hence be exponential (blue lines in Fig 8).
Another objection against our model is based on the fact that the neuronal simulations of section 3.3 could have been arranged to produce even exact results with zero L1-norm error. For that, a distribution-representing input current I(t) can be computed from the user-defined ISI distribution p(t) via the sequence p(t) → h(t) → V(t) → I(t) (corresponding to the sequence of insertions into Eqs (2.2) → (2.3) → (2.7) → (2.5)). Although such a procedure is feasible even for the LIF neuron [40; 25], compared to Eq 3.13 it yields a tremendously more complicated expression for the relationship between I(t) and p(t). In particular, the transition p(t) → h(t) requires computation of the survivor function, for which, in a realistic scenario, the set of neurons providing I(t) must explicitly represent the running integral ∫_0^t p(t′)dt′, e.g. by means of some chemical accumulation variable. This requirement remains even if the necessary computations are thought to happen as a combination of synaptic inputs and local information processing inside the sampling neuron. In contrast, the approach put forward in this paper naturally circumvents this problem: in case of escape noise, that is for K(t) = δ(t), the total input current I(t) directly encodes the momentary value of (d/dt) ln p(t) (cf. Eq 3.13). Even for general filter kernels K(t), however, computation of (d/dt) ln p(t) just requires a static, linear filter applied to the total current, and not some highly nonlinear processing applied to an accumulation variable with a reset mechanism.
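The contrast between the two routes can be made concrete: recovering h(t) from p(t) requires the running integral of p (the survivor function), whereas the direct encoding only needs the derivative of ln p(t). A sketch with an exponential density (our naming, illustrative values):

```python
import numpy as np

def hazard_from_isi_density(p, dt):
    """Costly route p(t) -> h(t): h = p / S with the survivor function
    S(t) = 1 - int_0^t p(t')dt' (Eqs 2.2 and 2.3), which requires
    maintaining a running integral of p."""
    P = np.concatenate(([0.0], np.cumsum(p) * dt))[:-1]  # running integral of p
    return p / (1.0 - P)

dt = 1e-4
t = np.arange(0.0, 0.3, dt)
h0 = 10.0
p = h0 * np.exp(-h0 * t)
h = hazard_from_isi_density(p, dt)    # recovers the constant hazard h0
g = np.gradient(np.log(p), dt)        # direct route: d/dt ln p(t) = -h0
```

Both routes recover the constant −h0 or h0 here, but only the first one needs the accumulated integral; the second is a purely local (derivative) operation on ln p(t).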
On the other hand, it is questionable whether escape noise is a realistic model of noise in biological neurons. In fact, measurements in cortex during slow wave sleep, where UP/DOWN state transitions are prominent, show that during UP states diffusive noise is present [5; 8] and is a crucial factor for spiking variability [5; 10]. Although for the LIF neuron diffusive noise, which is generated by barrages of balanced excitatory and inhibitory synaptic inputs [27], can be approximated to a good degree by escape noise [35], an analogous model for the EIF neuron with a voltage baseline close to threshold is still missing. However, as we have shown in section 3.1, the EIF approximation of the ideal ISI-sampler is still valid for general convolution kernels K(t). Hence, should the logarithm of the hazard caused by diffusive noise in the EIF neuron be expressible as a convolution between K(t) and V(t) (ref. Eq 2.7), our theory remains valid even for diffusive noise. To establish whether this requirement holds, the methods presented in [41] could prove useful. In this context it is also interesting to note that the EIF sampling model provides a clear reason why measurements during cortical UP-states have revealed the simultaneous occurrence of a high subthreshold voltage baseline, high conductance and high noise levels [8; 10]. Whereas previous explanations attributed this phenomenon to stochastic resonance [27], our theory postulates that it reflects the attainment of a better approximation of the ideal ISI-sampler, together with the need for a noise source to drive random firing.
With respect to our scenario of voltage baselines close to threshold, one could argue that under sustained depolarizations, neurons tend to increase their threshold, thereby leaving the range where sodium currents could possibly exert an influence on subthreshold processing. Indeed, such a threshold shift has been observed in experiments and can be explained by the sodium inactivation variable from the Hodgkin-Huxley formalism [42]. Because the EIF model is derived from the more detailed Hodgkin-Huxley model by assuming constant inactivation (see [43], chapter 5), this effect appears to be a potential threat to our theory. Experimental data show, however, that the shift in threshold is substantially smaller than the depolarization causing it (see [42], Fig 2A). This may also explain the apparent consensus in the literature that (high-conductance) UP-states lead to closer proximity of the baseline membrane potential to threshold [39; 8; 44].
Another objection against the proposed EIF sampling model is its neglect of spike-frequency adaptation. In fact, coupling the EIF dynamics Eq 2.5 with the adaptation equation τ_w dw/dt = a(V − E_L) − w (Eq 4.1, in the standard form of [31], with adaptation time constant τ_w and leak reversal potential E_L) yields the AdEx model [31], which was found able to reproduce a large range of electrophysiological firing patterns, such as adaptation, bursting, fast and regular spiking. Here w is an adaptation current that is subtracted from the right-hand side of Eq 2.5 and thus tends to hyperpolarize the membrane and consequently to decrease the firing rate of the neuron; a is a subthreshold adaptation parameter, which models the dependence of w on the membrane voltage V. In addition to Eq 4.1, adaptation dynamics are governed by the instantaneous increase of w by an amount b each time the neuron fires a spike (Fig 9, top). This rather phenomenological model of adaptation (which subtracts a state-dependent current from the neuron's input) was found able to match with great accuracy the spike times and voltage evolution of a detailed Hodgkin-Huxley (HH) type model neuron that contained a biophysically plausible muscarinic potassium current as mechanism for adaptation [31; 45]. Importantly however, the impact of adaptation fluctuations, that is, of w(t) minus its average, on the ability of the AdEx to reproduce such spiking patterns was found to be weakest during high-conductance states [31]. Thus, even in the presence of adaptation, it may be possible to sample from 'user-defined' input distributions that are independent of the state of the sampling neuron. This relative unimportance of adaptation dynamics during UP-states is also in line with the apparent independence between two small, subsequent ISIs in mouse somatosensory neurons (see [32], Fig 3A), since dependence between subsequent ISIs is plausibly mediated by spike-frequency adaptation [46; 47].
On the other hand, adaptation may even be incorporated as a computational feature by our ISI sampling scheme: suppose the adaptation dynamics Eq 4.1 were independent of voltage fluctuations, such that w is turned into a leaky integrator. That is, w is governed only by the output spike train, the voltage baseline V_0 and the adaptation parameters a and b, whereas voltage fluctuations due to ΔV(t) are neglected (Fig 9, top). This assumption is plausible in our considered scenario, because of the small voltage fluctuations the ISI sampling model is based on (cf. assumption (a) in section 3.1). In this case, if we define x_i ≔ (ISI_i, w_i) as a state vector, consisting of an ISI label ISI_i (given by the i-th spike in a train) and w_i (the value of w immediately before the i-th spike), then the sequence of ISIs put out by the AdEx neuron corresponds to a sequence of samples from a Markov chain (Fig 9, middle and bottom). As before, external input to the Markov chain is provided by I(t); note however that here the total input current of the neuron is given by I(t) − w(t). In future work, we plan to explore further the computational implications of this biologically more realistic setup.
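Such a Markov chain over x_i = (ISI_i, w_i) can be sketched in a simulation. All parameter values below are arbitrary illustrations, and the exponential escape rate h_0 exp((I − w)/Δ_T) acting on the net current I − w is our simplifying assumption in the spirit of Eq 2.7, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters, not fitted values
h0, delta_T = 5.0, 3.0     # baseline hazard (1/s), slope factor (mV)
tau_w, b = 0.1, 2.0        # adaptation time constant (s), spike increment
I = 1.0                    # constant external input (arbitrary units)
dt = 1e-4                  # time step (s)

w, t_now, t_last = 0.0, 0.0, 0.0
samples = []               # Markov-chain states x_i = (ISI_i, w_i)
while len(samples) < 500:
    t_now += dt
    w += dt * (-w / tau_w)                    # leaky integrator between spikes
    hazard = h0 * np.exp((I - w) / delta_T)   # assumed escape rate on I - w
    if rng.random() < hazard * dt:
        samples.append((t_now - t_last, w))   # w_i: value before the jump
        w += b                                # spike-triggered increase by b
        t_last = t_now

isis, ws = (np.array(v) for v in zip(*samples))
print(isis.mean(), ws.mean())
```

Because w carries memory only of the last state (ISI_{i−1}, w_{i−1}) and the current input, each new pair (ISI_i, w_i) depends on the past solely through its predecessor, which is exactly the Markov property depicted in Fig 9.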

Computational Interpretations of ISI-Based Random Sampling During UP-States
Random sampling also fits seamlessly into the long-standing hypothesis of oscillating UP and DOWN states as a means of memory consolidation during slow wave sleep. For example, based on the wake-sleep algorithm [48] (WSA; the notion of 'sleep' here differs from actual sleep as in slow-wave sleep), Destexhe and Sejnowski put forward a qualitative version of such a theory, which utilizes self-generated, idealized top-down inputs for learning the feedforward connections of an internal object recognition model [12; 13]. Learning such a recognition model is a crucial subroutine of the WSA, whose actual goal, however, is to learn a generative model, that is, a model of the probability distribution of data inputs [48]. The latter model is termed 'generative' because it is used by the WSA to generate samples from the distribution the model represents, i.e. fantasized top-down sensory inputs that can be interpreted as memories and which are used for updating the recognition model during the so-called sleep phase of the algorithm.
Fig 9. Top: Between spikes, the dynamics of w are governed by a leaky integrator; w_i is the value of w immediately before the i-th spike, i.e. before the addition of b, and ISI_i is the ISI label of the i-th spike. Middle: Combining ISI_i and w_i yields a state vector x_i ≔ (ISI_i, w_i) that follows Markov dynamics; shown is a Bayesian network representation of the resulting Markov chain. Input to the chain is given by an input current I(t) provided by the neuron's presynaptic partners. Bottom: Detailed Bayesian network when x_i is expanded into its components ISI_i and w_i. The dependencies shown as arrows are due to Eq 3.13 and the leaky-integrator dynamics governing w(t) (Eq 4.1). doi:10.1371/journal.pone.0132906.g009
One interpretation of the alternating sequence of UP and DOWN states during slow wave sleep could be the mimicking of such a sleep phase of the WSA. That is, during UP states, ISI samples are drawn from the generative model and their values are stored by the sampling neuron. During the subsequent DOWN state, these values are then used by the neuron to update the parameters that control the density of the recognition model. Based on these updated parameters, a new set of samples is drawn in the next UP state and the procedure repeats, thereby mimicking a version of stochastic gradient ascent [49], as it is used by some formulations of the WSA [48]. If each neuron represents one dimension of some high-dimensional generative model, synaptic connections are necessary for the neurons to mutually influence each other's sampling process. Such influencing by exchanging sampled values is crucial for sampling in high-dimensional spaces and is a hallmark of MCMC sampling methods such as Gibbs sampling [22; 24]. The presented computational theory also explains the experimentally observed synchrony of UP/DOWN state transitions across cortical neurons [4; 8; 50; 44]: during an UP state, it must be guaranteed that each neuron bases its sampling on the same high-dimensional distribution, such that procedures like Gibbs sampling become feasible. This is achieved by restricting the windows of opportunity for parameter updates to synchronous DOWN states. In this context, it is also interesting to note that recent experiments have indicated that ongoing cortical activity during UP states is functionally protected from thalamic inputs [44]. This is exactly what would be expected if the goal of the UP state is indeed to perform MCMC sampling from an unperturbed (prior) generative distribution that is represented in the sensory-deprived cortex.
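The role of exchanging sampled values can be illustrated with a textbook Gibbs sampler for a bivariate Gaussian (a generic sketch, unrelated to the biophysical details): each 'neuron' holds one coordinate and resamples it conditioned on the most recent value communicated by the other.

```python
import numpy as np

rng = np.random.default_rng(2)

rho = 0.8                      # target correlation of the 2D Gaussian
n_samples = 20000
x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))

for i in range(n_samples):
    # Each conditional uses the other dimension's most recent sample --
    # this exchange is what makes high-dimensional sampling feasible
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = (x, y)

print(np.corrcoef(samples.T)[0, 1])   # close to rho = 0.8
```

Without the exchange (i.e. each coordinate sampled from its marginal alone), the joint correlation structure of the target distribution would be lost, which is why synchronized sampling phases across neurons matter in this interpretation.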
Similarly, a related interpretation of synchronous UP/DOWN state transitions is based on the EM-algorithm [51]. In this case, UP states would correspond to expectation (E) steps, during which sampling-based inferences about hidden variables in the generative model are conducted. As with the WSA, the subsequent maximization (M) steps would then be confined to updates of the parameters.

Model Predictions
As indicated in section 4.2, our model hinges on the validity of Eq 2.7 for describing the dependency between membrane voltage and firing hazard. In particular, Eq 2.7 predicts the spiking determinism parameter to be equal to Δ_T, the slope factor of the EIF model. In this context, it is interesting to note that there is indeed evidence for an exponential escape-rate model being a good empirical descriptor of the hazard near firing threshold (in case of an EIF neuron stimulated by diffusive noise) [41]. Also, a quite remarkable congruence between the spiking determinism parameter and Δ_T has been reported in the literature: reported values for Δ_T are from the set {0.5, 1.4, 3, 3.48} mV [52; 31; 53], whereas reported values for the spiking determinism parameter are {0.5, 3, 4} mV [54; 55]. Therefore, if Eq 2.7 indeed turns out to be a sensible model, e.g. for cortical neurons during UP states in vivo, then our theory could be tested further, for example by repeatedly injecting traces of current into a cortical neuron and examining the resulting ISI distribution. More specifically, triggered by each spike of the neuron, a current profile computed from Eq 3.13 could be injected and the ISI distribution compared to the one predicted by Eq 3.14.
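The comparison step of such an experiment can be prototyped in silico. The sketch below uses a hypothetical spike-triggered hazard profile (as would be induced by a repeatedly injected current trace via an exponential escape rate); the standard renewal identity p(t) = h(t) exp(−∫_0^t h(t′)dt′) stands in for the model-specific prediction of Eq 3.14:

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1e-3
t = np.arange(dt, 1.0, dt)

# Hypothetical spike-triggered hazard profile (illustrative numbers only)
hazard = 4.0 + 6.0 * np.exp(-t / 0.2)

# Renewal-theory prediction for the ISI density
S = np.exp(-np.cumsum(hazard) * dt)        # survivor function
p_pred = hazard * S

# Monte-Carlo ISIs: draw the first spike time in each trial from the
# time-dependent hazard, mimicking repeated current injections
n_trials = 5000
u = rng.random((n_trials, t.size))
spiked = u < hazard * dt
has_spike = spiked.any(axis=1)             # trials with a spike within 1 s
first = spiked.argmax(axis=1)              # index of the first spike
isis = t[first[has_spike]]

print(isis.mean(), (t * p_pred * dt).sum())   # empirical vs predicted mean
```

Agreement between the empirical ISI histogram and p_pred would support the assumed voltage-hazard relationship; a systematic mismatch in such an experiment would count against it.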
In contrast, if in experiments the proposed dependency between membrane voltage and hazard turns out not to be a good description, then this aspect serves as a means to falsify the whole theory relating ISI sampling to EIF processing near threshold.