
Intrinsic Gain Modulation and Adaptive Neural Coding

  • Sungho Hong,

    shhong@oist.jp

    Affiliation Physiology and Biophysics Department, University of Washington, Seattle, Washington, United States of America

    Current address: Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan

  • Brian Nils Lundstrom,

    Affiliation Physiology and Biophysics Department, University of Washington, Seattle, Washington, United States of America

  • Adrienne L. Fairhall

    Affiliation Physiology and Biophysics Department, University of Washington, Seattle, Washington, United States of America


Abstract

In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relations between the changes in gain with respect to both mean and variance and the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.

Author Summary

Many neurons are known to achieve a wide dynamic range by adaptively changing their computational input/output function according to the input statistics. These adaptive changes can be very rapid, and it has been suggested that a component of this adaptation could be purely input-driven: even a fixed neural system can show apparent adaptive behavior since inputs with different statistics interact with the nonlinearity of the system in different ways. In this paper, we show how a single neuron's intrinsic computational function can dictate such input-driven changes in its response to varying input statistics, which begets a relationship between two different characterizations of neural function—in terms of mean firing rate and in terms of generating precise spike timing. We then apply our results to two biophysically defined model neurons, which have significantly different response patterns to inputs with various statistics. Our model of intrinsic adaptation explains their behaviors well. Contrary to the picture that neurons carry out a stereotyped computation on their inputs, our results show that even in the simplest cases they have simple yet effective mechanisms by which they can adapt to their input. Adaptation to stimulus statistics, therefore, is built into the most basic single neuron computations.

Introduction

An f-I curve, defined as the mean firing rate in response to a stationary mean current input, is one of the simplest ways to characterize how a neuron transforms a stimulus into a spike train output as a function of the magnitude of a single stimulus parameter. Recently, the dependence of f-I curves on other input statistics such as the variance has been examined: the slope of the f-I curve, or gain, is modulated in diverse ways in response to different intensities of added noise [1]–[4]. This enables multiplicative control of the neuronal gain by the level of background synaptic activity [1]: changing the level of the background synaptic activity is equivalent to changing the variance of the noisy balanced excitatory and inhibitory input current to the soma, which modulates the gain of the f-I curve. It has been demonstrated that such somatic gain modulation, combined with saturation in the dendrites, can lead to multiplicative gain control in a single neuron by background inputs [5]. From a computational perspective, the sensitivity of the firing rate to mean or variance can be thought of as distinguishing the neuron's function as either an integrator (greater sensitivity to the mean) or a differentiator/coincidence detector (greater sensitivity to fluctuations, as quantified by the variance) [3],[6],[7].

An alternative method of characterizing a neuron's input-to-output transformation is through a linear/nonlinear (LN) cascade model [8],[9]. These models comprise a set of linear filters, or receptive field, that selects particular features from the input; the filter output is transformed by a nonlinear threshold stage into a time-varying firing rate. Spike-triggered covariance analysis [10],[11] reconstructs a model with multiple features from a neuron's input/output data. It has been widely employed to characterize both neural systems [12]–[15] and single neurons or neuron models subject to current or conductance inputs [16]–[19].

Generally, results of reverse correlation analysis may depend on the statistics of the stimulus used to sample the model [15], [19]–[25]. While some of the dependence on stimulus statistics in the response of a neuron or neural system may reflect underlying plasticity, in some cases, the rapid timescale of the changes suggests the action of intrinsic nonlinearities in systems with fixed parameters [16], [19], [25]–[29], which changes the effective computation of a neuron.

Our goal here is to unify the f-I curve description of variance-dependent adaptive computation with that given by the LN model: we present analytical results showing that the variance-dependent modulation of the firing rate is closely related to adaptive changes in the recovered LN model if a fixed underlying model is assumed. When the model relies only on a single feature, we find that such a system can show only a single type of gain modulation, which accompanies an interesting asymptotic scaling behavior. With multiple features, the model can show more diverse adaptive behaviors, exemplified by two conductance-based models that we will study.

Results

Diverse Variance-Dependent Gain Modulations without Spike Rate Adaptation

Recently, Higgs et al. [3] and Arsiero et al. [4] identified different forms of variance-dependent change in the f-I curves of various neuron types in avian brainstem and in cortex. Depending on the type, neurons can have either increasing or decreasing gain in the f-I curve with increasing variance. These papers linked the phenomenon to mechanisms underlying spike rate adaptation, such as slow afterhyperpolarization (sAHP) currents and slow sodium channel inactivation. We recently showed [7] that a standard Hodgkin–Huxley (HH) neuron model, lacking spike rate adaptation, can show two different types of variance-dependent gain modulation simply by tuning the maximal conductance parameters of the model. These differences in gain modulation correspond to two different regimes in the space of conductance parameters. In one regime, which includes the standard parameters, a neuron periodically fires to a sufficiently large constant input current. In the other regime, a neuron never fires to a constant input regardless of its magnitude, but responds only to rapid fluctuations. This rarely discussed property has been termed class 3 excitability [30],[31]. Higgs et al. [3] proposed that the type of gain modulation classifies the neuron as an integrator or differentiator.

Here, we examine two models that show these different forms of variance-dependent gain modulation without spike rate adaptation, and study the resulting LN models sampled with different stimulus statistics. We show that these fixed models generate variance-dependent gain modulation, and that this gain modulation is well predicted by aspects of the LN models derived from white noise stimulation. The two models are both based on the HH [32] active currents; one model is the standard HH model, and the other (HHLS) has lower Na+ and higher K+ conductances. The HHLS model is a class 3 neuron and responds only to a rapidly changing input. For this reason, the HHLS model can be thought of as behaving more like a differentiator than an integrator [3],[7].

Figure 1 shows the different gain modulation behaviors of the HH and HHLS conductance-based models. For the HH model, Figure 1A, the f-I curves in the presence of noise are similar to the noiseless case except that they are increasingly smoothed at the threshold. In contrast, Figure 1C shows that the f-I curves of the HHLS model never converge toward each other as the noise level increases. This case resembles that of layer 5 pyramidal neurons in rat medial prefrontal cortex [4], as well as nucleus laminaris (NL) neurons in the chick auditory brainstem and some pyramidal neurons in layer 2/3 of rat neocortex [3]. While for these layer 2/3 neurons, there is evidence that this change in f-I curve slope may be related to the sAHP current [3], at steady state this effect can be obtained in general by tuning the maximal conductances without introducing any mechanism for spike rate adaptation [7].

Figure 1. Variance-Dependent Gain Modulation of the HH and HHLS Model.

Each model is simulated as described in the Materials and Methods section. (A) f-I curves of a standard HH model for 10 different variances (σ2) ranging from 0 to 45 nA2. The topmost trace is the response to the highest variance. Each curve is obtained with 31 mean values (I0) ranging from −5 to 20 nA. (B) The same data as (A) plotted in the (mean, variance) plane. Lighter shades represent higher firing rates. We used cubic spline interpolation for points not included in the simulated data. (C,D) f-I curves of the HHLS model as in (A) and (B). 10 means from −10 to 50 nA and 21 variances from 0 to 100 nA2 are used.

https://doi.org/10.1371/journal.pcbi.1000119.g001

Gain Modulation and Adaptation of Fixed Models

For a system described by an LN model with a single feature, we derive an equation relating the slopes of the firing rate with respect to stimulus mean and variance. We then consider gain modulation in a system with multiple relevant features and derive a series of equations relating gain change to properties of the spike-triggered average and spike-triggered covariance. Throughout, we assume that the underlying system is fixed, and that its parameter settings do not depend on stimulus statistics. For example, if the model has a single exponential filter with a time constant τ, we assume that τ does not change with the stimulus mean (I0) or variance (σ2). However, this does not mean that the model shows a single response pattern regardless of the statistical structure of stimuli. The sampled LN description of a nonlinear system with fixed parameters—even when the underlying model is an LN model [25]—can show interaction with the input statistics, leading to different LN model descriptions for different input parameters [19], [25], [27]–[29]. We refer to this as intrinsic adaptation.

One-Dimensional Model

An LN model is composed of its relevant features {εμ(t)} (μ = 1,2,…,n), which act as linear filters on an incoming stimulus, and a probability to spike given the filtered stimulus, P(spike|filtered stimulus). For a Gaussian white noise stimulus with mean I0 and variance σ2, the firing rate is

f(I_0, \sigma^2) = \int d^n x \, P(\mathrm{spike} \mid I_0 \bar{\varepsilon} + x) \, p(x),    (1)

where \bar{\varepsilon}_\mu = \int dt \, \varepsilon_\mu(t) is the time-integrated filter and x is the mean-subtracted noise stimulus filtered by the n relevant features. p(x) is an n-dimensional Gaussian distribution with variance σ2. We refer to the Materials and Methods section for a more detailed account of the model.

For a one-dimensional model (n = 1), Equation 1 can be rewritten with a change of variables as

f(I_0, \sigma^2) = \int dx \, P(\mathrm{spike} \mid x) \, p(x - I_0 \bar{\varepsilon}).    (2)

Since p(x) is Gaussian, it is also the kernel, or Green's function, of a diffusion equation in terms of (x, σ2), and therefore so is p(x − I0ε̅) in terms of (I0, σ2). In other words, we have

\left( \frac{\partial}{\partial \sigma^2} - \frac{1}{2\bar{\varepsilon}^2} \frac{\partial^2}{\partial I_0^2} \right) p(x - I_0 \bar{\varepsilon}) = 0.

Now operating with this operator on both sides of Equation 2, p(x − I0ε̅) is the only term on the right-hand side of Equation 2 that depends on (I0, σ2), and therefore the right-hand side of Equation 2 vanishes. Thus one finds

\frac{\partial f}{\partial \sigma^2} = \frac{1}{2\bar{\varepsilon}^2} \frac{\partial^2 f}{\partial I_0^2}.    (3)

The boundary condition is given by evaluating Equation 2 as σ2→0; here the Gaussian distribution becomes a delta function,

p(x - I_0 \bar{\varepsilon}) \to \delta(x - I_0 \bar{\varepsilon}),

and the boundary condition is given by the zero-noise f-I curve, f(I0,0) = P(spike|I0ε̅). Thus, when a model depends only on a single feature, ε(t), the f-I curve with a noisy input is given by a simple diffusion-like equation, Equation 3, with a single parameter, the time-integrated filter ε̅, determining the diffusion constant 1/(2ε̅^2).

Equation 3 states that the variance-dependent change in the firing rate is simply determined by the curvature of the f-I curve. Thus, a one-dimensional system displays only a single type of noise-induced gain modulation: as in diffusion, an f-I curve is gradually smoothed and flattened as the variance increases. Given a boundary condition, such as an f-I curve for a particular variance, the family of f-I relations can be reconstructed up to a scale factor by solving Equation 3. For example, one can predict how the neuron would respond to a noise stimulus based on its output in the absence of noise. Note that the solution of Equation 3 generalizes a classical result [33] based on a binary nonlinearity to a simple closed form which applies to any type of nonlinearity.

Figure 2A and 2B show a solution of Equation 3. While this one-dimensional model is based on the simplest and most general assumptions, it provides insights into the structure of variance-dependent gain modulation. The boundary condition is an f-I curve with no noise, f = (I+0.1)^1/2 for I>0 and f = 0 for I≤0, which imitates the general behavior of many dynamical neuron models around rheobase [34]–[36]. Compared with the HH conductance-based model, Equation 3 captures qualitative characteristics of the HH f-I curve despite differences due to the increased complexity of the HH model over a 1D LN model: in Figure 2A and 2B, there is a positive curvature (second derivative of firing rate with respect to current) of the f-I curve below rheobase related to the increase of the firing rate with increasing variance. In contrast, the behavior of the HHLS model cannot be described by Equation 3. Even though the f-I curves in Figure 1C mostly have negative curvature, the firing rate keeps increasing with variance, implying that the HHLS model cannot be described by a one-dimensional LN model.
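
To make this reconstruction concrete, the following is a minimal finite-difference sketch of evolving Equation 3 forward in σ2 from the zero-noise boundary condition of Figure 2A. The grid, step sizes, and the filter normalization ε̅ = 1 are illustrative assumptions, not the values used for the published figures.

```python
import numpy as np

# Minimal sketch: solve the diffusion-like relation of Equation 3,
#     df/d(sigma^2) = (1 / (2 * eps_bar**2)) * d^2 f / dI0^2,
# by explicit finite differences, starting from the zero-noise boundary
# condition of Figure 2A, f0(I) = (I + 0.1)**0.5 for I > 0 and 0 otherwise.
eps_bar = 1.0                       # assumed filter normalization
D = 1.0 / (2.0 * eps_bar**2)        # diffusion constant of Equation 3

I0 = np.linspace(-3.0, 3.0, 301)    # grid of mean currents (illustrative)
dI = I0[1] - I0[0]
f = np.where(I0 > 0, np.sqrt(np.clip(I0, 0, None) + 0.1), 0.0)

d_sig2 = 0.4 * dI**2 / (2.0 * D)    # sigma^2 step, inside the stability limit
curves, s2 = {0.0: f.copy()}, 0.0
for target in (0.5, 1.0, 2.0):      # variances at which to record f-I curves
    while s2 < target:
        # one explicit diffusion step in sigma^2 (endpoints held fixed)
        f[1:-1] += d_sig2 * D * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dI**2
        s2 += d_sig2
    curves[target] = f.copy()       # progressively smoothed, flattened curves
```

The explicit scheme is stable because the step in σ2 stays below the diffusion stability limit; each pass of the loop smooths and flattens the f-I curve, reproducing the qualitative behavior of Figure 2A and 2B.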

Figure 2. Variance-Dependent Gain Modulation of One-Dimensional Models.

(A) Variance-dependent f-I curves of a one-dimensional model from the solution of Equation 3 with the boundary condition f = (I+0.1)^1/2 for I>0 and f = 0 for I≤0 at zero noise. (B) The firing rates of A in the (mean, variance) plane. (C) f-I curves of an LIF model. (D) f-I curves of a QIF model. The model parameters for the LIF and QIF are in the Materials and Methods section. We used 50 mean (I0) values from 0 to 4 (LIF) and from −2 to 2 (QIF), and 8 variances (σ2) from 0 to 8 for both models.

https://doi.org/10.1371/journal.pcbi.1000119.g002

We also compared Equation 3 with the f-I curves from two commonly used simple neuron models, the leaky integrate-and-fire (LIF) model (Figure 2C), and a similar model with minimal nonlinearity, the quadratic integrate-and-fire (QIF) model [37],[38] (Figure 2D). The f-I curves of the two models are similar but have subtle differences: in the LIF model, firing rate never decreases with noise, even though parameters were chosen to induce a large negative curvature, as shown analytically in Text S1. The QIF model behavior is much more similar to the 1D LN model, marked by a slight decrease in firing rate at large I0. From this perspective, the QIF is a simpler model in terms of the LN description despite the dynamical nonlinearity.

It is interesting to note that for one-dimensional models, the gain modulation given by Equation 3 depends only on the boundary condition, which implicitly describes how an input with a given mean samples the nonlinearity, but not explicitly on the details of filters or nonlinearity. An ideal differentiator, where firing rate is independent of the stimulus mean, is realized only when the filter has zero integral, ε̅  =  0. This is also the criterion that would be satisfied if the filter itself were ideally differentiating. We will return to the relationship between the LN model functional description and that of the f-I curves in the Discussion.

Multidimensional Models

Here we examine gain modulation in the case of a system with multiple relevant features. In this case, one cannot derive a single simple equation such as Equation 3. Instead, we derive relationships between the characteristics of f(I0,σ) curves and quantities calculated using white noise analysis.

Fixed multidimensional models can display far more complex response patterns to different stimulus statistics than one-dimensional models, because linear components in the model can now interact nonlinearly [29]. For example, in white noise analysis, as the stimulus variance increases, the distribution of the filtered stimuli also expands and probes different regions of the nonlinear threshold structure of the model. This induces a variance-dependent rotation among the filters recovered through sampling by white noise analysis, and the corresponding changes in the spike-triggered average, spike-triggered covariance, and the sampled nonlinearity [19].

Here, we relate parameters of the changing spike-triggered average and spike-triggered covariance description to the form of the f-I curves. The relationships are derived by taking derivatives of each side of Equation 1 with respect to I0 and σ2 (see Materials and Methods section). The first order in I0 establishes the relationship between the STA and the gain of the f-I curve with respect to the mean,

\frac{\partial f}{\partial I_0} = \frac{f}{\sigma^2} \sum_\mu \bar{\varepsilon}_\mu \, x^{\mathrm{STA}}_\mu.    (4)

The second order leads to a relationship between the second derivative of the f-I curve and the covariance matrix,

\frac{\partial^2 f}{\partial I_0^2} = \frac{f}{\sigma^4} \sum_{\mu\nu} \bar{\varepsilon}_\mu \bar{\varepsilon}_\nu \left( \Delta C_{\mu\nu} + x^{\mathrm{STA}}_\mu x^{\mathrm{STA}}_\nu \right).    (5)

The gain with respect to the variance is

\frac{\partial f}{\partial \sigma^2} = \frac{f}{2\sigma^4} \sum_\mu \left( \Delta C_{\mu\mu} + (x^{\mathrm{STA}}_\mu)^2 \right),    (6)

where x^STA_μ and ΔC_μν are the spike-triggered average and the spike-triggered covariance matrix projected onto the relevant features (Equations 10 and 11 in Materials and Methods).
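
As an illustration of how these relations are used, here is a short sketch that evaluates the right-hand sides of Equations 4–6, as reconstructed above, from quantities measured by white noise analysis. The variable names (eps_bar, x_sta, dC) are hypothetical stand-ins for the feature integrals, the STA projections, and the STC matrix.

```python
import numpy as np

# Sketch of the predictions tested in Figure 3: given the time-integrated
# features eps_bar (length n), the STA and STC projected onto the feature
# space (x_sta, length n; dC, n x n), the firing rate f, and the input
# variance sigma2, evaluate the right-hand sides of Equations 4-6. All
# inputs are assumed to come from the reverse-correlation analysis
# described in Materials and Methods.
def predicted_gains(eps_bar, x_sta, dC, f, sigma2):
    eps_bar, x_sta, dC = map(np.asarray, (eps_bar, x_sta, dC))
    df_dI0 = (f / sigma2) * eps_bar @ x_sta                     # Equation 4
    d2f_dI02 = (f / sigma2**2) * (
        eps_bar @ (dC + np.outer(x_sta, x_sta)) @ eps_bar)      # Equation 5
    df_dsig2 = (f / (2 * sigma2**2)) * (
        np.trace(dC) + x_sta @ x_sta)                           # Equation 6
    return df_dI0, d2f_dI02, df_dsig2
```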

Equations 4–6 show how the nonlinear gain of an f-I curve with respect to input mean and variance is related to intrinsic adaptation as observed through changes in the STA and STC. Note that Equations 4–6 apply to one-dimensional LN models as well. In that case, the STA has the same shape as the feature in the model, and only its magnitude varies according to the overlap integral, Equation 1, between the nonlinearity of the model and the prior stimulus distribution. This is the same for the STC, and thus Equations 4–6 are not independent. This leads to a single form of variance gain modulation, given by Equation 3. However, in a multidimensional model, changing the stimulus mean shifts the nonlinearity in a single direction, ε̅, while increasing the variance expands the prior in every direction in the stimulus space. Therefore, the overlap integral can show more diverse behaviors.

Conductance-Based Models

We now examine whether the gain modulation behaviors we have described can be captured by a multi-dimensional LN model. We tested this by computing f-I curves, spike-triggered averages and the spike-triggered covariance matrices for the noise-driven HH and HHLS models for a range of input statistics. Figure 3A, B, and C show the result of fitting simulation data from the HH (left) and HHLS (right) model to Equations 4, 5, and 6, respectively. The linear relationships are quite clear in Figure 3A and 3C, which show the gains with respect to mean and variance. Figure 3B involves the curvature of f-I curves, which is more difficult to calculate accurately, and shows larger errors. In every case, goodness of fit is p<1.3×10⁻⁶ and p<5.8×10⁻⁶ for the HH and HHLS, where the upper bounds of the p-values are given by the case of Equation 5, corresponding to Figure 3B. These results show that intrinsic adaptation of the LN model predicts the form of noise-induced gain modulation for these models.

Figure 3. Derivatives of the Firing Rate Curves with Respect to Mean and Variance Related to Quantities Obtained by White Noise Analysis for the Standard HH (Left) and HHLS (Right) Models.

Each point is calculated from the simulated data with a selected (mean, variance) input parameter pair, as described in the Materials and Methods section, and the gray lines represent our theoretical predictions, Equations 4–6, which hold when the variance dependent change in f-I curves is only due to intrinsic adaptation. (A) Gain versus the norm of the STA, as in Equation 4. (B) Gain change versus the spike-triggered covariance term of Equation 5. (C) Change of firing rate with respect to variance versus the function of the STA and spike-triggered covariance given in Equation 6.

https://doi.org/10.1371/journal.pcbi.1000119.g003

Gain Rescaling of One-Dimensional Models

Here we discuss a consequence of intrinsic adaptation for neuronal encoding of mean and variance information in a one-dimensional model, where Equation 3 completely specifies the intrinsic adaptation; we therefore focus on this case.

Our first observation is that Equation 3 is invariant under the simultaneous rescaling of the mean and standard deviation, I0→αI0, σ→ασ, where α is an arbitrary positive number. A solution shares this invariance only if it is a function of the dimensionless variable I0/σ alone, which would represent a signal-to-noise ratio if we describe the neuron's input/output function in terms of an f-I curve at a fixed noise level σ. Note that this situation is analogous to the Weber–Fechner laws [39],[40] and Fitts' law [41], which state that perception tends to depend only on dimensionless variables that are invariant under scaling of the absolute magnitude of the stimulus [42]. However, the invariance of Equation 3 under the scaling of a stimulus does not necessarily lead to the invariance of a firing rate solution. By rewriting Equation 2 in terms of the "rescaled" variables, y = x/σ and μ = I0/σ, we get

f(I_0, \sigma^2) = \frac{1}{\sqrt{2\pi}} \int dy \, f_0\!\left( \frac{\sigma y}{\bar{\varepsilon}} \right) e^{-(y - \mu \bar{\varepsilon})^2 / 2},    (7)

where f0(I) = P(spike|Iε̅) is the f-I curve with no noise. Thus, the scaling of f(I0,σ2) with the standard deviation depends on the boundary condition, f0(I), which in principle can be any arbitrary function.

Nevertheless, in practice, the f-I curves of many dynamical neurons are not completely arbitrary but can share a simple scaling property, at least asymptotically. For example, in the QIF and many other neuron models, the f-I curve with no noise asymptotically follows a power law f0 ∼ (I0 − Ic)^1/2 around the rheobase Ic [34]–[36]. In general, if f0(I)∝I^α asymptotically in such a regime, from Equation 7, the firing rate asymptotically factorizes into a σ-dependent and a μ = I0/σ-dependent part as

f(I_0, \sigma^2) \simeq \sigma^\alpha \, g(I_0/\sigma).    (8)

In other words, I0/σ becomes an intermediate asymptotic of the f-I curves [43].

To test to what extent this scaling relationship holds in the models we have considered, we calculated the rescaled relative gain of the f-I curves, which we define as (σ/f) ∂f/∂I0 = σ ∂ log f/∂I0; the rescaled relative gain of Equation 8 depends only on μ = I0/σ, not on σ. Thus, if the rescaling strictly holds, this becomes a single-valued function of the signal-to-noise ratio, I0/σ, regardless of the noise level σ.
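
This quantity is straightforward to evaluate numerically; a minimal sketch follows, assuming the f-I curves are available as an array f[s, i] sampled on grids of noise levels sigma[s] and means I0[i] (these array names are illustrative, not from the original analysis).

```python
import numpy as np

# Sketch of the rescaling test: compute the rescaled relative gain
# sigma * d(log f)/dI0 for each noise level and express it against the
# signal-to-noise ratio I0/sigma. If the scaling of Equation 8 holds,
# the returned curves collapse onto a single function of I0/sigma.
def rescaled_relative_gain(I0, sigma, f, eps=1e-12):
    out = []
    for s, fs in zip(sigma, f):
        dlogf = np.gradient(np.log(fs + eps), I0)   # d(log f)/dI0 per curve
        out.append((I0 / s, s * dlogf))             # (SNR axis, rescaled gain)
    return out
```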

We find evidence for this form of variance rescaling in the QIF, LIF, and HH models. Figure 4 shows the rescaled gains evaluated from the simulated data. The QIF and HH cases, Figure 4B and 4D, match well with the solution of Equation 3, Figure 4A. In the LIF case, Figure 4C, the relative gain shows deviations at low variance, but it approaches a variance-independent limit at large σ. We also present an analytic account in Text S1. On the other hand, in Figure 4E, the HHLS model does not exhibit this form of asymptotic scaling at all. The role of the signal-to-noise ratio, I0/σ, in the HHLS model appears to be quite distinct from that in the other models. In summary, Equation 3 predicts that one-dimensional LN models tend to decrease gain with increasing noise level. However, if the f-I curve of a neuron is power-law-like, the resulting gain modulation will be such that the neuron's sensitivity to mean stimulus change at various noise levels is governed only by the signal-to-noise ratio.

Figure 4. Rescaled Relative Gains of Variance-Dependent f-I Curves.

(A) The one-dimensional LN, (B) QIF, and (C) LIF models. The same data as Figure 2 are used. (D) The standard HH model from Figure 1A and 1B. (E) The HHLS model from Figure 1C and 1D. Since the HHLS does not have a rheobase, we instead used Icenter = 20 nA at which the variance-dependent firing rate increase is maximal.

https://doi.org/10.1371/journal.pcbi.1000119.g004

Discussion

In this paper, we have obtained analytical relationships between noise-dependent gain modulation of f-I curves and properties of the sampled linear/nonlinear model. We have shown that gain control arises as a simple consequence of the nonlinearity of the LN model, even with no changes in any underlying parameters.

For a system described by an LN model with only one relevant feature, a simple single-parameter diffusion relationship relates the f-I curves at different variances, where the role of the diffusion coefficient is taken by the integral of the STA. This form strictly limits the possible forms of gain modulation that may be manifested by such a system. The result qualitatively describes the variance-dependent gain modulation of different neuron models such as the LIF, QIF, and standard HH neuron models. Models based on dynamical spike generation, such as the QIF, showed better agreement with this result than the LIF model. The QIF model case is a good example of how a nonlinear dynamical system can be mapped onto an LN model description [19],[44]. The QIF model has a single dynamical equation whose subthreshold dynamics are captured approximately by a linear kernel, which takes the role of the feature; one can then determine a threshold which acts as a binary decision boundary for spiking. Thus, it is reasonable that the QIF model and the one-dimensional LN model show a similar response pattern to a noisy input. When the system has multiple relevant features, we obtain equations relating the gain with respect to the input mean and the input variance to parameters of the STA and STC. We verified these results using HH neurons displaying two different forms of noise-induced gain control.

Previous work has related different gain control behaviors to a neuron's function as an integrator or a differentiator [3],[7]. From an LN model perspective, the neuron's function is defined by specific properties of the filter or filters ε(t). An integrating filter would consist of entirely positive weights; for leaky integrators these weights will decay at large negative times. A differentiating filter implements a local subtraction of the stimulus, and so should consist of a bimodal form where the positive weights approximately cancel the negative weights.

In general, characterizations of neural function by LN model and by f-I curves are quite distinct. The f-I approach we have discussed here describes the encoding of stationary statistical properties of the stimulus by time-averaged firing rate, while the LN model describes the encoding of specific input fluctuations by single spikes, generally under a particular choice of stimulus statistics. Indeed, the LN characterization can change with the driving stimulus distribution, even, in principle, from an integrator to a differentiator. Thus, a model may, for example, act as a differentiator on short timescales but as an integrator on longer timescales. For systems whose LN approximation varies with mean and variance, the neuron's effective computation changes with stimulus statistics, and so does the information that is represented. One might then ask how the system can decode the represented information. It has been proposed that statistics of the spike train might provide the information required to decode slower-varying stimulus parameters [22],[45]. The possibility of distinguishing between responses to different stimulus statistics using the firing rate alone depends on the properties of the f-I curves.

The primary focus of this work is the restricted problem of single neurons responding to driving currents, where the integrated synaptic current in an in vivo-like condition is approximated to be a (filtered) Gaussian white noise [46]–[50]. However, our derivations can apply to arbitrary neural systems driven by white noise inputs, if f-I curves are interpreted as tuning functions with respect to the mean stimulus parameter. Given the generality of our results for neural systems, it would be interesting to test our results in cases where firing is driven by an external stimulus. A good candidate would be retinal ganglion cells, which are well-described by LN-type models [9], [14], [51]–[53], show adaptation to stimulus statistics on multiple timescales [23],[54] and display a variety of dimensionalities in their feature space [14].

A limitation of the tests we have performed here is a restriction to the low firing rate regime where spike-triggered reverse correlation captures most of the dependence of firing probability on the stimulus. The effects of interspike interaction can be significant [16],[17],[55] and models with spike history feedback have been developed to account for this [44],[51],[56],[57]. We have not investigated how spike history effects would impact our results.

Although evidence suggests that gain modulation by noise may be enhanced by slow afterhyperpolarization currents underlying spike frequency adaptation [3], these slow currents are not required to generate gain enhancement in simple neuron models [7], [19], [25]–[29]. While one may generate diverse types of noise-induced gain modulation only by modifying the mechanism of generating a spike independent of spike history [7], in realistic situations, slow adaptation currents are present and will affect neural responses over many timescales [58]–[60]. In principle, it is possible to extend our result to include these effects: f-I curves under conditions of spike frequency adaptation have been already discussed [61]–[63] and can be compared to LN models with spike history feedback. However, our goal here was to demonstrate the effects that can occur independent of slow adaptation currents and before such currents have acted to shift neuronal coding properties.

The suggestive form of our result for one-dimensional LN models led us to look for a representation of neuronal output that is invariant under change in the input noise level. Our motivation is based on a simple principle of dimensional analysis: the gains of the f-I curves with noise may be asymptotically described by a signal-to-noise ratio, a dimensionless variable depending only on the stimulus itself. We showed that this may occur if the f-I curve with no noise obeys asymptotic power-law properties. Such a property has been determined to arise both from the bifurcation patterns of spike generation [31],[34],[35] and due to spike rate adaptation [61]. This relationship implies that the gain of the firing rate as a function of the mean should scale inversely with the standard deviation. Scaling of the gain of the nonlinear decision function with the stimulus standard deviation has been observed to some degree in a number of neural systems [10], [15], [22]–[25], [29], [64]–[67]. Such scaling guarantees maximal transmission of information [10],[22]. As we and others have proposed, a static model might suffice to explain this phenomenon [25],[27], although in some cases slow adaptation currents are known to contribute [65],[66].

In summary, we have presented theoretically derived relationships between the variance-dependent gain modulation of f-I curves and intrinsic adaptation in neural coding. In real neural systems, any type of gain modulation likely results from many different mechanisms, possibly involving long-time scale dynamics. Our results show that observed forms of gain modulation may be a result of a pre-existing static nonlinearity that reacts to changes in the stimulus statistics robustly and almost instantaneously.

Materials and Methods

Biophysical Models

We used two single-compartment models with Hodgkin–Huxley (HH) active currents. The first one is an HH model with standard parameters, while the second model (HHLS) has a lower Na+ and higher K+ maximal conductance. The voltage changes are described by [32]

C \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}} m^3 h (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}} n^4 (V - E_{\mathrm{K}}) - \bar{g}_{\mathrm{L}} (V - E_{\mathrm{L}}) + I(t),

and the activation variables m, n, and h behave according to

\frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, n, h\},

where

\alpha_m = \frac{0.1\,(V + 40)}{1 - e^{-(V+40)/10}}, \qquad \beta_m = 4\,e^{-(V+65)/18},
\alpha_h = 0.07\,e^{-(V+65)/20}, \qquad \beta_h = \frac{1}{1 + e^{-(V+35)/10}},
\alpha_n = \frac{0.01\,(V + 55)}{1 - e^{-(V+55)/10}}, \qquad \beta_n = 0.125\,e^{-(V+65)/80}.

The voltage V is in millivolts (mV) and the rates are in ms⁻¹.

For the HH model, the conductance parameters are ḡK = 36 mS/cm2 and ḡNa = 120 mS/cm2. The HHLS model has ḡK = 41 mS/cm2 and ḡNa = 79 mS/cm2. All other parameters are common to both models. The leak conductance is ḡL = 0.3 mS/cm2 and the membrane capacitance per area C is 1 μF/cm2. The reversal potentials are EL = −54.3 mV, ENa = 50 mV, and EK = −77 mV. The membrane area is 10⁻³ cm2, so that a current density of 1 μA/cm2 corresponds to a current of 1 nA.

All simulations of these models were performed with the NEURON simulation environment [68]. Gaussian white noise currents with various means and variances were generated with an update rate of 5 kHz (dt = 0.2 ms) and delivered into the model via current clamp. For the f-I curves, we simulated 4 min of input for each mean and variance pair. The whole procedure was repeated five times to estimate the trial-to-trial variability of the f-I relationship, σrepeat.

We ran another set of simulations for reverse correlation analysis and collected about 100,000 spikes for each stimulus condition. The means and variances of the Gaussian noisy stimuli were chosen such that the mean firing rate did not exceed 10 Hz, and we selected eight means and seven variances for the HH model, and nine means and four variances for the HHLS model.

Integrate-and-Fire-Type Models

In addition to the conductance-based models, we investigated the behavior of two heuristic model neurons driven by a noisy current input. Each model consists of a single dynamical equation describing voltage fluctuations of the form

C \frac{dV}{dt} = L(V) + I(t).

The first model is a leaky integrate-and-fire (LIF) model [69],[70], for which L(V) = −gL(V − EL). We used the parameters gL = 2, EL = 0, and C = 1. Since this choice of L(V) cannot generate a spike, we additionally imposed a spiking threshold Vth = 1 and reset voltage Vreset = −3.

The second is a quadratic integrate-and-fire (QIF) model [31],[37],[38], for which L(V) = gL(V − EL)(V − Vth)/ΔV, where ΔV = Vth − EL > 0. We used gL = 0.5, EL = 0, Vth = 0.1, and C = 1. In this model, the voltage V can increase without bound; such a trajectory is defined to be a spike if it crosses Vspike = 5. After spiking, the system is reset to Vreset = 0.

These two models were simulated using a fourth-order Runge–Kutta integration method with an integration time step of dt = 0.01. The input current I(t) was Gaussian white noise, updated at each time step, with a range of means and variances. The f-I curves were obtained from 1,000 s of stimulation for each (mean, variance) condition. We then compared the f-I curves from these models with the diffusion relationship derived in the Results section, Equation 3. A numerical solution of the partial differential equation was obtained using a PDE solver in Mathematica (Wolfram Research, Inc.).
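
For illustration, here is a minimal sketch of the QIF simulation just described, using a simple Euler update in place of the fourth-order Runge–Kutta scheme; the function name and defaults are illustrative, with parameter values taken from the text.

```python
import numpy as np

# Sketch of the QIF simulation: gL = 0.5, EL = 0, Vth = 0.1, C = 1,
# spikes detected at Vspike = 5, reset to Vreset = 0, and Gaussian white
# noise redrawn at every time step, as described in the text.
def qif_rate(I0, sigma2, T=1000.0, dt=0.01, gL=0.5, EL=0.0, Vth=0.1,
             C=1.0, Vspike=5.0, Vreset=0.0, seed=0):
    """Mean firing rate of the QIF model driven by a noisy current."""
    rng = np.random.default_rng(seed)
    dV = Vth - EL                      # DeltaV = Vth - EL > 0
    n_steps = int(T / dt)
    I = I0 + np.sqrt(sigma2) * rng.standard_normal(n_steps)
    V, n_spikes = Vreset, 0
    for i in range(n_steps):
        V += dt * (gL * (V - EL) * (V - Vth) / dV + I[i]) / C
        if V >= Vspike:                # unbounded upswing counts as a spike
            V = Vreset
            n_spikes += 1
    return n_spikes / T                # spikes per unit time

# Example: one point of the f-I surface, f(I0 = 1, sigma^2 = 4).
# print(qif_rate(1.0, 4.0))
```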

Linear/Nonlinear Model

We use the linear/nonlinear (LN) cascade model framework to describe a neuron's input/output relation. We will focus on the dependence of the firing rate of a fixed LN model on the mean and variance of a Gaussian white noise input.

We will take the driving input to be I(t) = I0+ξ(t), where I0 is the mean and ξ(t) is a Gaussian white noise with variance σ2 and zero mean. The linear part of the model selects, by linear filtering, a subset of the possible stimuli probed by I(t). That subset is expressed as n relevant features {εμ(t)}, (μ = 1,2,…,n). Interpreted as vectors, the components of any stimulus that are relevant to changing the firing rate can be expressed in terms of projections onto these features. The firing rate of the model for a given temporal sequence I(t) depends only on s, the input filtered by the n relevant features. Thus the firing rate from the given stimulus depends on the convolution of the input with all n features and can be written as P(spike|s = I0ε̅+x), where

s_\mu = \int dt \, \varepsilon_\mu(t) \, I(t) = I_0 \bar{\varepsilon}_\mu + x_\mu, \qquad x_\mu = \int dt \, \varepsilon_\mu(t) \, \xi(t).

Since I(t) is white noise with stationary statistics, the projections xμ can be taken to be stationary random variables chosen from a Gaussian distribution at each t.

Given the filtered stimulus, a nonlinear decision function P(spike|I0ε̅+x) generates the instantaneous time-varying firing rate. For a specified model and stimulus statistics, the mean firing rate f(I0,σ2) = P(spike) is simply

f(I_0, \sigma^2) = \int d^n x \, P(\mathrm{spike} \mid I_0 \bar{\varepsilon} + x) \, p(x),    (9)

where

p(x) = (2\pi\sigma^2)^{-n/2} \, e^{-|x|^2 / 2\sigma^2}.

Equation 9 describes an f-I curve of the model in the presence of added noise with variance σ2. The slope or gain of the firing rate with respect to mean or variance can be computed if P(spike|I0ε̅+x) is known. However, the gains can also be obtained in terms of the first and second moments of P(spike|I0ε̅+x), which can be measured directly by reverse correlation analysis.
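
A Monte Carlo sketch of Equation 9 follows: the mean firing rate is estimated by averaging an assumed nonlinear decision function over samples from the Gaussian prior. Here P_spike and eps_bar are placeholders for any given model, not quantities from the original analysis.

```python
import numpy as np

# Monte Carlo sketch of Equation 9: average the nonlinearity P_spike over
# the Gaussian prior p(x) to estimate the mean firing rate f(I0, sigma^2).
def mean_rate(P_spike, eps_bar, I0, sigma2, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.sqrt(sigma2) * rng.standard_normal((n_samples, len(eps_bar)))
    return P_spike(I0 * np.asarray(eps_bar) + x).mean()

# Example: a one-dimensional sigmoidal nonlinearity (illustrative).
# rate = mean_rate(lambda s: 1 / (1 + np.exp(-5 * (s[:, 0] - 1))),
#                  [1.0], 0.5, 0.25)
```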

Reverse Correlation Analysis

We used spike-triggered reverse correlation to probe the computation of the model neurons through an LN model. We collected about 100,000 spikes and the corresponding ensembles of spike-triggered stimulus histories in a 30 ms time window preceding each spike.

From the spike-triggered input ensembles, we calculated spike-triggered averages (STAs) and spike-triggered covariances (STCs). The STA is simply the average of the set of stimuli that led to spikes, with the mean of the "prior" stimulus distribution, the distribution of all stimuli independent of spiking output, subtracted:

\mathrm{STA}(t) = \langle I(t) \mid \mathrm{spike} \rangle - \langle I(t) \rangle = \langle \xi(t) \mid \mathrm{spike} \rangle.    (10)

Therefore, one may equivalently consider only the zero-mean noise part of the stimulus.

When computing the STC, the covariance of the prior distribution is subtracted:

\Delta C(t, t') = \left[ \langle \xi(t)\xi(t') \mid \mathrm{spike} \rangle - \mathrm{STA}(t)\,\mathrm{STA}(t') \right] - \sigma^2 \delta(t - t').    (11)
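
A sketch of these two estimates from discretized data follows; xi (the zero-mean noise part of the stimulus), the spike bin indices, and the per-bin variance sigma2 are assumed inputs of this illustration.

```python
import numpy as np

# Sketch of the reverse-correlation step: xi is the discretized zero-mean
# noise stimulus, spike_bins the bin indices of spikes, and window_bins
# covers the 30 ms window preceding each spike.
def sta_and_stc(xi, spike_bins, window_bins, sigma2):
    ensemble = np.array([xi[t - window_bins:t]
                         for t in spike_bins if t >= window_bins])
    sta = ensemble.mean(axis=0)                       # Equation 10
    dev = ensemble - sta
    c_spike = dev.T @ dev / (len(ensemble) - 1)       # spike-triggered covariance
    c_prior = sigma2 * np.eye(window_bins)            # prior covariance (white noise)
    return sta, c_spike - c_prior                     # Equation 11
```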

Statistical Analysis

In calculating the slope and curvature of the f-I curves, we fitted each f-I curve with a polynomial of degree 6–10, in each case using the lowest degree that provided both a good fit and smoothness. From the fitting procedure, we obtained the standard deviation of the residuals, σfit. This was repeated five times for f-I curves computed using different noise samples, and from this we computed σrepeat, the standard deviation of each computed slope and curvature. We estimated the total error of our calculation as σtotal = (σrepeat^2 + σfit^2)^1/2. In practice, σrepeat was always greater than σfit by an order of magnitude. This σtotal was used for the error bars in Figure 3.
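
A sketch of this procedure for a single f-I curve follows, assuming sampled arrays I0 and f; the fixed polynomial degree here is illustrative, whereas the text selects the degree per curve.

```python
import numpy as np

# Sketch of the slope/curvature estimation: fit an f-I curve with a
# polynomial and differentiate the fit analytically.
def fi_derivatives(I0, f, degree=6):
    coeffs = np.polyfit(I0, f, degree)
    p = np.poly1d(coeffs)
    residuals = f - p(I0)
    sigma_fit = residuals.std(ddof=degree + 1)        # residual error of the fit
    return p.deriv(1)(I0), p.deriv(2)(I0), sigma_fit  # slope, curvature, sigma_fit
```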

To evaluate the goodness of fit in Figure 3, we used a Pearson χ2 test based on the reduced χ2 statistic,

\chi^2 / k = \frac{1}{k} \sum_i \frac{(O_i - E_i)^2}{\sigma_{\mathrm{total},i}^2},

where O and E represent the right- and left-hand sides of Equations 4–6, respectively. From this, the p-values are estimated from the cumulative distribution function of the χ2 distribution, Q(χ2/k, k). The degrees of freedom are k = 54 and k = 34 for the HH and HHLS, respectively.
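
A sketch of this computation follows, assuming O, E, and sigma_total are arrays over the sampled (mean, variance) points and that the p-value is taken from the upper tail of the χ2 distribution; both assumptions are ours, for illustration.

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the goodness-of-fit computation: O and E are the two sides of
# Equations 4-6 at each sampled point, sigma_total their estimated errors,
# and k the number of degrees of freedom.
def goodness_of_fit(O, E, sigma_total, k):
    chi2_stat = np.sum(((O - E) / sigma_total) ** 2)
    # reduced chi^2 and upper-tail p-value of the chi^2 distribution
    return chi2_stat / k, chi2.sf(chi2_stat, k)
```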

Derivation of Equations 4–6

We first present two key identities. The first, which depends on the form of s having additive mean and noise components, is a change of variables for the gradient of P(spike|x+I0ε̅):

\frac{\partial}{\partial I_0} P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) = \sum_\mu \bar{\varepsilon}_\mu \frac{\partial}{\partial x_\mu} P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}).    (12)

Second, when x is a Gaussian random variable with zero mean and variance σ2, by using integration by parts it can be seen that any function F(x) satisfies

\int d^n x \, p(x) \, \frac{\partial F}{\partial x_\mu} = \frac{1}{\sigma^2} \int d^n x \, p(x) \, x_\mu F(x).    (13)

Then, we take derivatives of both sides of Equation 9 (or equivalently Equation 1) with respect to I0 and σ2, and apply Equations 12 and 13. The first order in I0 is

\frac{\partial f}{\partial I_0} = \frac{1}{\sigma^2} \sum_\mu \bar{\varepsilon}_\mu \int d^n x \; x_\mu \, P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x).    (14)

The second order is given by

\frac{\partial^2 f}{\partial I_0^2} = \frac{1}{\sigma^4} \sum_{\mu\nu} \bar{\varepsilon}_\mu \bar{\varepsilon}_\nu \int d^n x \; (x_\mu x_\nu - \sigma^2 \delta_{\mu\nu}) \, P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x),    (15)

where δμν is a Kronecker delta symbol. The gain with respect to variance is

\frac{\partial f}{\partial \sigma^2} = \frac{1}{2\sigma^4} \sum_\mu \int d^n x \; (x_\mu^2 - \sigma^2) \, P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x).    (16)

Now, we show how the right-hand sides of Equations 14–16 correspond to the STA and the STC. Given a Gaussian white noise signal ξ(t), we can split it as ξ = ξ∥ + ξ⊥, where ξ∥ belongs to the space spanned by our basis features {εμ}, and is therefore relevant to spiking; ξ⊥ is the orthogonal, or irrelevant, part. ξ∥ can be written as

\xi_\parallel(t) = \sum_\mu x_\mu \, \varepsilon_\mu(t).

Again, x is a Gaussian variable from the distribution p(x) in Equation 9.

The STA is

\mathrm{STA}(t) = \langle \xi(t) \mid \mathrm{spike} \rangle = \sum_\mu \langle x_\mu \mid \mathrm{spike} \rangle \, \varepsilon_\mu(t),

since ξ⊥ is irrelevant and does not make any contribution. Here we use Bayes' theorem,

P(x \mid \mathrm{spike}) = \frac{P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x)}{P(\mathrm{spike})}.

As in Equation 9, P(s = x+I0ε̅) = p(x), and therefore the STA becomes

\mathrm{STA}(t) = \frac{1}{f} \sum_\mu \varepsilon_\mu(t) \int d^n x \; x_\mu \, P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x).

Comparing this result with Equation 14, we obtain Equation 4.

A similar calculation for the second order [19] shows

\frac{1}{f} \int d^n x \; x_\mu x_\nu \, P(\mathrm{spike} \mid x + I_0 \bar{\varepsilon}) \, p(x) = \Delta C_{\mu\nu} + \sigma^2 \delta_{\mu\nu} + x^{\mathrm{STA}}_\mu x^{\mathrm{STA}}_\nu.

This result, combined with Equations 15 and 16, leads to Equations 5 and 6, respectively.

Supporting Information

Text S1.

Firing Rate of the LIF Model with Noisy Stimuli.

https://doi.org/10.1371/journal.pcbi.1000119.s001

(0.09 MB DOC)

Author Contributions

Conceived and designed the experiments: SH. Analyzed the data: SH. Contributed reagents/materials/analysis tools: BL. Wrote the paper: SH BL AF. Derived the equations: SH.

References

  1. Chance FS, Abbott LF, Reyes AD (2002) Gain modulation from background synaptic input. Neuron 35: 773–782.
  2. Fellous JM, Rudolph M, Destexhe A, Sejnowski TJ (2003) Synaptic background noise controls the input/output characteristics of single cells in an in vitro model of in vivo activity. Neuroscience 122: 811–829.
  3. Higgs MH, Slee SJ, Spain WJ (2006) Diversity of gain modulation by noise in neocortical neurons: regulation by the slow afterhyperpolarization conductance. J Neurosci 26: 8787–8799.
  4. Arsiero M, Lüscher HR, Lundstrom BN, Giugliano M (2007) The impact of input fluctuations on the frequency–current relationships of layer 5 pyramidal neurons in the rat medial prefrontal cortex. J Neurosci 27: 3274–3284.
  5. Prescott SA, De Koninck Y (2003) Gain control of firing rate by shunting inhibition: roles of synaptic noise and dendritic saturation. Proc Natl Acad Sci USA 100: 2076–2081.
  6. Prescott SA, Ratté S, De Koninck Y, Sejnowski TJ (2006) Nonlinear interaction between shunting and adaptation controls a switch between integration and coincidence detection in pyramidal neurons. J Neurosci 26: 9084–9097.
  7. Lundstrom BN, Hong S, Fairhall AL (2008) Two computational regimes of a single-compartment neuron separated by a planar boundary in conductance space. Neural Comput 20: 1239–1260.
  8. Victor J, Shapley R (1980) A method of nonlinear analysis in the frequency domain. Biophys J 29: 459–483.
  9. Meister M, Berry MJ II (1999) The neural code of the retina. Neuron 22: 435–450.
  10. Brenner N, Bialek W, de Ruyter van Steveninck R (2000) Adaptive rescaling maximizes information transmission. Neuron 26: 695–702.
  11. Simoncelli EP, Paninski L, Pillow J, Schwartz O (2004) Characterization of neural responses with stochastic stimuli. In: Gazzaniga M, editor. The Cognitive Neurosciences. 3rd edition. Cambridge (Massachusetts): MIT Press.
  12. Rust NC, Schwartz O, Movshon JA, Simoncelli EP (2005) Spatiotemporal elements of macaque V1 receptive fields. Neuron 46: 945–956.
  13. Stanley GB, Lei FF, Dan Y (1999) Reconstruction of natural scenes from ensemble responses in the lateral geniculate nucleus. J Neurosci 19: 8036–8042.
  14. Fairhall AL, Burlingame CA, Narasimhan R, Harris RA, Puchalla JL, et al. (2006) Selectivity for multiple stimulus features in retinal ganglion cells. J Neurophysiol 96: 2724–2738.
  15. Maravall M, Petersen RS, Fairhall AL, Arabzadeh E, Diamond ME (2007) Shifts in coding properties and maintenance of information transmission during adaptation in barrel cortex. PLoS Biol 5: e19.
  16. Agüera y Arcas B, Fairhall AL (2003) What causes a neuron to spike? Neural Comput 15: 1715–1749.
  17. Agüera y Arcas B, Fairhall AL, Bialek W (2003) Computation in a single neuron: Hodgkin and Huxley revisited. Neural Comput 15: 1789–1807.
  18. Slee SJ, Higgs MH, Fairhall AL, Spain WJ (2005) Two-dimensional time coding in the auditory brainstem. J Neurosci 25: 9978–9988.
  19. Hong S, Agüera y Arcas B, Fairhall AL (2007) Single neuron computation: from dynamical system to feature detector. Neural Comput 19: 3133–3172.
  20. Atick JJ (1992) Could information theory provide an ecological theory of sensory processing? Network 3: 213–251.
  21. Theunissen FE, Sen K, Doupe AJ (2000) Spectral-temporal receptive fields of nonlinear auditory neurons obtained using natural sounds. J Neurosci 20: 2315–2331.
  22. Fairhall A, Lewen G, Bialek W, de Ruyter van Steveninck RR (2001) Efficiency and ambiguity in an adaptive neural code. Nature 412: 787–792.
  23. Baccus SA, Meister M (2002) Fast and slow contrast adaptation in retinal circuitry. Neuron 36: 909–919.
  24. Nagel KI, Doupe AJ (2006) Temporal processing and adaptation in the songbird auditory forebrain. Neuron 51: 845–859.
  25. Gaudry KS, Reinagel P (2007) Benefits of contrast normalization demonstrated in neurons and model cells. J Neurosci 27: 8071–8079.
  26. Rudd ME, Brown LG (1997) Noise adaptation in integrate-and-fire neurons. Neural Comput 9: 1047–1069.
  27. Paninski L, Lau B, Reyes AD (2003) Noise-driven adaptation: in vitro and mathematical analysis. Neurocomputing 52: 877–883.
  28. Yu Y, Lee TS (2003) Dynamical mechanisms underlying contrast gain control in single neurons. Phys Rev E 68: 011901.
  29. Borst A, Flanagin VL, Sompolinsky H (2005) Adaptation without parameter change: dynamic gain control in motion detection. Proc Natl Acad Sci USA 102: 6172–6176.
  30. Hodgkin AL (1948) The local electric changes associated with repetitive action in a non-medullated axon. J Physiol 107: 165–181.
  31. Izhikevich EM (2006) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Cambridge (Massachusetts): MIT Press.
  32. Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117: 500–544.
  33. Spekreijse H, Reits D (1982) Sequential analysis of the visual evoked potential system in man: nonlinear analysis of a sandwich system. Ann N Y Acad Sci 388: 72–97.
  34. Ermentrout GB (1994) Reduction of conductance-based models with slow synapses to neural nets. Neural Comput 6: 679–695.
  35. Rinzel JM, Ermentrout GB (1989) Analysis of neuronal excitability. In: Koch C, Segev I, editors. Methods in Neuronal Modeling: From Synapses to Networks. Cambridge (Massachusetts): MIT Press. pp. 135–170.
  36. Hoppensteadt F, Izhikevich EM (1997) Weakly Connected Neural Networks. Berlin: Springer-Verlag.
  37. Ermentrout GB, Kopell N (1986) Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math 46: 233–253.
  38. Ermentrout B (1996) Type I membranes, phase resetting curves, and synchrony. Neural Comput 8: 979–1001.
  39. Weber EH (1834) De Pulsu, Resorptione, Auditu et Tactu. Annotationes Anatomicae et Physiologicae. Lipsiae: Koehler.
  40. Fechner G (1966) Elements of Psychophysics. New York: Holt, Rinehart and Winston.
  41. Fitts PM (1954) The information capacity of the human motor system in controlling the amplitude of movement. J Exp Psychol 47: 381–391.
  42. Stevens SS (1986) Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects. Piscataway (New Jersey): Transaction Publishers.
  43. Barenblatt GI (2003) Scaling. Cambridge, UK: Cambridge University Press.
  44. Gerstner W, Kistler W (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, UK: Cambridge University Press.
  45. Lundstrom BN, Fairhall AL (2006) Decoding stimulus variance from a distributional neural code of interspike intervals. J Neurosci 26: 9030–9037.
  46. Gerstein GL, Mandelbrot B (1964) Random walk models for the spike activity of a single neuron. Biophys J 4: 41–68.
  47. Bryant HL, Segundo JP (1976) Spike initiation by transmembrane current: a white-noise analysis. J Physiol 260: 279–314.
  48. Mainen ZF, Sejnowski TJ (1995) Reliability of spike timing in neocortical neurons. Science 268: 1503–1506.
  49. Destexhe A, Paré D (1999) Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J Neurophysiol 81: 1531–1547.
  50. Rudolph M, Destexhe A (2003) Characterization of subthreshold voltage fluctuations in neuronal membranes. Neural Comput 15: 2577–2618.
  51. Keat J, Reinagel P, Reid RC, Meister M (2001) Predicting every spike: a model for the responses of visual neurons. Neuron 30: 803–817.
  52. Chichilnisky EJ (2001) A simple white noise analysis of neuronal light responses. Network 12: 199–213.
  53. Pillow JW, Paninski L, Uzzell VJ, Simoncelli EP, Chichilnisky EJ (2005) Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J Neurosci 25: 11003–11013.
  54. Smirnakis SM, Berry MJ, Warland DK, Bialek W, Meister M (1997) Adaptation of retinal processing to image contrast and spatial scale. Nature 386: 69–73.
  55. Pillow JW, Simoncelli EP (2003) Biases in white noise analysis due to non-Poisson spike generation. Neurocomputing 52–54: 109–115.
  56. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN (2005) A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J Neurophysiol 93: 1074–1089.
  57. Paninski L, Pillow J, Lewi J (2006) Statistical models for neural encoding, decoding, and optimal stimulus design. Prog Brain Res 165: 493–507.
  58. Schwindt PC, Spain WJ, Foehring RC, Stafstrom CE, Chubb MC, et al. (1988) Multiple potassium conductances and their functions in neurons from cat sensorimotor cortex in vitro. J Neurophysiol 59: 424–449.
  59. Spain WJ, Schwindt PC, Crill WE (1991) Two transient potassium currents in layer V pyramidal neurones from cat sensorimotor cortex. J Physiol 434: 591–607.
  60. La Camera G, Rauch A, Thurbon D, Lüscher HR, Senn W, et al. (2006) Multiple time scales of temporal response in pyramidal and fast spiking cortical neurons. J Neurophysiol 96: 3448–3464.
  61. Ermentrout B (1998) Linearization of F-I curves by adaptation. Neural Comput 10: 1721–1729.
  62. Benda J, Herz AVM (2003) A universal model for spike-frequency adaptation. Neural Comput 15: 2523–2564.
  63. La Camera G, Rauch A, Lüscher HR, Senn W, Fusi S (2004) Minimal models of adapted neuronal response to in vivo-like input currents. Neural Comput 16: 2101–2124.
  64. Kim KJ, Rieke F (2001) Temporal contrast adaptation in the input and output signals of salamander retinal ganglion cells. J Neurosci 21: 287–299.
  65. Arganda S, Guantes R, de Polavieja GG (2007) Sodium pumps adapt spike bursting to stimulus statistics. Nat Neurosci 10: 1467–1473.
  66. Diaz-Quesada M, Maravall M (2008) Intrinsic mechanisms for adaptive gain rescaling in barrel cortex. J Neurosci 28: 696–710.
  67. Ringach DL, Malone BJ (2007) The operating point of the cortex: neurons as large deviation detectors. J Neurosci 27: 7673–7683.
  68. Hines ML, Carnevale NT (1997) The NEURON simulation environment. Neural Comput 9: 1179–1209.
  69. Knight BW (1972) Dynamics of encoding in a population of neurons. J Gen Physiol 59: 734–766.
  70. Tuckwell HC (1988) Introduction to Theoretical Neurobiology. Cambridge, UK: Cambridge University Press.