
Current address: Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan

Conceived and designed the experiments: SH. Analyzed the data: SH. Contributed reagents/materials/analysis tools: BL. Wrote the paper: SH BL AF. Derived the equations: SH.

The authors have declared that no competing interests exist.

In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus injected current (f–I) curve can change with the level of noise in the input.

Many neurons are known to achieve a wide dynamic range by adaptively changing their computational input/output function according to the input statistics. These adaptive changes can be very rapid, and it has been suggested that a component of this adaptation could be purely input-driven: even a fixed neural system can show apparent adaptive behavior since inputs with different statistics interact with the nonlinearity of the system in different ways. In this paper, we show how a single neuron's intrinsic computational function can dictate such input-driven changes in its response to varying input statistics, which begets a relationship between two different characterizations of neural function—in terms of mean firing rate and in terms of generating precise spike timing. We then apply our results to two biophysically defined model neurons, which have significantly different response patterns to inputs with various statistics. Our model of intrinsic adaptation explains their behaviors well. Contrary to the picture that neurons carry out a stereotyped computation on their inputs, our results show that even in the simplest cases they have simple yet effective mechanisms by which they can adapt to their input. Adaptation to stimulus statistics, therefore, is built into the most basic single neuron computations.

An alternative method of characterizing a neuron's input-to-output transformation is through a linear/nonlinear (LN) cascade model.

Generally, results of reverse correlation analysis may depend on the statistics of the stimulus used to sample the model.

Our goal here is to unify these two functional characterizations of a neuron: the f–I curve and the LN model.

Recently, Higgs et al. showed that background noise can modulate the gain of the f–I curve in cortical neurons, and that different cell types show different forms of this variance-dependent gain modulation.

Here, we examine two models that show these different forms of variance-dependent gain modulation without spike rate adaptation, and study the resulting LN models sampled with different stimulus statistics. The first is a standard Hodgkin–Huxley (HH) model; the second (HHLS) has a lower Na^{+} and higher K^{+} conductance. The HHLS model is a class 3 neuron and responds only to a rapidly changing input. For this reason, the HHLS model can be thought of as behaving more like a differentiator than an integrator.

Each model is simulated as described in the Methods. (A) Firing rate curves of the HH model for variances (σ^{2}) from 0 to 45 nA^{2}. The topmost trace is the response to the highest variance. Each curve is obtained with 31 mean values (s_{0}) ranging from −5 to 20 nA. (B) The same data as (A) plotted in the (mean, variance) plane. Lighter shades represent higher firing rates. We used cubic spline interpolation for points not included in the simulated data. (C,D) The same analysis for the HHLS model; variances (σ^{2}) from 0 to 45 nA^{2} are used.

For a system described by an LN model with a single feature, we derive an equation relating the slopes of the firing rate with respect to stimulus mean and variance. We then consider gain modulation in a system with multiple relevant features and derive a series of equations relating gain change to properties of the spike-triggered average and spike-triggered covariance. Throughout, we assume that the underlying system is fixed, and that its parameter settings do not depend on stimulus statistics. For example, if the model has a single exponential filter with a time constant τ, we assume that τ does not change with the stimulus mean (s_{0}) or variance (σ^{2}). However, this does not mean that the model shows a single response pattern regardless of the statistical structure of stimuli. The sampled LN description of a nonlinear system with fixed parameters can still depend on the stimulus statistics, even when the underlying model is itself an LN model.

An LN model is composed of its relevant features {ε_{μ}(t)} and a nonlinear decision function acting on the filtered stimulus. For a Gaussian white noise input with mean s_{0} and variance σ^{2}, the firing rate is an average of the nonlinearity over a Gaussian distribution of filtered stimulus values, and is therefore a function of s_{0} and σ^{2}. We refer to the Methods for the detailed definitions.

For a one-dimensional model with feature ε(t), the filtered stimulus is Gaussian with mean s_{0}ε̅, where ε̅ = ∫dt ε(t), and variance σ^{2}; the firing rate is therefore the nonlinearity smoothed by a Gaussian of width σ,

r(s_{0},σ^{2}) = ∫ds (2πσ^{2})^{−1/2} exp(−(s − s_{0}ε̅)^{2}/2σ^{2}) f(s),   (Equation 2)

so that r depends on the stimulus mean only through the combination s_{0}ε̅. In other words, s_{0}ε̅ is the only term in Equation 2 through which the mean enters. Since the Gaussian kernel satisfies the diffusion (heat) equation in these variables, so does the firing rate:

∂r/∂σ^{2} = (1/2ε̅^{2}) ∂^{2}r/∂s_{0}^{2}.   (Equation 3)

The boundary condition is obtained as σ^{2}→0; here the Gaussian distribution becomes a delta function, so that r(s_{0},0) = f(s_{0}ε̅).
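
This diffusion relationship can be checked numerically. The sketch below is ours, not the paper's code: it smooths an illustrative threshold-linear nonlinearity with a Gaussian on a grid and compares the variance derivative of the rate against the curvature term of Equation 3 (with ε̅ = 1):

```python
import numpy as np

def smoothed_rate(f, x, sigma):
    """Gaussian-smooth a nonlinearity f on grid x with std sigma (cf. Equation 2)."""
    s = x[:, None] - x[None, :]          # pairwise differences
    kern = np.exp(-s**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    dx = x[1] - x[0]
    return kern @ f(x) * dx              # r(x) = (G_sigma * f)(x)

# threshold-linear nonlinearity as an illustrative stand-in for f
f = lambda s: np.maximum(s, 0.0)
x = np.linspace(-10, 10, 2001)
sigma = 1.0
dsig2 = 1e-3

# left side: dr / d(sigma^2), by central finite difference in sigma^2
r_lo = smoothed_rate(f, x, np.sqrt(sigma**2 - dsig2))
r_hi = smoothed_rate(f, x, np.sqrt(sigma**2 + dsig2))
lhs = (r_hi - r_lo) / (2 * dsig2)

# right side: (1/2) d^2 r / dx^2, the curvature term
r = smoothed_rate(f, x, sigma)
rhs = 0.5 * np.gradient(np.gradient(r, x), x)

# compare away from the grid edges
core = slice(200, -200)
err = np.max(np.abs(lhs[core] - rhs[core]))
print(err)  # small
```

The identity holds for any nonlinearity, because the Gaussian kernel itself obeys the heat equation; only the finite-difference and grid discretization limit the agreement.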

Equation 3 states that the variance-dependent change in the firing rate is simply determined by the curvature of the firing rate as a function of the stimulus mean, that is, by the curvature of the f–I curve.

(A) Variance-dependent firing rates of the LIF and QIF models, computed with mean (s_{0}) values from 0 to 4 (LIF) and from −2 to 2 (QIF), and 8 variances (σ^{2}) from 0 to 8 for both models.

We also compared Equation 3 with the firing rates of the LIF and QIF models simulated directly as functions of s_{0}. From this perspective, the QIF is a canonical model of a type I neuron, whose firing rate grows as a square root of the input above rheobase.

It is interesting to note that for one-dimensional models, the gain modulation given by Equation 3 depends only on the boundary condition, which implicitly describes how an input with a given mean samples the nonlinearity, but not explicitly on the details of filters or nonlinearity. An ideal differentiator, where firing rate is independent of the stimulus mean, is realized only when the filter has zero integral, ε̅ = 0. This is also the criterion that would be satisfied if the filter itself were ideally differentiating. We will return to the relationship between the LN model functional description and that of the f–I curve in the Discussion.
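
The zero-integral criterion can be illustrated with a toy LN simulation of our own construction (filter shapes, threshold, and durations are arbitrary): a filter with ε̅ ≠ 0 shifts its firing rate when the stimulus mean changes, while a zero-integral filter does not.

```python
import numpy as np

rng = np.random.default_rng(4)

def ln_rate(feat, s0, sigma, n_t=200000, thresh=1.5):
    """Firing rate of a toy LN model: filter the stimulus with feat,
    'spike' whenever the projection exceeds thresh."""
    stim = s0 + sigma * rng.standard_normal(n_t)
    proj = np.convolve(stim, feat[::-1], mode="valid")  # proj[i] = dot(stim[i:i+len(feat)], feat)
    return np.mean(proj > thresh)

t = np.arange(30)
integ = np.exp(-t / 5.0)
integ /= np.linalg.norm(integ)            # integrator-like, nonzero integral
diff = np.diff(np.exp(-t / 5.0))
diff -= diff.mean()                       # force exactly zero integral
diff /= np.linalg.norm(diff)              # differentiator-like

# mean sensitivity: only the nonzero-integral filter responds to s0
r_int = [ln_rate(integ, s0, 1.0) for s0 in (0.0, 0.5)]
r_dif = [ln_rate(diff, s0, 1.0) for s0 in (0.0, 0.5)]
print(r_int, r_dif)
```

With the zero-integral filter, the projection's distribution is unchanged by a shift of the stimulus mean, so the rate stays constant up to sampling noise, as Equation 3's boundary condition predicts.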

Here we examine gain modulation in the case of a system with multiple relevant features. In this case, one cannot derive a single simple equation such as Equation 3. Instead, we derive relationships between the characteristics of the r(s_{0},σ) surface and quantities calculated using white noise analysis.

Fixed multidimensional models can display far more complex response patterns to different stimulus statistics than one-dimensional models, because linear components in the model can now interact nonlinearly.

Here, we relate parameters of the changing spike-triggered average and spike-triggered covariance description to the form of the firing rate as a function of s_{0} and σ^{2} (see Methods). The first derivative with respect to s_{0} establishes the relationship between the STA and the gain of the firing rate with respect to the mean.

Equations 4–6 show how the nonlinear gain of an LN model with respect to the stimulus mean and variance is determined by its spike-triggered average and covariance.

We now examine whether the gain modulation behaviors we have described can be captured by a multidimensional LN model. We tested this by computing the STA and STC for the HH and HHLS models under each stimulus condition, using significance thresholds of 10^{−6} for both models.

Each point is calculated from the simulated data with a selected (mean, variance) input parameter pair, as described in the Methods.

Here we discuss a consequence of intrinsic adaptation for neuronal encoding of mean and variance information for a one-dimensional model. In this case, Equation 3 completely specifies intrinsic adaptation, and therefore we will focus on this case.

Our first observation is that Equation 3 is invariant under the simultaneous rescaling of the mean and standard deviation, s_{0}→αs_{0}, σ→ασ, where α is an arbitrary positive number. This invariance is preserved in the solution if the firing rate depends on the input only through the dimensionless variable μ = s_{0}/σ, which would represent a signal-to-noise ratio if we describe the neuron's input/output function in terms of μ. Whether the solution r(s_{0},σ^{2}) in fact rescales with the standard deviation in this way depends on the boundary condition, that is, on the nonlinearity f(s_{0}ε̅).

Nevertheless, in practice, the firing rate of many neuron models follows a power law near threshold; for example, type I neurons have f_{0}∼(s_{0}−s_{c})^{1/2} around the rheobase s_{c}. When the boundary condition behaves as f_{0}(s)∼s^{α} asymptotically in such a regime, from Equation 7 the firing rate is asymptotically factorized into a σ dependent and a μ = s_{0}/σ dependent part as r(s_{0},σ^{2}) ≈ σ^{α}g(μ). In this regime, the signal-to-noise ratio s_{0}/σ becomes an invariant description of the effective input.

To test to what extent this scaling relationship holds in the models we have considered, we calculated the rescaled relative gain, σ ∂log r/∂s_{0}. If the firing rate factorizes as in Equation 8, this rescaled relative gain depends only on μ = s_{0}/σ, not on σ. Thus, if the rescaling strictly holds, it becomes a single-valued function of the signal-to-noise ratio, s_{0}/σ, regardless of the noise level σ.
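
As a consistency check of our own (with an assumed power-law nonlinearity f(s) = max(s,0)^{1/2} and Monte Carlo averaging), one can verify that the rescaled relative gain is unchanged when σ is varied at fixed μ:

```python
import numpy as np

def rate(s0, sigma, alpha=0.5, n=400000, seed=1):
    """Monte Carlo firing rate of a 1-D LN model with power-law
    nonlinearity f(s) = max(s,0)**alpha and unit-integral filter."""
    rng = np.random.default_rng(seed)          # fixed seed: common random numbers
    s = s0 + sigma * rng.standard_normal(n)
    return np.mean(np.maximum(s, 0.0) ** alpha)

def rescaled_gain(mu, sigma, ds=1e-3):
    """sigma * d(log r)/d s0, evaluated at s0 = mu * sigma."""
    s0 = mu * sigma
    r_hi = rate(s0 + ds, sigma)
    r_lo = rate(s0 - ds, sigma)
    return sigma * (np.log(r_hi) - np.log(r_lo)) / (2 * ds)

# the rescaled gain should depend only on mu = s0/sigma, not on sigma
g1 = rescaled_gain(mu=0.5, sigma=1.0)
g2 = rescaled_gain(mu=0.5, sigma=2.0)
print(g1, g2)  # approximately equal
```

Because r(s_{0},σ^{2}) = σ^{α}g(μ) for this boundary condition, σ ∂log r/∂s_{0} = g′(μ)/g(μ), a function of μ alone.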

We find evidence for this form of variance rescaling in the QIF, LIF, and HH models. The rescaled relative gain, as a function of s_{0}/σ, in the HHLS model appears to be quite distinct from that of the other models. In summary, Equation 3 predicts that one-dimensional LN models will have the tendency to decrease gain with increasing noise level. However, if the boundary condition follows a power law, the gain rescales with the noise level so that the firing rate depends on the input primarily through the signal-to-noise ratio s_{0}/σ.

(A) The one-dimensional LN, (B) QIF, and (C) LIF models. The same data as in the corresponding firing rate figures, replotted as the rescaled relative gain; s_{center} = 20 nA is the value at which the variance-dependent firing rate increase is maximal.

In this paper, we have obtained analytical relationships between noise-dependent gain modulation of the firing rate and the LN model description of neural computation.

For a system described by an LN model with only one relevant feature, a simple single-parameter diffusion relationship relates the gain of the firing rate with respect to the stimulus variance to its curvature with respect to the stimulus mean.

Previous work has related different gain control behaviors to a neuron's function as an integrator or a differentiator.

In general, characterizations of neural function by the LN model and by the f–I curve have been pursued separately; our results show how the two descriptions constrain each other.

The primary focus of this work is the restricted problem of single neurons responding to driving currents, where the integrated synaptic current in an in vivo-like condition is approximated as (filtered) Gaussian white noise.

A limitation of the tests we have performed here is a restriction to the low firing rate regime where spike-triggered reverse correlation captures most of the dependence of firing probability on the stimulus. The effects of interspike interaction can be significant.

Although evidence suggests that gain modulation by noise may be enhanced by slow afterhyperpolarization currents underlying spike frequency adaptation, our results show that such modulation can arise even in fixed models without these currents.

The suggestive form of our result for one-dimensional LN models led us to look for a representation of neuronal output that is invariant under change in the input noise level. Our motivation is based on a simple principle of dimensional analysis: the gains of the firing rate with respect to the stimulus mean and standard deviation can depend on the input statistics only through dimensionless combinations such as s_{0}/σ, once an overall scale is factored out.

In summary, we have presented theoretically derived relationships between the variance-dependent gain modulation of a fixed neural system and its LN model description, and verified them in simple spiking neuron models.

We used two single compartmental models with Hodgkin–Huxley (HH) active currents. The first one is an HH model with standard parameters while the second model (HHLS) has a lower Na^{+} and higher K^{+} maximal conductance. The voltage changes are described by

C dV/dt = −g̅_{Na}m^{3}h(V − E_{Na}) − g̅_{K}n^{4}(V − E_{K}) − g_{L}(V − E_{L}) + I(t),

where m, h, and n are the standard HH gating variables and I(t) is the injected current.

For the HH model, the conductance parameters are the standard values g̅_{K} = 36 mS/cm^{2} and g̅_{Na} = 120 mS/cm^{2}. The HHLS model has a higher g̅_{K} and a lower g̅_{Na}. All other parameters are common to both models. The leak conductance is g_{L} = 0.3 mS/cm^{2} and the membrane capacitance per area is C = 1 μF/cm^{2}. The reversal potentials are E_{L} = −54.3 mV, E_{Na} = 50 mV, and E_{K} = −77 mV. The membrane area is 10^{−3} cm^{2}, so that a current density of 1 μA/cm^{2} corresponds to a current of 1 nA.
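
A minimal integration of these dynamics can be sketched as follows. This is our own illustration, not the paper's NEURON setup: it uses forward Euler, the standard squid parameters, and approximate resting initial conditions.

```python
import numpy as np

def hh_spikes(I_inj, T=100.0, dt=0.01):
    """Count spikes of a standard Hodgkin-Huxley point neuron under a
    constant current I_inj (uA/cm^2), integrated by forward Euler."""
    gNa, gK, gL = 120.0, 36.0, 0.3        # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.3      # mV
    C = 1.0                               # uF/cm^2
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        # standard HH rate functions (1/ms)
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I_inj - I_ion) / C
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        if V > 0 and not above:           # count upward crossings of 0 mV
            spikes, above = spikes + 1, True
        elif V < 0:
            above = False
    return spikes

print(hh_spikes(10.0))  # repetitive firing
print(hh_spikes(0.0))   # quiescent
```
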

All simulations of these models were done with the NEURON simulation environment. Each stimulus condition was simulated repeatedly with independent noise realizations; the variability across repeats was used to estimate the error σ_{repeat}.

We ran another set of simulations for reverse correlation analysis and collected about 100,000 spikes for each stimulus condition. The means and variances of the Gaussian noisy stimuli were chosen such that the mean firing rate did not exceed 10 Hz, and we selected eight means and seven variances for the HH model, and nine means and four variances for the HHLS model.

In addition to the conductance-based models, we investigated the behavior of two heuristic model neurons driven by a noisy current input. Each model consists of a single dynamical equation describing voltage fluctuations of the form

dv/dt = F(v) + s(t),

where s(t) is the noisy stimulus and F(v) determines the subthreshold dynamics.

The first model is a leaky integrate-and-fire (LIF) model, with F(v) = −g_{L}(v − v_{L}), where g_{L} is the leak conductance and v_{L} the resting potential. When the voltage reaches a threshold v_{th}, a spike is recorded and the voltage is reset to v_{reset} = −3.

The second is a quadratic integrate-and-fire (QIF) model, with F(v) proportional to (v − v_{L})(v − v_{th}), where v_{L} and v_{th} are the resting and threshold voltages in the absence of input. Once the voltage exceeds threshold it grows rapidly; a spike is recorded when it reaches v_{spike} = 5. After spiking, the system is reset to v_{reset} = 0.

These two models are simulated using a fourth-order Runge–Kutta integration method with a fixed integration time step.
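
A minimal version of such a simulation (our own sketch, using Euler–Maruyama rather than Runge–Kutta, with placeholder parameter values) shows the basic phenomenon: with a subthreshold mean input, the LIF model fires only in the presence of noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_rate(s0, sigma, T=2000.0, dt=0.01, v_th=1.0, v_reset=-3.0, g_L=1.0):
    """Firing rate of a toy LIF model, dv/dt = -g_L*v + s(t), where s(t)
    is Gaussian white noise with mean s0 (Euler-Maruyama integration;
    parameter values here are illustrative, not the paper's)."""
    n = int(T / dt)
    kick = sigma * np.sqrt(dt) * rng.standard_normal(n)  # integrated noise
    v, spikes = 0.0, 0
    for i in range(n):
        v += dt * (-g_L * v + s0) + kick[i]
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes / T

# subthreshold mean: noise carries the voltage across threshold
r0 = lif_rate(0.5, 0.0)
r1 = lif_rate(0.5, 1.0)
print(r0, r1)
```

With s0 = 0.5 and v_th = 1, the noiseless voltage settles at 0.5 and never spikes; added variance produces a nonzero rate, the regime in which variance-dependent gain modulation is measured.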

We use the linear/nonlinear (LN) cascade model framework to describe a neuron's input/output relation. We will focus on the dependence of the firing rate of a fixed LN model on the mean and variance of a Gaussian white noise input.

We will take the driving input to be s(t) = s_{0} + ξ(t), where s_{0} is the mean and ξ(t) is Gaussian white noise with variance σ^{2} and zero mean. The linear part of the model selects, by linear filtering, a subset of the possible stimuli probed by the relevant features ε_{μ}(t): the filtered stimulus components s_{μ}. For a given s_{0}, the s_{μ} can be taken to be stationary random variables chosen from a Gaussian distribution at each time.

Given the filtered stimulus, a nonlinear decision function f({s_{μ}}) determines the probability of firing. Averaging over the Gaussian distribution of the filtered stimulus gives the mean firing rate,

r(s_{0},σ^{2}) = ⟨f({s_{μ}})⟩,   (Equation 9)

where ⟨·⟩ denotes the average over the Gaussian distribution of the s_{μ}.

Equation 9 describes an effective input/output surface as a function of s_{0} and σ^{2}. The slope or gain of this surface with respect to s_{0} is the quantity modulated in noise-dependent gain control.

We used spike-triggered reverse correlation to probe the computation of the model neurons through an LN model. We collected about 100,000 spikes and corresponding ensembles of spike triggered stimulus histories in a 30 ms long time window preceding each spike.

From the spike-triggered input ensembles, we calculated spike-triggered averages (STAs) and spike-triggered covariances (STCs). The STA is simply the mean of the “prior” stimulus distribution, the distribution of all stimuli independent of spiking output, subtracted from the average of the set of stimuli that led to spikes.

When computing the STC, the prior's covariance is subtracted.
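
The STA/STC computation can be sketched as follows. This is a toy one-feature LN neuron of our own construction (window length, filter shape, and threshold are illustrative); the recovered STA should align with the true feature, and the STC should show a reduced-variance direction along it.

```python
import numpy as np

rng = np.random.default_rng(2)

n_t, win = 200000, 30                    # time bins, window length (bins)
stim = rng.standard_normal(n_t)          # zero-mean, unit-variance white noise
feat = np.exp(-np.arange(win)[::-1] / 5.0)
feat /= np.linalg.norm(feat)             # unit-norm feature

# spike whenever the filtered stimulus crosses a threshold
filt = np.convolve(stim, feat[::-1], mode="valid")  # filt[i] = dot(stim[i:i+win], feat)
spike_idx = np.nonzero(filt > 2.0)[0]

# STA: spike-triggered mean minus the prior mean (zero here)
windows = np.stack([stim[i:i + win] for i in spike_idx])
sta = windows.mean(axis=0) - stim.mean()

# STC: spike-triggered covariance minus the prior covariance (identity here)
xc = windows - windows.mean(axis=0)
stc = xc.T @ xc / (len(windows) - 1) - np.eye(win)

# the STA should align with the true feature
cos = sta @ feat / np.linalg.norm(sta)
print(cos)  # close to 1
```
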

In calculating the slope and curvature of the firing rate surface, we estimated a fitting error σ_{fit}. This was repeated five times with independent simulations to obtain σ_{repeat}, the standard deviation of each computed slope and curvature. We estimated the total error of our calculation as σ_{total} = (σ_{repeat}^{2}+σ_{fit}^{2})^{1/2}. In practice, σ_{repeat} was always greater than σ_{fit} by an order of magnitude. This σ_{total} was used for the error bars in the corresponding figures.

To evaluate the goodness of fit, we performed a χ^{2} test using the reduced χ^{2} statistic, that is, the χ^{2} value divided by the number of degrees of freedom.
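
A minimal implementation of this statistic (the function name is ours):

```python
import numpy as np

def reduced_chi2(y, y_fit, sigma, n_params):
    """Reduced chi-square: sum of squared standardized residuals divided
    by the degrees of freedom (number of points minus fitted parameters)."""
    resid = (np.asarray(y) - np.asarray(y_fit)) / np.asarray(sigma)
    dof = len(y) - n_params
    return np.sum(resid ** 2) / dof

# a perfect fit gives 0; unit residuals give len(y)/dof
y = np.array([1.0, 2.0, 3.0, 4.0])
print(reduced_chi2(y, y, np.ones(4), 1))          # 0.0
print(reduced_chi2(y, y - 1.0, np.ones(4), 1))    # 4/3
```
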

We first present two key identities for averages over the Gaussian distribution of the filtered stimulus: the first one depends on the mean s_{0}, and the second on the variance σ^{2}. By using integration by parts, it can be seen that for any well-behaved function f,

⟨∂f/∂s_{μ}⟩ = (1/σ^{2}) ⟨(s_{μ} − s_{0}ε̅_{μ}) f⟩,   (Equation 12)

⟨∂^{2}f/∂s_{μ}∂s_{ν}⟩ = (1/σ^{4}) ⟨[(s_{μ} − s_{0}ε̅_{μ})(s_{ν} − s_{0}ε̅_{ν}) − σ^{2}δ_{μν}] f⟩,   (Equation 13)

where δ_{μν} is the Kronecker delta.
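
The first of these Gaussian integration-by-parts identities can be checked by Monte Carlo in one dimension. The sketch below is ours; tanh is an arbitrary smooth test function and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of the identity  <df/ds> = (1/sigma^2) <(s - s0) f>
# for s drawn from a Gaussian with mean s0 and variance sigma^2.
s0, sigma, n = 0.7, 1.3, 2_000_000
s = s0 + sigma * rng.standard_normal(n)

f = np.tanh(s)                 # arbitrary smooth test nonlinearity
df = 1.0 - np.tanh(s) ** 2     # its derivative

lhs = df.mean()
rhs = ((s - s0) * f).mean() / sigma**2
print(lhs, rhs)  # approximately equal
```
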

Then, we take derivatives of both sides of Equation 9 (or equivalently Equation 1) with respect to s_{0} and σ^{2}, and apply Equations 12 and 13. The first order in s_{0} is

∂r/∂s_{0} = Σ_{μ} ε̅_{μ}⟨∂f/∂s_{μ}⟩ = (1/σ^{2}) Σ_{μ} ε̅_{μ}⟨(s_{μ} − s_{0}ε̅_{μ}) f⟩,   (Equation 14)

where δ_{μν} in Equation 13 is the Kronecker delta symbol. The gain with respect to variance is

∂r/∂σ^{2} = (1/2) Σ_{μ} ⟨∂^{2}f/∂s_{μ}^{2}⟩ = (1/2σ^{4}) Σ_{μ} ⟨[(s_{μ} − s_{0}ε̅_{μ})^{2} − σ^{2}] f⟩.   (Equation 15)

Now, we show how the right hand sides of Equations 14–16 correspond to the STA and the STC. Given a Gaussian white noise signal ξ(t), we decompose it as ξ = ξ_{∥} + ξ_{⊥}, where ξ_{∥} belongs to the space spanned by our basis features {ε_{μ}}, and is therefore relevant to spiking; ξ_{⊥} is the orthogonal or irrelevant part. ξ_{∥} can be written as a linear combination of the features, ξ_{∥}(t) = Σ_{μ} x_{μ}ε_{μ}(t).

The STA is the average of ξ conditioned on a spike; ξ_{⊥} is irrelevant and does not make any contribution. Here we use Bayes theorem, P(stimulus|spike) = P(spike|stimulus)P(stimulus)/P(spike), to express this conditional average in terms of averages weighted by f, as on the right hand side of Equation 14; the STA is therefore directly related to the gain with respect to s_{0}.

A similar calculation for the second order moments relates the STC to the gain with respect to the variance.

Firing Rate of the LIF Model with Noisy Stimuli.

(0.09 MB DOC)