Fig 1.

Model of neural encoding with three sources of noise.

A: Model schematic for a single pathway. An input s is directly corrupted by some noise η, and then transformed nonlinearly by f(⋅). The nonlinear processing stage sets the mean of a scaled Poisson response with variance equal to κ times the mean response. This response is corrupted by additional additive downstream noise ζ to give a total response r. B: Transformed stimulus distribution at each stage of the model. C: Model schematic for two parallel pathways. Noise upstream and downstream of the nonlinearity may be correlated across neurons. For schematic purposes, we have drawn all signal processing steps as though they are contained within a single neuron, but each pathway could more generally represent signal processing spread out across multiple neurons.
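As a concrete illustration, the single pathway in panel A can be simulated in a few lines. This is a minimal sketch, assuming Gaussian upstream and downstream noise and illustrative parameter values of our own choosing (`sigma_up`, `kappa`, `sigma_down`, and the sigmoid `f` are assumptions, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pathway(s, sigma_up=0.5, kappa=1.0, sigma_down=0.5,
                     f=lambda x: 10.0 / (1.0 + np.exp(-x))):
    """One noisy response r per input s, following the stages in panel A.

    1. upstream additive noise eta:      z = s + eta
    2. nonlinearity sets the mean rate:  f(z)
    3. scaled Poisson spiking with mean f(z) and variance kappa * f(z),
       realized as kappa * Poisson(f(z) / kappa)
    4. downstream additive noise zeta:   r = spikes + zeta
    """
    z = s + rng.normal(0.0, sigma_up, size=np.shape(s))
    spikes = kappa * rng.poisson(f(z) / kappa)
    return spikes + rng.normal(0.0, sigma_down, size=np.shape(s))

s = rng.normal(0.0, 1.0, size=100_000)  # Gaussian stimulus ensemble
r = simulate_pathway(s)
```

Note that `kappa * Poisson(mean / kappa)` has mean `f(z)` and variance `kappa * f(z)`, matching the variance scaling stated in the caption.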


Fig 2.

Optimal nonlinearities when one noise source dominates, found by minimizing the mean squared error (MSE) of a linear estimator.

Each row shows three separate cases in which a single source of noise dominates. The dominant noise source is indicated by the highlighted source in the circuit schematics left of each row. The overall level of noise is quantified by the signal-to-noise ratio (SNR), which is fixed in each column. The SNR is largest in the leftmost column and smallest in the rightmost column; i.e., the strength of the noise increases toward the right. The shape of the optimal nonlinearity changes markedly depending on which noise source dominates the circuit, even when the overall signal-to-noise ratio of model responses is the same. Analytical results (dashed colored lines) and simulations with sigmoidal nonlinearities (solid lines) are shown. The stimulus distribution (dashed gray curve) is also shown for reference. Shaded regions encompass nonlinearities that perform within 1% of the minimum mean squared error of the optimal sigmoidal nonlinearity. The SNR is computed as the variance of the signal (the variance, across all inputs, of the average response to a given input) divided by the variance of the noise (the average variance in responses to a given input); see Methods.
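The SNR definition at the end of this caption translates directly into code: the variance of the trial-averaged response across inputs, divided by the trial-to-trial variance averaged over inputs. A minimal sketch, in which the linear toy response and all parameter values are our own stand-ins rather than the article's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr(inputs, respond, n_trials=2000):
    """SNR as defined in the caption: Var_s[ E[r|s] ] / E_s[ Var[r|s] ]."""
    means, variances = [], []
    for s in inputs:
        r = respond(s, n_trials)        # n_trials noisy responses to input s
        means.append(r.mean())
        variances.append(r.var())
    signal_var = np.var(means)          # variance, across inputs, of the average response
    noise_var = np.mean(variances)      # average variance in responses to a given input
    return signal_var / noise_var

# toy response: gain 2 plus unit-variance Gaussian noise
respond = lambda s, n: 2.0 * s + rng.normal(0.0, 1.0, size=n)
inputs = rng.normal(0.0, 1.0, size=200)
value = snr(inputs, respond)
print(value)  # roughly gain^2 * Var(s) / noise variance = 4
```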


Fig 3.

Responses produced by optimal (left column) and suboptimal (right column) nonlinearities.

Each row shows a different set of noise conditions in which a single source of noise is dominant (i.e., upstream noise dominates in panels A and B, Poisson noise in C and D, and downstream noise in E and F). Markers show 1,000 points randomly selected from the stimulus distribution (bottom subpanels) and the corresponding responses that are produced by the nonlinearity (solid line). Different nonlinearities produce very different response distributions (left subpanels). These particular suboptimal nonlinearities are chosen for illustrative purposes, to highlight qualitative features of the optimal nonlinearities.


Fig 4.

Optimal nonlinearities for cuts through parameter space.

A: Schematic showing regions of 3-dimensional parameter space: σup (purple), σdown (green), and κ (blue). Insets show the optimal nonlinearity corresponding to the like-colored point, along with the stimulus distribution (dashed gray curves). B: Slope of the optimal nonlinearity as each parameter is varied along the corresponding dashed axis in A. C: Offsets plotted in the same manner as B. Slopes and offsets are obtained from the exact solutions of the model.


Fig 5.

Optimal nonlinearities for a circuit with two parallel pathways, found by minimizing mean squared error (MSE) of a linear estimator.

A: Optimal nonlinearities for circuits in which pathways are of opposite polarity. Each row corresponds to a case in which one particular noise source is dominant. The dominant noise source is indicated by the highlighted source in the schematics at the center of the figure. As in Fig 2, solid curves are the optimal sigmoidal nonlinearities and colored dashed curves are optimal nonlinearities obtained analytically. The gray dashed curves represent the stimulus distribution. Shaded regions represent the range of sigmoidal nonlinearities that perform within 1% of the mean squared error of the optimal sigmoidal nonlinearities. B: Same as A but for pathways of the same polarity. We address the relative coding efficiency of the different polarities in the section “Globally optimal strategies.”


Fig 6.

Optimal ON-OFF or ON-ON nonlinearities for slices through parameter space.

A: Schematics showing regions of the effective 3-dimensional parameter space: total input correlation ρeff (purple), scaled Poisson strength κ (blue), and downstream noise standard deviation σdown (green). Insets show the optimal nonlinearity corresponding to the like-colored point. Top schematic: optimal ON-OFF nonlinearities; bottom: optimal ON-ON nonlinearities. B: Slopes of the optimal nonlinearities (rescaled as described in Methods) for ON-OFF (solid) and ON-ON (dashed) solutions as each parameter is varied along the corresponding dashed axis in A. C: Offsets, rescaled and plotted in the same manner as B. Insets in the ρeff plots are zoomed in to resolve the splitting of the identical ON-ON solution into the non-identical ON-ON solution. Rescaled slopes and offsets are obtained from numerical solutions of the analytic model; see Methods for details about the rescaling.


Fig 7.

Dependence of solution polarity on noise parameters.

Each panel shows the optimal solution type (indicated by color) as a function of σdown and ρdown for a particular value of ρeff. Dot size indicates the percent decrease in MSE between the globally optimal solution and the best solution of the other type (ON-ON where ON-OFF is optimal, and ON-OFF where ON-ON is optimal). Dots show results from numerical solution of integral equations for the exact nonlinearities; black lines show analytic predictions for the boundaries at which ON-ON and ON-OFF solutions are equally optimal. The parameter κ has relatively little influence on the qualitative features of this plot, so for simplicity we show only the case κ = 1.0. The crossover from identical to non-identical ON-ON solutions occurs for ρeff between approximately 0.9 and 1.0; strong downstream noise or Poisson strength reduces the range over which splitting occurs. The rescaled nonlinearities and globally optimal strategy depend only on the parameters ρeff, κ, σdown, and ρdown; the dependence on σs, σup, and ρup enters only through ρeff. See Methods for details.


Fig 8.

Complementary methods for determining the optimal nonlinearity.

A: We use simulations to find the sigmoidal nonlinearity that minimizes the mean squared error (MSE) of a linear readout of the stimulus. We sweep over multiple possible slope and offset combinations to find the optimal nonlinearity. MSE is given in units of the stimulus variance. B: Using the same method as A, we maximize the mutual information (MI) between stimulus and responses. MI is given in units of bits. C: Optimal sigmoidal nonlinearities (blue and purple curves) found from simulations versus the optimal nonlinearity determined by solving the model analytically (gray curve). The analytical solution is determined non-parametrically and was not constrained to be piecewise linear. All nonlinearities are qualitatively similar, regardless of the criterion for optimality or constraints on the functional form of the nonlinearity.
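The slope/offset sweep described in panel A is straightforward to reproduce in simplified form. The sketch below keeps only downstream Gaussian noise and uses our own sigmoid parameterization and grid ranges (all assumptions, not the article's settings); the readout is the least-squares-optimal linear estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

def readout_mse(slope, offset, s, sigma_down=0.5):
    """MSE of the best linear readout of s from noisy sigmoidal responses."""
    r = 1.0 / (1.0 + np.exp(-slope * (s - offset)))    # sigmoidal nonlinearity
    r = r + rng.normal(0.0, sigma_down, size=s.shape)  # downstream noise only
    A = np.column_stack([r, np.ones_like(r)])          # linear readout a*r + b
    coef, *_ = np.linalg.lstsq(A, s, rcond=None)       # least-squares fit
    return np.mean((A @ coef - s) ** 2)

s = rng.normal(0.0, 1.0, size=50_000)                  # Gaussian stimulus
slopes = np.linspace(0.5, 8.0, 16)
offsets = np.linspace(-2.0, 2.0, 17)
mse = np.array([[readout_mse(a, b, s) for b in offsets] for a in slopes])
i, j = np.unravel_index(mse.argmin(), mse.shape)
print(slopes[i], offsets[j], mse.min() / s.var())      # MSE in units of stimulus variance
```

In this toy setup, with a symmetric stimulus and purely downstream noise, the best offset lands near zero, i.e., near the center of the stimulus distribution.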


Table 1.

Parameter values used in numerical solutions of the coupled integral equations determining the optimal nonlinearities f1(z) and f2(z).


Table 2.

Parameters used to generate data shown in Fig 2 (single pathway optimal nonlinearities).


Table 3.

Parameters used to generate data shown in Fig 3 (comparison of optimal and suboptimal nonlinearities).


Table 4.

Parameters used to generate data shown in Fig 5 (parallel pathway optimal nonlinearities).
