Fig 1.
Overview of optimally efficient heterogeneous neuronal encoding populations in 1-D and 2-D.
Top row: Based on prior work [8, 9], we derive a closed-form solution for an optimal heterogeneous neuronal population encoding a non-uniform 1-D stimulus probability distribution. This population has a neuronal density proportional to the stimulus probability. The density can be mapped to a specific population (here, represented by a set of Gaussian tuning curves) by using the cumulative distribution function of the probability distribution p(s) to warp the stimulus coordinates (s) over which each neuron’s tuning curve is defined. The resulting heterogeneous 1-D neuronal population is compressed in regions of higher probability and expanded in regions of lower probability. Bottom row: Neurons throughout the nervous system encode more than one stimulus dimension. Here, we extend the previous 1-D framework to examine optimally efficient neuronal populations in arbitrary dimensions. We show in closed form that, given the same assumptions used for 1-D populations, higher-dimensional neuronal populations should also have density proportional to probability. However, in higher dimensions with statistical dependencies, density remapping cannot be achieved via the cumulative distribution function. Instead, we show that in both 1-D and higher dimensions, the optimal mapping is the gradient of a scalar potential function that reflects the displacement necessary to uniformly distribute neuronal density as a function of probability. This encoding potential is illustrated here as a surface, with its inverse gradient shown as a vector field below; the gradient is inverted to provide a more intuitive picture of how neuronal density is concentrated in regions of high probability. This gradient can be numerically optimized in 2-D for a given density function, which we exploit to derive optimal 2-D neuronal populations.
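To make the 1-D construction concrete, the sketch below (not the paper's code; the prior, neuron count, and tuning width are illustrative assumptions) warps equally spaced Gaussian tuning curves through the cumulative distribution function of p(s), so that neuronal density becomes proportional to probability.

```python
import numpy as np

def warp_population_1d(prior_pdf, n_neurons=20, sigma_uniform=0.05, grid=None):
    """Warp a uniform population of Gaussian tuning curves so that
    neuronal density is proportional to the stimulus prior p(s).

    Preferred stimuli that are equally spaced in the cumulative (CDF)
    coordinate become compressed where p(s) is high and expanded where
    p(s) is low.
    """
    if grid is None:
        grid = np.linspace(-1.0, 1.0, 2001)
    pdf = prior_pdf(grid)
    pdf /= np.trapz(pdf, grid)                      # normalize the prior
    cdf = np.cumsum(pdf) * (grid[1] - grid[0])      # approximate CDF

    # Equally spaced preferred stimuli in the uniform (CDF) coordinate
    uniform_centers = np.linspace(0.05, 0.95, n_neurons)
    # Map them back to stimulus coordinates via the (interpolated) inverse CDF
    warped_centers = np.interp(uniform_centers, cdf, grid)

    # Tuning curves defined as Gaussians in the CDF-warped coordinate,
    # evaluated over the original stimulus grid
    responses = np.exp(-0.5 * ((cdf[None, :] - uniform_centers[:, None])
                               / sigma_uniform) ** 2)
    return warped_centers, responses, grid

# Example: a Gaussian prior over s concentrates tuning curves near s = 0
centers, responses, s = warp_population_1d(
    lambda s: np.exp(-0.5 * (s / 0.25) ** 2))
```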
Fig 2.
Parameterizing heterogeneous neuronal populations.
A) A uniform population of neurons approximately tiles the stimulus space (s) with identical, equally spaced Gaussian tuning curves. B) The Fisher information of this population is roughly uniform (blue line), matching the approximation in Eq (16) (red line). C) A displacement field that is smooth and slowly varying relative to the tuning curves. These displacement values apply to the stimulus space; the arrows below illustrate the direction and magnitude of the shifts in the resulting tuning curves defined over s (which correspond to the inverse of the displacement field). D) After the displacement field is applied, the neuronal population has heterogeneous tuning curves: displacements that stretch the stimulus space result in denser, narrower tuning curves, and displacements that compress the stimulus space result in sparser, wider tuning curves. E) A gain function that is smooth relative to the tuning curves additionally allows neurons to have different response magnitudes. F) Applying both the displacement field and the gain function yields a transformed heterogeneous population with variable tuning curves. G) The Fisher information of the heterogeneous population is no longer uniform, as illustrated by the measured (blue) and approximated (red) lines. S1 Fig illustrates the consequences when the displacement field and gain function are not smooth and slowly varying with respect to the tuning curves.
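As a minimal sketch of panels C-F (assuming Poisson-like noise for the Fisher-information estimate, and with illustrative choices of displacement field and gain function), the code below warps a uniform population with a smooth displacement field, applies a smooth gain, and computes the resulting Fisher information numerically.

```python
import numpy as np

def heterogeneous_population(n_neurons=30, sigma=0.05,
                             displacement=lambda s: 0.15 * np.sin(np.pi * s),
                             gain=lambda s: 1.0 + 0.5 * np.cos(np.pi * s)):
    """Build a heterogeneous population from a uniform one by warping the
    stimulus axis with a smooth displacement field and scaling responses
    with a smooth gain function (both assumed to vary slowly relative to
    the tuning-curve width)."""
    s = np.linspace(-1.0, 1.0, 2001)
    centers = np.linspace(-0.9, 0.9, n_neurons)

    # Warped stimulus coordinate: each stimulus value is shifted by the
    # displacement field before being compared to the (uniform) centers
    s_warped = s + displacement(s)

    # Gaussian tuning curves in warped coordinates, scaled by the gain
    responses = gain(s)[None, :] * np.exp(
        -0.5 * ((s_warped[None, :] - centers[:, None]) / sigma) ** 2)

    # Fisher information assuming Poisson-like noise: sum over neurons of
    # (df/ds)^2 / f; the small constant avoids division by zero in the tails
    df = np.gradient(responses, s, axis=1)
    fisher = np.sum(df ** 2 / (responses + 1e-9), axis=0)
    return s, responses, fisher
```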
Fig 3.
Example 2-D stimulus probability distributions and the resulting optimal encoding populations.
A) Each row represents a different example probability distribution over two stimulus dimensions (s1 and s2). For each panel, the probabilities are defined over a lattice ranging from -1 to 1 (cropped to the central 65% to remove boundary artifacts). Top: uniform over s2 and Gaussian distributed over s1 (μ = 0, σ = 0.25). Upper middle: isotropic bivariate Gaussian (μ = 0, σ = 0.25 in both dimensions). Lower middle: bivariate generalized Gaussian (μ = 0, σs1 = 0.75, σs2 = 0.25, power = 1.1). Bottom: Gaussian distributed over s1, with a σ that varies non-linearly with s2 (this distribution is non-separable). B) For each probability distribution, we show a down-sampled and scaled visualization of the inverse density mapping function. The direction and length of the arrows illustrate how density is mapped from sensory space into stimulus space. C) For each probability distribution, we show an example neuronal population that has been warped to optimally encode the stimulus. For these visualizations, we chose a population of neurons with isotropic bivariate Gaussian tuning curves (σ = 0.05) tiling the space on a hexagonal lattice (spacing ≈ 0.2). Although these choices for the population are arbitrary, varying them does not change the qualitative properties of the warped populations. Circular domain boundaries were used for the bottom three examples; to account for the uniform probability in panel A (top row), the population illustrated in panel C (top row) was defined with a square rather than a circular domain boundary. D) To the right of each population, a pair of 1-D samples is illustrated. For each sample, s2 is held constant and the tuning curves are visualized over s1. Neurons with a maximum normalized response of less than 0.2 within the sample are not visualized. The distribution of Fisher information in each 2-D population is shown in S2 Fig.
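A sketch of how the panel-C populations can be constructed is shown below. The hexagonal-lattice helper and the toy mapping are illustrative assumptions: in the paper the 2-D mapping is the gradient of a numerically optimized encoding potential, whereas the stand-in mapping here applies a per-coordinate Gaussian inverse CDF, which is only valid for separable priors.

```python
import numpy as np
from scipy.stats import norm

def hex_lattice(spacing=0.2, radius=1.0):
    """Centers of an approximately hexagonal lattice within a circular domain."""
    pts = []
    row_height = spacing * np.sqrt(3) / 2
    n_rows = int(np.ceil(2 * radius / row_height))
    for i in range(-n_rows, n_rows + 1):
        offset = (spacing / 2) if (i % 2) else 0.0
        xs = np.arange(-radius - spacing, radius + spacing, spacing) + offset
        for x in xs:
            y = i * row_height
            if x ** 2 + y ** 2 <= radius ** 2:
                pts.append((x, y))
    return np.array(pts)

def warp_centers(centers, mapping):
    """Apply a 2-D mapping (e.g. the gradient of an encoding potential)
    to the lattice of tuning-curve centers."""
    return np.array([mapping(c) for c in centers])

def toy_mapping(c, sigma=0.25):
    """Per-coordinate inverse CDF of a Gaussian prior (separable case only);
    clipping avoids infinite values at the domain edge."""
    p = np.clip((np.asarray(c) + 1.0) / 2.0, 1e-3, 1.0 - 1e-3)
    return sigma * norm.ppf(p)

# Example: warp a uniform hexagonal lattice so density concentrates near the
# origin, analogous to the isotropic bivariate Gaussian example
centers = hex_lattice(spacing=0.2, radius=1.0)
warped = warp_centers(centers, toy_mapping)
```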
Fig 4.
Analysis of how lower-dimensional measurements of tuning curve properties (1-D gain and tuning width) relate to the higher-dimensional stimulus probability.
Four example neuronal populations are shown, corresponding to the probability distributions and optimized mappings in Fig 3 (re-plotted in the top row). We simulated a set of 1-D experiments by selecting a single value of either s1 or s2 and measuring the response gain (maximum response) and tuning sharpness (inverse of the full width at half maximum) of a set of neurons within this ‘slice’ (σ pre-warping was 0.05). This procedure simulates the neuronal gain and tuning bandwidth that would be measured in an experiment in which one stimulus feature was held constant while the other was varied. (A-D) For each of the illustrated populations, these panels plot 1-D tuning sharpness as a function of stimulus probability for a sample of neurons (400-700 neurons). Samples drawn by holding s1 constant are shown in red; samples drawn by holding s2 constant are shown in black. (E-H) These panels plot 1-D response gain as a function of stimulus probability, as in the panels above. (I,J) We repeated these simulations 500 times for randomly generated 2-D stimulus probability distributions and calculated the correlation between gain/tuning sharpness and probability. Each probability distribution was a zero-centered bivariate Gaussian with a random orientation and major/minor σ drawn uniformly from 0.1-0.4. For each simulation, the tuning curves were modeled as isotropic Gaussians with σ drawn uniformly from 0.03-0.07. A random 1-D slice was selected, and 25 neurons were sampled. P-values indicate the results of a Wilcoxon signed-rank test of whether the median correlation differs significantly from zero.
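The slice-based measurements in this analysis can be sketched as follows. This is illustrative code, not the paper's implementation; the half-maximum criterion, Pearson correlation, and the use of scipy's Wilcoxon signed-rank test are assumptions consistent with the description above.

```python
import numpy as np
from scipy.stats import wilcoxon

def slice_gain_and_sharpness(response, s_axis):
    """Given a neuron's response profile sampled along a 1-D slice (one
    stimulus dimension held constant), return the gain (maximum response)
    and tuning sharpness (inverse of the full width at half maximum,
    estimated from the samples at or above half the peak)."""
    gain = response.max()
    above = s_axis[response >= 0.5 * gain]
    fwhm = above.max() - above.min()
    sharpness = 1.0 / fwhm if fwhm > 0 else np.nan
    return gain, sharpness

def correlation_with_probability(values, probabilities):
    """Correlation between a tuning property (gain or sharpness) and the
    stimulus probability at each sampled neuron's preferred stimulus."""
    return np.corrcoef(values, probabilities)[0, 1]

def test_median_correlation(correlations):
    """Wilcoxon signed-rank test of whether the median correlation across
    simulated populations differs from zero (as in panels I and J)."""
    _, p_value = wilcoxon(correlations)
    return p_value
```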