
Fig 1.

Foveal over-representation is amplified from cones to mRGCs to cortex.

(A) Cone density, mRGC receptive field density, and V1 cortical magnification factor (CMF) as a function of eccentricity. Left panel: Cone data from Curcio et al. [9]. Middle panel: Midget RGC RF density data from Watson [64]. Both cone and mRGC data are averaged across the cardinal retinal meridians of the left eye, computed with the publicly available toolbox ISETBIO [65–67]. Right panel: V1 CMF is predicted by the areal equation published in Horton and Hoyt [68]. (B) Transformation ratios from cones to mRGCs and from mRGCs to V1. The cone:mRGC ratio is unitless, as both cone density and mRGC density are quantified in cells/deg². The increasing ratio indicates greater convergence of cone signals onto mRGCs with eccentricity. For the mRGC:V1 CMF ratio, units are cells/mm². The increase in this ratio over the first 20° indicates that the foveal over-representation is amplified in V1 compared to mRGCs.
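The two transformation ratios in panel B can be sketched in a few lines of Python. The density values below are illustrative placeholders, not the Curcio et al. [9] or Watson [64] data; the areal CMF follows Horton and Hoyt's linear formula M = 17.3/(E + 0.75) mm/deg, squared to give mm²/deg².

```python
import numpy as np

def areal_cmf_horton_hoyt(ecc_deg):
    """Areal V1 CMF (mm^2/deg^2) from Horton & Hoyt's linear
    formula M = 17.3 / (E + 0.75) mm/deg."""
    return (17.3 / (ecc_deg + 0.75)) ** 2

ecc = np.array([1.0, 5.0, 10.0, 20.0])                 # eccentricity (deg)
cone_density = np.array([4.0e4, 9.0e3, 5.0e3, 3.0e3])  # cells/deg^2, illustrative
mrgc_density = np.array([3.0e4, 3.0e3, 1.0e3, 3.0e2])  # cells/deg^2, illustrative

# Cone:mRGC ratio is unitless (both densities are in cells/deg^2);
# values > 1 indicate convergence of cone signals onto mRGCs.
cone_to_mrgc = cone_density / mrgc_density

# mRGC:V1 CMF ratio: (cells/deg^2) / (mm^2/deg^2) = cells/mm^2.
mrgc_to_v1 = mrgc_density / areal_cmf_horton_hoyt(ecc)
```

With these placeholder densities, both ratios increase with eccentricity, mirroring the trends described in panel B.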


Fig 2.

Nonuniformities in polar angle representations are amplified from cones to mRGCs to cortex.

(A) Cone density, mRGC density, and V1 CMF for the cardinal meridians as a function of eccentricity. Left panel: Cone density from Curcio et al. [9]. Middle panel: mRGC densities from Watson [64]. All data are in visual field coordinates. The black line represents the horizontal visual field meridian (average of nasal and temporal retina), the green line the lower visual field meridian (superior retina), and the blue line the upper visual field meridian (inferior retina). Cone and mRGC data are computed with the open-source software ISETBIO [65–67]. Right panel: V1 CMF computed from the HCP 7T retinotopy dataset analyzed by Benson et al. [78] (black, green, and blue dots and lines) and areal CMF predicted by the formula in Horton and Hoyt [68] (dotted black line, replotted from Fig 1). All data are plotted in visual field coordinates, where black, green, and blue data points represent the horizontal, lower, and upper visual field meridians, respectively. Data points represent the median V1 CMF of ±20° wedge ROIs along the meridians for 1–6° eccentricity in 1° bins. Error bars represent 68% confidence intervals across 163 subjects, computed with 1,000 bootstraps. Black, green, and blue lines are 1/eccentricity power functions fitted to the corresponding data points. The pink dashed line is the average of the fits to the horizontal, upper, and lower visual field meridians from the HCP 7T retinotopy dataset [78] and agrees well with Horton and Hoyt's formula [68]. (B) Transformation ratios from cones to mRGCs and from mRGCs to V1 CMF. Ratios are shown separately for the horizontal (black), lower (green), and upper (blue) visual field meridians. The mRGC:V1 CMF panel has a truncated x-axis due to the limited field of view of the cortical measurements. These polar angle asymmetries are present in two different computational models of mRGC density (see S1 Fig, second row).
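The 1/eccentricity power-function fits in panel A can be sketched as follows. The functional form a·E^(−b) is an assumption on our part, and the CMF values are generated from Horton and Hoyt's formula rather than taken from the HCP data.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_ecc_power(ecc, a, b):
    """Assumed 1/eccentricity power function: a * ecc**(-b)."""
    return a * ecc ** (-b)

# Median areal CMF in 1-degree bins from 1-6 deg; values are generated
# from Horton & Hoyt's formula, not taken from the HCP 7T dataset.
ecc = np.arange(1.0, 7.0)
cmf = (17.3 / (ecc + 0.75)) ** 2

params, _ = curve_fit(inv_ecc_power, ecc, cmf, p0=(100.0, 1.0))
a_hat, b_hat = params
```

The fitted exponent lands near 1.4 for this synthetic data, i.e., areal CMF falls off a bit faster than 1/E, as expected for a squared linear magnification factor.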


Fig 3.

Overview of computational observer model with additional mRGC layer.

A 1-ms frame of a 100% contrast Gabor stimulus is shown at each computational step for illustration purposes. (1) Scene radiance. Photons are emitted by the visual display, resulting in a time-varying scene spectral radiance. The Gabor stimulus shows radiance summed across 400–700 nm wavelengths. (2) Retinal irradiance. Emitted photons pass through simulated human cornea, pupil, and optics, indicated by the schematic point spread function (PSF) in the top right box, resulting in time-varying retinal irradiance. The Gabor stimulus shows irradiance with wavelengths converted to RGB values for illustration purposes. (3) Cone absorptions. Retinal irradiance is isomerized by a rectangular cone mosaic, resulting in time-varying photon absorption rates for each L-cone, with Poisson noise. (4) Cone photocurrent. Absorptions are converted to photocurrent via temporal integration and gain control, followed by additive Gaussian white noise. This results in a time-varying photocurrent for each cone. (5) Midget RGC responses. Time-varying cone photocurrents are convolved with a 2D Difference of Gaussians (DoG) spatial filter, followed by additive Gaussian white noise and subsampling. (6) Behavioral inference. A linear support vector machine (SVM) classifier is trained on the RGC outputs to classify stimulus orientation at each contrast level. With 10-fold cross-validation, the left-out data are tested, and accuracy is fitted with a Weibull function to extract the contrast threshold at ~80% accuracy.
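The noise models named in steps (3) and (4) can be sketched as follows. The mosaic size, absorption rate, and noise level are illustrative placeholders, and the averaging step stands in for the full outer-segment photocurrent model; the DoG filtering of step (5) is illustrated in Fig 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# (3) Cone absorptions: mean isomerization count per cone per 1-ms frame,
# corrupted by Poisson noise. The rate and mosaic size are illustrative.
n_frames, mosaic = 28, (64, 64)
mean_absorptions = 5.0
absorptions = rng.poisson(mean_absorptions, size=(n_frames, *mosaic))

# (4) Cone photocurrent sketch: temporal integration across frames (a
# stand-in for the full outer-segment model) plus Gaussian white noise.
photocurrent = absorptions.mean(axis=0) + rng.normal(0.0, 0.5, size=mosaic)
```

Because Poisson variance equals the mean, the signal-to-noise ratio of the absorption stage grows with light level, which is one reason the classifier's contrast thresholds depend on cone density.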


Fig 4.

Difference of Gaussians filters used to model mRGC layer.

Two mRGCs are illustrated from a 2×2° field-of-view mRGC array centered at 4.5° and 40° eccentricity. (A) 1D representation of two example mRGC layers in visual space. The mRGC responses are computed by convolving the cone image with the mRGC DoG RF, followed by adding noise and subsampling the cone array to the corresponding mRGC density. Widths of the Gaussian center (σc) and surround (σs) are converted to units of degrees. As the mRGC filters in our model are not rectified, they respond to both increments and decrements. Physiologically, this would require two cells (an ON and an OFF cell), so we count each modeled mRGC location as two cells. Both panels show a mRGC:cone ratio of 2:1. (B) 1D representation of the Difference of Gaussians in Fourier space. The Fourier representation illustrates the band-pass and unbalanced nature of the DoG (i.e., non-zero amplitude at DC). Depending on the width/subsample rate, DoGs attenuate different spatial frequencies. However, at our peak stimulus frequency (4 cycles per degree, indicated with a red dashed line), the two DoG filters vary by a relatively small amount, preserving most stimulus information. Fourier amplitudes are normalized. Note that the y-axis is truncated for illustration purposes. (C) 2D representation of the two example mRGC layers shown in panel A. Midget RGC DoG filters are zoomed into a 1×1° field-of-view cone array (black raster) centered at 4.5° (red center with purple surround) and 40° eccentricity (red center with yellow surround), corresponding to the 1D examples in panel A. Centers and surrounds are plotted at 2 standard deviations. For illustration purposes, only one mRGC is shown per array; the mRGC array in our computational observer model tiles the entire cone array.
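The "unbalanced" property described in panel B — a non-zero Fourier amplitude at DC — can be checked with a minimal 1D DoG. The widths and surround gain below are illustrative, not the paper's fitted values.

```python
import numpy as np

def dog_filter_1d(x, sigma_c, sigma_s, surround_gain=0.9):
    """Unbalanced 1D Difference of Gaussians: unit-area center minus a
    scaled unit-area surround. surround_gain < 1 leaves a non-zero DC."""
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    return center - surround_gain * surround

# Spatial support in degrees; sigma values are illustrative.
x = np.linspace(-1.0, 1.0, 2001)
dx = x[1] - x[0]
dog = dog_filter_1d(x, sigma_c=0.05, sigma_s=0.2)

# DC amplitude = area under the filter; ~0.1 here because the surround
# removes only 90% of the center's area.
dc = dog.sum() * dx

# Amplitude spectrum: band-pass, peaking at an intermediate frequency.
freqs = np.fft.rfftfreq(x.size, d=dx)       # cycles per degree
amplitude = np.abs(np.fft.rfft(dog)) * dx
```

The spectrum's maximum falls at a non-zero spatial frequency (band-pass), while the residual DC amplitude reflects the imbalance between center and surround areas.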


Fig 5.

Model performance for different computational stages.

The left column shows classifier accuracy as a function of stimulus contrast. Data are from simulated experiments with 1,000 trials per stimulus class, using a model with an L-cone-only mosaic varying in cone density. Data are fitted with a Weibull function. Contrast thresholds are plotted separately as a function of cone density in the right column. (A) Cone absorptions. A linear SVM classifier is applied to cone absorptions averaged across stimulus time points. (B) Cone photocurrent. A linear SVM classifier is applied to cone outer segment photocurrent responses, averaged across time weighted by a temporally delayed stimulus time course. The transformation of cone absorptions into photocurrent causes a ~10× increase in contrast thresholds and interacts with cone density (i.e., the Weibull functions are more spread out than those for cone absorptions). (C) RGC responses. A linear SVM classifier is applied to spatially filtered photocurrent with added white noise. This transformation causes an additional increase in contrast thresholds at all cone densities. Data show results for a fixed subsampling ratio of 2 mRGCs per cone.
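The Weibull fit and threshold extraction can be sketched as follows. The parameterization below is one common form for a two-alternative task (guess rate 0.5, no lapse term); the paper's exact parameterization may differ, and the accuracy values are synthetic rather than model output.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(contrast, alpha, beta, guess=0.5):
    """One common Weibull form for a two-alternative task; the paper's
    exact parameterization may differ."""
    return guess + (1.0 - guess) * (1.0 - np.exp(-(contrast / alpha) ** beta))

# Accuracy vs. contrast (synthetic, noiseless values for illustration).
contrasts = np.array([0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1])
accuracy = weibull_2afc(contrasts, alpha=0.01, beta=2.0)

params, _ = curve_fit(weibull_2afc, contrasts, accuracy, p0=(0.01, 2.0))
alpha_hat, beta_hat = params

# Contrast threshold: invert the fitted curve at ~80% accuracy.
target = 0.8
threshold = alpha_hat * (-np.log((1.0 - target) / (1.0 - 0.5))) ** (1.0 / beta_hat)
```

Under this convention, a rightward shift of the psychometric function (as from absorptions to photocurrent) directly raises the extracted threshold.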


Fig 6.

The effect of spatial filtering properties by mRGCs on full model performance.

(A) Contrast thresholds as a function of cone density and mRGC:cone ratio. Data points are contrast thresholds for cone absorptions, cone photocurrent, and each mRGC:cone ratio separately (for psychometric functions, see S3 Fig). Individual mRGC fits are slices of the 3D mesh fit shown in panel B. (B) Mirrored views of the combined effect of cone density and mRGC:cone ratio on contrast sensitivity. The mesh is fitted with a locally weighted regression to the 3D data: log cone density (x-axis) by log mRGC:cone ratio (y-axis) by log contrast threshold (z-axis). Individual dots represent the predicted model performance at 4.5° eccentricity (matched to the stimulus eccentricity in [15]) for meridian locations on the nasal retina/horizontal visual field (red star), superior retina/lower visual field (blue star), temporal retina/horizontal visual field (green star), and inferior retina/upper visual field (black star). Contour lines show the possible cone densities and mRGC:cone ratios that would predict the same horizontal-vertical and upper/lower vertical-meridian asymmetries as observed in psychophysical data at 4.5° eccentricity. To do so, we scaled the difference in contrast threshold between the lower (blue) and upper (black) vertical visual meridian relative to the horizontal meridian to match the difference in behavior. Goodness of fit of the 3D mesh is R² = 0.96.
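The locally weighted regression used for the mesh can be illustrated with a minimal 1D LOESS sketch (tricube weights, local linear fit). The paper fits a 2D surface and its exact smoother may differ; the data below are an illustrative straight line in log-log coordinates, not model output.

```python
import numpy as np

def loess_point(x0, x, y, frac=0.6):
    """Locally weighted linear regression at x0: fit a line to the
    nearest frac of the data, weighted by a tricube kernel."""
    n = max(2, int(np.ceil(frac * x.size)))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:n]
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3     # tricube weights
    X = np.column_stack([np.ones(n), x[idx]])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
    return beta[0] + beta[1] * x0

# Illustrative: log contrast threshold falling linearly with log cone density.
x = np.log10(np.array([1e3, 3e3, 1e4, 3e4, 1e5]))
y = -0.5 * x + 1.0
smoothed = np.array([loess_point(xi, x, y) for xi in x])
```

On noiseless linear data the local fits recover the input exactly; on real, curved data the local fits trace the surface without committing to a single global functional form, which is the appeal of this smoother for panel B.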


Fig 7.

Comparison of model performance to human performance.

(A) Contrast sensitivity predicted by the computational observer model up to isomerization in cones (blue), up to cone outer segment phototransduction (turquoise), up to spatial filtering and subsampling in mRGCs (red), and behavior (purple) observed by Himmelberg et al. (2020) [15] using matching stimulus parameters. HM: horizontal meridian; UVM: upper visual meridian; LVM: lower visual meridian. Model predictions show contrast sensitivity (the reciprocal of contrast threshold) for stimuli at 4.5° eccentricity with a spatial frequency of 4 cycles per degree. HM is the average of the nasal and temporal meridians. Model error bars indicate simulation results allowing for uncertainty in the cone or mRGC density along each meridian (see Methods for details). Behavioral plots show group-average results (n = 9) from Himmelberg et al. [15], and error bars represent the standard error of the mean across observers. (B) Polar angle asymmetries for cone absorptions, photocurrent, mRGCs, and behavior. HVA: horizontal-vertical asymmetry; VMA: vertical-meridian asymmetry. Blue, turquoise, red, and purple bars match panel A and correspond to model predictions up to cone absorptions, cone photocurrent, and mRGCs, and to human behavior. Error bars represent the HVA and VMA when using the upper/lower bound of the predicted model error from panel A.
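The quantities in this figure can be sketched numerically. Contrast sensitivity is the reciprocal of contrast threshold, as stated in panel A; the percent-asymmetry definition below (difference divided by the mean) is one common convention and the paper's exact formula may differ. The threshold values are illustrative placeholders, not results from the paper.

```python
import numpy as np

def asymmetry_pct(a, b):
    """Percent asymmetry between two sensitivities: their difference
    divided by their mean. An assumed, common convention."""
    return 100.0 * (a - b) / np.mean([a, b])

# Illustrative contrast thresholds per meridian (not values from the paper).
thresh = {"HM": 0.010, "LVM": 0.012, "UVM": 0.014}

# Contrast sensitivity is the reciprocal of contrast threshold.
sens = {k: 1.0 / v for k, v in thresh.items()}

vm_mean = (sens["LVM"] + sens["UVM"]) / 2.0
hva = asymmetry_pct(sens["HM"], vm_mean)        # horizontal-vertical asymmetry
vma = asymmetry_pct(sens["LVM"], sens["UVM"])   # vertical-meridian asymmetry
```

With these placeholders, both asymmetries come out positive: the horizontal meridian is more sensitive than the vertical, and the lower vertical meridian more sensitive than the upper, the same sign pattern as the behavioral data.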
