
Asymmetries around the visual field: From retina to cortex to behavior

  • Eline R. Kupers,

    Roles Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Current address: Department of Psychology, Stanford University, Stanford, California, United States of America

    Affiliations Department of Psychology, New York University, New York, New York, United States of America, Center for Neural Sciences, New York University, New York, New York, United States of America

  • Noah C. Benson,

    Roles Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Writing – review & editing

    Current address: eScience Institute, University of Washington, Seattle, Washington, United States of America

    Affiliations Department of Psychology, New York University, New York, New York, United States of America, Center for Neural Sciences, New York University, New York, New York, United States of America

  • Marisa Carrasco,

    Roles Conceptualization, Funding acquisition, Investigation, Supervision, Writing – review & editing

    Affiliations Department of Psychology, New York University, New York, New York, United States of America, Center for Neural Sciences, New York University, New York, New York, United States of America

  • Jonathan Winawer

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Supervision, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Psychology, New York University, New York, New York, United States of America, Center for Neural Sciences, New York University, New York, New York, United States of America


Abstract

Visual performance varies around the visual field. It is best near the fovea compared to the periphery, and at iso-eccentric locations it is best on the horizontal, intermediate on the lower, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to the decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small fraction of the behavioral asymmetries. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effect of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision maker’s performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations, and greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed in human performance.
We conclude that cone isomerizations, phototransduction, and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail additional contribution from cortical representations.

Author summary

The neural circuitry of the visual system is organized into multiple maps of the visual field. Each map is orderly, as nearby cells represent nearby points in the visual field. Each map is also non-uniform, in that some portions of the visual field are sampled more densely than others. These non-uniformities emerge from the first stage of processing, the photoreceptor array in the retina. The cone photoreceptors vary in density with eccentricity—they are denser in the central than the peripheral retina—and with polar angle—they are denser on the horizontal than vertical meridian. Our analyses show that both the eccentricity gradient and polar angle asymmetries become more pronounced in each of two subsequent processing stages, the midget retinal ganglion cells and primary visual cortex. We then implement a computational observer model incorporating several components of the early visual system. The model shows that the information present in the cone array can explain a small portion of the polar angle asymmetries in human visual performance, and the information present in the midget retinal ganglion cells can explain more, but still less than half, of the performance asymmetries. A full account of performance asymmetries will entail additional contributions from cortex.


Introduction

Visual performance is not uniform across the visual field. The most well-known effect is a decrease in visual acuity as a function of eccentricity: we see more poorly in the periphery compared to the center of gaze [1–4]. This observed difference in visual performance has been attributed to several physiological factors, starting as early as the distribution of photoreceptors [5,6]. In the human fovea, the cones are tightly packed such that visual input is encoded at high spatial resolution. In peripheral retinal locations, cones are larger and interspersed among rods, resulting in a drastically lower density [7–10], and hence a decrease in spatial resolution.

Visual performance also differs as a function of polar angle. At matched eccentricity, performance is better along the horizontal than the vertical visual meridian (horizontal-vertical anisotropy or “HVA”, e.g., [11–16]) and better along the lower than the upper vertical visual meridian (vertical-meridian asymmetry or “VMA”, e.g., [12–18]). These polar angle asymmetries are observed in many different visual tasks, such as those mediated by contrast sensitivity [12–15,19–31], spatial resolution [11,16,17,19,20,32–34], contrast appearance [35], visual search [36–44], crowding [44–47], and tasks that are thought to recruit higher visual areas such as visual working memory [34]. Covert spatial attention improves performance similarly at all iso-eccentric stimulus locations, thus it does not eliminate the polar angle asymmetries [12,13,48,49].

These polar angle effects can be large. For instance, for a Gabor patch at 4.5° eccentricity with a spatial frequency of 4 cycles per degree, contrast thresholds are close to double on the upper vertical meridian compared to the horizontal meridian [12,13,15]. This effect size is similar to that of doubling the stimulus eccentricity from 4.5° to 9° along the horizontal axis [15,20]. Additionally, these performance differences are retinotopic, shifting in line with the retinal location of the stimulus rather than its location in space [14].

The visual system has polar angle asymmetries from its earliest stages, including in the optics and cone density. In a computational observer model that tracked information from the photons in the scene through the optics and cone isomerizations, variations in optical quality and cone density accounted for less than 10% of the observed polar angle asymmetries in a contrast threshold task [50]. This leads to the question, what additional factors later in the visual processing stream give rise to visual performance differences with polar angle?

One possibility is that even without additional asymmetries in cell density, later processing could exacerbate the earlier asymmetries. For example, the larger cone apertures observed at lower cone densities result in greater downregulation of the cone photocurrent [51], hence this decrease in signal-to-noise ratio might exacerbate polar angle asymmetries.

A second—not mutually exclusive—possibility is that there are additional polar angle asymmetries in the distribution of other downstream cell types. In the human retina, the best described retinal ganglion cells (RGCs) are the midget and parasol cells. Both of these cell types show a decrease in density as a function of eccentricity and vary in density as a function of polar angle in humans [52–58] and monkeys [59–62]. Because midget RGCs are the most numerous ganglion cells in primates (i.e., 80% of ~1 million RGCs, compared to 10% parasols and 10% other types) and have small cell bodies and small dendritic trees that increase in size with eccentricity [60,61,63], they are often hypothesized to set an anatomical limit on high-resolution spatial vision, such as acuity and contrast sensitivity at mid to high spatial frequencies [55,61].

Interestingly, in the range of eccentricities used for many psychophysical tasks (0–10°), cone density shows an HVA (greater density on the horizontal than the vertical meridian), but not a VMA, inconsistent with behavior (there is a slightly greater density on the upper than the lower vertical visual meridian, opposite of what one would predict to explain behavior) [8–10]. Midget RGC density, in contrast, shows both an HVA and a VMA, making its distribution pattern more similar to behavioral patterns [52–54,57,64].

Here, we investigate how asymmetries in the visual system vary across processing stages. First, we quantify asymmetries in spatial sampling around the visual field in three early visual processing stages: cones, mRGCs, and V1 cortex. We do so because it is important to first identify if there are any differences in spatial encoding across these processing stages, and if so, how these differences relate to differences in behavior. Then we extend our previously published computational observer model, which included optics and cone sampling, by adding a model of conversion from photon absorptions to photocurrent, and then mRGC-like spatial filtering. We compare this observer model to our previous model (no RGC layer) and to human performance on a two alternative forced choice (2-AFC) orientation discrimination task. By comparing the predicted performance to human observers, we can quantify the contribution of mRGCs to visual performance differences around the visual field.


Results

We quantify the asymmetries in cone density, midget retinal ganglion cell (mRGC) density, and V1 cortical magnification factor (CMF)—both as a function of eccentricity and along the four cardinal meridians. In the next two sections, we first show that both eccentricity gradients and polar angle asymmetries are amplified from cones to mRGCs and from mRGCs to early visual cortex. Then we implement the observed variations in mRGC density in a computational observer model to test whether biologically plausible differences in mRGC sampling across the cardinal meridians can quantitatively explain psychophysical performance differences as a function of polar angle.

Fovea-to-periphery gradient is amplified from retina to mRGCs to early visual cortex

A hallmark of the human retina is the sharp drop in cone density from fovea to periphery [8–10]. Within the central one degree, cone density decreases dramatically (on average by 3.5-fold). Beyond the fovea, cone density continues to decrease by 10-fold between 1° and 20° eccentricity (Fig 1A, left panel). This decrease in cone density is due to an increase in cone spacing caused by the presence of rods and by the increase in cone diameter [9].

Fig 1. Foveal over-representation is amplified from cones to mRGCs to cortex.

(A) Cone density, mRGC receptive field density, and V1 cortical magnification factor as a function of eccentricity. Left panel: Cone data from Curcio et al. [9]. Middle panel: midget RGC RF density data from Watson [64]. Both cone and mRGC data are the average across the cardinal retinal meridians of the left eye, computed using the publicly available toolbox ISETBIO [65–67]. Right panel: V1 CMF as predicted by the areal equation published in Horton and Hoyt [68]. (B) Transformation ratios from cones to mRGCs and from mRGCs to V1. The cone:mRGC ratio is unitless, as both cone density and mRGC density are quantified in cells/deg2. The increasing ratio indicates greater convergence of cone signals onto the mRGCs. For the mRGC:V1 CMF ratio, units are cells/mm2. The ratio increase in the first 20° indicates an amplification of the foveal over-representation in V1 compared to mRGCs.
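The eccentricity dependence of V1 CMF can be sketched numerically. A minimal example, assuming the commonly cited linear form of Horton and Hoyt's magnification function, M(E) = 17.3 / (E + 0.75) mm/deg, whose square gives the areal CMF; the exact constants used in the figure come from ref [68]:

```python
import numpy as np

def linear_cmf(ecc_deg):
    """Linear V1 cortical magnification (mm/deg), Horton & Hoyt-style form."""
    return 17.3 / (ecc_deg + 0.75)

def areal_cmf(ecc_deg):
    """Areal V1 cortical magnification (mm^2/deg^2): square of the linear CMF."""
    return linear_cmf(ecc_deg) ** 2

# Steep fovea-to-periphery decline over the range plotted in Fig 1A
ecc = np.array([1.0, 5.0, 20.0])
print(areal_cmf(ecc))
```

At 1° eccentricity the areal CMF is roughly two orders of magnitude larger than at 20°, which is the foveal over-representation the figure illustrates.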

The second processing stage we focus on is the midget RGCs. Near the fovea, mRGC cell bodies are laterally displaced from their receptive fields. Therefore, we use a computational model by Watson [64] that combines cone density [9], mRGC density [53], and displacement [57] to infer the mRGC density referred to the visual field, rather than to the cell body positions. Throughout, we refer to mRGC density with respect to receptive fields. Like the cones, midget RGCs sample the visual field differentially as a function of eccentricity. Within the central one degree, mRGC density is greater than cone density, and the fovea-to-periphery gradient is steeper for mRGCs than for cones (Fig 1A, middle panel compared to left panel). Near the fovea, this results in a cone:mRGC ratio of 0.5 (Fig 1B, left panel), indicating a ‘direct line’ between a single cone and a pair of ON- and OFF-center mRGCs. In the periphery, mRGC density falls off at a faster rate than cone density: for example, cone density decreases by 10-fold between 1° and 20° eccentricity, whereas mRGC density decreases by 80-fold. This convergence can also be expressed in the cone:mRGC ratio, which increases as a function of eccentricity (Fig 1B, left panel).

Third, we quantify the amount of V1 surface area devoted to a portion of the visual field, also known as the cortical magnification factor (Fig 1A, right panel). There have been claims that V1 CMF is proportional to retinal ganglion cell density ([69–72]; see Discussion). However, when comparing human mRGC density [64] to V1 CMF [68], we find that the ratio is not constant: the foveal magnification is even more accentuated in V1 up to 20° eccentricity (Fig 1B, right panel). These results are consistent with the findings in squirrel monkey [73], owl monkey [74], and macaque [75], all of which show that the cortical magnification function falls off with eccentricity more steeply in V1 than would be predicted by mRGC density alone. Beyond 20° eccentricity, the mRGC to V1 CMF ratio declines slowly. This effect is driven by V1 CMF falling off slightly more steeply than mRGC density. The relative compression of V1 CMF vs mRGC density in the far periphery has been reported in owl monkey [74]. However, given that this result has not been confirmed in human cortex, we cannot exclude the possibilities that in the far periphery Watson’s formula [64] overpredicts mRGC density, Horton and Hoyt’s formula [68] underpredicts V1 CMF, or a combination of both.

Polar angle asymmetries are amplified from cones to mRGCs

Cone density differs as a function of polar angle. It is higher along the horizontal visual field meridian (average of the nasal and temporal retinal meridians) than along the upper and lower vertical visual field meridians (representing the inferior and superior retinal meridians) (Fig 2A, left panel). This horizontal-vertical asymmetry is around 20% and relatively constant with eccentricity. There is no systematic difference between the cone density in the upper and lower visual field meridians. If anything, there is a slight ‘inverted’ vertical-meridian asymmetry in the central three degrees: cones are more densely packed along the upper vertical visual meridian. Assuming greater density leads to better performance, this would predict better performance on the upper vertical meridian in the central three degrees, opposite of the typical asymmetry reported in behavior, which has been found at eccentricities as small as 1.5° in a study on contrast sensitivity [30]. All of these patterns of cone density asymmetries are found using two different datasets with different methods: a post-mortem retinal dataset [9] and an in vivo dataset [10], indicating reproducibility of the biological finding. All of the patterns are also consistent when computed using two different analysis toolboxes (ISETBIO [65–67] and rgcDisplacementMap [76]; S1 Fig, top row), indicating computational reproducibility.

Fig 2. Nonuniformities in polar angle representations are amplified from cones to mRGCs to cortex.

(A) Cone density, mRGC density, and V1 CMF for the cardinal meridians as a function of eccentricity. Left panel: Cone density from Curcio et al. [9]. Middle panel: mRGC densities from Watson [64]. All data are in visual field coordinates. The black line represents the horizontal visual field meridian (average of nasal and temporal retina), the green line represents the lower visual field meridian (superior retina), and the blue line represents the upper visual field meridian (inferior retina). Cone and mRGC data are computed with the open-source software ISETBIO [65–67]. Right panel: V1 CMF computed from the HCP 7T retinotopy dataset analyzed by Benson et al. [78] (black, green, blue dots and lines) and the areal CMF predicted by the formula in Horton and Hoyt [68] (dotted black line, replotted from Fig 1). All data are plotted in visual field coordinates where black, green, and blue data points represent the horizontal, lower, and upper visual field meridians, respectively. Data points represent the median V1 CMF of ±20° wedge ROIs along the meridians for 1–6° eccentricity in 1° bins. Error bars represent 68% confidence intervals across 163 subjects using 1,000 bootstraps. Black, green, and blue lines are 1/eccentricity power functions fitted to the corresponding data points. The pink dashed line is the average of fits to the horizontal, upper, and lower visual field meridians from the HCP 7T retinotopy dataset [78] and agrees well with Horton and Hoyt’s formula [68]. (B) Transformation ratios from cones to mRGCs and from mRGCs to V1 CMF. Ratios are shown separately for the horizontal (black), lower (green), and upper (blue) visual field meridians. The mRGC:V1 CMF panel has a truncated x-axis due to the limited field-of-view of the cortical measurements. These polar angle asymmetries can be found across two different computational models of mRGC density (see S1 Fig, second row).

The polar angle asymmetries in density are larger in the mRGC distribution. The horizontal visual field meridian (average of nasal and temporal retina) contains higher cell densities (after correction for cell body displacement) than the upper and lower visual field meridians (Fig 2A, middle panel). This horizontal-vertical asymmetry increases with eccentricity. For example, at 3.5° eccentricity, the average horizontal visual field density is ~20% higher than the average of the upper and lower visual field meridians. By 40° eccentricity, this density difference increases to ~60%. Beyond 10° eccentricity, this horizontal-vertical asymmetry is mostly driven by the nasal retina, as it contains a higher mRGC density than the temporal retina. This finding is in line with earlier histology reports in macaque [62] and is positively correlated with performance in spatial resolution tasks (e.g., [77]). This nasal-temporal asymmetry, although interesting, is beyond the focus of this paper, as the asymmetries in performance we observe are found in both binocular and monocular experiments [12,16]. Overall, the emphasis on the horizontal meridian is substantially greater in the mRGCs than in the cones.

Unlike the cones, mRGC receptive fields show a consistent asymmetry along the vertical meridian: the lower visual meridian (superior retinal meridian) contains a higher mRGC density than the upper visual meridian (inferior retinal meridian). This is consistent with the psychophysical VMA, showing better performance on the lower vertical meridian [12–15,19–31]. This asymmetry increases with eccentricity. For example, the lower vertical meridian (superior retina) has ~15% higher density compared to the upper vertical meridian (inferior retina) at 3.5°, and ~50% higher density at 40° eccentricity. This interaction between retinal meridian and eccentricity is summarized in the cone-to-mRGC transformation plot (Fig 2B, left panel), where the convergence ratio from cones to mRGCs increases more rapidly along the upper vertical than along the lower vertical and horizontal visual meridians (see also S2 Fig).
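Asymmetry percentages like the ones quoted in this section are conventionally expressed as the difference between meridians normalized by their mean. A short sketch with hypothetical density values (not the actual Watson-model numbers), showing how an HVA and VMA index can be computed:

```python
def hva_index(horizontal, vertical):
    """Horizontal-vertical asymmetry as a percentage of the meridian mean."""
    return 100.0 * (horizontal - vertical) / ((horizontal + vertical) / 2.0)

def vma_index(lower, upper):
    """Vertical-meridian asymmetry (lower vs. upper) as a percentage."""
    return 100.0 * (lower - upper) / ((lower + upper) / 2.0)

# Hypothetical mRGC RF densities (cells/deg^2) at one eccentricity
horiz, lower, upper = 120.0, 105.0, 91.0
vert = (lower + upper) / 2.0
print(hva_index(horiz, vert), vma_index(lower, upper))
```

With these made-up values the HVA comes out near 20% and the VMA near 14%, in the same range as the asymmetries described in the text.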

Polar angle asymmetries are amplified from mRGCs to early visual cortex

Because the areal V1 CMF calculation by Horton and Hoyt [68] does not make separate predictions for the cardinal meridians, we used the publicly available retinotopy dataset from the Human Connectome Project (HCP) analyzed by Benson et al. [79] to calculate the CMF along the meridians (see also [78]). As a first check on agreement between the two datasets, we found that the V1 CMF data measured in 163 subjects with functional MRI [78], pooled across all polar angles, was a close match to Horton and Hoyt’s [68] prediction based on lesion case studies from three decades ago. We then used the HCP dataset to compute CMF along the separate meridians.

We find that polar angle asymmetries in cortical magnification are larger still than those found in mRGC density (Fig 2A, right panel): V1 CMF is higher on the horizontal than the vertical meridian, and higher on the lower than the upper vertical meridian. For example, at 3.5° eccentricity, CMF is ~52% higher on the horizontal than the vertical meridians and ~41% higher for the lower than the upper vertical meridian. These polar angle asymmetries show a 2x increase within the first three degrees of eccentricity before flattening (Fig 2B, right panel) and are mostly driven by the upper vertical meridian (S2 Fig). This indicates that the mapping of the visual field in early visual cortex is not simply predicted from the distribution of midget retinal ganglion cells; rather, cortex further amplifies the retinal polar angle asymmetries.

A computational observer model from stimulus to mRGCs to behavior

To understand how polar angle asymmetries in visual field representations might affect visual performance, we added a photocurrent transduction stage and a retinal ganglion cell layer to our computational observer model [50]. In this observer model, we used the publicly available ISETBIO toolbox [65–67] to simulate the first stages of the visual pathway, including the stimulus scene, fixational eye movements, chromatic and achromatic optical aberrations, and isomerization by the cone array. Combining the model output with a linear support vector machine classifier allowed us to simulate performance on a 2-AFC orientation discrimination task given the information available in the cones. When matching stimulus parameters in the model to a previously published psychophysical experiment [13], we showed that biologically plausible variations in optical quality and cone density together contribute no more than ~10% of the observed polar angle asymmetries in contrast sensitivity.

Given the inability of cone density to quantitatively explain differences in visual performance, we extended our model further into the retina to include temporal and spatial filtering, and noise at two later processing stages. First, we added temporal filtering and noise in the conversion of cone isomerizations to photocurrent in the cone outer segments. Second, we added spatial filtering and noise in a model of midget RGCs. The mRGCs are especially interesting because they show a systematic asymmetry between the upper and lower visual field (where the cones did not), and an amplification of the horizontal-vertical asymmetry. The mRGC computational stage is implemented after cone isomerizations and photocurrent and before the model performs the discrimination task. We provide a short overview of the modeled stages that precede the mRGC layer, as details of these stages can be found in our previous paper [50], followed by a discussion of the implementation details of the photocurrent transduction and mRGC layer.

Scene radiance.

The first stage of the model comprises the photons emitted by a visual display. This results in a time-varying scene defined by the spectral radiance of an achromatic low contrast Gabor stimulus (Fig 3, panel 1). The Gabor was oriented 15° clockwise or counter-clockwise from vertical with a spatial frequency of 4 cycles per degree. These stimulus parameters were chosen to match a recent psychophysical experiment [15] to later compare model and human performance.
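A stimulus with these properties can be generated directly as an oriented carrier in a Gaussian envelope. A sketch in which only the 4 cycles-per-degree spatial frequency and the ±15° orientation from vertical come from the text; the patch size, pixel sampling, contrast, and envelope width are illustrative placeholders:

```python
import numpy as np

def gabor(size_deg=2.0, px_per_deg=40, sf_cpd=4.0, ori_deg=15.0,
          contrast=0.1, sigma_deg=0.25):
    """Achromatic Gabor: sinusoidal carrier windowed by a Gaussian envelope."""
    n = int(size_deg * px_per_deg)
    x = np.linspace(-size_deg / 2, size_deg / 2, n)
    xx, yy = np.meshgrid(x, x)
    # ori_deg = 0 gives a vertical grating; nonzero values tilt it from vertical
    theta = np.deg2rad(ori_deg)
    carrier = np.cos(2 * np.pi * sf_cpd * (xx * np.cos(theta) + yy * np.sin(theta)))
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma_deg**2))
    # Modulation around a mean luminance of 0.5 (normalized units)
    return 0.5 * (1.0 + contrast * carrier * envelope)

cw = gabor(ori_deg=15.0)    # clockwise-tilted stimulus
ccw = gabor(ori_deg=-15.0)  # counter-clockwise-tilted stimulus
print(cw.shape)  # (80, 80)
```

The two orientations form the discriminanda for the 2-AFC task described later; in the full model the image would additionally be converted to spectral radiance per wavelength.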

Fig 3. Overview of computational observer model with additional mRGC layer.

A 1-ms frame of a 100% contrast Gabor stimulus is used at each computational step for illustration purposes. (1) Scene radiance. Photons emitted by the visual display, resulting in a time-varying scene spectral radiance. Gabor stimulus shows radiance summed across 400–700 nm wavelengths. (2) Retinal irradiance. Emitted photons pass through simulated human cornea, pupil, and optics, indicated by the schematic point spread function (PSF) in the top right-side box, resulting in time-varying retinal irradiance. Gabor stimulus shows irradiance with wavelengths converted to RGB values for illustration purposes. (3) Cone absorptions. Retinal irradiance is isomerized by a rectangular cone mosaic, resulting in time-varying photon absorption rates for each L-cone with Poisson noise. (4) Cone photocurrent. Absorptions are converted to photocurrent via temporal integration, gain control, followed by adding Gaussian white noise. This results in time-varying photocurrent for each cone. (5) Midget RGC responses. Time-varying cone photocurrents are convolved with a 2D Difference of Gaussians spatial filter (DoG), followed by additive Gaussian white noise and subsampling. (6) Behavioral inference. A linear support vector machine (SVM) classifier is trained on the RGC outputs to classify stimulus orientation per contrast level. With 10-fold cross-validation, left-out data are tested, and accuracy is fitted with a Weibull function to extract the contrast threshold at ~80%.

Retinal irradiance.

The second stage simulates the effect of emitted photons passing through the human cornea, pupil, and lens. This computational step results in time-varying retinal irradiance (Fig 3, panel 2). Optics are modeled as a typical human wavefront with a 3-mm diameter pupil without defocus and contain a spectral filter that reduces the fraction of short wavelengths (due to selective absorption by the lens). We do not vary the optics across the different simulations.

Cone absorptions.

The third stage implements a rectangular cone mosaic with L-cones only (2x2° field-of-view). For each cone, we compute the number of photons absorbed in each 2-ms bin, resulting in a 2D time-varying cone absorption image (Fig 3, panel 3). The number of absorptions depends on the photoreceptor efficiency, on the wavelengths of light, and on Poisson sampling due to the quantal nature of light. This stage differs in two ways from our previous model. First, we use an L-cone-only retina, and second, we exclude fixational eye movements. We make these two simplifications to keep the model tractable and the calculations to a reasonable size. As we describe in the Methods, the number of trials is much larger than in our previous work (to ensure that the classifier has sufficient information to learn the best classification), the number of conditions simulated is much larger (because we vary both cone density and mRGC:cone ratios), and the noise level is substantially higher (because we add noise at the phototransduction and mRGC stages). The lack of eye movements enables us to average across time points within each trial, greatly speeding up processing, as well as simplifying the interpretation of how the new stages contribute to performance.
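The Poisson nature of this stage can be sketched as follows. The mean isomerization rates below are hypothetical placeholders, not the values ISETBIO derives from the optics and cone efficiencies:

```python
import numpy as np

rng = np.random.default_rng(0)

def cone_absorptions(mean_isomerization_rate, dt_ms=2.0, n_frames=14):
    """Draw per-cone photon absorption counts for each time bin.

    mean_isomerization_rate: 2D array of expected isomerizations per ms per cone
    (illustrative values; in the model these follow from the retinal irradiance
    and cone aperture/efficiency).
    """
    lam = np.asarray(mean_isomerization_rate) * dt_ms  # expected count per bin
    # Independent Poisson draw per cone and per frame: quantal nature of light
    return rng.poisson(lam, size=(n_frames,) + lam.shape)

mean_rate = np.full((79, 79), 5.0)  # hypothetical uniform patch of L-cones
absorptions = cone_absorptions(mean_rate)
print(absorptions.shape)  # (14, 79, 79): time x cone rows x cone cols
```

Because the draws are Poisson, the variance of the counts equals their mean, so dimmer stimuli (lower contrast) have intrinsically lower signal-to-noise at this stage.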

Cone photocurrent.

The fourth stage converts photon absorptions to photocurrent, incorporating the recently added phototransduction functionality in ISETBIO by Cottaris et al. [51]. Here, phototransduction is implemented as a temporal filter followed by gain control and additive noise (Fig 3, panel 4). The result is a continuous time-varying signal in units of current (picoamps). While we use the same photocurrent model for all cones irrespective of size or location, the effect of the photocurrent depends on properties of the cones, due to the additive noise. Specifically, the signal-to-noise ratio decreases more for larger cones than for smaller cones, because large cones capture more photons and are subject to more downregulation before the additive noise.
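The sequence of temporal filtering, gain control, and additive noise can be illustrated with a toy stand-in. The biphasic kernel, gain, and noise level below are hypothetical, not ISETBIO's fitted phototransduction parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def absorptions_to_photocurrent(absorptions, kernel, gain, noise_sd):
    """Toy phototransduction: temporal filter, gain, additive Gaussian noise.

    absorptions: (time, rows, cols) array of photon counts.
    """
    # Causal temporal filter applied independently to each cone's time series
    filtered = np.apply_along_axis(
        lambda x: np.convolve(x, kernel)[: len(x)], 0, absorptions
    )
    current = -gain * filtered  # photocurrent is a negative-going signal
    return current + rng.normal(0.0, noise_sd, current.shape)

# Hypothetical biphasic impulse response (fast excitation, slower opposing lobe)
t = np.arange(0, 20)
kernel = np.exp(-t / 3.0) - 0.5 * np.exp(-t / 6.0)

counts = rng.poisson(10.0, size=(28, 8, 8))  # toy absorption movie
pc = absorptions_to_photocurrent(counts, kernel, gain=0.2, noise_sd=0.5)
print(pc.shape)  # (28, 8, 8)
```

Because the noise is added after the gain stage, signals that are more strongly downregulated (as for large-aperture cones) end up with a lower signal-to-noise ratio, which is the mechanism the text describes.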

Midget RGC responses.

The fifth stage is spatial filtering by the mRGCs. We model the mRGCs in a rectangular array with each mRGC receptive field centered on a cone. We do not add further temporal filtering beyond that inherited from the photocurrent stage. We do not explicitly model spiking and its associated noise, but instead add independent Gaussian white noise to each RGC output at each time point. Unlike the photocurrent, where the noise is implemented in ISETBIO according to a physiologically-informed model [80], the noise added in the mRGC layer is not constrained by a physiological model because the noise added by mRGCs (after accounting for noise inherited from prior stages) is less well known. For this reason, in additional simulations, we explore the effect of noise level in mRGCs, and find that while the mean performance declines with increasing noise (as expected), the differences between conditions are largely unaffected by noise level (S4 Fig). In the Discussion, we elaborate on the possible contribution of other aspects of retinal processing to polar angle asymmetries such as spatial subunits and spiking.

The mRGC layer has the same field-of-view as the cone array. Because we do not model rectification or spiking non-linearities, we do not separately model ON- and OFF-cells. Our mRGC receptive fields are 2D difference of Gaussians (DoG) models, approximating the shape of receptive fields measured with electrophysiology [81,82] (Fig 3, panel 5), based on parameters from macaque [83]. The width of the center Gaussian (σc, 1 sd) is ⅓ of the spacing between neighboring cones, and the surround Gaussian (σs) is 6x the width of the center. This creates an mRGC array with one mRGC per cone, in which mRGC RFs overlap at 1.3 standard deviations from their centers, matching the overlap of dendritic fields reported in human retina [55]. We compute the mRGC responses by convolving the cone photocurrents with this mRGC DoG receptive field. Because the ratio of mRGCs to cones varies across the retina, we simulate differences in this ratio by subsampling the mRGC array (Fig 4). Thus, the mRGC density (cells/deg2) is determined by both the cone array density and the cone-to-mRGC ratio, creating a 2D space of simulations.
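The DoG filtering, additive noise, and subsampling steps can be sketched as follows, using the σc = spacing/3 and σs = 6σc relations from the text. Everything else (noise level, the square-grid subsampling scheme, the input image) is an illustrative simplification:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mrgc_responses(cone_image, cone_spacing_px=1.0, rgc_cone_ratio=1.0,
                   surround_factor=6.0, noise_sd=1.0, rng=None):
    """Sketch of the mRGC stage: DoG filtering, additive noise, subsampling.

    The DoG convolution is implemented as the difference of two Gaussian
    blurs; subsampling approximates lower mRGC:cone ratios, as in the
    paper's subsampled mRGC arrays.
    """
    rng = rng or np.random.default_rng(2)
    sigma_c = cone_spacing_px / 3.0          # center width: 1/3 cone spacing
    sigma_s = surround_factor * sigma_c      # surround width: 6x center
    dog = gaussian_filter(cone_image, sigma_c) - gaussian_filter(cone_image, sigma_s)
    noisy = dog + rng.normal(0.0, noise_sd, dog.shape)
    step = max(1, int(round(1.0 / np.sqrt(rgc_cone_ratio))))  # grid subsample
    return noisy[::step, ::step]

cones = np.random.default_rng(3).poisson(10.0, size=(64, 64)).astype(float)
out = mrgc_responses(cones, rgc_cone_ratio=0.25)  # 1 mRGC per 4 cones
print(out.shape)  # (32, 32)
```

Because the DoG is unbalanced (center and surround integrals differ), the filtered output retains some response at DC, consistent with the band-pass but non-zero-DC behavior shown in Fig 4B.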

Fig 4. Difference of Gaussians filters used to model mRGC layer.

Two mRGCs are illustrated for a 2x2° field-of-view mRGC array centered at 4.5° and 40° eccentricity. (A) 1D representation of two example mRGC layers in visual space. The mRGC responses are computed by convolving the cone image with the mRGC DoG RF, followed by adding noise and subsampling the cone array to the corresponding mRGC density. Widths of the Gaussian center (σc) and surround (σs) are converted to units of degrees. As the mRGC filters in our model are not rectified, they respond to both increments and decrements. Physiologically, this would require two cells (an ON and an OFF cell), so we count each modeled mRGC location as two cells. Both panels show an mRGC:cone ratio of 2:1. (B) 1D representation of the Difference of Gaussians in Fourier space. The Fourier representation illustrates the band-pass and unbalanced nature of the DoG (i.e., non-zero amplitude at DC). Depending on the width/subsample rate, DoGs attenuate different spatial frequencies. However, at our peak stimulus frequency (4 cycles per degree, indicated with the red dashed line) the two DoG filters vary by a relatively small amount, preserving most stimulus information. Fourier amplitudes are normalized. Note that the y-axis is truncated for illustration purposes. (C) 2D representation of the two example mRGC layers shown in panel A. Midget RGC DoG filters are zoomed into a 1x1° field-of-view cone array (black raster) centered at 4.5° (red center with purple surround) and 40° eccentricity (red center with yellow surround), corresponding to the 1D examples in panel A. Centers and surrounds are plotted at 2 standard deviations. For illustration purposes, only one mRGC is shown at each location; the mRGC array in our computational observer model tiles the entire cone array.

Behavioral inference.

The final stage of the computational observer model is the inference engine. For the main analysis, we use a linear support vector machine (SVM) classifier to discriminate stimulus orientation (clockwise or counter-clockwise from vertical) given the cone absorptions, cone photocurrent, or mRGC responses. We compute a weighted average across time of the output of each cell before running the classifier. This greatly reduces the dimensionality of the classifier input, and therefore speeds up computation and reduces the number of trials needed for the classifier to learn an optimal classification boundary. The weighting is proportional to the temporal filter in the photocurrent simulation, such that the time points with the highest weights in the filter contribute most to the weighted average. Because we do not simulate eye movements or vary the phase of the stimulus, the only changes over time arise from the noise and temporal filtering by the photocurrent, and hence there is little to no loss of signal from averaging. The classifier trains and tests on the averaged responses for each stimulus contrast separately, where each contrast level yields a percentage of correctly identified stimuli. The accuracy results are then fitted with a Weibull function to extract the contrast threshold at ~80% correct.
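The weighted temporal average and threshold extraction described above can be illustrated as follows. This Python sketch (the actual analysis was done in MATLAB) assumes a standard two-alternative Weibull form with a 50% guess rate; the SVM itself is omitted, since the only step needed to turn its accuracies into a threshold is the Weibull fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(c, alpha, beta, guess=0.5):
    """Weibull psychometric function for a two-alternative task (50% guess rate)."""
    return 1.0 - (1.0 - guess) * np.exp(-(c / alpha) ** beta)

def contrast_threshold(contrasts, prop_correct, criterion=0.8):
    """Fit a Weibull to accuracy-vs-contrast data and return the contrast
    at the criterion proportion correct (~80%, as in the main analysis)."""
    (alpha, beta), _ = curve_fit(weibull, contrasts, prop_correct,
                                 p0=[np.median(contrasts), 3.0])
    # Invert: criterion = 1 - 0.5 * exp(-(c/alpha)^beta)
    return alpha * (-np.log((1.0 - criterion) / 0.5)) ** (1.0 / beta)

def temporal_weighted_average(responses, temporal_filter):
    """Collapse a (cells x time) response matrix to one value per cell,
    weighting time points in proportion to the photocurrent temporal filter."""
    w = np.asarray(temporal_filter, dtype=float)
    return responses @ (w / w.sum())
```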

The cone photocurrent and mRGCs have a large effect on orientation discrimination

We find large effects on the performance of the computational observer when adding the cone photocurrent and the mRGC layers. For comparison, we ran the SVM decision maker on either the cone absorptions, the cone photocurrent, or the mRGC outputs while varying the cone density and the stimulus contrast. Consistent with our prior model [50], thresholds are low (~0.1–0.2%) when computed from the cone absorptions, and show only a small effect of cone density (Fig 5A). Thresholds increase sharply, about 5–10x, after the absorptions are converted to photocurrent (Fig 5B). This increase is due to noise in the photocurrent, consistent with prior results [51]. Surprisingly, the effect of cone density is also substantially increased, as seen in the greater spread of the psychometric functions. This is because the cones in the lower density retinal patches have larger apertures, resulting in greater photon capture, and hence more downregulation when converted to photocurrent. Over the 10-fold range of retinal densities, thresholds vary by only about 1.4:1 for the absorptions, far less than the roughly 5:1 range for the photocurrent. The spatial filtering and late noise from the mRGCs further elevate thresholds, but at a fixed mRGC:cone ratio there is little change in the effect of cone density: the threshold vs density plot shows a vertical shift compared to the cone photocurrent, with about the same slope (Fig 5C).

Fig 5. Model performance for different computational stages.

Left column shows classifier accuracy as a function of stimulus contrast. Data are from simulated experiments with 1,000 trials per stimulus class, using a model with an L-cone-only mosaic varying in cone density. Data are fitted with a Weibull function. Contrast thresholds are plotted separately as a function of cone density in the right column. (A) Cone absorptions. Applying a linear SVM classifier to cone absorptions averaged across stimulus time points. (B) Cone photocurrent. Applying a linear SVM classifier to cone outer segment photocurrent responses, averaged across time weighted by a temporally delayed stimulus time course. This transformation of cone absorptions into photocurrent causes a ~10x increase in contrast thresholds and interacts with cone density (i.e., the Weibull functions are more spread out than for cone absorptions). (C) RGC responses. Applying a linear SVM classifier to spatially filtered photocurrent with added white noise. This transformation causes an additional increase in contrast thresholds at all cone densities. Data show results for a fixed subsampling ratio of 2 mRGCs per cone.

We next quantified the effect of the mRGC:cone ratio on computational observer performance. We find that as the ratio increases, contrast thresholds decline (Fig 6A). The effect of the mRGC:cone ratio is largely independent of the cone density. For example, at any cone density, downsampling the mRGC density by 4x elevates thresholds by about 70% to 80%. The better model performance with more mRGCs comes from higher SNR, which arises because the signal is correlated across mRGCs (due to spatial pooling), whereas the noise added in the mRGC layer is independent. To visualize the space of predicted contrast thresholds as a function of cone density and mRGC:cone ratio, we plot model thresholds as a function of both independent variables (Fig 6B). This surface plot confirms the observation from the line plots (Fig 6A) that cone density and mRGC:cone ratio have approximately independent, additive effects on model contrast thresholds.
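The SNR argument in this paragraph, that pooling more mRGCs helps because their signal is shared while their late noise is independent, can be verified with a small simulation. This is a toy demonstration with hypothetical signal and noise values, not the observer model itself: averaging N channels carrying a common signal and independent Gaussian noise improves SNR by roughly √N.

```python
import numpy as np

def pooled_snr(n_cells, signal=1.0, noise_sd=1.0, n_trials=20000, seed=0):
    """SNR of the average of n_cells responses that share one signal
    but carry independent additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    responses = signal + rng.normal(0.0, noise_sd, (n_trials, n_cells))
    pooled = responses.mean(axis=1)  # pooling across cells
    return pooled.mean() / pooled.std()
```

With these values, `pooled_snr(4)` comes out close to twice `pooled_snr(1)`, in line with the √N prediction.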

Fig 6. The effect of spatial filtering properties by mRGCs on full model performance.

(A) Contrast thresholds as a function of cone density and mRGC:cone ratio. Data points are contrast thresholds for cone absorptions, cone photocurrent, and each mRGC:cone ratio separately (for psychometric functions see S3 Fig). Individual mRGC fits are slices of the 3D mesh fit shown in panel B. (B) Mirrored views of the combined effect of cone density and mRGC:cone ratio on contrast sensitivity. The mesh is a locally weighted regression fitted to 3D data: log cone density (x-axis) by log mRGC:cone ratio (y-axis) by log contrast threshold (z-axis). Individual dots represent the predicted model performance at 4.5° eccentricity (matched to the stimulus eccentricity in [15]) for locations on the nasal retina/horizontal visual meridian (red star), superior retina/lower visual meridian (blue star), temporal retina/horizontal visual meridian (green star), and inferior retina/upper visual meridian (black star). Contour lines show possible cone densities and mRGC:cone ratios that would predict the same horizontal-vertical and upper/lower vertical-meridian asymmetries as observed in psychophysical data at 4.5° eccentricity. To do so, we scaled the difference in contrast threshold between the lower (blue) and upper (black) vertical visual meridian relative to the horizontal meridian to match the difference in behavior. The goodness of fit of the 3D mesh is R2 = 0.96.

Comparison between model and human contrast sensitivity

To compare model performance to human observers, we evaluate the model outputs for cone densities and mRGC:cone ratios that match the values on the different meridians according to the literature. We then compare these predicted thresholds to those obtained in a recent psychophysical experiment [15]. We also compare both the human data and the mRGC model data to two simplified models, one which omits the mRGCs and one which omits mRGCs and the conversion from isomerizations to photocurrent.

According to Curcio et al. [9], cone density at 4.5° eccentricity is ~1,575 cones/deg2 on the horizontal retinal meridian (nasal: 1590 cones/deg2, temporal: 1560 cones/deg2), 1300 cones/deg2 on the superior retinal meridian, and 1382 cones/deg2 on the inferior retinal meridian. We combine these cone density values with the mRGC:cone ratios from the computational model by Watson [64], which range from 0.84 mRGCs per cone on the horizontal meridian (nasal: 0.87, temporal: 0.82) to 0.81 on the superior retina and 0.68 on the inferior retina.
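These numbers combine straightforwardly: mRGC receptive field density on each meridian is the product of cone density and the mRGC:cone ratio. The sketch below carries out that multiplication for the values quoted above and computes horizontal-vertical and vertical-meridian asymmetry indices as percent differences relative to the mean, a common convention; the paper's exact index definition may differ.

```python
# Cone densities (cones/deg2) and mRGC:cone ratios at 4.5 deg eccentricity,
# per retinal meridian, as quoted in the text (Curcio et al. [9]; Watson [64]).
cone_density = {"nasal": 1590, "temporal": 1560, "superior": 1300, "inferior": 1382}
mrgc_per_cone = {"nasal": 0.87, "temporal": 0.82, "superior": 0.81, "inferior": 0.68}

# mRGC RF density is the product of the two factors.
mrgc_density = {m: cone_density[m] * mrgc_per_cone[m] for m in cone_density}

# Convert to visual field coordinates (left eye): nasal/temporal retina map to the
# horizontal visual meridian, superior retina to the lower, inferior to the upper.
horizontal = 0.5 * (mrgc_density["nasal"] + mrgc_density["temporal"])
lower, upper = mrgc_density["superior"], mrgc_density["inferior"]

# Asymmetry indices as percent difference relative to the mean.
vertical = 0.5 * (lower + upper)
hva = 100.0 * (horizontal - vertical) / (0.5 * (horizontal + vertical))
vma = 100.0 * (lower - upper) / (0.5 * (lower + upper))
```

Under this convention, the quoted retinal values imply an mRGC HVA of roughly 29% and a VMA of roughly 11% at 4.5° eccentricity.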

Consistent with our previous report [50], we find that a model in which the pattern of photon absorptions is fed into the linear SVM classifier shows only a small effect of cone density (Fig 7A, left). Given the expected cone densities at the different polar angles at 4.5° eccentricity, the model predicts only about 5% higher sensitivity for the horizontal than vertical visual meridians, much less than the 40% difference found in behavioral experiments [15] (Fig 7B). The model also predicts almost no difference between upper and lower vertical visual meridian, whereas human sensitivity was found to be about 20% higher on the lower than upper vertical visual meridian. The overall sensitivity of the model observer (800–900) is considerably higher than human sensitivity (~30–50).

Fig 7. Comparison of model performance to human performance.

(A) Contrast sensitivity predicted by the computational observer model up to isomerizations in cones (blue), up to cone outer segment phototransduction (turquoise), up to spatial filtering and subsampling in mRGCs (red), and behavior observed (purple) by Himmelberg et al. (2020) using matching stimulus parameters. HM: horizontal meridian, UVM: upper visual meridian, LVM: lower visual meridian. Model predictions show contrast sensitivity (the reciprocal of contrast threshold) for stimuli at 4.5° eccentricity, with a spatial frequency of 4 cycles per degree. HM is the average of the nasal and temporal meridians. Model error bars indicate simulation results allowing for uncertainty in the cone or mRGC density along each meridian (see Methods for details). Behavioral plots show group average results (n = 9) from Himmelberg et al. [15], and error bars represent the standard error of the mean across observers. (B) Polar angle asymmetries for cone absorptions, photocurrent, mRGCs, and behavior. HVA: horizontal-vertical asymmetry. VMA: vertical-meridian asymmetry. Blue, turquoise, red, and purple bars match panel (A) and correspond to model predictions up to cone absorptions, cone photocurrent, and mRGCs, and to human behavior. Error bars represent the HVA and VMA when using the upper/lower bound of the predicted model error from panel A.

The conversion from cone absorptions to cone photocurrent reduces the sensitivity by about 4- to 5-fold, and increases the asymmetries. The linear SVM classifier performance based on the cone photocurrent shows about 15% higher sensitivity for horizontal than vertical visual meridian, an asymmetry that is 3 times larger than that found in a model up to cone isomerizations. It also predicts about 9% higher sensitivity for upper vertical than lower vertical visual meridian (opposite to the pattern in human data). This is because the cone density is slightly higher for the upper than lower vertical visual meridian at this eccentricity (4.5°).

Finally, the mRGC model brings overall performance closer to behavior, with sensitivity of about 70–90, and ~18% higher sensitivity for the horizontal than vertical visual meridian, predicting almost half the asymmetry found in behavior (~40%). The mRGC model also eliminates the advantage for upper over lower vertical visual meridian (now predicting slightly higher performance for the lower vs upper vertical), which is the same direction as the pattern observed in the human data.

Overall, our models show that although including an mRGC layer predicts polar angle asymmetries closer to behavior than a model up to cone absorptions or up to photocurrent, the biological variations in the spatial properties of mRGCs are not sufficient to fully explain differences in behavior. For example, the measured cone densities for the upper and lower vertical visual meridians are about 12% and 19% lower than for the horizontal. To predict the horizontal-vertical and vertical-meridian asymmetries observed in human performance, without further changing the mRGC:cone ratios, the cone densities would instead have to be ~37% and 30% lower than for the horizontal. Alternatively, one could keep the cone densities fixed at the levels estimated by Curcio et al. [9] and instead vary the mRGC:cone ratio as estimated by Watson [64]. In this case, the ratios would have to decrease from 0.81 to 0.52 for the lower vertical and from 0.68 to 0.32 for the upper vertical visual meridian. If one decreased both the cone densities and the mRGC:cone ratios by tracing out the values along the nasal retinal meridian, one would need to increase the eccentricity of the stimulus from 4.5° to 7.3° (upper vertical) or 6.3° (lower vertical) to match the behavioral asymmetries.


The visual system, from retina to subcortex to cortex, is organized in orderly maps of the visual field. But within each processing stage, the retinotopic map is distorted. Here we investigated the polar angle asymmetries in these spatial representations across three stages of the early visual pathway: cones, mRGCs, and V1 cortex. Our study revealed that both the eccentricity gradient (foveal bias) and the polar angle asymmetries (HVA and VMA) in spatial representations are amplified from cones to mRGCs, and further amplified from mRGCs to early visual cortex. Additionally, we showed that although mRGC density has considerable polar angle asymmetries in the directions predicted by psychophysical studies, they are insufficient to explain the observed differences in human contrast sensitivity around the visual field.

Linking behavior to eccentricity and polar angle asymmetries in visual field representations

For over a century, limits in retinal sampling were hypothesized to cause the fovea-to-periphery gradient in human visual performance [1,5,6]. Initial tests of this idea showed that the fall-off in cone density could explain some, but not all, of the observed decrease in visual acuity [2,3,84–87]. Later, more detailed computational models reported that mRGCs come closer to predicting the eccentricity-dependent decrease in achromatic contrast sensitivity and resolution, and concluded that mRGCs are sufficient to explain some aspects of behavior, such as spatial resolution and contrast sensitivity [88–94]. Similar to the retina, the cortical magnification factor in V1 has been linked to visual performance as a function of eccentricity, for example, explaining differences in acuity [92,95,96], contrast sensitivity and resolution [20], visual search [97,98], and the strength of some visual illusions [99].

Conversely, polar angle asymmetries have rarely been considered. For instance, all above-mentioned studies either ignored stimulus polar angle in their analyses or limited measurements to a single meridian, usually the horizontal. Although polar angle asymmetries in human early visual cortex were predicted from behavior in the late 1970s [19,20], further reports on polar angle differences have been scarce. One fMRI study reported a higher V1 BOLD amplitude for stimuli on the lower than the upper visual meridian [100], and two studies found more cortical surface area devoted to the horizontal than the vertical meridian [101,102]. Our recent studies suggest that V1 surface area is highly correlated with spatial frequency thresholds [78] and contrast sensitivity [103]. Yet several studies have assumed little to no polar angle differences in macaque V1 CMF [104,105] or did not account for polar angle differences in human V1 CMF [46,96] to explain differences in behavior. Computational models that include retinal and/or V1 sampling across visual space generally exclude polar angle asymmetries (e.g., [106,107]). A few cases do incorporate polar angle asymmetries in the retinal ganglion cell distribution, but they assume that these asymmetries are not amplified in cortex [108–110].

Early visual cortex does not sample the retina uniformly

It is well documented that the convergence of cones to retinal ganglion cells varies with eccentricity (e.g., see [91]). In the fovea of both primates and humans, there is one cone per pair of bipolar cells and pair of midget RGCs, with pairs comprised of an “ON” and an “OFF” cell. In contrast, in the periphery, there are many cones per pair of bipolar cells and midget RGCs, with the ratio depending on the eccentricity. In the far periphery, there can be dozens of cones per ganglion cell [9].

It has long been debated whether V1 further distorts the visual field representation, or whether V1 samples uniformly from RGCs, as reviewed previously [71,72]. Our analysis showed more cortical surface area devoted to the fovea than the parafovea and to the horizontal than the vertical meridian, supporting previous findings using retinotopy informed by anatomy [101] and functional MRI [78,102,103,111]. Importantly, these eccentricity and polar angle non-uniformities are larger in V1 than in mRGC density, in agreement with findings from monkey [61,73–75,112,113]. Whether these non-uniformities arise in cortex, or depend on the mapping from retina to LGN, LGN to V1, or both, is a question of interest in both human [114,115] and monkey [116–120], but beyond the scope of this paper. The implication of the increased spatial non-uniformities in the cortical representation is that cortex cannot be understood as a canonical wiring circuit from the retina repeated across locations.

Because visual field distortions are larger as a function of eccentricity than polar angle, one might surmise that polar angle asymmetries contribute little to visual performance. Even though the polar angle asymmetries are smaller than the eccentricity effects, they can in fact be large. For example, within the central eight degrees, the surface area in V1 is about 60% larger for the horizontal meridian than the vertical meridian [78]. Given that virtually all visual tasks must pass through V1 neurons, these cortical asymmetries are likely to have a large effect on perception. The number of cortical cells could be important for extracting information quickly [121], for increasing the signal-to-noise ratio, and for tiling visual space and visual features (e.g., orientation, spatial frequency) more finely [122]. To know how the number of V1 neurons affects performance, there is a need for a computational model that explicitly links cortical resources to performance around the visual field.

Temporal summation in cone photocurrent accentuates polar angle asymmetries

We found one physiological factor in the retina—gain control in the cone photocurrent—that appears to accentuate the polar angle asymmetries. This is because at matched eccentricities, cone density varies with polar angle (i.e., cone density is higher on the horizontal meridian), and cone aperture size varies inversely with density. Specifically, at lower densities, the apertures are larger, capturing more photons per receptor. As a result of the higher absorption rates, there is greater downregulation of the photocurrent gain. Cottaris et al. [51] observed in their modeling work that the lower gain in the photocurrent for larger cones caused a reduction in the signal-to-noise ratio. In their simulations, this resulted in sensitivity loss for stimuli that extended further into the periphery. In our simulations, lower density results in lower sensitivity, thereby contributing to the difference in performance as a function of polar angle.

Overall, while adding a photocurrent stage raises overall thresholds (i.e., lowers sensitivity), bringing model performance closer to human performance (especially for simulations with low cone density mosaics), it still leaves a large gap between the predicted and observed psychophysical asymmetries as a function of polar angle. Moreover, the photocurrent model does not explain any of the vertical meridian asymmetry, as cone density, and presumably aperture size, do not differ between the lower and upper vertical meridian in a way that matches behavior.

Model limitations

Despite implementing known facts about the eye, our model, like any model, is a simplification. Comprehensiveness trades off against interpretability. For this model, we balance complexity and understanding by treating a local patch of mRGCs as a linear, shift-invariant system (i.e., a spatial filter). As several components of the model are identical to our previous model, we focus on the limitations of the components that differ (the addition of cone photocurrent and an mRGC layer, and the exclusion of eye movements), and refer to Kupers, Carrasco, and Winawer [50] for model limitations related to the pathways from scene to cone absorptions and the inference engine.

Spatial properties: Uniform sampling within a patch and subunits.

Hexagonal cone arrays that include within-patch density gradients have been implemented in ISETBIO by Cottaris et al. (e.g., [51,67]). Nonetheless, our mRGC layer is implemented as a rectangular patch of retina, initially with the same size as the cone mosaic. This allows for filtering by convolution and then linear subsampling to account for mRGC density, making the model computationally efficient. We do not incorporate several known complexities of RGC sampling in the retina: (i) density gradients within a patch, (ii) irregular sampling, and (iii) spatial RGC subunits. (i) Given our relatively small patch size (2x2° field-of-view) in the parafovea (centered at 4.5°), the change in density across the patch would be small (~10%). We found that a much larger change in mRGC density (spanning a 5-fold range) had only a modest effect on the performance of our observer model, so it is unlikely that accounting for a small gradient within a patch would have significantly influenced our results. (ii) Given the relatively low spatial frequency content of our test stimulus (4 cycles per degree), it is unlikely that irregular sampling would have resulted in a substantial difference from the regular sampling we implemented. (iii) Our low spatial frequency test stimuli also reduce concerns about omitting spatial subunits [123–126], as these non-linearities are most likely to be important for stimuli at high spatial frequencies (reviewed by [127]). Moreover, we showed for our linear RGC filters that sensitivity differences are only large at high spatial frequencies (around 8 cycles per degree and higher), even when receptive field sizes differ by a factor of 3 (as shown in Fig 4B). Hence, for the relatively low spatial frequency stimuli modeled here, the detailed spatial properties that we excluded would likely not have large enough effects to make up the difference between the predicted model performance and human behavior.

Temporal properties and eye movements.

In contrast to our previous work [50], our current model includes temporal integration but omits fixational eye movements and multiple cone types. The omission of eye movements made the model more tractable and the computations more efficient. We think this omission is unlikely to have a large effect on our results. In recent related work, it was shown that fixational eye movements combined with temporal integration resulted in spatial blur and degraded performance, causing a loss in contrast sensitivity up to a factor of 2.5 [51]. However, the largest losses were for stimulus spatial frequencies over 8 cycles per degree, with little loss from eye movements for stimuli with lower peak spatial frequency (2–4 cycles per degree). Given that the spatial frequency of our test stimulus falls within this range, the influence of fixational eye movements on the computational observer performance would have been modest.

Noise implementation.

Our expectation was that the largest effect of mRGCs on performance as a function of polar angle would arise from variation in cell density: where mRGC density is higher, SNR will be higher, and thus performance will be better. This effect of density on performance emerged in our simulations from the noise added after spatial filtering, before subsampling: without this additional noise component, the spatial filtering of the mRGCs would just be a linear transform of the cone outputs, which would have little or no effect on the performance of a linear classifier. We simulated this late noise as additive Gaussian noise rather than as stochastic spiking, as we were not trying to fit spiking data but rather to predict behavior. While we also did not build in correlated noise between RGCs (e.g., [128]), there is nonetheless some shared noise in our mRGCs due to common inputs from cones, which is the major source of noise correlations in RGCs [129]. Moreover, we found that the general pattern of model performance was unchanged over a large range of noise levels (up to an overall scale factor in performance), suggesting that the effect of density is likely to hold in many noise regimes.

Other retinal cell types.

Midgets are not the only retinal ganglion cells that process the visual field. Parasol (pRGCs) and bistratified retinal ganglion cells are less numerous but also cover the entire retina. pRGCs are the next most common retinal ganglion cells, and have generally larger cell bodies and dendritic field sizes than mRGCs, both increasing with eccentricity [54]. These differences make parasols more sensitive to relative contrast changes and give them higher temporal resolution, at the cost of spatial resolution [130]. For this reason, the small mRGCs are much more likely to limit spatial vision, and thus our model does not include pRGCs.

The discussion above raises the question: had we incorporated more known features of the retina in our model, would the model make predictions more closely matched to human performance? We think it is unlikely that doing so would fully explain the observed asymmetries in behavior, because we measured substantially larger asymmetries in cortex than in retina. If the retinal simulations entirely accounted for behavior, this would leave no room for the additional cortical asymmetries to influence behavior.

A case for cortical contributions to visual performance asymmetries

Recent retinal modeling of contrast sensitivity in the fovea showed that very little information used for behavior seems to be lost from the retinal output [51]. This may not be the case for the parafovea and periphery. Incorporating the temporal properties of phototransduction and the spatial properties of mRGCs, followed by additive noise, could explain about half of the behavioral HVA and about one-sixth of the VMA. These differences indicate a contribution from downstream processing, such as early visual cortex. V1 has several characteristics that suggest a tight link between cortical topography and polar angle asymmetries in visual performance. Hence a model that incorporates properties of early visual cortex is likely to provide a substantially better account of polar angle asymmetries in behavior than one that only incorporates properties of the eye. We have not developed such a model, but outline some of the reasons that cortex-specific properties are important for explaining polar angle asymmetries.

First, the representation of the visual field is split across hemispheres in visual cortex along the vertical, but not horizontal meridian. This split may require longer temporal integration windows for visual input that spans the vertical meridian, as information needs to travel between hemispheres. For example, the response in the left visual word form area is delayed by ~100 ms compared to the right visual word form area when presenting a stimulus in the left visual field [131]. Longer integration windows may in turn impair performance on some tasks, as eye movements during integration will blur the representation. Longer integration time of visual information spanning the vertical meridian is consistent with behavior, as accrual time is slower when stimuli are presented at the vertical than the horizontal meridian [38]. Interestingly, the hemispheric split is not precise: there is some ipsilateral representation of the visual field along the vertical meridian in early visual cortex. The amount of ipsilateral coverage is larger along the lower than upper vertical meridian and increases from 1–6° eccentricity [132]. It is possible that the split representation affects performance for stimuli on the vertical meridian (contributing to the HVA), and that the asymmetry in ipsilateral coverage between the lower and upper vertical meridian contributes to the VMA.

Second, there is good correspondence between the angular patterns of asymmetries in V1 cortex and behavior. Polar angle asymmetries in the CMF of early visual cortex are largest along the cardinal meridians (i.e., horizontal vs vertical and upper vertical vs lower vertical). The asymmetries gradually fall off with angular distance from the meridians [78]. This gradual decrease in polar angle asymmetry in cortex parallels the gradual decrease in contrast sensitivity [12,29,30] and spatial frequency sensitivity [16] with angular distance from the cardinal meridians. Measurements of cone density and retinal ganglion cell density have emphasized the meridians, so there is less information regarding how the asymmetries vary with angular distance from the meridians.

Third, there is good correspondence between cortical properties and behavior in the domain of spatial frequency and contrast sensitivity. The polar angle asymmetries in spatial frequency sensitivity observed by Barbot et al. [16] parallel spatial frequency tuning in V1 cortex. Specifically, in behavior, spatial frequency thresholds are higher on the horizontal than the vertical visual meridian [16], and fMRI measurements show that the preferred spatial frequency tuning in V1 is likewise higher along the horizontal than the vertical visual meridian [133]. Additionally, polar angle asymmetries in contrast sensitivity covary with polar angle asymmetries in V1 cortical magnification [103]: observers with larger horizontal-vertical asymmetries in contrast sensitivity (i.e., better performance on the horizontal vs vertical visual meridian at matched eccentricities) tend to have larger horizontal-vertical asymmetries in V1 cortical magnification at corresponding locations in the visual field.

Fourth, polar angle asymmetries in behavior are maintained when tested monocularly [12,16], but thresholds are slightly higher compared to binocular testing (at least for spatial frequency sensitivity [16]). The higher monocular thresholds (i.e., poorer performance) show that performance benefits from combining information from the two eyes, as twice the amount of information increases the signal-to-noise ratio [134]. This summation is likely to arise in early visual cortex, as V1 is the first stage in the visual processing pathways where information from the two eyes merges [135–137].


Overall, we have shown that the well-documented polar angle asymmetries in visual performance are associated with differences in the structural organization of cells throughout the early visual pathway. Polar angle asymmetries in cone density are amplified in downstream processing, from cones to RGCs and again from RGCs to early visual cortex. Further, we have extended our computational observer model to include temporal filtering when converting cone absorptions to photocurrent and spatial filtering by mRGCs, and found that both contributions, although larger than those of the cones alone, fall far short of explaining behavior. In future research, we aim to integrate cortical data into the computational observer model to test whether a significant portion of the polar angle asymmetries can be accounted for by the organization of early visual cortex.


Reproducible computation and code sharing

All analyses were conducted in MATLAB (MathWorks, MA, USA). Data and code for our previously published and extended computational observer model, including density computations and figure scripts, are made publicly available via the Open Science Framework at the URL: (previously published) and (this study).

Data sources

Data on cone density, midget RGC density, and V1 cortical surface area were previously published or computed from publicly available analysis toolboxes. Both cone and mRGC densities were computed as cells/deg2 for 0–40° eccentricities (step size 0.05°) at the cardinal meridians (0°, 90°, 180°, and 270° polar angle, corresponding to the nasal, superior, temporal, and inferior retina of the left eye). Fig 1 contains cone and mRGC densities averaged across all meridians as a function of eccentricity. Fig 2 contains cone and mRGC densities converted to visual field coordinates, where the horizontal visual field meridian is the average of the nasal and temporal retina, the upper visual field meridian corresponds to the inferior retina, and the lower visual field meridian to the superior retina.

Cone density.

Cone density data for the main results were extracted from post-mortem retinal tissue of 8 human retinas published by Curcio et al. [9], using the analysis toolbox ISETBIO [65–67], publicly available via GitHub.

Cone density in S1 Fig shows two datasets computed with two analysis toolboxes. To extract the post-mortem data from Curcio et al. [9], we use either ISETBIO or the rgcDisplacementMap toolbox [76], publicly available via GitHub. A second cone density dataset comes from an adaptive optics study published by Song et al. [10]. From this work, we use the “Group 1” data (young individuals, 22–35 years old), as implemented in ISETBIO.

Midget retinal ganglion cell receptive field density.

Midget RGC density for the main results was computed with the quantitative model by Watson [64], implemented in ISETBIO. This model combines cone density data from Curcio et al. [9], mRGC cell body data from Curcio and Allen [53], and the displacement model by Drasdo et al. [57] to predict midget RGC receptive field (RF) density.

Midget RGC density in S1 Fig was computed with two computational models: Watson [64], implemented in ISETBIO, and the displacement model by Barnett and Aguirre [76], implemented in the rgcDisplacementMap toolbox.

Cortical magnification factor in early visual cortex.

To quantify the fovea-to-periphery gradient in the V1 cortical magnification factor (CMF), we used the areal CMF function published by Horton and Hoyt [68] for 0–40° eccentricity (Fig 1). Because this function does not make separate predictions for the cardinal meridians (Fig 2), we used data from the Human Connectome Project (HCP) 7 Tesla retinotopy dataset (n = 163), first published by Ugurbil, Van Essen, and colleagues [138,139] and analyzed with population receptive field models by Benson et al. [79]. V1 surface area data are from Benson et al. [78], segmented into bins using hand-drawn ROIs from Benson et al. [140], and computed as follows.

To compute V1 CMF from the retinotopy data, we used the extracted surface area of ±10° and ±20° wedge ROIs centered on the cardinal meridians in each individual’s hemisphere. The wedges at the horizontal, dorsal, and ventral locations represented the horizontal, lower, and upper visual field meridians, respectively. Wedge ROIs were computed in the following steps: First, areas V1 and V2 were manually labeled with iso-eccentricity and iso-polar angle contour lines using the measured retinotopic maps of each hemisphere [140]. Second, for each cardinal meridian and each 1°-eccentricity bin, we calculated the mean distance along the cortex needed to reach a polar angle of 10° or 20° from the meridian. All vertices that fell within the eccentricity bin and polar angle distance were included in the corresponding ROI. We computed wedge strips, rather than an entire wedge or line, to avoid localization errors in defining the exact boundaries.

The wedges were separated into 5 eccentricity bins between 1–6° (1° step size) using the hand-drawn ROIs from Benson et al. [140], which mark eccentricity lines at 1°, 2°, 4°, and 7°. The 3°, 5°, and 6° eccentricity lines were deduced from the 2°, 4°, and 7° lines using isotropic interpolation (independently for the ±10° and ±20° wedge ROIs; for more details, see Benson et al. [78]), and hence are likely to be less accurate than the data points at the exact hand-drawn eccentricity lines. The cortical surface area (mm2) was summed across hemispheres within each subject and divided by the visual field area (deg2). For each eccentricity bin and cardinal meridian, the mean and standard error of the V1 CMF were computed from bootstrapped data across subjects (1,000 iterations). Mean data for each cardinal meridian were fit with a linear function in log-log space (i.e., a power-law function in linear coordinates) for 1–6° eccentricity.

The initial ROIs used for the upper and lower vertical meridians included both the V1 and V2 sections of the vertical meridian, and therefore contain twice as much visual area as the horizontal ROI. For a fair comparison between the horizontal and the upper and lower visual field ROIs, we corrected the upper and lower ROIs as follows. For each subject and eccentricity bin, we computed a vertical surface area ROI (including both upper and lower visual fields) that excluded the V2 sections of the vertical meridian. When summed over both hemispheres, this vertical ROI has a size comparable to the horizontal ROI. We then calculated a scale factor for each subject and eccentricity by dividing the vertical ROI by the sum of the upper and lower surface area ROIs. This scale factor was on average ~0.5. To get the corrected V1 CMF, we multiplied the corresponding ventral and dorsal surface areas by this scale factor and divided by the corresponding visual field area. By scaling the dorsal and ventral ROIs to include only the V1 side, we made the assumption that V2 is approximately the same size as V1. These vertical ROIs may be slightly less precise than the horizontal meridian ROI and could affect the horizontal-vertical asymmetry (HVA). We did not compare differences in pRF sizes across the cardinal meridians.
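As a sketch of this correction (written in Python rather than the MATLAB used for the analyses), with hypothetical surface areas and visual field sizes that are illustrative, not measured values:

```python
# Hypothetical surface areas (mm^2) for one subject and one eccentricity bin.
upper_roi = 80.0         # ventral wedge ROI (upper field), spans V1 and V2
lower_roi = 100.0        # dorsal wedge ROI (lower field), spans V1 and V2
vertical_v1_only = 95.0  # combined upper + lower vertical ROI restricted to V1

# Scale factor: V1-only vertical ROI divided by the sum of the upper and lower
# ROIs (~0.5 on average, since V2 is assumed to be about the size of V1).
scale = vertical_v1_only / (upper_roi + lower_roi)

wedge_area_deg2 = 30.0   # visual field area of the wedge ROI (deg^2), illustrative
cmf_upper = scale * upper_roi / wedge_area_deg2  # corrected V1 CMF (mm^2/deg^2)
cmf_lower = scale * lower_roi / wedge_area_deg2
```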

Although the narrower ±10° wedge ROIs correspond more closely to the single-line estimates of cone and mRGC density, we use the ±20° wedge ROIs in Fig 2 because those data are more robust. Narrow wedge ROIs are prone to overestimation of the vertical meridian surface area, caused by ipsilateral representations near the boundaries. Such ipsilateral representations are sometimes incorrectly counted as part of the ±20° ROI of the ipsilateral hemisphere, instead of as part of the ±10° ROI of the contralateral hemisphere, and this effect is exacerbated for smaller wedges. We visualize V1 asymmetries for both ±10° and ±20° wedge ROIs in S1 Fig.

Convergence ratios

The cone:mRGC ratio was computed by dividing mRGC density (cells/deg2) by cone density (cells/deg2) for 0–40° eccentricity, in 0.05° bins. The mRGC:CMF ratio was computed in cells/mm2. When comparing mRGC density to Horton and Hoyt’s CMF prediction, mRGC density (cells/deg2) was divided by V1 CMF (mm2/deg2) for 0–40° eccentricity, in 0.05° bins. When comparing the HCP retinotopy CMF to mRGC density, mRGC density was restricted to 1–6° eccentricity and divided by the power-law functions fitted to the V1 CMF. To compute the transformation ratios relative to the horizontal visual field meridian for cone:mRGC or mRGC:V1 CMF ratios in S2 Fig, we divided the lower and upper visual field transformation ratios separately by the horizontal visual field transformation ratio.
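The unit bookkeeping for these ratios can be illustrated with made-up density values (the numbers below are not from the datasets):

```python
mrgc_density = 2000.0  # mRGC RF density, cells/deg^2 (illustrative)
cone_density = 1000.0  # cone density, cells/deg^2 (illustrative)
areal_cmf = 4.0        # V1 areal CMF, mm^2/deg^2 (illustrative)

mrgc_per_cone = mrgc_density / cone_density  # dimensionless convergence ratio
mrgc_per_mm2 = mrgc_density / areal_cmf      # (cells/deg^2)/(mm^2/deg^2) = cells/mm^2
```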

Asymmetry computation

Polar angle asymmetries between meridians for cone density and mRGC density were calculated as percent change in retinal coordinates, as in Eqs 1 and 2, and then converted to visual field coordinates (i.e., nasal and temporal retina are the left and right visual field meridians, and superior and inferior retina are the lower and upper visual field meridians):

HVA (%) = 100 × (horizontal − vertical) / [(horizontal + vertical) / 2]  (1)

VMA (%) = 100 × (superior − inferior) / [(superior + inferior) / 2]  (2)

where horizontal is the mean of the nasal and temporal meridian values and vertical is the mean of the superior and inferior meridian values.

Polar angle asymmetries in V1 CMF and behavior were computed with the same equations, but for visual field coordinates.
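The percent-change computation described for Eqs 1 and 2 can be sketched as follows (a Python sketch; the density values are illustrative):

```python
def percent_asymmetry(a, b):
    # Percent difference between two meridian values, relative to their mean,
    # matching the percent-change definition used for the asymmetries.
    return 100.0 * (a - b) / ((a + b) / 2.0)

# e.g., a horizontal density of 120 vs. a vertical density of 80 cells/deg^2
hva = percent_asymmetry(120.0, 80.0)  # horizontal-vertical asymmetry, in %
```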

Computational observer model

The computational observer uses and extends a published model [50]. The extensions include (1) a phototransduction stage in the cone outer segment (transforming absorptions to photocurrent) and (2) a midget RGC layer (transforming photocurrent to mRGC responses) between the cone isomerization stage and the behavioral inference stage. To compensate for the increase in computational load and to keep the model tractable, we also made two simplifications: we used an L-cone-only mosaic (instead of an L-, M-, and S-cone mosaic), and we removed any stimulus location uncertainty by omitting fixational eye movements and stimulus phase shifts within a single stimulus orientation. With our extended model, we generated new cone absorption and photocurrent data using a fixed random number generator seed.

Given that several stages of the model are identical to those of the previous study, we refer the reader to those methods sections on Scene radiance, Retinal irradiance, and Cone mosaic and absorptions. Unlike in our previous study [50], we did not vary the level of defocus in the Retinal irradiance stage, nor the ratio of different cone types within a cone mosaic.

Stimulus parameters.

The computational model simulates a 2-AFC orientation discrimination task while varying stimulus contrast. The stimulus parameters were chosen to match the baseline condition of the psychophysical study by Himmelberg et al. [15], whose results replicated the psychophysical study used for comparison in our previous computational observer model [13]. That experiment used achromatic oriented Gabor patches, oriented ±15° from vertical, with a spatial frequency of 4 cycles per degree. Stimuli were presented at 4.5° iso-eccentric locations on the cardinal meridians, with a size of 3x3° visual angle (σ = 0.43°) and a duration of 120 ms. The model used identical stimulus parameters, except for the size, duration, and phase randomization of the Gabor. The stimulus simulated by the model was smaller (2x2° visual angle, σ = 0.25°) and shorter (54 ms on, 2-ms sampling), followed by a 164-ms blank period (mean luminance). We simulated these additional time points without a stimulus because photocurrent data are temporally delayed (see the next section on Photocurrent). There was no stimulus onset period, and the phase of the Gabor patches was identical across all trials (90°). Instead of simulating 5 experiments with 200 trials per stimulus orientation as in our previous paper, we simulated one experiment with 5x more trials (i.e., 1,000 trials per stimulus orientation, 2,000 trials in total) to ensure that our behavioral inference stage had a sufficient number of trials to successfully learn and classify stimulus orientation. To ensure psychometric functions with lower and upper asymptotes, stimulus contrasts ranged from 0–100%.
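A minimal sketch of the simulated Gabor stimulus (2x2°, 4 cpd, ±15° from vertical, σ = 0.25°, 90° phase). The pixel resolution and contrast normalization here are assumptions; in the model itself the stimuli are generated as spectral display radiance in ISETBIO rather than as bare images:

```python
import numpy as np

def gabor(ori_deg, fov_deg=2.0, n_pix=128, sf_cpd=4.0,
          sigma_deg=0.25, phase_deg=90.0, contrast=1.0):
    # Oriented sinusoidal carrier under an isotropic Gaussian envelope.
    x = np.linspace(-fov_deg / 2, fov_deg / 2, n_pix)
    xx, yy = np.meshgrid(x, x)
    theta = np.deg2rad(ori_deg)
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotate the carrier axis
    carrier = np.cos(2 * np.pi * sf_cpd * xr + np.deg2rad(phase_deg))
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_deg ** 2))
    return contrast * carrier * envelope           # modulation around mean luminance

cw, ccw = gabor(+15.0), gabor(-15.0)   # the two 2-AFC stimulus classes
```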

Photocurrent.

After the cone isomerization stage, we applied ISETBIO’s built-in osLinear photocurrent functionality, implemented by Cottaris et al. [51], to our cone absorption data (separately for each simulation varying in cone density). This photocurrent stage converts cone excitations into photocurrent (in pA) in a linear manner (in contrast to the osBiophys functionality in ISETBIO, which contains a more complex and computationally intensive biophysical model to calculate cone current).

The phototransduction stage takes the cone absorptions and applies three computations. First, it convolves the cone absorption trials with a linear temporal impulse response specific to L-cones (see Fig 3, panel between the absorption and photocurrent stages). This temporal filter delays and blurs the cone photocurrent in time. Second, photocurrent gain is downregulated with increasing light input, for instance due to increased luminance levels or larger cone apertures. Third, photocurrents are subject to an additional source of white Gaussian noise, whose level was determined by photocurrent measurements [80] (for more details, see Cottaris et al. [51]). This resulted in a 4D array of m rows by n columns by 109 2-ms time points by 2,000 trials.
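The linear filtering and additive noise steps can be sketched as follows; the gamma-shaped impulse response and unit noise level here are stand-ins for illustration, not the calibrated ISETBIO filter or its measured noise spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)          # fixed seed, as in the simulations
dt = 0.002                              # 2-ms sampling
t = np.arange(109) * dt                 # 109 time points (218 ms)

# Stand-in L-cone temporal impulse response (monophasic gamma shape)
tau = 0.025
impulse_response = (t / tau) * np.exp(-t / tau)
impulse_response /= impulse_response.sum()

absorptions = np.zeros_like(t)
absorptions[:27] = 100.0                # stimulus on for the first 54 ms

# Convolution delays and blurs the response in time; noise is additive Gaussian
photocurrent = np.convolve(absorptions, impulse_response)[: t.size]
photocurrent += rng.normal(0.0, 1.0, size=photocurrent.shape)
```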

Because our simulated experiments do not contain any uncertainty about the stimulus location (no fixational eye movements or stimulus phase randomization), we were able to average both cone absorptions and photocurrent data across stimulus time points. We computed mean cone absorption data by taking the average across the first 54 ms (ignoring the time points without a stimulus). For mean cone photocurrent data, we took a weighted mean across all 109 time points (218 ms), using a temporally delayed stimulus time course. This time course was constructed by convolving the stimulus on-off boxcar with the temporal photocurrent filter. This resulted in a 3D array of time-averaged cone photocurrent: m rows by n columns by 2,000 trials.
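The weighted time-averaging step might look like this; the exponential filter and array sizes are illustrative stand-ins for the actual photocurrent impulse response and mosaic dimensions:

```python
import numpy as np

n_t = 109                                   # 218 ms at 2-ms sampling
boxcar = np.zeros(n_t)
boxcar[:27] = 1.0                           # 54-ms stimulus on-period

ir = np.exp(-np.arange(n_t) * 2.0 / 25.0)   # stand-in temporal filter (ms units)
ir /= ir.sum()

weights = np.convolve(boxcar, ir)[:n_t]     # temporally delayed stimulus time course
weights /= weights.sum()

# photocurrent: rows x cols x time x trials -> weighted mean over the time axis
photocurrent = np.zeros((8, 8, n_t, 20))
mean_current = np.tensordot(photocurrent, weights, axes=([2], [0]))
```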

Midget RGC layer.

Prior to the mRGC layer, Gabor stimuli were simulated as spectral scene radiance from a visual display, passed through the simulated human optics, subjected to isomerization and phototransduction by the cones in a rectangular mosaic (2x2° field-of-view), and saved as separate files for each stimulus contrast. The mRGC layer loaded the simulated 2D cone absorption and photocurrent data.

The mRGC layer was built as a rectangular array with the same mosaic size as the cone mosaic (2x2°). Spatial summation by RGC RFs was implemented as 2D Difference of Gaussians (DoG) filters [81,82]. The DoG RF was defined on a support of 31 rows by 31 columns. The DoG size was based on Croner and Kaplan [83]: the standard deviation of the center Gaussian (σc) was 1/3 times the cone spacing, and the standard deviation of the surround Gaussian (σs) was 6 times the center standard deviation. The center/surround weights were 0.64:0.36, hence unbalanced. These parameters create neighboring DoG RFs that overlap at 1.3 standard deviations from their centers, approximating RGC tiling in the human retina based on the overlap of dendritic fields [55]. The support of the DoG filter did not change size. However, because the mRGC array is matched to the cone array, and cone density determines cone spacing (i.e., a lower cone density results in a sparser array), the width of the DoG varies with cone density when expressed in units of degrees of visual angle (i.e., it scales with the number of cones per degree within the cone array).
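A sketch of the DoG RF construction under these parameters, with distances in units of cone spacing; normalizing each Gaussian to unit volume before weighting is an assumption of this sketch:

```python
import numpy as np

def dog_filter(support=31, cone_spacing=1.0):
    # Difference-of-Gaussians RF: center SD = cone spacing / 3,
    # surround SD = 6x center SD, center/surround weights 0.64:0.36.
    sigma_c = cone_spacing / 3.0
    sigma_s = 6.0 * sigma_c
    r = np.arange(support) - support // 2
    xx, yy = np.meshgrid(r, r)
    d2 = xx ** 2 + yy ** 2
    center = np.exp(-d2 / (2 * sigma_c ** 2))
    surround = np.exp(-d2 / (2 * sigma_s ** 2))
    center /= center.sum()        # unit-volume Gaussians before weighting
    surround /= surround.sum()
    return 0.64 * center - 0.36 * surround

rf = dog_filter()
```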

In the primate fovea, there is one ON and one OFF mRGC cell per cone, for a ratio of 2 mRGCs per cone. Unlike in the eye, our model mRGCs are not rectified, hence one of our mRGCs can signal either increments or decrements. For comparison to the literature, we multiply our mRGC counts by 2. We do not model ON- and OFF-center mRGCs separately, but rather consider one linear mRGC (no rectification) as a pair of rectified ON- and OFF-centers. For example, we consider an mRGC layer with no subsampling as having an mRGC:cone ratio of 2:1 (2 mRGCs per cone). The mRGC:cone ratios, counted in this way, were 2:1, 0.5:1, 0.22:1, 0.125:1, 0.08:1. The highest ratio (2:1) is similar to the observed in the fovea and the lowest ratio (0.08:1) is similar to the observed at ~40° eccentricity [64]. We tested a wide range of ratios because the purpose of the modeling was to assess how variation in mRGC density affects performance. The relationships between cone density and performance, or between mRGC:cone ratio and performance, are more robustly assessed by testing a wide range of parameters.

The spatial computations of the mRGC layer were implemented in three stages. In the first stage, the 2D DoG filter was convolved with each time-averaged 2D cone photocurrent frame, separately for each trial. To avoid border artifacts, the photocurrent array was first padded with its mean value, doubling the width and height of the array. The post-convolution array was cropped back to the size of the unpadded cone array.
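The mean-padded convolution might be implemented like this; the FFT-based convolution is a stand-in for the routine used in the model, with padding amounts and cropping as described above:

```python
import numpy as np

def filter_frame(frame, kernel):
    # Pad with the frame mean so the padded array has twice the width and
    # height, convolve with the DoG kernel, and crop back to the original size.
    h, w = frame.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(frame, ((ph, ph), (pw, pw)),
                    mode="constant", constant_values=frame.mean())
    kh, kw = kernel.shape
    shape = (padded.shape[0] + kh - 1, padded.shape[1] + kw - 1)
    conv = np.fft.irfft2(np.fft.rfft2(padded, shape) * np.fft.rfft2(kernel, shape),
                         shape)
    r0, c0 = ph + kh // 2, pw + kw // 2       # undo padding and center the kernel
    return conv[r0:r0 + h, c0:c0 + w]
```

Convolving with a delta kernel returns the original frame, which makes the cropping arithmetic easy to check.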

In the second stage, white Gaussian noise, sampled from a distribution with a standard deviation of 1, was added to each time point of the filtered cone photocurrent response. This noise level was chosen after testing a range of values, which showed that doubling or halving the width of the Gaussian only scaled the absolute performance levels and did not change the effect as a function of cone density or mRGC:cone ratio (for results using standard deviations of 0.5 and 2, see S4 Fig). We added noise to our mRGC responses at this stage because, without it, our mRGC layer would perform a linear transform of the photocurrent responses (linear filtering and linear subsampling), a transform for which a linear support vector machine classifier, given enough training trials, should be able to learn the optimal hyperplane and “untangle” the two stimulus classes. In that case, our model would not predict any loss of information introduced by the mRGC layer, the effect we are most interested in. Had we instead used a limited number of trials, our model would have performed suboptimally and shown differences in classification accuracy; it would then be difficult to distinguish the extent to which those performance differences are caused by spatial variations in mRGCs versus the general ability of the SVM algorithm.

In the third stage, the filtered cone responses were linearly subsampled. This was implemented by resampling each row and column of the filtered cone responses with a sample rate equal to the mRGC:cone ratio. For instance, an array with an mRGC:cone ratio of 0.5:1 samples from every other cone. The mRGCs are centered on the cones, limiting the resampling of filtered cone responses to integer numbers of cones. These spatially filtered and subsampled responses are the mRGC responses, in arbitrary units, as we added an arbitrary level of Gaussian white noise to the filtered photocurrent responses and did not implement a spiking nonlinearity in this transformation.
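For an mRGC:cone ratio of 0.5:1, the subsampling amounts to keeping every other row and column (array sizes below are illustrative):

```python
import numpy as np

filtered = np.zeros((64, 64, 2000))   # rows x cols x trials (illustrative sizes)
ratio = 0.5                           # mRGC:cone ratio of 0.5:1
step = int(round(1.0 / ratio))        # -> 2: sample from every other cone
mrgc_responses = filtered[::step, ::step, :]
```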

Simulated experiments.

A single simulated experiment had a total of 64,000 trials: 2,000 trials per contrast level, 1,000 clockwise and 1,000 counter-clockwise. Stimulus contrast was systematically varied from 0 to 100% Michelson contrast, using 32 contrast levels. The cone mosaic was identical across contrast levels, containing only L-cones with fixed cone density and cone spacing. There were no eye movements. Cone absorption and photocurrent simulations used a fixed random number generator seed. Data from a single contrast level were represented as a 4D array (m rows by n columns by 109 time points by 2,000 trials). The size of the m by n frame depended on the defined subsampling ratio used for the mRGC layer.

This single experiment was repeated for 17 different cone mosaics, which varied systematically in cone density and spacing. The cone density variation was implemented by simulating cone mosaics at different eccentricities, ranging from a density as high as that at 1° eccentricity (4.9 x 10^3 cells/deg2) to one as low as that at 40° eccentricity on the horizontal meridian (0.47 x 10^3 cells/deg2). This resulted in a total of 1,088,000 simulated trials (64,000 trials x 17 cone densities).

Simulated experiments for each of the 17 different cone densities were averaged across time, resulting in a 3D array (m rows by n columns by 2,000 trials). In the mRGC layer, each 3D array was spatially subsampled by 5 different mRGC:cone ratios. This resulted in a total of 5,440,000 simulated trials (64,000 trials x 17 cone densities x 5 ratios).

Inference engines.

The simulated trials were fed into an inference engine. The task of the inference engine was to classify whether a trial contained a clockwise or counter-clockwise oriented Gabor stimulus, given the cone or mRGC responses. Classification was performed separately for every 2,000 trials, i.e., separately for each contrast level, cone density, and mRGC:cone ratio.

We used a linear SVM classifier as implemented in MATLAB’s fitcsvm, with 10-fold cross-validation and built-in z-scoring. This procedure is identical to our previously published model [50]. In contrast to our previous model implementation, we did not transform each 2D frame of mRGC responses to the Fourier domain and did not discard phase information prior to classification, because the stimulus was static and contained neither uncertainty about stimulus location nor simulated fixational eye movements. The mRGC responses were concatenated across space, resulting in a matrix of 2,000 trials by mRGC responses. The order of the trials within this matrix was randomized before it was fed into the linear SVM classifier with a set of stimulus labels. The classifier trained its weights on 90% of the trials and was tested on the left-out 10%. This resulted in an accuracy (percent correct) for each contrast level, cone density, and mRGC:cone ratio.

Accuracy data for a single simulated experiment were fitted with a Weibull function to extract the contrast threshold. The threshold was defined as the contrast at which the Weibull function reached a criterion of 1 − 0.5e^−1, approximately 80% correct, given that chance is 50% for a 2-AFC task; the slope was fixed at β = 3.
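One common Weibull parameterization consistent with this definition (chance level 0.5, slope β = 3, and 1 − 0.5e^−1 ≈ 81.6% correct at the threshold contrast); the exact parameterization used in the fitting code is not specified here, so this is a sketch:

```python
import numpy as np

def weibull(c, threshold, beta=3.0, chance=0.5):
    # Proportion correct for a 2-AFC task; at c == threshold the function
    # evaluates to 1 - 0.5 * exp(-1), roughly 80% correct.
    return 1.0 - (1.0 - chance) * np.exp(-(c / threshold) ** beta)
```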

Comparing model performance to behavior.

To quantify the contribution of the spatial filtering by mRGCs, we compared the model performance to the behavior reported by Himmelberg et al. [15]. To do so, we extracted the mean contrast thresholds across all simulated cone densities and mRGC:cone ratios. This resulted in a matrix of 17 cone densities x 5 mRGC:cone ratios. We placed these data points in a 3D coordinate space: log cone density (x-dimension) by log mRGC:cone ratio (y-dimension) by log contrast threshold (z-dimension). We fitted a 3D mesh using a regression with locally weighted scatterplot smoothing with MATLAB’s fit.m (using the LOWESS fit type with a span of 0.2, built-in normalization, and the ‘bisquare’ robust fitting option). This 3D mesh fit is used to visualize the effect of cone density at a single mRGC:cone ratio by extracting a single curve from the mesh at that particular ratio (Fig 6A). We then used the 3D mesh fit to predict contrast thresholds for the four cardinal meridians at 4.5° eccentricity, evaluating the model at the four observed [cone density, mRGC:cone ratio] coordinates reported by Curcio et al. [9] and Watson [64].

Predicted thresholds for the model up to cone isomerizations and up to photocurrent were computed using the contrast thresholds for each cone density. These data were fitted separately per model stage with the same 3D mesh fit as the mRGC responses, using a dummy variable for the mRGC:cone ratio. This fit was used to predict thresholds for each model stage, given the observed cone densities at the four cardinal meridians at 4.5° eccentricity.

Contrast thresholds were converted into contrast sensitivity by taking the reciprocal. Nasal and temporal retina were averaged to represent the horizontal meridian. Because cone density can vary dramatically across observers [141,142], we computed error bars that represent the amount of variability in predicted sensitivity based on a difference in underlying cone density.

The upper and lower bounds of the error bars in the cone and mRGC model predictions were defined by assuming that our estimates of cone density along the meridians are imperfect. Specifically, we assumed that the measured asymmetries might be off by as much as a factor of 2. So, for example, if the reported density for the horizontal meridian is 20% above the mean, and that for the vertical meridian is 20% below the mean, we considered the possibility that they were in fact 40% above or below the mean, or 10% above or below the mean.
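Concretely, the factor-of-2 assumption can be written as a small helper (a sketch matching the worked example above; names and numbers are illustrative):

```python
def density_bounds(mean_density, pct_offset):
    # pct_offset: reported deviation from the mean, in percent (e.g., +20 or -20).
    # The true deviation is assumed to be at most twice and at least half as large.
    half = mean_density * (1.0 + 0.5 * pct_offset / 100.0)
    double = mean_density * (1.0 + 2.0 * pct_offset / 100.0)
    return min(half, double), max(half, double)

# density_bounds(100.0, 20.0) gives candidate densities of 110 and 140;
# density_bounds(100.0, -20.0) gives 60 and 90.
```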

Supporting information

S1 Fig. Polar angle asymmetries for cone density, mRGC density and V1 surface area computed from different publicly available datasets.

Asymmetries are in percent change, calculated as the difference between the horizontal and vertical meridians divided by their mean (left column), or the difference between the upper and lower vertical meridians divided by their mean (right column). Positive asymmetries would positively correlate with observed differences in behavior. (Top row) Cone data are from either Curcio et al. [9] (black lines) or Song et al. [10] (orange line), computed with either the ISETBIO (solid lines) or rgcDisplacementMap toolbox (dotted lines). (Middle row) Midget RGC RF data are computed using the computational model by Watson [64], implemented in the ISETBIO toolbox (solid black line), or by Barnett and Aguirre [76], implemented in the rgcDisplacementMap toolbox (dotted black line). (Bottom row) V1 surface area is computed from the Human Connectome Project 7T retinotopy dataset (n = 163), using the analyzed dataset by Benson et al. [78,79]. Surface areas are defined as ±10° (black) and ±20° (red) wedge ROIs from 1–6° eccentricity around the meridians, avoiding the central one degree and the stimulus border (7–8°), as those data can be noisy. Note that the x-axis is truncated because cortical measurements are limited by the field-of-view of the fMRI experiment. Data are fit with a 2nd-degree polynomial: R2 = 0.48 (±10°) and R2 = 0.89 (±20°) for horizontal-vertical asymmetries, and R2 = 0.94 (±10°) and R2 = 0.72 (±20°) for vertical-meridian asymmetries.


S2 Fig. Transformation ratios relative to horizontal visual field meridian.

Relative ratios are computed by dividing the lower or upper visual field transformation ratio by the horizontal visual field transformation ratio (both from panel B), for cone:mRGC ratios (left panel) and mRGC:V1 CMF ratios (right panel).


S3 Fig. Classifier performance varying with cone density, separately for each mRGC:cone ratio.

Linear SVM classifier accuracy is computed for each contrast level in a simulated experiment with 1,000 clockwise and 1,000 counter-clockwise trials. Average accuracy data are fitted with a Weibull function.


S4 Fig. The effect of noise in mRGC layer on contrast thresholds as a function of cone density, separately for each mRGC:cone ratio.

(A) Contrast thresholds as a function of cone density when adding white noise following a Gaussian distribution with a standard deviation of 0.5 (left panel), 1 (middle panel), 2 (right panel). Data are fit with a locally weighted regression using the same procedure as the fit shown in Fig 6. Middle panel (1 std) is identical to Fig 6A. (B) Same data as panel A, visualizing the three mRGC noise levels separately per mRGC:cone ratio. Decreasing opacity of fits and data correspond to decreasing levels of noise.



We thank Michael Landy and Brian Wandell for their useful comments.


  1. 1. Wertheim T. Über die indirekte Sehschärfe. Z Psychol, Physiol. 1894;7:172–83.
  2. 2. Ludvigh E. Extrafoveal acuity as measured with Snellen test-letters. American Journal of Ophtalmology. 1941;24(3):303–10.
  3. 3. Polyak SL. The retina. Chicago, Illinois: University of Chicago Press; 1941.
  4. 4. Strasburger H, Rentschler I, Juttner M. Peripheral vision and pattern recognition: a review. J Vis. 2011;11(5):13. Epub 2011/12/31. pmid:22207654.
  5. 5. Hering E. Beiträge zur Physiologie [Contributions to physiology]. Leipzig: Wilhelm Engelmann; 1861.
  6. 6. von Helmholtz H. Handbuch der physiologischen Optik. Hamburg: Leopold Voss; 1896.
  7. 7. Østerberg GA. Topography of the layer of rods and cones in the human retina. Acta Ophthalmologica. 1935;13(6):1–102.
  8. 8. Curcio CA, Sloan KR Jr., Packer O, Hendrickson AE, Kalina RE. Distribution of cones in human and monkey retina: individual variability and radial asymmetry. Science. 1987;236(4801):579–82. Epub 1987/05/01. pmid:3576186.
  9. 9. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human photoreceptor topography. J Comp Neurol. 1990;292(4):497–523. Epub 1990/02/22. pmid:2324310.
  10. 10. Song H, Chui TY, Zhong Z, Elsner AE, Burns SA. Variation of cone photoreceptor packing density with retinal eccentricity and age. Invest Ophthalmol Vis Sci. 2011;52(10):7376–84. Epub 2011/07/05. pmid:21724911; PubMed Central PMCID: PMC3183974.
  11. 11. Mackeben M. Sustained focal attention and peripheral letter recognition. Spat Vis. 1999;12(1):51–72. Epub 1999/04/09. pmid:10195388.
  12. 12. Carrasco M, Talgar CP, Cameron EL. Characterizing visual performance fields: effects of transient covert attention, spatial frequency, eccentricity, task and set size. Spat Vis. 2001;15(1):61–75. Epub 2002/03/15. pmid:11893125; PubMed Central PMCID: PMC4332623.
  13. 13. Cameron EL, Tai JC, Carrasco M. Covert attention affects the psychometric function of contrast sensitivity. Vision Res. 2002;42(8):949–67. Epub 2002/04/06. pmid:11934448.
  14. 14. Corbett JE, Carrasco M. Visual performance fields: frames of reference. PLoS One. 2011;6(9):e24470. Epub 2011/09/21. pmid:21931727; PubMed Central PMCID: PMC3169603.
  15. 15. Himmelberg MM, Winawer J, Carrasco M. Stimulus-dependent contrast sensitivity asymmetries around the visual field. J Vis. 2020;20(9):18. Epub 2020/09/29. pmid:32986805; PubMed Central PMCID: PMC7533736.
  16. 16. Barbot A, Xue S, Carrasco M. Asymmetries in visual acuity around the visual field. J Vis. 2021;21(1):2. Epub 2021/01/05. pmid:33393963; PubMed Central PMCID: PMC7794272.
  17. 17. Talgar CP, Carrasco M. Vertical meridian asymmetry in spatial resolution: visual and attentional factors. Psychon Bull Rev. 2002;9(4):714–22. Epub 2003/03/05. pmid:12613674.
  18. 18. Fuller S, Carrasco M. Perceptual consequences of visual performance fields: the case of the line motion illusion. J Vis. 2009;9(4):13 1–7. Epub 2009/09/18. pmid:19757922; PubMed Central PMCID: PMC3703960.
  19. 19. Rovamo J, Virsu V. An estimation and application of the human cortical magnification factor. Exp Brain Res. 1979;37(3):495–510. Epub 1979/01/01. pmid:520439.
  20. 20. Virsu V, Rovamo J. Visual resolution, contrast sensitivity, and the cortical magnification factor. Exp Brain Res. 1979;37(3):475–94. Epub 1979/01/01. pmid:520438.
  21. 21. Kroon JN, Rijsdijk JP, van der Wildt GJ. Peripheral contrast sensitivity for sine-wave gratings and single periods. Vision Res. 1980;20(3):243–52. Epub 1980/01/01. pmid:7385598.
  22. 22. Rijsdijk JP, Kroon JN, van der Wildt GJ. Contrast sensitivity as a function of position on the retina. Vision Res. 1980;20(3):235–41. Epub 1980/01/01. pmid:7385597.
  23. 23. Robson JG, Graham N. Probability summation and regional variation in contrast sensitivity across the visual field. Vision Res. 1981;21(3):409–18. Epub 1981/01/01. pmid:7269319.
  24. 24. Lundh BL, Lennerstrand G, Derefeldt G. Central and peripheral normal contrast sensitivity for static and dynamic sinusoidal gratings. Acta Ophthalmologica. 1983;61(2):171–82. Epub 1983/04/01. pmid:6880630
  25. 25. Skrandies W. Human contrast sensitivity: regional retinal differences. Hum Neurobiol. 1985;4(2):97–9. Epub 1985/01/01. pmid:4030428.
  26. 26. Seiple W, Holopigian K, Szlyk JP, Wu C. Multidimensional visual field maps: relationships among local psychophysical and local electrophysiological measures. J Rehabil Res Dev. 2004;41(3A):359–72. Epub 2004/11/16. pmid:15543452.
  27. 27. Silva MF, Maia-Lopes S, Mateus C, Guerreiro M, Sampaio J, Faria P, et al. Retinal and cortical patterns of spatial anisotropy in contrast sensitivity tasks. Vision Res. 2008;48(1):127–35. Epub 2007/12/11. pmid:18067943.
  28. 28. Silva MF, Mateus C, Reis A, Nunes S, Fonseca P, Castelo-Branco M. Asymmetry of visual sensory mechanisms: electrophysiological, structural, and psychophysical evidences. J Vis. 2010;10(6):26. Epub 2010/10/05. pmid:20884575.
  29. 29. Abrams J, Nizam A, Carrasco M. Isoeccentric locations are not equivalent: the extent of the vertical meridian asymmetry. Vision Res. 2012;52(1):70–8. Epub 2011/11/17. pmid:22086075; PubMed Central PMCID: PMC3345502.
  30. 30. Baldwin AS, Meese TS, Baker DH. The attenuation surface for contrast sensitivity has the form of a witch’s hat within the central visual field. J Vis. 2012;12(11). Epub 2012/10/30. pmid:23104816.
  31. 31. Silva MF, d’Almeida OC, Oliveiros B, Mateus C, Castelo-Branco M. Development and aging of visual hemifield asymmetries in contrast sensitivity. J Vis. 2014;14(12). Epub 2014/10/19. pmid:25326605.
  32. 32. Altpeter E, Mackeben M, Trauzettel-Klosinski S. The importance of sustained attention for patients with maculopathies. Vision Res. 2000;40(10–12):1539–47. Epub 2000/05/02. pmid:10788657.
  33. 33. Carrasco M, Williams PE, Yeshurun Y. Covert attention increases spatial resolution with or without masks: support for signal enhancement. J Vis. 2002;2(6):467–79. Epub 2003/04/08. pmid:12678645.
  34. 34. Montaser-Kouhsari L, Carrasco M. Perceptual asymmetries are preserved in short-term memory tasks. Atten Percept Psychophys. 2009;71(8):1782–92. Epub 2009/11/26. pmid:19933562; PubMed Central PMCID: PMC3697833.
  35. 35. Fuller S, Rodriguez RZ, Carrasco M. Apparent contrast differs across the vertical meridian: visual and attentional factors. J Vis. 2008;8(1):16 1-. Epub 2008/03/06. pmid:18318619; PubMed Central PMCID: PMC2789458.
  36. Chaikin JD, Corbin HH, Volkmann J. Mapping a field of short-time visual search. Science. 1962;138(3547):1327–8. Epub 1962/12/21. pmid:14019856.
  37. Krose BJ, Julesz B. The control and speed of shifts of attention. Vision Res. 1989;29(11):1607–19. Epub 1989/01/01. pmid:2635484.
  38. Carrasco M, Giordano AM, McElree B. Temporal performance fields: visual and attentional factors. Vision Res. 2004;44(12):1351–65. Epub 2004/04/07. pmid:15066395.
  39. Rezec AA, Dobkins KR. Attentional weighting: a possible account of visual field asymmetries in visual search? Spat Vis. 2004;17(4–5):269–93. Epub 2004/11/24. pmid:15559106.
  40. Pretorius LL, Hanekom JJ. An accurate method for determining the conspicuity area associated with visual targets. Hum Factors. 2006;48(4):774–84. Epub 2007/01/24. pmid:17240724.
  41. Kristjansson A, Sigurdardottir HM. On the benefits of transient attention across the visual field. Perception. 2008;37(5):747–64. Epub 2008/07/09. pmid:18605148.
  42. Najemnik J, Geisler WS. Eye movement statistics in humans are consistent with an optimal search strategy. J Vis. 2008;8(3):4.1–14. Epub 2008/05/20. pmid:18484810; PubMed Central PMCID: PMC2868380.
  43. Najemnik J, Geisler WS. Simple summation rule for optimal fixation selection in visual search. Vision Res. 2009;49(10):1286–94. Epub 2009/01/14. pmid:19138697.
  44. Fortenbaugh FC, Silver MA, Robertson LC. Individual differences in visual field shape modulate the effects of attention on the lower visual field advantage in crowding. J Vis. 2015;15(2). Epub 2015/03/12. pmid:25761337; PubMed Central PMCID: PMC4327314.
  45. Toet A, Levi DM. The two-dimensional shape of spatial interaction zones in the parafovea. Vision Res. 1992;32(7):1349–57. Epub 1992/07/01. pmid:1455707.
  46. He S, Cavanagh P, Intriligator J. Attentional resolution and the locus of visual awareness. Nature. 1996;383(6598):334–7. Epub 1996/09/26. pmid:8848045.
  47. Greenwood JA, Szinte M, Sayim B, Cavanagh P. Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci U S A. 2017;114(17):E3573–E82. Epub 2017/04/12. pmid:28396415; PubMed Central PMCID: PMC5410794.
  48. Roberts M, Cymerman R, Smith RT, Kiorpes L, Carrasco M. Covert spatial attention is functionally intact in amblyopic human adults. J Vis. 2016;16(15):30. Epub 2016/12/30. pmid:28033433; PubMed Central PMCID: PMC5215291.
  49. Purokayastha S, Roberts M, Carrasco M. Voluntary attention improves performance similarly around the visual field. Atten Percept Psychophys. 2021;83(7):2784–94. Epub 2021/05/27. pmid:34036535; PubMed Central PMCID: PMC8514247.
  50. Kupers ER, Carrasco M, Winawer J. Modeling visual performance differences ’around’ the visual field: A computational observer approach. PLoS Comput Biol. 2019;15(5):e1007063. Epub 2019/05/28. pmid:31125331; PubMed Central PMCID: PMC6553792.
  51. Cottaris NP, Wandell BA, Rieke F, Brainard DH. A computational observer model of spatial contrast sensitivity: Effects of photocurrent encoding, fixational eye movements, and inference engine. J Vis. 2020;20(7):17. Epub 2020/07/22. pmid:32692826; PubMed Central PMCID: PMC7424933.
  52. Drasdo N. Receptive field densities of the ganglion cells of the human retina. Vision Res. 1989;29(8):985–8. Epub 1989/01/01. pmid:2629213.
  53. Curcio CA, Allen KA. Topography of ganglion cells in human retina. J Comp Neurol. 1990;300(1):5–25. Epub 1990/10/01. pmid:2229487.
  54. Dacey DM, Petersen MR. Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proc Natl Acad Sci U S A. 1992;89(20):9666–70. Epub 1992/10/15. pmid:1409680; PubMed Central PMCID: PMC50193.
  55. Dacey DM. The mosaic of midget ganglion cells in the human retina. J Neurosci. 1993;13(12):5334–55. Epub 1993/12/01. pmid:8254378; PubMed Central PMCID: PMC6576399.
  56. Sjostrand J, Popovic Z, Conradi N, Marshall J. Morphometric study of the displacement of retinal ganglion cells subserving cones within the human fovea. Graefes Arch Clin Exp Ophthalmol. 1999;237(12):1014–23. Epub 2000/02/02. pmid:10654171.
  57. Drasdo N, Millican CL, Katholi CR, Curcio CA. The length of Henle fibers in the human retina and a model of ganglion receptive field density in the visual field. Vision Res. 2007;47(22):2901–11. Epub 2007/02/27. pmid:17320143; PubMed Central PMCID: PMC2077907.
  58. Liu Z, Kurokawa K, Zhang F, Lee JJ, Miller DT. Imaging and quantifying ganglion cells and other transparent neurons in the living human retina. Proc Natl Acad Sci U S A. 2017;114(48):12803–8. Epub 2017/11/16. pmid:29138314; PubMed Central PMCID: PMC5715765.
  59. Webb SV, Kaas JH. The sizes and distribution of ganglion cells in the retina of the owl monkey, Aotus trivirgatus. Vision Res. 1976;16(11):1247–54. Epub 1976/01/01. pmid:827113.
  60. Perry VH, Oehler R, Cowey A. Retinal ganglion cells that project to the dorsal lateral geniculate nucleus in the macaque monkey. Neuroscience. 1984;12(4):1101–23. Epub 1984/08/01. pmid:6483193.
  61. Perry VH, Cowey A. The ganglion cell and cone distributions in the monkey’s retina: implications for central magnification factors. Vision Res. 1985;25(12):1795–810. Epub 1985/01/01. pmid:3832605.
  62. Wassle H, Grunert U, Rohrenbeck J, Boycott BB. Cortical magnification factor and the ganglion cell density of the primate retina. Nature. 1989;341(6243):643–6. Epub 1989/10/19. pmid:2797190.
  63. Leventhal AG, Rodieck RW, Dreher B. Retinal ganglion cell classes in the Old World monkey: morphology and central projections. Science. 1981;213(4512):1139–42. Epub 1981/09/04. pmid:7268423.
  64. Watson AB. A formula for human retinal ganglion cell receptive field density as a function of visual field location. J Vis. 2014;14(7). Epub 2014/07/02. pmid:24982468.
  65. Farrell JE, Winawer J, Brainard DH, Wandell B. Modeling visible differences: The computational observer model. SID Symposium Digest of Technical Papers. 2014. p. 352–6.
  66. Brainard D, Jiang H, Cottaris NP, Rieke F, Chichilnisky EJ, Farrell JE, et al. ISETBIO: Computational tools for modeling early human vision. Imaging and Applied Optics 2015; Arlington, Virginia: Optical Society of America; 2015.
  67. Cottaris NP, Jiang H, Ding X, Wandell BA, Brainard DH. A computational-observer model of spatial contrast sensitivity: Effects of wave-front-based optics, cone-mosaic structure, and inference engine. J Vis. 2019;19(4):8. Epub 2019/04/04. pmid:30943530.
  68. Horton JC, Hoyt WF. The representation of the visual field in human striate cortex. A revision of the classic Holmes map. Arch Ophthalmol. 1991;109(6):816–24. Epub 1991/06/01. pmid:2043069.
  69. Polyak S. The main afferent fiber systems of the cerebral cortex in primates. Berkeley: University of California; 1932.
  70. Polyak S. A contribution to the cerebral representation of the retina. J Comp Neurol. 1933;57(3):541–617.
  71. Pointer JS. The cortical magnification factor and photopic vision. Biol Rev Camb Philos Soc. 1986;61(2):97–119. Epub 1986/05/01. pmid:3527286.
  72. Tolhurst DJ, Ling L. Magnification factors and the organization of the human striate cortex. Hum Neurobiol. 1988;6(4):247–54. Epub 1988/01/01. pmid:2832355.
  73. Adams DL, Horton JC. A precise retinotopic map of primate striate cortex generated from the representation of angioscotomas. J Neurosci. 2003;23(9):3771–89. Epub 2003/05/09. pmid:12736348; PubMed Central PMCID: PMC6742198.
  74. Myerson J, Manis PB, Miezin FM, Allman JM. Magnification in striate cortex and retinal ganglion cell layer of owl monkey: a quantitative comparison. Science. 1977;198(4319):855–7. Epub 1977/11/25. pmid:411172.
  75. Van Essen DC, Newsome WT, Maunsell JH. The visual field representation in striate cortex of the macaque monkey: asymmetries, anisotropies, and individual variability. Vision Res. 1984;24(5):429–48. Epub 1984/01/01. pmid:6740964.
  76. Barnett M, Aguirre GK. A spatial model of human retinal cell densities and solution for retinal ganglion cell displacement. Vision Sciences Society Annual Meeting; St. Pete Beach, FL, USA: Journal of Vision; 2018. p. 23.
  77. Fahle M, Schmid M. Naso-temporal asymmetry of visual perception and of the visual cortex. Vision Res. 1988;28(2):293–300. Epub 1988/01/01. pmid:3414016.
  78. Benson NC, Kupers ER, Barbot A, Carrasco M, Winawer J. Cortical magnification in human visual cortex parallels task performance around the visual field. Elife. 2021;10. Epub 2021/08/04. pmid:34342581; PubMed Central PMCID: PMC8378846.
  79. Benson NC, Jamison KW, Arcaro MJ, Vu AT, Glasser MF, Coalson TS, et al. The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis. J Vis. 2018;18(13):23. Epub 2018/12/29. pmid:30593068; PubMed Central PMCID: PMC6314247.
  80. Angueyra JM, Rieke F. Origin and effect of phototransduction noise in primate cone photoreceptors. Nat Neurosci. 2013;16(11):1692–700. Epub 2013/10/08. pmid:24097042; PubMed Central PMCID: PMC3815624.
  81. Rodieck RW. Quantitative analysis of cat retinal ganglion cell response to visual stimuli. Vision Res. 1965;5(11):583–601. Epub 1965/12/01. pmid:5862581.
  82. Enroth-Cugell C, Robson JG. The contrast sensitivity of retinal ganglion cells of the cat. J Physiol. 1966;187(3):517–52. Epub 1966/12/01. pmid:16783910; PubMed Central PMCID: PMC1395960.
  83. Croner LJ, Kaplan E. Receptive fields of P and M ganglion cells across the primate retina. Vision Res. 1995;35(1):7–24. Epub 1995/01/01. pmid:7839612.
  84. Green DG. Regional variations in the visual acuity for interference fringes on the retina. J Physiol. 1970;207(2):351–6. Epub 1970/04/01. pmid:5499023; PubMed Central PMCID: PMC1348710.
  85. Williams DR. Aliasing in human foveal vision. Vision Res. 1985;25(2):195–205. Epub 1985/01/01. pmid:4013088.
  86. Coletta NJ, Williams DR. Psychophysical estimate of extrafoveal cone spacing. J Opt Soc Am A. 1987;4(8):1503–13. Epub 1987/08/01. pmid:3625330.
  87. Williams DR, Coletta NJ. Cone spacing and the visual resolution limit. J Opt Soc Am A. 1987;4(8):1514–23. Epub 1987/08/01. pmid:3625331.
  88. Anderson SJ, Hess RF. Post-receptoral undersampling in normal human peripheral vision. Vision Res. 1990;30(10):1507–15. Epub 1990/01/01. pmid:2247960.
  89. Anderson SJ, Mullen KT, Hess RF. Human peripheral spatial resolution for achromatic and chromatic stimuli: limits imposed by optical and retinal factors. J Physiol. 1991;442:47–64. Epub 1991/10/01. pmid:1798037; PubMed Central PMCID: PMC1179877.
  90. Banks MS, Sekuler AB, Anderson SJ. Peripheral spatial vision: limits imposed by optics, photoreceptors, and receptor pooling. J Opt Soc Am A. 1991;8(11):1775–87. Epub 1991/11/01. pmid:1744774.
  91. Sjostrand J, Olsson V, Popovic Z, Conradi N. Quantitative estimations of foveal and extra-foveal retinal circuitry in humans. Vision Res. 1999;39(18):2987–98. Epub 2000/02/09. pmid:10664798.
  92. Popovic Z, Sjostrand J. Resolution, separation of retinal ganglion cells, and cortical magnification in humans. Vision Res. 2001;41(10–11):1313–9. Epub 2001/04/27. pmid:11322976.
  93. Popovic Z, Sjostrand J. The relation between resolution measurements and numbers of retinal ganglion cells in the same human subjects. Vision Res. 2005;45(17):2331–8. Epub 2005/06/01. pmid:15924946.
  94. Wilkinson MO, Anderson RS, Bradley A, Thibos LN. Neural bandwidth of veridical perception across the visual field. J Vis. 2016;16(2):1. Epub 2016/01/30. pmid:26824638; PubMed Central PMCID: PMC5833322.
  95. Anstis SM. Letter: A chart demonstrating variations in acuity with retinal position. Vision Res. 1974;14(7):589–92. Epub 1974/07/01. pmid:4419807.
  96. Duncan RO, Boynton GM. Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron. 2003;38(4):659–71. Epub 2003/05/27. pmid:12765616.
  97. Carrasco M, Frieder KS. Cortical magnification neutralizes the eccentricity effect in visual search. Vision Res. 1997;37(1):63–82. Epub 1997/01/01. pmid:9068831.
  98. Carrasco M, McLean TL, Katz SM, Frieder KS. Feature asymmetries in visual search: effects of display duration, target eccentricity, orientation and spatial frequency. Vision Res. 1998;38(3):347–74. Epub 1998/04/16. pmid:9536360.
  99. Schwarzkopf DS, Song C, Rees G. The surface area of human V1 predicts the subjective experience of object size. Nat Neurosci. 2011;14(1):28–30. Epub 2010/12/07. pmid:21131954; PubMed Central PMCID: PMC3012031.
  100. Liu T, Heeger DJ, Carrasco M. Neural correlates of the visual vertical meridian asymmetry. J Vis. 2006;6(11):1294–306. Epub 2007/01/11. pmid:17209736; PubMed Central PMCID: PMC1864963.
  101. Benson NC, Butt OH, Datta R, Radoeva PD, Brainard DH, Aguirre GK. The retinotopic organization of striate cortex is well predicted by surface topology. Curr Biol. 2012;22(21):2081–5. Epub 2012/10/09. pmid:23041195; PubMed Central PMCID: PMC3494819.
  102. Silva MF, Brascamp JW, Ferreira S, Castelo-Branco M, Dumoulin SO, Harvey BM. Radial asymmetries in population receptive field size and cortical magnification factor in early visual cortex. Neuroimage. 2018;167:41–52. Epub 2017/11/21. pmid:29155078.
  103. Himmelberg MM, Winawer J, Carrasco M. Linking contrast sensitivity to cortical magnification in human primary visual cortex. bioRxiv. 2021;10.04.463138.
  104. Daniel PM, Whitteridge D. The representation of the visual field on the cerebral cortex in monkeys. J Physiol. 1961;159:203–21. Epub 1961/12/01. pmid:13883391; PubMed Central PMCID: PMC1359500.
  105. Rolls ET, Cowey A. Topography of the retina and striate cortex and its relationship to visual acuity in rhesus monkeys and squirrel monkeys. Exp Brain Res. 1970;10(3):298–310. Epub 1970/01/01. pmid:4986000.
  106. Braccini C, Gambardella G, Sandini G, Tagliasco V. A model of the early stages of the human visual system: functional and topological transformations performed in the peripheral visual field. Biol Cybern. 1982;44(1):47–58. Epub 1982/01/01. pmid:7093369.
  107. Schutt HH, Wichmann FA. An image-computable psychophysical spatial vision model. J Vis. 2017;17(12):12. Epub 2017/10/21. pmid:29053781.
  108. Bradley C, Abrams J, Geisler WS. Retina-V1 model of detectability across the visual field. J Vis. 2014;14(12). Epub 2014/10/23. pmid:25336179; PubMed Central PMCID: PMC4204678.
  109. Watson AB, Ahumada AJ. Letter identification and the neural image classifier. J Vis. 2015;15(2). Epub 2015/03/12. pmid:25761333.
  110. Kwon M, Liu R. Linkage between retinal ganglion cell density and the nonuniform spatial integration across the visual field. Proc Natl Acad Sci U S A. 2019;116(9):3827–36. Epub 2019/02/10. pmid:30737290; PubMed Central PMCID: PMC6397585.
  111. Himmelberg MM, Kurzawski JW, Benson NC, Pelli DG, Carrasco M, Winawer J. Cross-dataset reproducibility of human retinotopic maps. Neuroimage. 2021;244:118609. Epub 2021/09/29. pmid:34582948; PubMed Central PMCID: PMC8560578.
  112. Perry VH, Cowey A. The lengths of the fibres of Henle in the retina of macaque monkeys: implications for vision. Neuroscience. 1988;25(1):225–36. Epub 1988/04/01. pmid:3393279.
  113. Tootell RB, Switkes E, Silverman MS, Hamilton SL. Functional anatomy of macaque striate cortex. II. Retinotopic organization. J Neurosci. 1988;8(5):1531–68. Epub 1988/05/01. pmid:3367210; PubMed Central PMCID: PMC6569212.
  114. Andrews TJ, Halpern SD, Purves D. Correlated size variations in human visual cortex, lateral geniculate nucleus, and optic tract. J Neurosci. 1997;17(8):2859–68. Epub 1997/04/15. pmid:9092607; PubMed Central PMCID: PMC6573115.
  115. Kastner S, Schneider KA, Wunderlich K. Beyond a relay nucleus: neuroimaging views on the human LGN. Prog Brain Res. 2006;155:125–43. Epub 2006/10/10. pmid:17027384.
  116. Malpeli JG, Baker FH. The representation of the visual field in the lateral geniculate nucleus of Macaca mulatta. J Comp Neurol. 1975;161(4):569–94. Epub 1975/06/15. pmid:1133232.
  117. Connolly M, Van Essen D. The representation of the visual field in parvicellular and magnocellular layers of the lateral geniculate nucleus in the macaque monkey. J Comp Neurol. 1984;226(4):544–64. Epub 1984/07/10. pmid:6747034.
  118. Schein SJ, de Monasterio FM. Mapping of retinal and geniculate neurons onto striate cortex of macaque. J Neurosci. 1987;7(4):996–1009. Epub 1987/04/01. pmid:3033167; PubMed Central PMCID: PMC6568992.
  119. Azzopardi P, Cowey A. Preferential representation of the fovea in the primary visual cortex. Nature. 1993;361(6414):719–21. Epub 1993/02/25. pmid:7680108.
  120. Malpeli JG, Lee D, Baker FH. Laminar and retinotopic organization of the macaque lateral geniculate nucleus: magnocellular and parvocellular magnification functions. J Comp Neurol. 1996;375(3):363–77. Epub 1996/11/18. pmid:8915836.
  121. Shriki O, Kohn A, Shamir M. Fast coding of orientation in primary visual cortex. PLoS Comput Biol. 2012;8(6):e1002536. Epub 2012/06/22. pmid:22719237; PubMed Central PMCID: PMC3375217.
  122. Barlow HB. Single units and sensation: a neuron doctrine for perceptual psychology? Perception. 1972;1(4):371–94. Epub 1972/01/01. pmid:4377168.
  123. Hochstein S, Shapley RM. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells. J Physiol. 1976;262(2):265–84. Epub 1976/11/01. pmid:994040; PubMed Central PMCID: PMC1307643.
  124. Demb JB, Haarsma L, Freed MA, Sterling P. Functional circuitry of the retinal ganglion cell’s nonlinear receptive field. J Neurosci. 1999;19(22):9756–67. Epub 1999/11/13. pmid:10559385; PubMed Central PMCID: PMC6782950.
  125. Schwartz GW, Okawa H, Dunn FA, Morgan JL, Kerschensteiner D, Wong RO, et al. The spatial structure of a nonlinear receptive field. Nat Neurosci. 2012;15(11):1572–80. Epub 2012/09/25. pmid:23001060; PubMed Central PMCID: PMC3517818.
  126. Shah NP, Brackbill N, Rhoades C, Kling A, Goetz G, Litke AM, et al. Inference of nonlinear receptive field subunits with spike-triggered clustering. Elife. 2020;9. Epub 2020/03/10. pmid:32149600; PubMed Central PMCID: PMC7062463.
  127. Schwartz G, Rieke F. Perspectives on: information and coding in mammalian sensory physiology: nonlinear spatial encoding by retinal ganglion cells: when 1 + 1 not equal 2. J Gen Physiol. 2011;138(3):283–90. Epub 2011/08/31. pmid:21875977; PubMed Central PMCID: PMC3171084.
  128. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454(7207):995–9. Epub 2008/07/25. pmid:18650810; PubMed Central PMCID: PMC2684455.
  129. Ala-Laurila P, Greschner M, Chichilnisky EJ, Rieke F. Cone photoreceptor contributions to noise and correlations in the retinal output. Nat Neurosci. 2011;14(10):1309–16. Epub 2011/09/20. pmid:21926983; PubMed Central PMCID: PMC3183110.
  130. Shapley R, Perry VH. Cat and monkey retinal ganglion cells and their visual functional roles. Trends in Neurosciences. 1986;9:229–35.
  131. Rauschecker AM, Bowen RF, Parvizi J, Wandell BA. Position sensitivity in the visual word form area. Proc Natl Acad Sci U S A. 2012;109(24):E1568–77. Epub 2012/05/10. pmid:22570498; PubMed Central PMCID: PMC3386120.
  132. Gibaldi A, Benson NC, Banks MS. Crossed-uncrossed projections from primate retina are adapted to disparities of natural scenes. Proc Natl Acad Sci U S A. 2021;118(7). Epub 2021/02/13. pmid:33574061; PubMed Central PMCID: PMC7896330.
  133. Aghajari S, Vinke LN, Ling S. Population spatial frequency tuning in human early visual cortex. J Neurophysiol. 2020;123(2):773–85. Epub 2020/01/16. pmid:31940228; PubMed Central PMCID: PMC7052645.
  134. Campbell FW, Green DG. Monocular versus binocular visual acuity. Nature. 1965;208(5006):191–2. Epub 1965/10/09. pmid:5884255.
  135. Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol. 1962;160:106–54. Epub 1962/01/01. pmid:14449617; PubMed Central PMCID: PMC1359523.
  136. Banton T, Levi DM. Binocular summation in vernier acuity. J Opt Soc Am A. 1991;8(4):673–80. Epub 1991/04/01. pmid:2045969.
  137. Dougherty K, Cox MA, Westerberg JA, Maier A. Binocular Modulation of Monocular V1 Neurons. Curr Biol. 2019;29(3):381–91 e4. Epub 2019/01/22. pmid:30661798; PubMed Central PMCID: PMC6363852.
  138. Ugurbil K, Xu J, Auerbach EJ, Moeller S, Vu AT, Duarte-Carvajalino JM, et al. Pushing spatial and temporal resolution for functional and diffusion MRI in the Human Connectome Project. Neuroimage. 2013;80:80–104. Epub 2013/05/25. pmid:23702417; PubMed Central PMCID: PMC3740184.
  139. Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K, et al. The WU-Minn Human Connectome Project: an overview. Neuroimage. 2013;80:62–79. Epub 2013/05/21. pmid:23684880; PubMed Central PMCID: PMC3724347.
  140. Benson NC, Yoon JMD, Forenzo D, Engel SA, Kay KN, Winawer J. Variability of the Surface Area of the V1, V2, and V3 Maps in a Large Sample of Human Observers. bioRxiv. 2021.
  141. Roorda A, Williams DR. The arrangement of the three cone classes in the living human eye. Nature. 1999;397(6719):520–2. Epub 1999/02/24. pmid:10028967.
  142. Hofer H, Carroll J, Neitz J, Neitz M, Williams DR. Organization of the human trichromatic cone mosaic. J Neurosci. 2005;25(42):9669–79. Epub 2005/10/21. pmid:16237171; PubMed Central PMCID: PMC6725723.