
Fig 1.

Schematics of stimuli, fusion, and independence mechanisms.

(A) The depth of a central target can be defined by disparity and/or motion. Congruent stimuli (disparity and motion presented consistently) are represented as bivariate Gaussian distributions (magenta versus green blobs for near versus far stimuli, respectively). A single cue detector would sense depth along only one dimension (disparity or motion detector): distinguishing the stimuli in this case depends on making a judgment using the marginal distribution (illustrated along the top and left-hand sides of the disparity-motion space). A fusion mechanism (bottom left) combines disparity and motion distributions into a single dimension: this reduces the variance of the combined estimate (solid distributions) relative to the components (dotted distributions). The independence mechanism (bottom right) finds the optimal separating boundary between the stimuli: this increases the separation between the distributions to improve discrimination performance; this corresponds to the quadratic sum of performance along the component axes (by the Pythagorean theorem this means greater separation along the diagonal). Black, magenta, and green dashed lines overlaying the stimuli (not shown during the experiment) are used here to delineate the reference plane and the near and far target planes, respectively. Black, magenta, and green arrows represent the amount of displacement of the reference and target planes. Both dotted planes move sinusoidally (from left to right and vice versa) within the margins determined by the squares of the background (never overlapping with them). (B) Performance of the fusion (left) versus independence (right) mechanisms for the single-cue and incongruent-cue conditions. In both scenarios, the fusion mechanism is compromised and performance decreases, but the independence mechanism is unaffected because depth differences are detected independently. (C) Decoding predictions for an area that responds (ideally) based on fusion or independence. An example of a hypothetical mixed neuronal population response (i.e., neurons tuned to independent cues or to fusion) is shown in the middle panel. Red dotted line depicts the quadratic summation of the marginal cues. D, disparity; M, relative motion; DM, consistent combination of disparity and motion. Figure was adapted from Welchman, 2016 [2].
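The independence mechanism's quadratic-summation prediction described in the caption can be sketched numerically. This is a minimal illustration, not the paper's analysis code; the function name and the d′ values are hypothetical:

```python
import numpy as np

def quadratic_sum(d_disparity, d_motion):
    """Predicted combined sensitivity when the two cues are read out
    independently (quadratic summation / Pythagorean rule)."""
    return np.sqrt(d_disparity ** 2 + d_motion ** 2)

# Equal single-cue sensitivities combine to sqrt(2) times the single-cue level
print(quadratic_sum(1.0, 1.0))  # -> ~1.414
print(quadratic_sum(3.0, 4.0))  # -> 5.0
```

The prediction exceeds either single-cue sensitivity, which is the "greater separation along the diagonal" of the disparity-motion space referred to above.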


Fig 2.

Whole-brain searchlight analyses for disparity, motion, and congruent conditions in monkey.

Flat maps showing the left and right monkey cortex. Borders between areas are defined by retinotopic mapping and indicated by the white dotted lines. Sulci/gyri are coded in dark/light gray. Results of a searchlight classifier analysis that moved iteratively throughout the entire volume of cortex, discriminating between near and far depth positions (group data, N = 2), are presented. The color code represents the t value of the classification accuracies obtained for depths defined by (A) disparity, (B) relative motion, and (C) the congruent combination of disparity and motion. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. CIP, caudal intraparietal area; DP, dorsal prelunate area; FST, fundus of the superior temporal sulcus area; LIP, lateral intraparietal area; MST, medial superior temporal sulcus area; MT, middle temporal area; OT, occipitotemporal area; PIP, posterior intraparietal area; PIT, posterior inferotemporal area; V1, primary visual cortex; V2d, dorsal secondary visual area; V2v, ventral secondary visual area; V3A, visual area 3A; V3d, dorsal visual area 3; V3v, ventral visual area 3; V4, visual area 4; V4A, visual area 4A; V4t, transitional visual area 4.
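The searchlight logic can be illustrated on toy data: a local neighborhood around each voxel is extracted and a classifier discriminates near from far trials, yielding an accuracy map. This sketch uses synthetic data and a simple leave-one-out nearest-centroid classifier as a stand-in for the classifier used in the paper; all sizes and the injected signal are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "volume": 8x8x8 voxels, 40 trials (20 near, 20 far)
shape = (8, 8, 8)
n_trials = 40
labels = np.repeat([0, 1], n_trials // 2)      # 0 = near, 1 = far
data = rng.normal(size=(n_trials, *shape))
# Inject a weak depth signal into one corner of the volume
data[labels == 1, :3, :3, :3] += 0.8

def searchlight_accuracy(data, labels, radius=1):
    """Leave-one-out nearest-centroid decoding in a cubic searchlight
    centered on every voxel; returns a per-voxel accuracy map."""
    n, x, y, z = data.shape
    acc = np.zeros((x, y, z))
    for i in range(x):
        for j in range(y):
            for k in range(z):
                sl = data[:,
                          max(i - radius, 0):i + radius + 1,
                          max(j - radius, 0):j + radius + 1,
                          max(k - radius, 0):k + radius + 1].reshape(n, -1)
                correct = 0
                for t in range(n):             # leave-one-out loop
                    train = np.delete(np.arange(n), t)
                    c0 = sl[train][labels[train] == 0].mean(axis=0)
                    c1 = sl[train][labels[train] == 1].mean(axis=0)
                    pred = int(np.linalg.norm(sl[t] - c1)
                               < np.linalg.norm(sl[t] - c0))
                    correct += pred == labels[t]
                acc[i, j, k] = correct / n
    return acc

acc_map = searchlight_accuracy(data, labels)
# Decoding is high where the signal was injected, near chance elsewhere
print(acc_map[1, 1, 1], acc_map[6, 6, 6])
```

In the actual analyses, accuracies from such maps are aggregated across subjects and converted to t values against chance, which is what the flat maps display.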


Fig 3.

Whole-brain searchlight analyses for disparity, motion, and congruent conditions in human.

Flat maps showing the left and right human cortex. Data are from Ban and colleagues, 2012 [10]. Same conventions as in Fig 2. The color code represents the t value of the classification accuracies obtained for depths defined by (A) disparity, (B) relative motion, and (C) the congruent combination of disparity and motion (group data, N = 20). The map for the incongruent condition is shown in S4 Fig. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. hMT+, human middle temporal area; LO, lateral occipital area; V3B/KO, area V3B, kinetic occipital area.


Fig 4.

Classification performances and quadratic summation test.

(A) Prediction performance (accuracy and sensitivity) for near versus far discrimination in different ROIs and for different conditions. The red lines illustrate performance expected from the quadratic summation of prediction sensitivities for the marginal cues. Error bars, SEM. (B) Results expressed as an integration index. A value of zero indicates the minimum bound for fusion (the prediction based on quadratic summation). Data are presented as notched distribution plots. The center of the “bowtie” represents the median, the greenish area depicts the 68% confidence interval, and the upper and lower error bars the 95% confidence intervals. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. CIP, caudal intraparietal area; DP, dorsal prelunate area; FST, fundus of the superior temporal sulcus area; LIP, lateral intraparietal area; MST, medial superior temporal sulcus area; MT, middle temporal area; OT, occipitotemporal area; PIP, posterior intraparietal area; PIT, posterior inferotemporal area; ROI, region of interest.
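An index with the property described above (zero at the quadratic-summation bound, positive when combined-cue sensitivity exceeds it) could be computed as follows. This is a plausible formulation under that stated property, not necessarily the paper's exact definition; the function name and values are hypothetical:

```python
import numpy as np

def integration_index(d_congruent, d_disparity, d_motion):
    """Deviation of combined-cue sensitivity from the quadratic-summation
    prediction, normalized by that prediction."""
    # Quadratic-summation prediction: the minimum bound expected for fusion
    bound = np.sqrt(d_disparity ** 2 + d_motion ** 2)
    # Zero at the bound; positive when sensitivity exceeds it
    return (d_congruent - bound) / bound

print(integration_index(np.sqrt(2), 1.0, 1.0))  # exactly at the bound -> 0.0
print(integration_index(1.6, 1.0, 1.0))         # above the bound -> ~0.131
```

In the figure, distributions of such an index above zero are what would indicate fusion beyond the minimum bound in a given ROI.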


Table 1.

Significance tests for the integration index, congruency, and transfer index.


Fig 5.

Congruency and transfer test.

(A) Prediction accuracy for near versus far classification when cues are congruent or incongruent in different ROIs. The horizontal line at 0.5 corresponds to chance performance. Error bars, SEM; *P < 0.01 uncorrected; **P < 0.01 Bonferroni corrected. (B) Prediction accuracy for the cross-cue transfer analysis in different regions. Classification performance is shown when the classifier was trained and tested with the same cue (within-cue, dark purple), trained with one cue and tested with the other (cross-cue, cyan), and for randomly permuted data (light purple). Error bars, SEM. (C) Data shown as a transfer index. A value of 100% would indicate that prediction accuracies were equivalent for within- and between-cue testing. Distribution plots show the median; cyan area and error bars represent the 68% and 95% confidence intervals, respectively. Purple dotted horizontal lines depict a bootstrapped chance baseline based on the upper 95th percentile for transfer obtained with randomly permuted data. *P < 0.01 uncorrected; **P < 0.01 Bonferroni corrected. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. CIP, caudal intraparietal area; DP, dorsal prelunate area; FST, fundus of the superior temporal sulcus area; LIP, lateral intraparietal area; MST, medial superior temporal sulcus area; MT, middle temporal area; OT, occipitotemporal area; PIP, posterior intraparietal area; PIT, posterior inferotemporal area; ROI, region of interest.
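A transfer index with the stated property (100% when within- and cross-cue accuracies are equivalent) can be sketched as the fraction of above-chance within-cue performance preserved under cross-cue testing. This is one plausible formulation, not necessarily the paper's exact definition; the function name and accuracies are hypothetical:

```python
def transfer_index(within_acc, cross_acc, chance=0.5):
    """Percentage of above-chance within-cue decoding performance
    preserved when training on one cue and testing on the other."""
    return 100.0 * (cross_acc - chance) / (within_acc - chance)

print(transfer_index(0.70, 0.70))  # equivalent accuracies -> 100.0
print(transfer_index(0.70, 0.50))  # cross-cue at chance -> 0.0
```

High transfer implies a depth representation shared across disparity and motion, which is the signature of fusion this test probes.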


Fig 6.

Flat maps for integration and transfer tests based on the searchlight analyses.

Integration and transfer test maps for monkeys (A, B) and humans (C, D), calculated from the results of group searchlight classifier analyses. Color code represents the P values obtained from the bootstrap distribution of the integration and transfer indices in monkey and human. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. CIP, caudal intraparietal area; DP, dorsal prelunate area; FST, fundus of the superior temporal sulcus area; LIP, lateral intraparietal area; MST, medial superior temporal sulcus area; MT, middle temporal area; OT, occipitotemporal area; PIP, posterior intraparietal area; PIT, posterior inferotemporal area.


Fig 7.

Cue integration summary map in monkey and human.

Summary maps highlighting the involvement of the different areas in monkeys (A) and humans (B) across all the analyses performed: sensitivity for disparity, motion and congruent stimuli, and integration and transfer indices. Color code indicates, for each voxel, how many of the five tests reached significance (monkey, P < 0.01; human, P < 0.05), ranging from 1 (one test passed) to 5 (five tests passed). The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. CIP, caudal intraparietal area; DP, dorsal prelunate area; FST, fundus of the superior temporal sulcus area; LIP, lateral intraparietal area; MST, medial superior temporal sulcus area; MT, middle temporal area; OT, occipitotemporal area; PIP, posterior intraparietal area; PIT, posterior inferotemporal area.


Fig 8.

Depth discrimination task.

We assessed whether monkeys were able to discriminate different depth levels using the four stimulus conditions of the main experiment. (A) We show sensitivities for depth differences between two sequentially presented planes for monkeys B and T. Both monkeys were able to discriminate between depths for all conditions when the reference and target planes differed by more than 1.8 arcmin in depth. In particular, monkey B performed excellently and was able to discriminate congruent stimuli even for the finest depth difference used (0.3 arcmin). In general, when depths were discriminable, monkeys showed highest sensitivity to the congruent stimulus and lower sensitivity for motion than disparity. Discrimination for the incongruent condition was comparable to that of the single cues. (B) Overall discrimination accuracy across depth levels and monkeys. (C) Sensitivity calculated based on j.n.d. thresholds. As in humans, monkeys were most sensitive when disparity and motion concurrently signaled depth differences, and they were least sensitive for relative motion–related differences. Error bars show bootstrapped 95% confidence intervals; significance was set to P < 0.01. The underlying data for the figures can be found at https://doi.org/10.5061/dryad.6pm117m. j.n.d., just noticeable difference.
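Sensitivity derived from a j.n.d. threshold is commonly taken as the reciprocal of the depth difference at which performance crosses a criterion level. The sketch below estimates that threshold by linear interpolation of hypothetical psychometric data; the paper may instead fit a psychometric function, and all values here are invented for illustration:

```python
import numpy as np

def jnd_from_psychometric(depth_diffs, p_correct, criterion=0.75):
    """Estimate the just-noticeable difference (j.n.d.) as the depth
    difference at which proportion correct crosses the criterion,
    by linear interpolation; sensitivity is taken as 1 / j.n.d."""
    # np.interp requires p_correct to be monotonically increasing
    jnd = np.interp(criterion, p_correct, depth_diffs)
    return jnd, 1.0 / jnd

# Hypothetical psychometric data (depth difference in arcmin)
diffs = np.array([0.3, 0.9, 1.8, 3.6, 7.2])
pc = np.array([0.52, 0.60, 0.74, 0.90, 0.98])
jnd, sens = jnd_from_psychometric(diffs, pc)
print(round(jnd, 2), round(sens, 2))  # -> 1.91 0.52
```

A condition with a steeper psychometric function (e.g., congruent cues) yields a smaller j.n.d. and hence a higher sensitivity, matching the ordering reported in panel (C).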
