Fig 1.
Band-pass filtered noise patches (1–6 cpd, 10° orientation bandwidth). Top row, the target, presented to one eye. Middle rows, orthogonal competitors, presented to the other eye. Bottom row, split competitor, parts of which were presented to each eye. A fixation point and a black circular fusion frame were presented to both eyes throughout the experiment to stabilize fixation and vergence so that the images presented to the two eyes would remain aligned. In the main experiment, the right eye was the target eye and the left eye was the competitor eye. Two observers participated in additional experimental sessions in which the left eye was the target eye and the right eye was the competitor eye (see Psychophysics in Methods and Materials).
Fig 2.
The target was first presented alone, monocularly, for 2 seconds, followed by the abrupt onset of the competitor in the other eye. This onset flash-suppression procedure ensured that the competitor dominated the percept for a period of time following its onset. One second after competitor onset, the target changed orientation (4° clockwise or counterclockwise; the orientation change is exaggerated here for illustration). Observers reported the direction of the orientation change by pressing one of two buttons.
Fig 3.
A. The model simulates the firing rates of two populations of monocular neurons, responsive to stimuli presented to each of the two eyes, with a range of receptive field locations and orientation preferences. Depicted is an example of the feature-specific (FS) model, in which the two monocular neural populations share the same attentional gain factors, and stimulus-driven attention is selective for the orientation and size of the competitor. Bottom row, example stimuli: the target presented to the right eye and the large competitor presented to the left eye. Second row (from the bottom), excitatory drive. Simulated neurons are arranged in each panel according to their receptive field center (horizontal position) and preferred orientation (vertical position); brightness at each location in the image corresponds to the excitatory drive to a single neuron. Third row, attentional gain factors, which determine the strength of the attentional modulation as a function of receptive field center and orientation preference. Mid-gray indicates a value of 1, white a value greater than 1 (attentional enhancement), and black a value less than 1 (attentional suppression). Fourth row, suppressive drive, computed as the product of the excitatory drive and the attentional gain factors, then pooled over space and orientation (see panel B) and across the two eyes (light gray arrows). Top row, output firing rates: the excitatory drive is multiplied by the attentional gain factors and divided by the suppressive drive. B. The suppression kernel. The suppressive drive is pooled over space and orientation by convolving the attention-modulated excitatory drive with the suppression kernel.
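The computation described in panel A (attention-modulated excitatory drive, divided by a suppressive drive pooled over space, orientation, and the two eyes) can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function and variable names, the kernel shape, and the semisaturation constant `sigma` are assumptions introduced here for clarity.

```python
import numpy as np
from scipy.ndimage import convolve

def normalization_response(drive_left, drive_right, gain, kernel, sigma=1.0):
    """Sketch of the normalization computation in Fig 3A.

    drive_left, drive_right: excitatory drive maps (orientation x space),
        one per eye. gain: attentional gain factors, shared across the two
        monocular populations as in the FS model. kernel: suppression kernel
        pooling over space and orientation. sigma: semisaturation constant
        (an assumed free parameter, not a value from the paper).
    """
    # Excitatory drive multiplied by the attentional gain factors.
    excitatory = {"L": drive_left * gain, "R": drive_right * gain}
    # Suppressive drive: attention-modulated drive convolved with the
    # suppression kernel (pooling over space/orientation), then summed
    # across the two eyes.
    suppressive = sum(
        convolve(e, kernel, mode="nearest") for e in excitatory.values()
    )
    # Output firing rates: attention-modulated drive divided by suppression.
    return {eye: e / (suppressive + sigma) for eye, e in excitatory.items()}
```

For example, with uniform unit drive to both eyes, a unit gain map, and a kernel that sums to 1, each eye's response is 1 / (2 + sigma), reflecting the pooling of suppression across the two eyes.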
Table 1.
Equations and parameter values for each of the components of the two models.
Fig 4.
Model fits for data from Ling and Blake (2012).
A and B. FS and ES model fits. Filled dots, psychophysical performance averaged across observers. Curves, best fits by each of the two models (parameter values reported in Table 2). C and D. The competitor-driven attentional gain factors estimated by the FS and ES models. E and F. The task-driven attentional gain factors estimated by the two models.
Table 2.
Best-fit parameter values for data from Ling and Blake (2012).
Fig 5.
A. Psychometric functions for each individual observer. Filled dots, psychophysical performance. Curves, best-fit Naka-Rushton functions. B. Best-fit c50 and d'm parameter values for each individual observer. Error bars, bootstrapped 95% confidence intervals.
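The Naka-Rushton form used for these psychometric fits can be sketched as d'(c) = d'm · cⁿ / (cⁿ + c50ⁿ), where c50 is the semisaturation contrast and d'm the asymptotic sensitivity. The sketch below is illustrative only; the exponent `n` and the example parameter values are assumptions, not the fitted values reported in the paper.

```python
import numpy as np

def naka_rushton(c, d_max, c50, n=2.0):
    """Naka-Rushton contrast-response function: d' as a function of
    contrast c. d_max is the asymptote (d'm in the figure), c50 the
    contrast at which d' reaches half its asymptote, and n an assumed
    exponent controlling the steepness of the function."""
    c = np.asarray(c, dtype=float)
    return d_max * c**n / (c**n + c50**n)
```

By construction, evaluating the function at c = c50 returns d_max / 2, which is what makes c50 a convenient summary of a leftward or rightward shift of the psychometric function.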
Fig 6.
A and B. FS and ES model fits. Filled dots, psychophysical performance averaged across observers. Error bars, ±1 SEM. Curves, best-fit d'(c) for each of the two models (parameter values reported in Table 3). C and D. Stimulus-driven attentional gain factors for the split-competitor condition, estimated by each of the two models. The stimulus-driven attentional gain factors for the other conditions and the goal-driven attentional gain factors are similar to those reported in Fig 4.
Table 3.
Best-fit parameter values for group-averaged data.
Table 4.
Best-fit parameter values for individual observers.
Fig 7.
A. Psychometric functions for the two individual observers who participated in additional experimental sessions, in which the target was presented to the left eye and the competitor to the right eye. The corresponding data for right-eye targets are the same as the data reported in Fig 5. Filled dots, psychophysical performance. Curves, best-fit Naka-Rushton functions. B. Best-fit c50 and d'm parameter values for each individual observer. LE, left-eye target; RE, right-eye target. Error bars, bootstrapped 95% confidence intervals.
Table 5.
Best-fit parameter values for the individual observers who completed the left-eye-target sessions (Fig 7).