Fig 1.
(A) Illustration of a trial in the discrimination task. On each trial, subjects reported the tilt direction of a single ellipse (“clockwise” or “counterclockwise” relative to vertical). The elongation of the stimulus could take two values. We refer to the more elongated type of ellipse as a “high reliability” stimulus and the less elongated type as a “low reliability” stimulus. Feedback was provided by briefly turning the fixation cross red (error) or green (correct) after the response was given. (B) The subject-averaged data (filled circles) and model fits (curves) reveal that sensitivity was higher for stimuli with high reliability (black) than for stimuli with low reliability (red). Error bars represent 1 s.e.m. (C) Illustration of a trial in the visual search task with brief stimulus presentation time. (D) Top: examples of target-present displays under the four different levels of external uncertainty. Bottom: the distributions from which the stimuli in the example displays were drawn. In all four examples, the ellipse at the “north” location is a target and the other three are distractors.
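The psychometric fits in panel B can be summarized by a cumulative-Gaussian model in which a smaller sensory noise level produces a steeper curve. The sketch below is illustrative only: the function name `p_clockwise` and the sigma values are assumptions for demonstration, not the paper's fitted model or parameter estimates.

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def p_clockwise(tilt, sigma):
    """Probability of a 'clockwise' report for a given tilt (deg),
    under a cumulative-Gaussian model with sensory noise sigma.
    Sketch with hypothetical values, not the paper's fitted model."""
    return Phi(tilt / sigma)

# Higher reliability (smaller sigma) yields a steeper psychometric curve,
# i.e., higher sensitivity at the same tilt.
print(p_clockwise(2.0, 1.0))  # hypothetical high-reliability stimulus
print(p_clockwise(2.0, 3.0))  # hypothetical low-reliability stimulus
```

The only design choice here is the standard cumulative-Gaussian link; the actual model fits in the paper may include additional parameters (e.g., lapse rate or bias).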
Table 1.
Overview of visual search task conditions and experimental subject groups.
Each group consisted of 10 subjects. The condition with unlimited stimulus time and no external uncertainty was excluded from the experiment, because subjects would be expected to perform at 100% correct in that condition.
Table 2.
Estimated sensory noise levels in the discrimination task (σlow, σhigh) and the customized experimental parameters (μtarget, σexternal) in the visual search task.
Table 3.
Overview of models and their free parameters for the visual search task with short display time.
Parameters σlow and σhigh only exist when the models are applied to conditions with short display time; in conditions with unlimited display time, the sensory noise level is fixed to a prespecified value (explained in Results).
Fig 2.
Simulated effects of four computational imperfections.
(A) Schematic illustration of a single trial in the simulation that was aimed at assessing how computational imperfections affect the optimal observer’s decision variable. On each trial, a stimulus set s and stimulus observations x were drawn from the generative model for the visual search task with 10% external uncertainty. Next, x was provided as input to the Flawless Bayesian model and to a variant of this model with a computational imperfection (e.g., a wrong belief about experimental parameter σexternal). Both models produce a decision variable, d(x). We denote the difference between these two decision variables by Δd(x), which can be thought of as a computational error. A total of 1 million trials were simulated using four different types of computational imperfection: (1) Gaussian noise on the local decision variables; (2) an overestimated value of σexternal; (3) overestimated values of σlow and σhigh; (4) item-to-item and trial-to-trial noise on σlow and σhigh. (B) The distribution of Δd(x) under each simulated computational imperfection (gray areas). In all four cases, this distribution is reasonably well approximated by a Gaussian distribution (black curves). The percentages indicate the accuracy loss caused by the computational imperfection; parameters μ and σ indicate the mean and standard deviation of the Gaussian fitted to each distribution. (C) The distribution of Δd(x) in a model that contains all four tested imperfections simultaneously.
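The simulation logic described in panel A can be sketched in code. The generative model below is a deliberately simplified stand-in (Gaussian tilts, four items, made-up parameter values, and a log-average-likelihood-ratio decision rule); it is not the paper's exact model, but it shows how Δd(x) is obtained by feeding the same observations x to a flawless observer and to one with a wrong belief about σexternal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values for illustration (not the paper's values)
N_ITEMS = 4
MU_TARGET = 5.0       # mean target tilt (deg)
SIG_EXT = 2.0         # true external (stimulus) standard deviation
SIG_SENSORY = 1.0     # sensory noise standard deviation
SIG_EXT_WRONG = 3.0   # imperfect observer's overestimate of SIG_EXT
N_TRIALS = 100_000

def decision_variable(x, sig_ext_assumed):
    """Global log likelihood ratio for target presence, marginalizing over
    the unknown target location (uniform prior over the N items)."""
    var = SIG_SENSORY**2 + sig_ext_assumed**2
    # per-item log likelihood ratio under a Gaussian model:
    # log N(x_i; MU_TARGET, var) - log N(x_i; 0, var)
    llr = (x * MU_TARGET - 0.5 * MU_TARGET**2) / var
    # d(x) = log of the average local likelihood ratio (stable log-mean-exp)
    m = llr.max(axis=1, keepdims=True)
    return (m + np.log(np.mean(np.exp(llr - m), axis=1, keepdims=True))).ravel()

# Generate trials: target present with p = 0.5 at a random location
present = rng.random(N_TRIALS) < 0.5
s = rng.normal(0.0, SIG_EXT, (N_TRIALS, N_ITEMS))
loc = rng.integers(N_ITEMS, size=N_TRIALS)
s[present, loc[present]] += MU_TARGET
x = s + rng.normal(0.0, SIG_SENSORY, s.shape)

# Same observations, two observers; their difference is the computational error
d_flawless = decision_variable(x, SIG_EXT)
d_imperfect = decision_variable(x, SIG_EXT_WRONG)
delta_d = d_imperfect - d_flawless

print(f"mean(dd) = {delta_d.mean():.3f}, sd(dd) = {delta_d.std():.3f}")
```

Fitting a Gaussian to `delta_d` (as in panel B) then amounts to taking its mean and standard deviation; the other three imperfection types would replace `SIG_EXT_WRONG` with, e.g., additive noise on `llr` or trial-to-trial variation in the sensory noise parameters.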
Fig 3.
Results from the visual search conditions with unlimited display time.
(A) Left: AIC-based model comparison at the level of single subjects. Each column is a subject and each row is a model. The best model for each subject is indicated in dark blue (ΔAIC = 0). Right: Subject-averaged AIC values relative to the overall best model. The red dashed line marks ΔAIC = 10; models beyond this threshold are interpreted as having “no support”. (B) The subject data (black markers) are well accounted for by the “Imperfect Bayesian” and “Imperfect Max” models (black curves; the fits of the two models are visually indistinguishable). Note that the distribution of d(s) (purple areas) becomes more concentrated around zero as the level of external uncertainty increases, because the evidence is generally weaker in tasks with more external uncertainty. (C) In all three conditions, the empirical d’ values (black) are lower than the values predicted by the Flawless Bayesian model (red). The average ratio between the d’ values is 0.834±0.017.
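The comparison in panel C rests on the standard signal-detection sensitivity index, d′ = z(hit rate) − z(false-alarm rate), where z is the inverse standard normal CDF. A minimal sketch, using hypothetical hit and false-alarm rates rather than the paper's data:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration (not the paper's data)
empirical = dprime(0.80, 0.20)   # observed performance
predicted = dprime(0.88, 0.12)   # performance predicted by an ideal observer
print(f"d' ratio = {empirical / predicted:.3f}")
```

A ratio below 1 indicates that subjects' empirical sensitivity falls short of the model's prediction, which is how the 0.834 ± 0.017 figure above should be read.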
Fig 4.
Results from the visual search conditions with short display time.
(A) Left: AIC-based model comparison at the level of single subjects. Each column is a subject and each row is a model. The best model for each subject is indicated in dark blue (ΔAIC = 0). Right: Subject-averaged AIC values relative to the overall best model. The red dashed line marks ΔAIC = 10; models beyond this threshold are interpreted as having “no support”. (B) False-alarm rates (red) and hit rates conditioned on whether the target had high reliability (blue) or low reliability (green). The subject data (markers) are well accounted for by the Imperfect Bayesian model (curves). (C) In all four conditions, the empirical d’ values (black) are lower than the values predicted by the Flawless Bayesian model (red). The average ratio between the d’ values is 0.808±0.037.
Fig 5.
Maximum likelihood estimates of the parameters in the imperfect Bayesian model.
The reported parameters for conditions with unlimited display time were obtained with the model variant in which σi was fixed to 0.875.
Table 4.
Models 2 and 10 are the Imperfect Bayesian and Imperfect Max models, respectively. Bayes factor BFinclusion indicates whether there is evidence for an effect of internal or external uncertainty on the optimality index. All Bayes factors are smaller than 1, indicating evidence against an effect.