Table 1.
Abbreviations and symbols.
Fig 1.
A: Subjects were presented with visual (svis) and vestibular (svest) headings either in the same direction (C = 1) or in different directions (C = 2). In different sessions, subjects were asked to judge whether the stimuli had the same cause (‘unity judgment’, explicit causal inference) or whether the vestibular heading was to the left or right of straight ahead (‘inertial discrimination’, implicit causal inference). B: Distribution of stimuli used in the task. The mean stimulus direction was drawn from a discrete uniform distribution (−25°, −20°, −15°, …, 25°). In 20% of trials, svis ≡ svest (‘same’ trials, C = 1); in the other 80% (‘different’ trials, C = 2), the disparity was drawn from a discrete uniform distribution (±5°, ±10°, ±20°, ±40°), which led to a correlated pattern of heading directions svis and svest. The visual cue reliability cvis was also drawn randomly on each trial (high, medium, or low).
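The stimulus distribution in panel B can be sketched as a minimal simulation. The symmetric split of the disparity about the mean direction, and all function and variable names, are our assumptions for illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_trial():
    """Draw one trial's stimuli (illustrative sketch of panel B)."""
    mean_dir = float(rng.choice(np.arange(-25, 26, 5)))  # deg, discrete uniform
    if rng.random() < 0.2:                    # 'same' trials, C = 1
        disparity = 0.0
    else:                                     # 'different' trials, C = 2
        disparity = float(rng.choice([-40, -20, -10, -5, 5, 10, 20, 40]))
    # assumption: disparity split symmetrically about the mean direction
    s_vis = mean_dir + disparity / 2
    s_vest = mean_dir - disparity / 2
    c_vis = rng.choice(['high', 'medium', 'low'])        # visual reliability
    return s_vis, s_vest, c_vis
```

Because the mean direction is shared between the two cues, svis and svest come out correlated even on ‘different’ trials, as noted above.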
Fig 2.
A: Observer models consist of three model factors: Causal inference strategy, Shape of sensory noise, and Type of prior over stimuli (see text). B: Graphical representation of the observer model. In the left panel (C = 1), the visual (svis) and vestibular (svest) heading directions have a single, common cause. In the right panel (C = 2), svis and svest have separate, though not necessarily statistically independent, sources. The observer has access to noisy sensory measurements xvis, xvest, and knows the visual reliability level cvis of the trial. The observer is asked either to infer the causal structure (unity judgment, explicit causal inference) or to report whether the vestibular stimulus is rightward of straight ahead (inertial discrimination, implicit causal inference). Model factors affect different stages of the observer model: the strategy used to combine the two causal scenarios; the type of prior over stimuli pprior(svis, svest|C); and the shape of the sensory noise distributions p(xvis|svis, cvis) and p(xvest|svest) (which equally affects both how noisy measurements are generated and the observer’s beliefs about that noise). C: Example decision boundaries for the Bay-X-E model (at the three reliability levels) and for the Fix model, for a representative observer. The observer reports ‘unity’ when the noisy measurements xvis, xvest fall within the boundaries. Note that the Bayesian decision boundaries expand with larger noise. Nonlinearities are due to the interaction between the eccentricity dependence of the noise and the prior (wiggles are due to the discrete empirical prior).
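The core computation in panel B can be sketched numerically under simplifying assumptions: Gaussian sensory noise, a Gaussian heading prior, and fully independent sources under C = 2 (the paper's empirical prior instead makes the sources correlated). Parameter values and names below are illustrative, not fitted:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def causal_inference(x_vis, x_vest, sig_vis=4.0, sig_vest=6.0,
                     sig_prior=20.0, p_common=0.5):
    """Posterior probability of a common cause and the model-averaged
    vestibular heading estimate, computed on a grid."""
    s = np.linspace(-90, 90, 2001)                 # heading grid (deg)
    ds = s[1] - s[0]
    prior = gauss(s, 0.0, sig_prior)
    # C = 1: one common heading generates both measurements
    joint1 = gauss(x_vis, s, sig_vis) * gauss(x_vest, s, sig_vest) * prior
    like1 = joint1.sum() * ds
    # C = 2: independent headings (a simplification; the discrete empirical
    # prior in the paper makes s_vis and s_vest correlated under C = 2)
    like2 = ((gauss(x_vis, s, sig_vis) * prior).sum() * ds *
             (gauss(x_vest, s, sig_vest) * prior).sum() * ds)
    post_c1 = p_common * like1 / (p_common * like1 + (1 - p_common) * like2)
    # posterior-mean vestibular estimates under each causal scenario
    s_hat_c1 = (s * joint1).sum() / joint1.sum()
    post2 = gauss(x_vest, s, sig_vest) * prior
    s_hat_c2 = (s * post2).sum() / post2.sum()
    # model averaging: combine scenario estimates weighted by p(C | x)
    return post_c1, post_c1 * s_hat_c1 + (1 - post_c1) * s_hat_c2
```

With coincident measurements this observer reports a common cause with high probability; with a large conflict it segregates the cues, and the vestibular estimate reverts toward the vestibular measurement alone.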
Fig 3.
Results of the explicit causal inference (unity judgment) task. A: Proportion of ‘unity’ responses as a function of stimulus disparity (the difference between vestibular and visual heading directions), for different levels of visual cue reliability. Bars are ±1 SEM across subjects. Unity judgments are modulated by both stimulus disparity and visual cue reliability. B: Protected exceedance probability and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). C: Model fits of several models of interest (see text for details). Shaded areas are ±1 SEM of model predictions across subjects. Numbers at the top right of each panel report the absolute goodness of fit.
Fig 4.
Results of the implicit causal inference (left/right inertial discrimination) task. A: Vestibular bias as a function of the co-presented visual heading direction svis, at different levels of visual reliability. Bars are ±1 SEM across subjects. The inset shows a cartoon of how the vestibular bias is computed as minus the point of subjective equality of the psychometric curve of left/right responses (L/R PSE) for vestibular stimuli svest, for a representative subject and a fixed value of svis. The vestibular bias is strongly modulated by svis and its reliability. B: Protected exceedance probability and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). C: Model fits of several models of interest (see text for details). Shaded areas are ±1 SEM of model predictions across subjects. Numbers at the top right of each panel report the absolute goodness of fit.
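The bias computation in the inset can be sketched by fitting a cumulative-Gaussian psychometric curve to left/right responses and taking minus its point of subjective equality. This is a simple grid-search maximum-likelihood fit for illustration; names and grid settings are ours, not the paper's model-based analysis:

```python
import numpy as np
from math import erf, sqrt

_Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))  # normal CDF

def fit_pse(s_vest, resp_right):
    """Maximum-likelihood fit of P(right) = Phi((s - mu) / sigma);
    mu is the L/R PSE, and the vestibular bias is reported as -mu."""
    s = np.asarray(s_vest, float)
    r = np.asarray(resp_right, float)
    best_mu, best_ll = 0.0, -np.inf
    for mu in np.linspace(-20, 20, 201):          # candidate PSEs (deg)
        for sigma in np.linspace(0.5, 20, 80):    # candidate slopes (deg)
            p = np.clip(_Phi((s - mu) / sigma), 1e-9, 1 - 1e-9)
            ll = np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))
            if ll > best_ll:
                best_ll, best_mu = ll, mu
    return best_mu
```

The bias plotted in panel A is then −PSE, computed separately for each value of svis and each visual reliability level.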
Fig 5.
Posteriors over model parameters.
Each panel shows the marginal posterior distribution over a single parameter, for each subject and task. Each line is an individual subject’s posterior (thick line: interquartile range; light line: 95% credible interval); different colors correspond to different tasks. For each subject and task, posteriors are marginalized over models according to their posterior probability (see Methods). For each parameter we report the across-tasks compatibility probability Cp, that is, the (posterior) probability that subjects were best described by the assumption that parameter values were the same across separate tasks, above and beyond chance. For the parameters in the first two rows, compatibility is computed across all three tasks, whereas for those in the last row it includes only the bisensory tasks (bisensory inertial discrimination and unity judgment), since these parameters are irrelevant for the unisensory task.
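The marginalization over models mentioned above can be sketched as a mixture draw, assuming posterior samples for each model and posterior model probabilities are available (all names are ours; this is an illustration of the idea, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def model_averaged_samples(samples_by_model, model_probs, n_out=10000):
    """Marginalize a parameter posterior over models: each output sample
    comes from model m's posterior with probability p(m | data)."""
    models = list(samples_by_model)
    p = np.array([model_probs[m] for m in models], float)
    p /= p.sum()                                  # normalize, just in case
    counts = rng.multinomial(n_out, p)            # samples drawn per model
    out = np.concatenate([
        rng.choice(np.asarray(samples_by_model[m], float), size=n)
        for m, n in zip(models, counts)
    ])
    rng.shuffle(out)
    return out
```

The resulting sample pool represents the model-averaged posterior from which the plotted quantiles can be read off.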
Fig 6.
Results of the joint fits across tasks. A: Protected exceedance probability and estimated posterior frequency (mean ± SD) of distinct model components for each model factor. Each factor also displays the Bayesian omnibus risk (BOR). B: Joint model fits of the explicit causal inference (unity judgment) task, for different models of interest. Each panel shows the proportion of ‘unity’ responses as a function of stimulus disparity, for different levels of visual reliability. Bars are ±1 SEM of data across subjects. Shaded areas are ±1 SEM of model predictions across subjects. Numbers at the top right of each panel report the absolute goodness of fit across all tasks. C: Joint model fits of the implicit causal inference task, for the same models as in panel B. Panels show the vestibular bias as a function of the co-presented visual heading direction svis, for different levels of visual reliability. Bars are ±1 SEM of data across subjects. Shaded areas are ±1 SEM of model predictions across subjects.
Table 2.
Joint fit parameters.
Fig 7.
Sensitivity analysis of factorial model comparison.
Protected exceedance probability of distinct model components for each model factor in the joint fits. Each panel also shows the estimated posterior frequency (mean ± SD) of distinct model components, and the Bayesian omnibus risk (BOR). Each row represents a variant of the factorial comparison. 1st row: Main analysis (as per Fig 6A). 2nd row: Uses marginal likelihood as model comparison metric. 3rd row: Uses hyperprior α0 = 1 for the frequencies over models in the population (instead of a flat prior over model factors). 4th row: Uses ‘probability matching’ strategy for the Bayesian causal inference model (replacing model averaging). 5th row: Includes probability matching as a sub-factor of the Bayesian causal inference family (in addition to model averaging).
Table 3.
List of algorithms and computational procedures.