Fig 1.

Summary of observers’ data.

The reported number of flashes (upper row) and reported number of beeps (lower row). Each age group is shown in a separate column. Symbols give data from bi-modal trials. Horizontal dotted and dashed lines give responses on uni-modal trials, with the error bars and shaded regions giving ±1 SE across observers for bi- and uni-modal data, respectively. The influence of audition on vision (upper row) or of vision on audition (lower row) is characterised by the slope of the best-fitting regression lines (black lines). Regressions were performed for individual observers and subsequently averaged (for illustration only).
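
The per-observer regression described in the caption can be sketched as follows (a minimal illustration with made-up numbers; the function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def mean_regression_slope(conflicts, reports):
    """Fit a regression line per observer, then average the slopes.

    conflicts: (n_observers, n_trials) array of, e.g., the number of beeps
    reports:   (n_observers, n_trials) array of reported flashes
    """
    slopes = [np.polyfit(x, y, 1)[0] for x, y in zip(conflicts, reports)]
    return float(np.mean(slopes))

# Toy data: two observers whose reports track the beeps
# with slopes of 0.5 and 1.0, respectively
beeps = np.array([[1, 2, 3, 4], [1, 2, 3, 4]])
reports = np.array([[1.0, 1.5, 2.0, 2.5], [1.0, 2.0, 3.0, 4.0]])
slope = mean_regression_slope(beeps, reports)  # ≈ 0.75
```

A slope near 1 would indicate that audition fully drives the flash reports; a slope near 0 would indicate no auditory influence.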

Fig 2.

Sensory weights.

(a) The relationship between the relative reliability of vision and the relative influence of vision, for each age group. It is clear that the relative reliability of vision predicts its relative influence for all groups. Covariance ellipses give 1 SE around the mean. Note that the relative reliability of audition, rrA, and the relative weight for audition, rwA, can be calculated in an analogous way, such that rrA = 1 − rrV and rwA = 1 − rwV. Thus, the relative reliability of audition predicts the relative weight for audition in exactly the same way as for vision. (b) Sensory weights for non-focal modalities. Red bars give the weight given to (task-irrelevant) auditory information when reporting the number of flashes, while green bars give the visual weight when estimating the number of beeps. Black bars show the amount of integration, as quantified by the sum of the weights given to the non-focal cues. Asterisks mark the groups for which this sum is significantly less than 1 (one-sample t-tests).
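
Treating reliability as inverse variance, the relative reliability of vision and its auditory complement can be computed as below (a sketch using the group-average spreads reported in Fig 4; the function name is illustrative, not from the paper):

```python
def relative_reliability_vision(sigma_v, sigma_a):
    """Relative reliability of vision, with reliability taken as 1/variance."""
    r_v = 1.0 / sigma_v**2
    r_a = 1.0 / sigma_a**2
    return r_v / (r_v + r_a)

# Group-average spreads from Fig 4 (sigma_V = 0.772, sigma_A = 0.488)
rrV = relative_reliability_vision(0.772, 0.488)  # ≈ 0.29
rrA = 1.0 - rrV  # complement, as noted in the caption
```

Under reliability-weighted integration, rrV would also predict the relative weight given to vision, which is the relationship plotted in panel (a).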

Fig 3.

Response variance as a function of age.

Lighter bars give response variance, averaged across uni-modal conditions for vision (V; green) and audition (A; red). Variance for bi-modal conditions is shown by darker bars for visual responses (VA; green) and auditory responses (AV; red). Error bars give ±1 SE across observers.

Fig 4.

Uni-modal likelihoods.

The best-fitting uni-modal likelihoods for vision (green) and audition (red), averaged (for illustration only) across all observers; they have been slightly horizontally offset for visibility. The spread of the likelihood (i.e. the inverse reliability) is fixed across the number of events, but differs between vision and audition. On average, vision was less reliable than audition (σV = 0.772, SE = 0.052; σA = 0.488, SE = 0.058).

Fig 5.

The partial integration model.

(a) An example bi-modal likelihood, centred on 1 flash and 3 beeps. The uni-modal marginals are shown alongside. (b) The coupling prior, and (c) the bi-modal likelihood after combination with the coupling prior; the peak of the distribution has shifted towards V = A. (d) The visual marginal (dashed green) is multiplied by the prior over the number of events (black) to give the posterior probability distribution of the number of visual events (solid green). (e) The posterior distribution for audition (solid red), given the prior over the number of events (black). Note that to allow easy comparison across the three models, the prior over the number of events is shown as a sequential step after the coupling prior is applied and the subsequent marginals are estimated. The two priors could equivalently be combined and applied in a single step. All plots show the averaged model fit across the set of observers (N = 36) who were best characterised by the PI model, as determined by comparing the likelihood of the data, given each of the three models.
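
The pipeline in panels (a)-(d) can be sketched on a discrete grid as follows. This is a minimal illustration, not the fitted model: the grid range, the coupling width sigma_c, and the flat prior over event number are all assumptions for the example, and the uni-modal spreads are the group averages from Fig 4.

```python
import numpy as np

# Grid over 1..4 possible events (assumed range for illustration)
n = np.arange(1, 5)
V, A = np.meshgrid(n, n, indexing="ij")

def gaussian(x, mu, sigma):
    """Unnormalised Gaussian profile."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# (a) Bi-modal likelihood centred on 1 flash and 3 beeps
like = gaussian(V, 1, 0.772) * gaussian(A, 3, 0.488)

# (b) Coupling prior: a ridge along V == A; its width sets coupling strength
sigma_c = 1.5  # hypothetical coupling width, not a fitted value
coupled = like * gaussian(V - A, 0, sigma_c)  # (c)

# (d) Visual marginal, combined with a (here flat) prior over event number
event_prior = np.ones_like(n, dtype=float)
post_v = coupled.sum(axis=1) * event_prior
post_v /= post_v.sum()
```

The posterior mean of post_v lies above the visual value of 1, reproducing the shift towards V = A described in panel (c).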

Fig 6.

Focal switching model.

(a) Example uni-modal visual (green) and auditory (red) likelihoods. (b) On visual trials, the observer samples from the visual estimator with probability pF, and from the auditory estimator with probability 1 − pF. On auditory trials, these probabilities, or weights, are reversed. The resultant likelihoods are shown in (c). As in the PI model, posterior distributions (d, e, solid lines) are created by combining these likelihoods with a prior (black) over the number of events. All plots show the model fit averaged across the set of observers (N = 25) who were best characterised by the Focal Switching model.
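
Averaged over trials, the switching rule in panel (b) yields a mixture of the two uni-modal likelihoods, weighted by pF. A sketch with made-up likelihood values (the numbers and the pF value are illustrative only):

```python
import numpy as np

def switching_likelihood(like_focal, like_nonfocal, p_f):
    """Mixture produced by sampling the focal estimator with probability pF
    and the non-focal estimator with probability 1 - pF."""
    return p_f * like_focal + (1.0 - p_f) * like_nonfocal

# Toy uni-modal likelihoods over 1..4 events (hypothetical values)
vis = np.array([0.60, 0.30, 0.08, 0.02])  # centred near 1 event
aud = np.array([0.02, 0.08, 0.80, 0.10])  # centred near 3 events
mix = switching_likelihood(vis, aud, p_f=0.8)  # visual-trial readout
```

On auditory trials the same function would be called with the roles of vis and aud swapped, which is the reversal of weights described in the caption.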

Fig 7.

Modality switching model.

(a) Uni-modal visual (green) and auditory (red) likelihoods. (b) On both visual and auditory trials, the observer samples the visual estimator with probability pV, and the auditory estimator with probability 1 − pV. The resultant likelihoods (slightly offset for visibility) are shown in (c). Posterior distributions (d, e, solid lines) are created by combining these likelihoods with a prior (black) over the number of events. All plots show the averaged model fit across the set of observers (N = 15) who were best characterised by the Modality Switching model.
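
The key difference from Focal Switching is that pV is applied regardless of the task. A small simulation makes this concrete (the pV value, seed, and report values are illustrative assumptions, not fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
p_v = 0.7  # hypothetical probability of reading out the visual estimator

def simulate_reports(n_trials, visual_report=1, auditory_report=3):
    """Regardless of whether the trial is visual or auditory, each report
    comes from the visual estimator with probability pV, otherwise from
    the auditory estimator."""
    use_vision = rng.random(n_trials) < p_v
    return np.where(use_vision, visual_report, auditory_report)

reports = simulate_reports(100_000)
frac_visual = float(np.mean(reports == 1))  # ≈ 0.7 on both trial types
```

Because the same pV governs both trial types, this model predicts identical readout mixtures for visual and auditory reports, unlike the Focal Switching model.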

Table 1.

Fitted parameters for the three models.

Fig 8.

The best-fitting model of audio-visual interactions, as a function of age.

Fig 9.

Trial schematic.

(a) Instructions were shown at the start of each block of trials, and the voice of Stinker the dog gave the same instructions. A progress bar showed Stinker getting closer to his treats, as more trials were completed. (b) Either an ‘F’ or ‘B’ in the centre of the screen reminded the participant of the task. (c) After the letter was clicked, flashes, beeps or both were presented. The inset shows an example congruent trial (upper) and conflict trial (lower). (d) The participant was prompted to respond. An image of Stinker the dog appeared every few trials, with Stinker’s voice offering words of encouragement or comments, e.g. ‘You’re great’, or ‘I’m hungry’.
