
Older adults preserve audiovisual integration through enhanced cortical activations, not by recruiting new regions

  • Samuel A. Jones

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    samuel.jones@ntu.ac.uk

    Affiliations Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom, Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom

  • Uta Noppeney

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom, Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands

Abstract

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation—between comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses—contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.

Introduction

The effective integration of multisensory signals is central to our ability to successfully interact with the world. Locating and swatting a mosquito, for example, relies on spatial information from hearing, vision, and touch. When signals from different senses are known to come from a common cause, humans typically perform this integration process in a statistically near-optimal way, weighting the contribution of each input by its relative reliability [1–5] (i.e., inverse of variance; though also see, for instance, [6,7]). However, determining specifically which signals share a common cause, and should thus be integrated, is computationally challenging. Young, healthy adults balance sensory integration and segregation in line with the predictions of normative Bayesian causal inference [8–12]: They bind inputs that are close together in space and time but process them independently when they are spatially or temporally disparate and hence unlikely to share a common source. Recent functional magnetic resonance imaging (fMRI) and electroencephalography research has revealed that, for audiovisual spatial signals, these operations take place dynamically across the cortical hierarchy that encompasses primary sensory areas as well as higher-level regions such as intraparietal sulcus and planum temporale [10,13]. Evidence also suggests that they interact with top-down attentional processes [5,14–19].
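
The reliability-weighted (inverse-variance) integration scheme described above can be sketched in a few lines. This is a toy illustration of forced fusion of two spatial cues, with invented numbers; it is not the authors' model code:

```python
import numpy as np

def fuse(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of two spatial cues.

    Each cue's weight is its reliability (1/variance) divided by the summed
    reliabilities; the fused estimate is more precise than either input.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu_av = w_a * mu_a + w_v * mu_v
    var_av = 1 / (1 / var_a + 1 / var_v)
    return mu_av, var_av

# A reliable visual cue at 5 deg dominates a noisier auditory cue at 15 deg:
mu, var = fuse(mu_a=15.0, var_a=16.0, mu_v=5.0, var_v=4.0)
```

Bayesian causal inference generalises this forced-fusion scheme by mixing the fused and segregated estimates according to the inferred probability that the signals share a common cause.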

Normal healthy ageing leads to a variety of sensory and cognitive changes, including loss of sensory acuity [20–22], reduced processing speed [23], and impaired attentional and working memory processes [24,25]. In multisensory perception, ageing has been associated with altered susceptibility to the sound-induced flash and McGurk illusions [26–30]; these age differences may be caused by various computational or neural mechanisms, including changes in sensory acuity, prior binding tendency, and attentional resources (for further discussion, see [31]). By contrast, older adults perform in a way that is comparable to their younger counterparts on audiovisual integration of spatial signals (as indexed by the spatial ventriloquist illusion) [32,33]. They weight and combine sensory signals in ways that are consistent with normative Bayesian causal inference. However, they sacrifice response speed to maintain this audiovisual localisation accuracy [32].

This raises the question of how older adults preserve audiovisual integration and spatial localisation accuracy in these intersensory selective attention paradigms. There are 3 possibilities:

First, older adults may engage the same neural mechanisms, in the same way as their younger counterparts, to form neural spatial representations that are similar between age groups. In short, older adults’ preserved behavioural performance is mirrored by preserved neural processing.

Second, older adults may show neural encoding deficits in the key regions engaged by younger adults. To compensate for such deficits, they recruit additional regions. Critically, if such activations are truly compensatory, we would expect age differences not only in the magnitude of the regional blood-oxygen-level-dependent (BOLD) responses but also in their information content: The additional brain activations would encode more task- or stimulus-relevant information in older than in younger participants. We might also expect representations of the stimuli in areas along the dorsal visual and auditory spatial processing hierarchies to be degraded, necessitating such compensatory activity. This compensatory recruitment of extra regions to sustain task performance in older adults has been widely held, in the healthy ageing research field, to explain the additional activations typically found in older adults (see, for example, [34–36]).

Third, older adults may show increased activations that are not directly attributable to compensatory activity. Indeed, the notion of age-related compensatory recruitment has recently been challenged by research into the impact of healthy ageing on memory [37] and motor performance [38]. These studies also observed that older adults activate additional cortical regions while performing tasks. Crucially, however, sophisticated model-based multivariate Bayesian decoding analyses found that these regions did not encode additional information relevant for task performance. The authors therefore concluded that the age-related activation increases may instead reflect nonspecific mechanisms such as reduced neural processing efficiency. In our spatial localisation task, this could mean that older observers suffer from noisier neural coding despite their behavioural performance being largely preserved. For instance, it is increasingly understood that ageing affects auditory temporal processing, with potential associated effects on spatial processing (for instance, interaural time difference cues [39]). As a consequence, and as recently suggested by computational modelling of behavioural data [32], older adults may accumulate noisier sensory information for longer until they reach a decision threshold and commit to a response. This would result in larger BOLD responses in the associated regions [40]. Older adults may additionally, or alternatively, need to exert more top-down attentional control to attenuate internal sensory noise, or engage more cognitive control to inhibit conflicting or irrelevant visual and auditory signals [41]. Common to all these potential mechanisms is that any age-related activation increases would not encode additional stimulus- or task-relevant information in older, compared to younger, adults. 
Instead, activation increases would reflect more general mechanisms that may help to enhance existing neural encoding in older adults, thereby allowing them to maintain precision and accuracy of spatial representations at the neural and behavioural levels.

To adjudicate between these 3 possibilities, we presented healthy younger and older participants with synchronous audiovisual signals at varying degrees of spatial disparity in a spatial ventriloquist paradigm. In an auditory selective attention task, participants reported the location of the auditory signal, while ignoring the task-irrelevant visual signals (which were spatially congruent or incongruent). First, we investigated whether older and younger observers weight and combine audiovisual signals similarly into spatial representations at the behavioural level. Second, we used multivariate pattern analysis to assess whether observers’ neural spatial representations, decoded from activity patterns along the dorsal visual and auditory spatial processing hierarchies [10,13], were comparable between younger and older adults. Third, we applied whole-brain univariate analyses to identify the neural systems supporting spatial localisation performance more broadly and assessed differences in activation levels between older and younger participants. Finally, using multivariate Bayesian decoding [37,38,42], we assessed whether regions with greater activation in older adults encoded the same amount of stimulus- or task-relevant information (such as visual and auditory location, or their spatial relationship) in both age groups.

Results

Audiovisual integration behaviour

Inside the scanner, participants were presented with synchronous auditory and visual signals at the same (i.e., congruent) or opposite (i.e., incongruent) locations sampled from 4 possible spatial locations along the azimuth. The experimental design thus conformed to a 4 (auditory location: −15°, −5°, 5°, or 15° azimuth) × 3 (sensory context: unisensory auditory, audiovisual congruent, audiovisual incongruent) factorial design (see Fig 1B). On each trial, participants reported their perceived sound location as accurately as possible by pressing one of 4 spatially corresponding buttons with their right hand. As shown in Fig 1C, both younger and older adults can locate unisensory auditory and audiovisual congruent stimuli quite accurately, though we observe a small central bias for stimuli presented at the most eccentric locations. On audiovisual incongruent trials, their reported sound location is biased by—i.e., shifted towards—the location of the co-occurring visual signal. Crucially, this crossmodal bias is stronger for small audiovisual spatial disparities (5° eccentricity) than for large audiovisual spatial disparities (15° eccentricity). Thus, both younger and older adults combine audiovisual signals in a way that is consistent with the computational principles of Bayesian causal inference: They integrate audiovisual signals when they are close in space and hence likely to come from one source, but segregate those with larger spatial disparities. However, at large spatial disparities, we observe a small trend towards greater crossmodal biases for older than for younger observers.
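
The crossmodal (ventriloquist) bias described above is commonly quantified as the fraction of the audiovisual disparity by which reported sound locations shift towards the visual signal. A minimal sketch with invented responses (not the study's analysis code):

```python
import numpy as np

def crossmodal_bias(reported, loc_aud, loc_vis):
    """Fraction of the audiovisual disparity by which reported sound
    locations shift towards the visual signal (1 = full visual capture,
    0 = complete segregation)."""
    reported = np.asarray(reported, dtype=float)
    return (reported.mean() - loc_aud) / (loc_vis - loc_aud)

# Hypothetical incongruent trials: sound at -5 deg, visual signal at +5 deg.
bias_small = crossmodal_bias([-1, 0, -2, 1], loc_aud=-5, loc_vis=5)
# Sound at -15 deg, visual at +15 deg: reports stay near the true sound.
bias_large = crossmodal_bias([-13, -14, -12, -15], loc_aud=-15, loc_vis=15)
```

With these invented data the bias is larger at the small disparity, mirroring the integration-then-segregation profile predicted by Bayesian causal inference.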

Fig 1. Experimental design and behavioural results.

(A and B) The experiment conformed to a 4 (auditory location) × 3 (sensory context: unisensory auditory, audiovisual congruent, audiovisual incongruent) factorial design. Auditory (white noise bursts) and visual signals (cloud of dots) were sampled from 4 possible azimuthal locations (−15°, −5°, 5°, or 15°). Auditory and visual stimuli were presented either at the same (congruent) or opposite (incongruent) spatial locations, or the auditory stimulus was presented alone (unisensory). Participants reported their perceived location of the sound. (C) Across-participants mean (± SEM) perceived sound locations as a function of the true sound location (x axis). The data underlying this Figure can be found in S1 Data.

https://doi.org/10.1371/journal.pbio.3002494.g001

Consistent with these impressions, a 2 (hemifield: left or right) × 2 (eccentricity: 5° or 15°) × 3 (sensory context: unisensory auditory, audiovisual congruent, or audiovisual incongruent) × 2 (age group: younger or older) mixed ANOVA on localisation responses identified significant main effects of eccentricity and sensory context (see Table 1). Moreover, a small three-way (eccentricity × sensory context × age) interaction was observed. This likely reflects a stronger visual influence on perceived sound location in older adults for audiovisual stimuli at large spatial disparities (see right panel of Fig 1C), suggesting older observers’ ability to segregate audiovisual signals is slightly inferior to that of younger adults. Potentially, this small difference across age groups may result from subtle age-related decreases in auditory spatial reliability, which become apparent in challenging sound localisation tasks with interfering spatially disparate visual signals. However, no follow-up t tests that separately compared the age groups in each condition reached statistical significance (all p > .05; see Table A in S1 Text for full results, including Bayes factors). No other significant effects were observed.

Table 1. Results of mixed ANOVA on mean auditory localisation responses during the spatial ventriloquist task.

https://doi.org/10.1371/journal.pbio.3002494.t001

Overall, these behavioural results suggest that older and younger adults combine auditory and visual signals into spatial representations in a way that is consistent with Bayesian causal inference. They also suggest that the age groups are largely comparable in their visual and auditory spatial precision.

fMRI results

Decoding spatial representations from fMRI activation patterns along audiovisual pathways.

Next, we used fMRI decoding methods to investigate whether older and younger adults integrate auditory and visual signals into comparable spatial representations at the neural level, thereby mirroring the behavioural pattern. More specifically, we asked whether older adults assign similar weights to auditory and visual signals when combining them into neural representations along the auditory and visual spatial processing hierarchies that have been identified in previous research on younger adults [5,10,13,14,43]. To address this question, we trained support vector regression models to learn the mapping between regional fMRI activation patterns and external spatial locations, specifically for audiovisual congruent trials. We then applied those trained support vector regression models to the activation patterns evoked by audiovisual incongruent trials (as well as to unisensory auditory and to different audiovisual congruent trials).

This was performed separately in multiple regions across the auditory and visual spatial processing hierarchies. In visually dominant regions, the decoded spatial locations for audiovisual incongruent trials should largely reflect the true location of the visual stimulus. Similarly, in auditory dominant regions, the decoded spatial locations for audiovisual incongruent trials should reflect the true location of the auditory stimulus. Crucially, in regions with crossmodal influences, the decoded locations should be influenced by both auditory and visual locations. This analysis approach thus allows us to investigate how specific brain regions weigh and integrate auditory and visual signals, rather than just addressing the final reported location via behavioural responses.
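
The cross-decoding logic (train support vector regression on congruent trials, then apply it to incongruent trials) can be sketched with simulated voxel patterns and scikit-learn. The data, auditory/visual weights, and "region" properties below are invented for illustration; this is not the authors' pipeline:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
locations = np.array([-15, -5, 5, 15])   # azimuthal stimulus locations (deg)
n_vox = 50
tuning = rng.normal(size=n_vox)          # fixed spatial tuning of each voxel

def simulate_patterns(aud, vis, w_aud, n_trials=20, noise=2.0):
    """Toy voxel patterns whose signal mixes the auditory and visual
    locations, with weight w_aud on audition (1 = purely auditory)."""
    signal = w_aud * aud + (1 - w_aud) * vis
    return signal * tuning + noise * rng.normal(size=(n_trials, n_vox))

# Train on congruent trials (auditory and visual locations coincide).
X_train = np.vstack([simulate_patterns(l, l, w_aud=0.3) for l in locations])
y_train = np.repeat(locations, 20)
model = SVR(kernel="linear").fit(X_train, y_train)

# Apply to incongruent trials: sound at -5 deg, visual signal at +5 deg.
X_test = simulate_patterns(-5, 5, w_aud=0.3)
decoded = model.predict(X_test).mean()   # pulled towards the visual location
```

In a visually weighted "region" like this one, the decoded location for incongruent trials deviates from the true sound location towards the visual signal, which is exactly the signature the ROI analysis exploits.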

Fig 2 shows the spatial locations decoded with support vector regression from regional BOLD response patterns for unisensory auditory, congruent audiovisual, and incongruent audiovisual stimuli along the dorsal auditory and visual spatial processing hierarchies identified in previous research [5,10,13,14,43]. As previously reported for younger populations [10,13], primary auditory area A1 and “higher-level” auditory area planum temporale encoded mainly the sound location, while “low-level” visual areas V1-V3, posterior intraparietal sulcus, and anterior intraparietal sulcus represented the visual location. As anticipated, decoding accuracy for visual stimulus location (which is encoded retinotopically [44]) was far higher than for auditory stimulus location (which is encoded across broadly tuned neural populations [45]). Further, the decoding accuracy for audiovisual congruent stimuli was smaller for parietal than occipital visual areas, reflecting the increase in receptive field sizes along the visual processing hierarchy.

Fig 2. fMRI multivariate decoding results (support vector regression).

Across-participants mean (±1 SEM) decoded spatial locations for younger (blue) and older (red) participants for (A) unisensory auditory, (B) congruent audiovisual, and (C) incongruent audiovisual stimuli. Results for 5 ROIs are shown: visual regions (V1-V3); posterior intraparietal sulcus (IPS 0–2); anterior intraparietal sulcus (IPS 3–4); planum temporale (PT); and primary auditory cortex (A1). Note that for incongruent conditions, results for all ROIs are plotted according to the location of the auditory stimulus. The data underlying this Figure can be found in S1 Data.

https://doi.org/10.1371/journal.pbio.3002494.g002

Most importantly, the comparison between unisensory auditory, congruent audiovisual, and incongruent audiovisual conditions provides insights into how different regions combine auditory and visual signals.

In planum temporale, congruent visual inputs increased decoding accuracy compared to unisensory auditory conditions. Conversely, incongruent visual inputs biased auditory spatial encoding mainly at small spatial disparities (i.e., a “neural ventriloquist effect”). These crossmodal biases broke down at large spatial disparities, when the brain infers that 2 signals come from different sources, thereby mirroring the integration profile observed at the behavioural level.

In visual areas, we observed an influence of a displaced sound on the decoded spatial location mainly at large spatial disparities. This pattern may be explained by the fact that, at small spatial disparities, observers experience a ventriloquist illusion and thus perceive the sound shifted towards the visual signal. By contrast, at large spatial disparities (when observers are less likely to experience a ventriloquist illusion), a displaced sound from the opposite hemifield biases the spatial encoding in visual cortices via mechanisms of top-down attention. As previously reported [5,10,13,14,43], these crossmodal interactions increased across the cortical hierarchy, being more pronounced in intraparietal sulcus and planum temporale than in early visual and auditory cortices.

These impressions were confirmed statistically by applying the same analyses used to assess behavioural responses: 2 (hemifield: left or right) × 2 (eccentricity: 5° or 15°) × 3 (sensory context: unisensory auditory, audiovisual congruent, or audiovisual incongruent) × 2 (age group: younger or older) mixed ANOVAs were conducted on decoded spatial estimates, separately for each region of interest (ROI) along the visual and auditory processing hierarchy (Table 2). Here, we report results after Bonferroni correction for multiple comparisons in 5 regions; see Table E in S1 Text for uncorrected values. We observed main effects of, and/or interactions with, stimulus eccentricity in all ROIs, confirming that all regions encoded information about the location of the stimuli. Importantly, significant effects of sensory context were apparent in all ROIs except primary auditory cortex, suggesting that all regions except A1 held at least some information about whether a visual stimulus was present or its spatial congruence with the sound. We confirmed that these sensory context effects were not driven entirely by differences between unisensory auditory versus audiovisual stimuli: follow-up ANOVAs that excluded the unisensory condition, so 2 (hemifield: left or right) × 2 (eccentricity: 5° or 15°) × 2 (congruence: audiovisual congruent or audiovisual incongruent) × 2 (age group: younger or older), still showed a significant main effect of congruence and/or an eccentricity × congruence interaction in all ROIs except A1 (for detailed results, see Tables F-J in S1 Text).

Table 2. Results of ANOVAs on support vector regression decoded responses in 5 ROIs.

https://doi.org/10.1371/journal.pbio.3002494.t002

Some significant effects of hemifield were observed specifically in anterior intraparietal sulcus: both hemifield × eccentricity and hemifield × sensory context interactions were found, indicating a degree of left/right bias in the decoded stimulus locations in this region.

Crucially, however, we observed no significant effect of age on the locations decoded from the activation patterns along the auditory and visual spatial processing hierarchies (see Fig 2 and Table 2). Collectively, these results compellingly demonstrate that younger and older adults combine auditory and visual signals into spatial representations along the auditory and visual processing hierarchies in accordance with similar Bayesian computational principles, further supporting the conclusions from our behavioural analysis.

Identification of neural systems involved in spatial localisation of audiovisual signals.

The behavioural and neuroimaging analyses reported so far provide convergent evidence that older and younger adults combine audiovisual signals into spatial representations in a similar way. These analyses focused selectively on observers’ spatial representations, obtained either directly from their behavioural reports or via neural decoding of BOLD responses along the auditory and visual spatial processing hierarchies. Next, we asked more broadly which neural systems are engaged in localisation tasks. Do older and younger adults engage overlapping or partly distinct neural systems for audiovisual spatial processing? Do the activation levels differ across age groups in particular regions? To define these task- and stimulus-related processes most broadly, we compared all stimulus conditions to fixation (i.e., all stimulus conditions > fixation) using mass-univariate general linear model analysis. Moreover, we assessed the neural underpinnings of cognitive control and attentional operations that are critical for localising a sound when presented together with a spatially displaced visual signal (i.e., incongruent > congruent audiovisual stimuli; see Table 3 and Figs 3 and 4, for details).
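
The mass-univariate contrast logic can be sketched at a single voxel: fit a general linear model to the voxel's time course and test a contrast over the stimulus regressors. A simplified simulation with an invented design (real analyses convolve event regressors with a haemodynamic response function and correct across the whole brain):

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans = 200

# Toy design matrix: three stimulus regressors (auditory, AV congruent,
# AV incongruent) plus a constant; fixation is the implicit baseline.
X = np.column_stack([rng.binomial(1, 0.2, n_scans) for _ in range(3)]
                    + [np.ones(n_scans)])
beta_true = np.array([1.0, 1.5, 2.0, 10.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n_scans)  # one voxel's time course

# OLS fit and t-statistic for the contrast "all stimuli > fixation".
beta_hat, res, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([1.0, 1.0, 1.0, 0.0])
dof = n_scans - X.shape[1]
sigma2 = res[0] / dof
t = (c @ beta_hat) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
```

Group-level conjunctions and interactions, such as (AllOlder > FixationOlder) ∩ (AllYounger > FixationYounger), are then built from such per-voxel contrast estimates across participants.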

Fig 3. fMRI activation results for older and younger adults.

Activations for all stimuli (i.e., pooled over auditory, audiovisual congruent, and incongruent) relative to fixation are rendered on an inflated canonical brain (top row) and coronal/transverse sections (middle row). Green = conjunction over both age groups (AllOlder > FixationOlder) ∩ (AllYounger > FixationYounger). Purple = age-related activation increases (AllOlder > FixationOlder) > (AllYounger > FixationYounger). For the inflated brain: bright outlines = height threshold p < .05, whole-brain familywise error corrected. For visualisation purposes, we also show activations at p < .001, uncorrected, as darker filled areas (extent threshold k > 0 voxels). For the brain sections, height threshold p < .05, whole-brain familywise error corrected. Bottom row: Bar plots show mean (±1 SEM) age differences in parameter estimates (arbitrary units) for audiovisual congruent, audiovisual incongruent, and unisensory auditory stimuli at 5° and 15° eccentricities, pooled over left and right stimulus locations, at the indicated peak MNI coordinates. Three illustrative anatomical regions are shown: left inferior frontal sulcus (IFS), left planum temporale (PT), and right intraparietal sulcus (IPS). The data underlying this Figure can be found in S2 Data.

https://doi.org/10.1371/journal.pbio.3002494.g003

Fig 4. Activation increases for incongruent > congruent audiovisual stimuli.

Activation increases for incongruent relative to congruent stimuli (pooled over age groups) are rendered on an inflated canonical brain. Green areas = height threshold p < .05, whole-brain familywise error corrected. For visualisation purposes, we also show activations at p < .001, uncorrected, in yellow. Bar plots show parameter estimates (across-participants mean ± 1 SEM; arbitrary units) for congruent, incongruent, and unisensory stimuli at 5° and 15° eccentricities, pooled over left and right, at the indicated MNI peak coordinates in 3 anatomical regions: left anterior insula, left pre-supplementary motor area (pre-SMA), and right precuneus. The data underlying this Figure can be found in S2 Data.

https://doi.org/10.1371/journal.pbio.3002494.g004

Effects of stimuli and task relative to fixation. A conjunction analysis over age groups revealed stimulus-induced activations in a widespread neural system encompassing key areas of the auditory spatial processing hierarchy such as left planum temporale, extending into left inferior parietal lobe and intraparietal sulci bilaterally (AllOlder > FixationOlder) ∩ (AllYounger > FixationYounger) [46,47]. At a lower threshold of significance, we also observed stimulus-induced activations in the right hemisphere from right planum temporale into inferior parietal lobe and bilateral insulae. Moreover, we observed common activations related to response selection and motor processing in left precentral gyrus/sulcus and right cerebellum.

Next, we identified regions with greater activations for older relative to younger adults by testing for the interaction (AllOlder > FixationOlder) > (AllYounger > FixationYounger). We observed activation increases for older adults in dorsolateral prefrontal cortices along the inferior frontal sulcus. Interestingly, increased activations for older adults were often found adjacent to the regions that were commonly activated for both groups. For instance, we observed greater activations in the lateral plana temporalia extending into more posterior superior temporal cortices. Likewise, the parietal activations extended from the areas observed for both age groups more posteriorly. Moreover, older adults showed increased activations in the inferior frontal sulcus, a region previously implicated in cognitive control of audiovisual processing tasks [40,48]. In summary, older adults showed increased activations relative to younger adults along the spatial auditory pathways from temporal to parietal and frontal cortices.

The opposite contrast (AllYounger > FixationYounger) > (AllOlder > FixationOlder) revealed no activations that were significantly greater in the younger age group.

Overall, these results suggest that older adults sustain spatial localisation performance by increasing activations in a widespread neural system encompassing regions typically associated with auditory spatial processing, such as planum temporale, and in regions associated with attention and executive functions, such as parietal cortices and insulae.

Effects of audiovisual spatial incongruency. Consistent with previous research [14,40,48,49], incongruent relative to congruent audiovisual stimuli increased activations in a widespread attentional and cognitive control system including medial and lateral posterior parietal cortices, inferior frontal sulcus and bilateral anterior insulae (i.e., Incong > Cong, pooled over age groups). However, none of these incongruence effects significantly interacted with age group after whole-brain correction (IncongOlder > CongOlder) > (IncongYounger > CongYounger) or (IncongYounger > CongYounger) > (IncongOlder > CongOlder).

Quantifying stimulus-relevant information in task-related BOLD responses.

The activation increases for older relative to younger adults raise the critical question of whether/how they contribute to sound localisation performance in older adults. Do these age-related activation increases encode information about task-relevant variables such as stimulus location or audiovisual congruency, thereby enabling older adults to maintain localisation accuracy? Further, do they encode information that is redundant or complementary to that encoded in brain areas jointly activated by both age groups? To address these questions, we used model-based multivariate Bayesian decoding. This approach treats different sets of brain regions as models to predict target variables (such as stimulus location) and provides an approximation to the log model evidence, which trades off a model’s accuracy in predicting a target variable with its complexity. Therefore, unlike discriminative approaches such as support vector regression, multivariate Bayesian decoding allows one to assess the relative contributions of different regions (and their combinations) to encoding target variables—such as stimulus location or congruence—using standard procedures of Bayesian model comparison.
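
The evidence-based comparison at the heart of multivariate Bayesian (MVB) decoding can be illustrated with a much-simplified stand-in: a Bayesian linear model whose log marginal likelihood, like MVB's log model evidence, rewards predictive accuracy while penalising complexity. The data and "voxel sets" below are invented, and scikit-learn's BayesianRidge is not the SPM-based MVB machinery used in the study:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
n_trials = 120
target = rng.choice([-1.0, 1.0], n_trials)   # e.g., auditory left vs. right

# Toy "voxel sets": one carries stimulus information, one is pure noise.
informative = target[:, None] * 0.5 + rng.normal(size=(n_trials, 30))
noise_only = rng.normal(size=(n_trials, 30))

def log_evidence(X, y):
    """Log marginal likelihood at convergence of a Bayesian linear model,
    standing in for MVB's log model evidence (accuracy minus complexity)."""
    m = BayesianRidge(compute_score=True).fit(X, y)
    return m.scores_[-1]

ev_info = log_evidence(informative, target)
ev_noise = log_evidence(noise_only, target)
# A voxel set that encodes the target variable yields higher model evidence.
```

In the study, the analogous comparison is run over the [O∩Y], [O>Y], and union voxel sets, with the winning set determined by Bayesian model comparison across participants.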

Specifically, we compared the predictive ability of 3 candidate sets of regions: (i) the regions activated jointly by older and younger adults [O∩Y]; (ii) the regions activated more by older than younger adults [O>Y]; and (iii) the union of the two [O>Y] ∪ [O∩Y]. To match the number of features across these 3 sets, we limited each set of regions to the most significant 1,000 voxels (see Materials and methods for details).

We computed multivariate Bayesian decoding models separately for 4 target variables relating to stimulus properties: visual location (VisL ≠ VisR), auditory location (AudL ≠ AudR), and spatial congruence at small (Incong5 ≠ Cong5) and large (Incong15 ≠ Cong15) disparities.

In both age groups, log model evidence summed over participants was greater for the [O>Y] than for the [O∩Y] set for all target variables. This suggests that the regions in which older participants show greater activations encode stimulus-relevant information better than the regions commonly activated in both age groups. Indeed, as shown in Fig 3, the age-related activation increases are found particularly in planum temporale and parietal cortices, which have previously been shown to be critical for encoding spatial information about auditory and visual stimuli and their spatial congruency [10,43,50].

Moreover, the union model [O>Y] ∪ [O∩Y] outperformed the more parsimonious models [O∩Y] and [O>Y] for each of the target variables. Bayesian model selection indicated that the protected exceedance probability was above 0.81 for the union model across all target variables in both age groups (see Fig 5). These model comparison results collectively show that, in both age groups, the regions with greater activations in older adults [O>Y] encode significant information about task-relevant variables that is complementary to the information encoded in regions commonly activated by younger and older adults [O∩Y].

Fig 5. Results of multivariate Bayesian decoding analysis.

Comparison of 3 sets of regions ([O∩Y], [O>Y], or union of both: [O>Y] ∪ [O∩Y]) in their ability to predict stimulus-related target variables: visual location, auditory location, congruent/incongruent at 5°, and congruent/incongruent at 15°. Protected exceedance probabilities, based on Bayesian model selection, are shown for each set of regions and target variable. The data underlying this Figure can be found in S1 Data.

https://doi.org/10.1371/journal.pbio.3002494.g005

Next, we asked whether this increase in stimulus and task-relevant information for [O>Y] regions is more prevalent or important in older adults, as they show more activations in these regions. To address this question, we assessed whether the union [O>Y] ∪ [O∩Y] relative to the more parsimonious models [O∩Y] and [O>Y] won more frequently in the older age group. Contrary to this conjecture, there were no significant age differences in the frequency with which the union model was the winning model for predicting any of the 4 target variables (χ2 tests of association, p > .05, BF01 ≥ 1.98).

To further explore possible age differences, we investigated the relative contributions of the 3 sets of regions to the encoding of task-relevant variables in older and younger participants. We did this by entering the difference in log model evidence for the union [O>Y] ∪ [O∩Y] set relative to the O∩Y set for each older and younger participant into Mann–Whitney U tests, separately for each of the 4 target variables. After Bonferroni correction for multiple comparisons, none of these tests revealed any significant differences between age groups across the VisL ≠ VisR (U = 116.000, p > .99, BF01 = 2.415, one-tailed), AudL ≠ AudR (U = 126.000, p > .99, BF01 = 2.866, one-tailed), and Incong5 ≠ Cong5 (U = 139.000, p > .99, BF01 = 2.568, one-tailed) target variables (please note that Bayes factors do not contain any adjustment for multiple comparisons). Only for the Incong15 ≠ Cong15 target variable did we observe a small, nonsignificant trend for a greater “boost” in model evidence for the union [O>Y] ∪ [O∩Y] set, relative to the O∩Y set, for older adults compared to younger adults, U = 69.000, p = .052, BF01 = 0.616, one-tailed.
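
The group comparison of this evidence "boost" can be sketched as below (illustrative Python; a hand-rolled normal-approximation Mann–Whitney U stands in for the exact tests computed in JASP, with Bonferroni correction applied by multiplying the p-value by the 4 target variables):

```python
import numpy as np
from math import erfc, sqrt

def mann_whitney_one_tailed(x, y):
    """One-tailed Mann-Whitney U test (alternative: x stochastically
    greater than y), using the normal approximation and assuming no
    ties. Bonferroni correction is applied afterwards by multiplying
    p by the number of target variables tested."""
    nx, ny = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0
    u = ranks[:nx].sum() - nx * (nx + 1) / 2.0     # U statistic for x
    mu = nx * ny / 2.0
    sigma = sqrt(nx * ny * (nx + ny + 1) / 12.0)
    p = 0.5 * erfc((u - mu) / sigma / sqrt(2.0))   # upper-tail p-value
    return u, p
```

For 4 target variables, the Bonferroni-corrected p-value is min(4 * p, 1).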

Taken together, these results suggest that task-relevant information is encoded in each of the sets of regions and, in particular, in areas that are more strongly activated by older adults [O>Y], indicating that older adults boost activations in brain regions that are critical for task performance and encoding stimulus-relevant information. Further, the information encoded in the conjunction [O∩Y] and the “greater activation” [O>Y] sets was not redundant but at least partly complementary, so that the union set [O>Y] ∪ [O∩Y] outperformed both of those more parsimonious models. In other words, activation patterns in [O∩Y] and in [O>Y] made complementary contributions to encoding task- and stimulus-relevant variables.

Crucially, however, this was true for both older and younger adults. Likewise, the additional information gained by adding the “greater activation” [O>Y] set to the conjunction [O∩Y] set was comparable in both age groups. These results suggest that older adults show increased activations in brain areas that are important for encoding stimulus- and task-relevant information.

Discussion

Healthy ageing leads to deficits in sensory processing and higher-order cognitive mechanisms. Nevertheless, older adults have been shown to maintain the ability to appropriately integrate and segregate audiovisual signals to aid stimulus localisation [32,51]. The present study investigated the neural mechanisms that support this maintenance of performance.

In agreement with previous research [20,32,51,52], our behavioural results suggest that older adults were largely able to maintain audiovisual spatial localisation accuracy. The responses of both age groups were consistent with the principles of Bayesian causal inference: Crossmodal biases were strongest when the sound and visual signals were spatially close together (and therefore more likely to share a common source), and weakest when the 2 signals were highly spatially separated (and therefore less likely to share a common source). We observed one small but significant three-way interaction between age, eccentricity, and sensory context. The profile of results (see Fig 1C) suggests that this effect was driven primarily by older adults’ sound localisation responses being more biased towards an incongruent visual stimulus (i.e., a greater ventriloquist effect) at large (30°) spatial disparities. These stronger audiovisual spatial biases for older adults at large spatial disparities were not observed in our previous behavioural research that took place outside the scanner [32]. One possibility is that they result from the greater attentional resources needed to effectively integrate or segregate audiovisual signals in the noisy environment of the MRI scanner. Background noise reduces a target sound’s signal-to-noise ratio, increasing the attentional resources required to identify and locate it, particularly in the presence of a highly salient and incongruent visual distractor (as in our large audiovisual disparity condition). As argued in a recent review [31], the greatest effects of ageing on multisensory integration are often found in situations of high attentional demand featuring, for example, noise or distractor signals (see, for instance, [53–55]). Similarly, small age-related hearing deficits may only become apparent under adverse listening conditions [56].
However, a similar result—older adults exhibiting stronger ventriloquist effects at larger spatial disparities—has previously been found even in the absence of background noise [33]. It is therefore possible that, rather than experimental design or stimulus factors, this small discrepancy in findings between our previous behavioural work [32] and the present study may be explained by differences in the samples. Perhaps the older participants in our behavioural study were simply less affected by age-related hearing loss or temporal processing deficits [39]. Future behavioural research could further explore these issues by systematically assessing the effects of ageing on spatial localisation in a ventriloquist task under various degrees of background noise, attentional load, and task demands in a large, diverse sample. It is also interesting to note that this behavioural effect is not reflected in the spatial representations decoded along the audiovisual processing hierarchy (discussed in more detail below), possibly because age-related differences arise in cortical areas beyond our regions of interest. However, given the differences between the fMRI and behavioural data and their analyses, it would be inappropriate to draw any strong conclusions here.

Having established that older and younger adults similarly integrate audiovisual signals into spatial perceptual reports, we next investigated their underlying neural representations as decoded from fMRI BOLD response patterns along the auditory and visual spatial processing pathways. As previously shown in human neuroimaging and neurophysiology studies [10,13,14,57–59], audiovisual interactions increased progressively across the cortical hierarchy. Primary auditory cortices (A1) encoded primarily the location of the auditory component of the stimuli, and early visual cortices (V1-V3) mainly that of the visual component, but small yet significant effects of sensory context, and even of audiovisual spatial congruency, were observed in primary visual areas. Again, these findings align nicely with a wealth of studies showing audiovisual interaction effects in primary sensory cortices [49,60–63]. Interestingly, a displaced visual stimulus biased the spatial encoding mainly at small spatial disparities in planum temporale, thereby mirroring the profile of crossmodal biases observed at the behavioural level that are consistent with Bayesian causal inference. By contrast, a displaced auditory stimulus biased the spatial encoding mainly at large spatial disparities in visual cortices. The latter suggests that the crossmodal biases on spatial representations decoded from visual cortices arise mainly from top-down, possibly attentional, influences. At small spatial disparities the perceived location of the less spatially reliable sound is shifted towards the visual location and thus does not affect spatial encoding in visual cortices. At large spatial disparities, audiovisual integration is attenuated or even abolished, so a spatially displaced sound may exert top-down attentional influences on the activation patterns in visual cortices.

Critically, none of these effects varied with age. Fig 2 shows that the decoded stimulus locations (averaged across participants) were near identical in older and younger adults for unisensory auditory, congruent audiovisual, and incongruent audiovisual stimuli in all ROIs. These results suggest that healthy ageing does not substantially alter how the brain integrates audiovisual inputs into spatial representations along the auditory or visual cortical pathways.

Despite these remarkably similar decoding profiles between the 2 age groups, across the auditory and visual processing hierarchies, we observed significantly greater BOLD responses across an extensive network of frontal, temporal, and parietal regions for older relative to younger adults in the spatial localisation task. This is in line with previous work showing age-related activation increases, especially in frontal and parietal regions, in a wide variety of situations [35,37,38,64,65], including those that involve processing of complex multisensory stimuli [66]. In the present study, older adults showed greater activations in areas such as superior temporal cortices (including plana temporalia), as well as inferior frontal sulci and intraparietal sulci. Some of these areas were adjacent to, or even partly overlapped with, those activated by both age groups (i.e., task-relevant activations above baseline were present in both groups but were greater in older adults).

This dissociation between age-related increases in regional BOLD responses, and comparable neural spatial representations along the audiovisual pathways, raises the question of what these activation increases contribute to task performance. What is their functional role? Specifically, we aimed to distinguish between 2 possible mechanisms: First, older adults may recruit additional areas to compensate for processing and representational encoding deficits in other regions. This idea has previously been suggested for a variety of scenarios in which older adults also showed increased activations [35,67,68] (though see also [37,69]). In such a case, we would expect that regions with age-related activation increases encode information about task-relevant variables more strongly in older than in younger adults.

Second, the age-related activation increases may not indicate compensatory recruitment of extra neural systems to encode stimulus- or task-relevant variables, but rather reflect more nonspecific processes. For instance, age-related activation increases may result from attentional or cognitive control mechanisms that are needed to form neural representations and produce behavioural responses that are matched in spatial precision and accuracy to their younger counterparts. Older adults may also increase activations to overcome inefficient neural processing or need more processing time to accumulate noisier evidence into spatial decisions, resulting in greater BOLD responses. Common to all these nonspecific mechanisms is that the set of regions exhibiting age-related activation increases should contribute similarly to encoding task-relevant information in older and younger populations.

To adjudicate between these 2 classes of neural mechanisms, we applied multivariate Bayesian decoding to compare the information about stimulus location and audiovisual congruency that is encoded in areas with (1) joint activations in both age groups [O∩Y], (2) increased activations in older adults [O>Y], and (3) the union of those 2 sets of regions [O>Y] ∪ [O∩Y]. All 3 sets of regions encoded task-relevant information about sound location and audiovisual spatial disparity. Moreover, formal model comparison indicated that the union model outperformed both of the more parsimonious models that included only 1 set of regions. This increase in model evidence for the union model indicates that regions with age-related activation increases [O>Y] and conjunction regions [O∩Y] provide complementary, rather than redundant, information about task-relevant variables. Further, it suggests that this information is encoded in a widespread, distributed way. Crucially, however, the boost in explanatory power when the regions were combined was comparable between younger and older adults.

Collectively, these results strongly argue against our first hypothesis that older adults engage new compensatory regions to encode stimulus variables. Instead, they align perfectly with previous work by Morcom and Henson [37], who also found that regions with age-related activation increases during memory tasks did not encode extra information in older adults. Likewise, Knights and colleagues [38] report that greater or more widespread activations in older adults did not encode more task-relevant information in a simple target detection/motor response task. Our results thus add to a growing body of research showing that age-related increases in BOLD activity are not indicative of “compensation by reorganisation” [70].

Together with this previous research, our multivariate Bayesian decoding results suggest that the activation increases may reflect more nonspecific compensatory processes. For example, our older adults may have expended more effort or top-down attentional control, used inefficient encoding strategies [38], or accumulated noisier sensory evidence for longer, to maintain spatial localisation performance despite age-related hearing loss or temporal processing deficits that make sound localisation more challenging. This would result in greater and more dispersed BOLD responses in key regions and is consistent with recent computational modelling of audiovisual spatial localisation in younger and older adults [32]. To differentiate between some of these potential mechanisms, future research may employ imaging methods with higher temporal resolution (such as magnetoencephalography) alongside stimuli with longer durations to compare the accumulation of sensory evidence over time between age groups [49]. Another possibility is that these age effects are related to general declines in γ-aminobutyric acid [71], which may lead to greater and less focused activations in older adults; this hypothesis would be a good future target for research employing magnetic resonance spectroscopy.

In conclusion, older adults show greater frontoparietal activations than their younger counterparts during audiovisual spatial integration. Yet, despite differences in BOLD response magnitude, the stimulus-relevant information encoded in these regions is comparable across the 2 age groups. Representations of audiovisual spatial stimuli in regions of the established dorsal auditory and visual processing pathways also remain remarkably unchanged in older adults. This dissociation—between comparable response accuracy and information encoded in brain activity patterns across the 2 age groups, but age-related activation increases—argues against the notion of “compensation by reorganisation” where new regions are recruited to encode stimulus- or task-relevant variables. Instead, our results suggest that age-related activation increases may reflect nonspecific mechanisms such as greater demands on attentional or cognitive control, or longer, less efficient, noisier neural encoding.

Materials and methods

Participants

Twenty younger and 29 older adults were initially recruited from participant databases for a behavioural screening session (see Materials and Methods in S1 Text for details). Two older adults were excluded from the study due to the presence of MRI contraindications, 3 failed to score above 24 on the Montreal Cognitive Assessment [72], and 1 reported taking antidepressant medication. A further 7 older, and 3 younger, adults were excluded for insufficient gaze fixation in the behavioural task. One younger participant could not be contacted following the behavioural session. Therefore, 16 younger (mean age = 24.19, SD = 4.56, 10 female) and 16 older (mean age = 70.75, SD = 4.71, 12 female) adults took part in all 3 experimental sessions. The 32 included participants had normal or corrected-to-normal vision, reported no hearing impairment, and were able to distinguish left from right sounds with a just-noticeable difference (JND) below 10°. The study was approved by the University of Birmingham Ethical Review Committee (Application ERN_15-1458AP1). All participants gave informed consent and were compensated for their time in cash or research credits.

Design and procedure (spatial ventriloquist paradigm inside the scanner)

In a spatial ventriloquist paradigm, participants were presented with synchronous auditory and visual signals at the same or different locations. The auditory signal originated from one of 4 possible spatial locations (−15°, −5°, 5°, or 15° visual angle) along the azimuth. For any given auditory location, a synchronous visual signal was presented at the same spatial location (audiovisual congruent trial), at the symmetrically opposite location (audiovisual incongruent trial), or was absent (unisensory auditory trial). On each trial, observers reported the sound location as accurately as possible by pressing one of 4 spatially corresponding buttons with their right hand. Thus, our design conformed to a 4 (auditory location: −15°, −5°, 5°, or 15° azimuth) × 3 (sensory context: unisensory auditory, audiovisual congruent, audiovisual incongruent) factorial design (see Fig 1B). Participants fixated a central cross (white; 0.75° diameter) throughout the experiment. Trials were presented with a stimulus onset asynchrony (SOA) of 2.3 seconds. To increase design efficiency, the activation trials were presented in a pseudorandomised fashion interleaved with 6.9-second fixation periods approximately every 20 trials. The experiment included 10 trials (per condition, per run) × 12 conditions × 11 five-minute runs (split over 2 separate days).

Experimental setup

Stimuli were presented using Version 3 of the Psychophysics Toolbox [73], running on MATLAB 2014b on an Apple MacBook. Auditory stimuli were presented at approximately 75 dB SPL through Optime 1 electrodynamic headphones (MR Confon). Visual stimuli were back-projected by a JVC DLA-SX21E projector onto an acrylic screen, viewed via a mirror attached to the MRI head coil. The total viewing distance from eye to screen was 68 cm. Participants responded using infrared response pads (Nata Technologies) held in the right hand.

Stimuli

Visual stimuli consisted of an 80-ms flash of 20 white dots (diameter of 0.4° visual angle), whose locations were sampled from a bivariate Gaussian distribution with a standard deviation of 2.5° in horizontal and vertical directions, presented on a black background.
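
The dot-cloud generation can be illustrated as follows (a Python/NumPy sketch of stimulus generation that was actually implemented in MATLAB with the Psychophysics Toolbox; placing the cloud's vertical centre at 0° is our assumption, as stimulus locations varied only along the azimuth):

```python
import numpy as np

def make_dot_cloud(centre_deg, n_dots=20, sd_deg=2.5, rng=None):
    """Sample dot positions (in degrees visual angle) for one 80-ms
    flash: n_dots drawn from an isotropic bivariate Gaussian centred
    on the stimulus azimuth, with SD 2.5 deg in the horizontal and
    vertical directions. Returns an (n_dots x 2) array of (x, y)."""
    rng = np.random.default_rng(rng)
    return rng.normal(loc=[centre_deg, 0.0], scale=sd_deg,
                      size=(n_dots, 2))
```

Sampling the cloud anew on every trial prevents observers from learning a fixed dot configuration.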

Auditory spatialised stimuli (80 ms duration) were created by convolving a burst of white noise with spatially specific head-related transfer functions (HRTFs) based on the KEMAR dummy head of the MIT Media Lab [74]. Sounds were generated independently for every trial and presented with 5-ms onset and offset ramps.
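
The generation of one spatialised burst can be sketched as follows (illustrative Python; the sampling rate and linear ramp shape are assumptions, and `hrir_left`/`hrir_right` stand for the measured KEMAR head-related impulse responses for the desired azimuth):

```python
import numpy as np

def spatialise_noise(hrir_left, hrir_right, dur=0.08, fs=44100,
                     ramp=0.005, rng=None):
    """One 80-ms white-noise burst with 5-ms on/off ramps, convolved
    with a left/right head-related impulse-response pair (assumed to
    have equal lengths) to yield a stereo (n_samples x 2) sound."""
    rng = np.random.default_rng(rng)
    n, r = int(dur * fs), int(ramp * fs)
    noise = rng.standard_normal(n)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)    # onset ramp
    env[-r:] = np.linspace(1.0, 0.0, r)   # offset ramp
    noise *= env
    return np.stack([np.convolve(noise, hrir_left),
                     np.convolve(noise, hrir_right)], axis=1)
```

Interaural time and level differences (and spectral cues) that convey azimuth are carried entirely by the HRIR pair; the noise carrier itself is identical across ears before convolution.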

Analysis of behavioural data (spatial ventriloquist paradigm inside the scanner)

For each participant, we calculated the mean auditory localisation response for each combination of auditory and visual locations. Responses to stimuli in the left hemifield were multiplied by −1, then participant-specific mean auditory localisation responses were entered into a 2 (hemifield: left or right) × 2 (eccentricity: 5° or 15°) × 3 (sensory context: unisensory auditory, audiovisual congruent, or audiovisual incongruent) × 2 (age group: younger or older) mixed ANOVA with the group factor as the only between-participants factor. An equivalent Bayesian mixed ANOVA, as implemented in JASP Version 0.16.4 [75], was also conducted, and result tables include BFexcl values for all main and interaction effects. These values represent the probability of the observed data occurring under a model that excludes a given term, relative to all other models. Thus, a higher number indicates more evidence that the term does not have predictive value within the model. JASP default priors were used for all Bayesian statistical tests. Analyses and underlying data, including reaction times and participant responses during the behavioural screening session (which were substantively similar to responses inside the scanner), are all available in the Supporting information: see S1 Data for underlying data, and Fig A and Tables B-D in S1 Text for analyses.
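
The hemifield-mirroring step amounts to the following (an illustrative NumPy fragment; the function name is hypothetical):

```python
import numpy as np

def fold_hemifields(responses, stimulus_locations):
    """Multiply localisation responses to left-hemifield
    (negative-azimuth) stimuli by -1, so that responses can be
    collapsed over hemifield before entering the mixed ANOVA."""
    return responses * np.where(stimulus_locations < 0, -1.0, 1.0)
```

After folding, an accurate response always maps to a positive value matching the stimulus eccentricity, regardless of hemifield.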

Please note that many of the dependent variables analysed in this study are unlikely to be drawn from normal distributions. Though t tests and ANOVAs can be quite robust to this violation of their assumptions, individual analyses should be interpreted with caution (and considered in the context of the other information provided, such as descriptive plots and corresponding Bayesian tests).

MRI data acquisition

A 3T Philips MRI scanner with a 32-channel head coil was used to acquire both T1-weighted anatomical images (TR = 8.4 ms, TE = 3.8 ms, flip angle = 8°, FOV = 288 mm × 232 mm, image matrix = 288 × 232, 175 sagittal slices acquired in ascending direction, voxel size = 1 × 1 × 1 mm) and T2*-weighted axial echoplanar images with blood-oxygen-level-dependent (BOLD) contrast (gradient echo, SENSE factor of 2, TR = 2,800 ms, TE = 40 ms, flip angle = 90°, FOV = 192 mm × 192 mm, image matrix 76 × 76, 38 transversal slices acquired in ascending direction, voxel size = 2.5 × 2.5 × 2.5 mm with a 0.5-mm interslice gap).

Each participant took part in 2 one-hour scanning sessions, performed on separate days. In total (pooled over the 2 days), 11 task runs of 115 volumes each were acquired (i.e., 1,265 scanning volumes in total). Each scanning session also involved a further 115-volume resting-state run, during which participants were instructed to fixate a central cross. Four additional volumes were discarded from each scanning run prior to the analysis to allow for T1 equilibration effects.

fMRI data analysis

Our fMRI analysis assessed the commonalities and differences in audiovisual spatial processing and integration between younger and older adults by combining 3 complementary methodological approaches. First, we used multivariate pattern decoding with support vector regression to characterise how auditory and visual information are combined into spatial representations along the dorsal visual and auditory processing hierarchies in younger and older participants. Second, we used conventional mass-univariate analyses to investigate how congruent and incongruent audiovisual stimulation influences univariate BOLD responses across the entire brain. Third, we used multivariate Bayesian decoding to assess how the neural systems that show greater activations for older adults, as well as those that were activated in both groups, encode information about the spatial location or congruency of audiovisual stimuli.

Preprocessing and within-participant (first-level) general linear models.

MRI data were analysed in SPM12 [76]. Each participant’s functional scans were realigned/unwarped to correct for movement, slice-time corrected, and coregistered to the anatomical scan. For multivariate pattern decoding (i.e., support vector regression and multivariate Bayesian decoding), these native-space data were spatially smoothed with a Gaussian kernel of 3 mm full-width at half-maximum (FWHM). For mass-univariate analyses and multivariate Bayesian decoding, the slice-time-corrected and realigned images were normalised into Montreal Neurological Institute (MNI) space using parameters from segmentation of the T1 structural image [77], resampled to a spatial resolution of 2 × 2 × 2 mm3 and spatially smoothed with a Gaussian kernel of 8 mm FWHM.

The following processing steps were conducted separately on both native-space and MNI-transformed data. Each voxel’s time series was high-pass filtered to 1/128 Hz. The fMRI experiment was modelled in an event-related fashion with regressors entered into the design matrix after convolving each event-related unit impulse (coding the stimulus onset) with a canonical hemodynamic response function and its first temporal derivative. In addition to modelling the 12 conditions in our 4 (auditory location: −15°, −5°, 5°, or 15° visual angle) × 3 (sensory context: unisensory auditory, audiovisual congruent, audiovisual incongruent) within-participant factorial design, the model included the realignment parameters as nuisance covariates to account for residual motion artifacts. For the mass-univariate analysis and the multivariate Bayesian decoding analysis, the design matrix also modelled the button response choices as a single regressor to account for motor responses. To enable more reliable estimates of the activation patterns, we did not account for observers’ response choices in the support vector regression analysis that is reported in this manuscript (sound locations and observers’ sound localisation responses were highly correlated). However, a control analysis confirmed that the fMRI decoded spatial locations did not differ across age groups when observers’ spatially specific responses were also modelled.
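
The construction of one event-related regressor can be sketched as follows (illustrative Python; SPM's canonical HRF additionally includes amplitude scaling and, as used here, a temporal-derivative term, and the gamma parameters below are the standard double-gamma defaults rather than values reported in this paper):

```python
import numpy as np
from math import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma canonical HRF (early response peak minus a later
    undershoot with ratio 1/6), sampled every TR seconds."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / gamma(6.0)
    under = t ** 15 * np.exp(-t) / gamma(16.0)
    return peak - under / 6.0

def condition_regressor(onsets_sec, n_scans, tr):
    """Convolve a unit-impulse onset vector (one impulse per stimulus
    onset of a condition) with the canonical HRF to form one column
    of the event-related design matrix."""
    u = np.zeros(n_scans)
    u[(np.asarray(onsets_sec) / tr).astype(int)] = 1.0
    return np.convolve(u, canonical_hrf(tr))[:n_scans]
```

One such column is built per condition (plus nuisance columns for motion and, where applicable, button responses), and the GLM fit yields the per-voxel parameter estimates used by all subsequent analyses.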

Correcting BOLD response for age-related changes in vascular reactivity.

The normal ageing process can lead to complex and nonuniform changes in vascular reactivity and neurovascular coupling [78,79]. To at least partly account for these changes, we corrected the BOLD-response amplitude (i.e., parameter estimates pertaining to the canonical hemodynamic response function) in each voxel in the MNI-normalised data based on the resting state fluctuation amplitude (or scan-to-scan signal variability) [79,80]. Resting-state data were preprocessed exactly as the task (i.e., spatial ventriloquist) data (i.e., realigned/unwarped, slice-time corrected, coregistered to the anatomical image, normalised to MNI space, resampled, and spatially smoothed with a Gaussian kernel of 8 mm FWHM). We applied additional steps to minimise the effect of motion, and other nuisance variables, on the signal. First, we applied wavelet despiking [81] and linear and quadratic detrending. The BOLD response over scans was then residualised with respect to the following regressors: white matter signal (the mean across all voxels containing white matter, according to SPM’s automated segmentation algorithm, was taken for each volume, and the time-varying signal included as a regressor); cerebrospinal fluid signal (using the same procedure as with white matter); and movement parameters (and their first derivatives). The signal was then bandpass-filtered at 0.01 to 0.08 Hz to maximise the contribution of physiological factors to the signal fluctuation. The standard deviation of the remaining variation across scans at each voxel was calculated to create the final resting state fluctuation map (separately for each scanning day). The parameter estimates in each voxel, condition, and participant were standardised by dividing by the relevant resting state fluctuation amplitude value prior to further analysis.
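
The core of this correction (residualising against nuisance regressors, band-pass filtering, taking the SD over scans, and dividing the parameter estimates) can be sketched as follows; this is an illustrative NumPy version using a simple FFT band-pass mask rather than the exact filters of the original pipeline:

```python
import numpy as np

def rsfa_map(data, nuisance, tr, band=(0.01, 0.08)):
    """Resting-state fluctuation amplitude (RSFA) per voxel.

    data: (n_scans x n_voxels) resting-state time series.
    nuisance: (n_scans x n_regressors) nuisance regressors
    (white matter, CSF, motion parameters and their derivatives).
    """
    X = np.column_stack([nuisance, np.ones(len(data))])  # add intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    resid = data - X @ beta                              # residualise
    freqs = np.fft.rfftfreq(len(resid), d=tr)
    spec = np.fft.rfft(resid, axis=0)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0    # band-pass mask
    filtered = np.fft.irfft(spec, n=len(resid), axis=0)
    return filtered.std(axis=0)                          # SD over scans

def correct_betas(betas, rsfa):
    """Standardise condition parameter estimates voxelwise by RSFA."""
    return betas / rsfa
```

Dividing each voxel's task betas by its RSFA rescales the BOLD amplitude by a proxy for local vascular reactivity, reducing the risk that group activation differences merely reflect vascular ageing.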

Decoding audiovisual spatial representations using support vector regression.

Using multivariate pattern decoding with support vector regression, we investigated how younger and older adults combine auditory and visual signals into spatial representations along the auditory and visual processing hierarchies. The basic rationale of this analysis is as follows: We first train a model to learn the mapping from fMRI activation patterns in ROIs to stimulus locations in the external world based solely on congruent audiovisual stimuli. We then use this learnt mapping to decode the spatial locations from activation patterns of the incongruent audiovisual signals. In putatively unisensory auditory regions, locations decoded from fMRI activation patterns for incongruent trials should therefore reflect only the sound location (irrespective of the visual location); in unisensory visual regions, decoded locations should reflect only the visual location; and in audiovisual integration regions, the decoded locations should be somewhere between the auditory and visual locations. Hence, the locations decoded from activation patterns for audiovisual incongruent stimuli provide insights into how regions weigh and combine spatial information from vision and audition. This approach is closely linked to our behavioural analysis, which focuses on how observers weight and combine audiovisual signals into spatial percepts or reported locations.

For the multivariate decoding analysis, we extracted the parameter estimates of the canonical hemodynamic response function for each condition and run from voxels of the regions of interest (i.e., fMRI activation vectors; see ROI section below). The parameter estimates pertaining to the canonical hemodynamic response function defined the magnitude of the BOLD response to the auditory and audiovisual stimuli in each voxel. Each fMRI activation vector for the 12 conditions in our 4 (auditory location) × 3 (sensory context) factorial design was based on 10 trials within a particular run. Activation vectors were normalised to between 0 and 1.

For each of the 5 ROIs along the visual and auditory processing hierarchies, we trained a support vector regression model (with default parameters C = 1 and γ = 1/n features, as implemented in LIBSVM 3.17 [82], accessed via The Decoding Toolbox Version 3.96 [83]) to learn the mapping from the fMRI activation vectors to the external spatial locations based on the audiovisual spatially congruent conditions from all but one of the 11 runs. This learnt mapping from activation patterns to external spatial locations was then used to decode the spatial location from the fMRI activation patterns of the unisensory auditory, audiovisual congruent, and audiovisual incongruent conditions of the remaining run. In a leave-one-run-out cross-validation scheme, the training-test procedure was repeated for all 11 runs. The decoded spatial estimates for each condition were then averaged across runs.
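
The leave-one-run-out scheme can be sketched as follows (illustrative Python; a closed-form ridge regression decoder stands in for LIBSVM's support vector regression, so exact predictions differ from those of the reported analysis):

```python
import numpy as np

def loro_decode(patterns, locations, runs, congruent):
    """Leave-one-run-out decoding of spatial location. In each fold the
    decoder is trained only on audiovisual-congruent conditions of the
    training runs, then predicts locations for ALL conditions
    (unisensory, congruent, incongruent) of the held-out run."""
    preds = np.full(len(locations), np.nan)
    for run in np.unique(runs):
        train = (runs != run) & congruent
        X, y = patterns[train], locations[train]
        Xm, ym = X.mean(axis=0), y.mean()
        # Ridge solution (lambda = 1) on mean-centred training data.
        w = np.linalg.solve((X - Xm).T @ (X - Xm) + np.eye(X.shape[1]),
                            (X - Xm).T @ (y - ym))
        test = runs == run
        preds[test] = (patterns[test] - Xm) @ w + ym
    return preds
```

Because the mapping is learnt only from congruent stimuli, the decoded locations for incongruent trials reveal how an ROI weighs the conflicting auditory and visual cues.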

The decoded spatial estimates were then analysed in the same way as the behavioural data: Responses to stimuli in the left hemifield were multiplied by −1, then condition-specific estimates were entered into a 2 (hemifield: left or right) × 2 (eccentricity: 5° or 15°) × 3 (sensory context: unisensory auditory, audiovisual congruent, or audiovisual incongruent) × 2 (age group: younger or older) mixed ANOVA at the second (random effects) level separately for each ROI. For analysis, incongruent conditions were labelled based on the location of the stimulus that corresponds with the ROI’s dominant sensory modality: V1-V3 and intraparietal sulcus responses were labelled based on the location of the visual stimulus; planum temporale and A1 were labelled based on the location of the auditory stimulus. As with the behavioural data, corresponding Bayesian mixed ANOVAs [75] were also conducted, and results tables include BFexcl values for all main and interaction effects. Versions of the analyses where all incongruent stimuli were labelled based on the auditory location are also available in Tables K-M in S1 Text, though note that this approach introduces artificial interaction effects between stimulus eccentricity and audiovisual congruence for visual-dominant ROIs.

Regions of interest for support vector regression analysis.

Our support vector regression analysis selectively focused on regions along the dorsal auditory and visual spatial processing pathways that have previously been shown to be critical for integrating auditory and visual signals into spatial representations [5,10,13,14,61]. Specifically, we defined 5 ROIs based on inverse-normalised group-level probabilistic maps. Left and right hemisphere maps were combined. Visual (V1-V3) and intraparietal sulcus (IPS 0–2, IPS 3–4) ROIs were defined using retinotopic maximum probability maps [44]. Primary auditory cortex (A1) was defined based on cytoarchitectonic maximum probability maps [84]. Planum temporale was defined based on labels of the Destrieux atlas [85,86], as implemented in Freesurfer 5.3.0 [87].

Conventional second-level mass-univariate analysis: Identifying stimulus- and task-related activations.

Using conventional mass-univariate analysis, we next characterised activations for audiovisual stimuli relative to fixation, and audiovisual spatial incongruence, across the entire brain, and compared between older and younger participants. At the first level, condition-specific effects for each participant were estimated according to the general linear model (see earlier section) and passed to a second-level ANOVA as contrasts. Inferences were made at the second level to allow for random effects analysis and population-level inferences [88].

At the random effects (i.e., group) level, we tested for:

  1. Effects present in both age groups for all stimuli (unisensory auditory, audiovisual congruent, and audiovisual incongruent) relative to fixation:
    • (AllOlder > FixationOlder) ∩ (AllYounger > FixationYounger)
  2. Age group differences in the effects of all stimuli relative to fixation:
    • (AllOlder > FixationOlder) > (AllYounger > FixationYounger)
    • (AllYounger > FixationYounger) > (AllOlder > FixationOlder)
  3. The effect of audiovisual spatial incongruence, averaged across age groups:
    • Incong > Cong
  4. The interaction between audiovisual spatial incongruence and age group:
    • (IncongOlder > CongOlder) > (IncongYounger > CongYounger)
    • (IncongYounger > CongYounger) > (IncongOlder > CongOlder)
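At the second level, comparisons (2)-(4) reduce to weight vectors over the design-matrix columns; the conjunction in (1) is instead tested via the minimum statistic over the two simple effects. A simplified sketch (the 4-column layouts below are illustrative; the actual design matrix has more columns):

```python
import numpy as np

# Simplified column order: [All_older, Fix_older, All_younger, Fix_younger]
stim_older = np.array([1, -1, 0, 0])      # All_older > Fixation_older
stim_younger = np.array([0, 0, 1, -1])    # All_younger > Fixation_younger

# (2) Age-group difference: difference of the two simple effects
older_gt_younger = stim_older - stim_younger

# Simplified column order:
# [Incong_older, Cong_older, Incong_younger, Cong_younger]
incong_main = np.array([1, -1, 1, -1])    # (3) Incong > Cong, pooled
incong_by_age = np.array([1, -1, -1, 1])  # (4) incongruence x age interaction
```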

Unless otherwise stated, activations are reported at p < .05 at the voxel level, familywise error corrected for multiple comparisons across the entire brain.

Multivariate Bayesian decoding to compare the ability of sets of regions to predict task-relevant variables.

We assessed the extent to which activations identified by the mass-univariate analysis contributed to encoding of visual or auditory location, and their spatial relationship (i.e., congruence), in younger and older participants. Our key question was whether the regions showing greater activations for older than for younger adults contribute additional information to the encoding of these task-relevant variables in both age groups.


To address this question, we used multivariate Bayesian decoding, as implemented in SPM12 [42], which estimates the set of activation patterns that best predicts a particular target variable such as visual or auditory location using hierarchical parametric empirical Bayes. Multivariate Bayes treats a set of regions as a model for encoding a particular target variable (for instance, auditory location left versus right). It estimates the log model evidence, which trades off model accuracy with complexity [42,89]. The model evidence can then be used to compare different models using Bayesian model selection (BMS) at the group (i.e., random effects) level [90]. Hence, unlike support vector regression, multivariate Bayesian decoding allows us to compare the relative contributions of different areas of interest to encoding or predicting a particular target variable (for instance, auditory location left versus right) using standard procedures of Bayesian model comparison. Specifically, we used multivariate Bayesian decoding to compare the contributions of 3 functionally defined sets of regions to encoding stimulus and task-relevant variables:

  1. Activations that are common to younger and older participants (referred to as [O∩Y]), as specified by the conjunction (using the conjunction null [46,47]): (AllOlder > FixationOlder) ∩ (AllYounger > FixationYounger).
  2. Activations that were enhanced for older relative to younger participants (referred to as [O>Y]), as specified by: (AllOlder > FixationOlder) > (AllYounger > FixationYounger).
  3. The union [O>Y] ∪ [O∩Y] of each of the above 2 sets of regions.

These sets of regions were defined based on the respective inverse normalised statistical comparisons at the random effects group level, using a leave-one-participant-out scheme. They were constrained to include only the 1,000 voxels with the greatest t value for the respective comparisons; the union set [O>Y] ∪ [O∩Y] was created by randomly sampling 500 unique (nonoverlapping) voxels from each of the 2 component sets of regions.
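The sampling scheme for the union set might be sketched as follows (voxel indices are hypothetical; the original selection operated on participant-specific leave-one-out maps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear voxel indices of the two 1,000-voxel sets;
# they overlap in indices 800-999.
conj = np.arange(0, 1000)      # [O∩Y]
o_gt_y = np.arange(800, 1800)  # [O>Y]

# Draw 500 voxels from [O∩Y], then 500 from [O>Y] excluding any voxel
# already chosen, so the union holds 1,000 unique voxels.
from_conj = rng.choice(conj, size=500, replace=False)
from_ogty = rng.choice(np.setdiff1d(o_gt_y, from_conj), size=500,
                       replace=False)
union = np.concatenate([from_conj, from_ogty])
```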

For each set of regions, we fitted 4 independent multivariate Bayes models, predicting different target variables:

  1. Visual location [VisL ≠ VisR]
  2. Auditory location [AudL ≠ AudR]
  3. Incongruence with 5° eccentricity [Incong5 ≠ Cong5]
  4. Incongruence with 15° eccentricity [Incong15 ≠ Cong15]

Both predictor and target variables were residualised with respect to effects of no interest (i.e., all general linear model covariates other than those involved in the target contrast).

Please note that the contrasts used to define sets of regions were orthogonal to the target variables (for instance, the contrast [All > Fixation], pooled over both age groups, is orthogonal to visual location [VisL ≠ VisR]). Moreover, the sets of regions were defined using a leave-one-participant-out cross-validation scheme, so each participant’s own activations were not used to define their participant-specific sets.
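The orthogonality of the region-defining and target contrasts can be verified with a dot product. A toy check with a simplified 4-condition layout (the actual design comprises more conditions):

```python
import numpy as np

# Simplified condition order: [VisL_AudL, VisL_AudR, VisR_AudL, VisR_AudR]
all_vs_fixation = np.array([1, 1, 1, 1])   # region-defining contrast
vis_left_right = np.array([1, 1, -1, -1])  # target: VisL vs. VisR
aud_left_right = np.array([1, -1, 1, -1])  # target: AudL vs. AudR

# A zero dot product means the contrast used to select voxels carries
# no information about the decoding target.
assert all_vs_fixation @ vis_left_right == 0
assert all_vs_fixation @ aud_left_right == 0
```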

Separate multivariate Bayes models were fitted for each participant, for each set of regions, and for each target variable. We entered the resulting log model evidence values into statistical analyses and Bayesian model comparison procedures to assess the contributions of the 3 different sets of regions to the encoding of the 4 target variables and to explore whether/how these contributions varied with age. More specifically, the analysis included the following steps:

First, we assessed whether information is encoded in a sparse or a spatially distributed fashion in each set of regions by comparing models in which the patterns are individual voxels (i.e., “sparse”) versus clusters (i.e., a smooth spatial prior). In our data, the sparse model (in which the weights of individual voxels are optimised) outperformed the smooth model across all analyses (paired-sample t tests of log model evidences, p < .001), so we focus selectively on the results from this model class.

We also ensured that the target variables could be decoded reliably from each set of regions by comparing the evidence for each “model of interest” with the evidence of models in which the design matrix had been randomly phase shuffled (i.e., stimulus onset times uniformly shifted by a random amount; this was repeated 20 times, and the mean of the log model evidence was taken; see, for instance, [37] for a similar approach). Using t tests, we compared the difference in real versus shuffled model evidences and confirmed that the real models performed significantly better for all sets of regions and target variables (p < .05, one tailed) except Incong15 ≠ Cong15 in the O∩Y set of regions, t(31) = 1.24, p = .113.
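This shuffled-baseline comparison amounts to a one-tailed, one-sample t test on the difference between each participant's real log evidence and their mean shuffled log evidence. A sketch with simulated values (the numbers are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_shuffles = 32, 20

# Simulated log model evidences: one real model per participant and
# 20 phase-shuffled null models whose mean forms the baseline.
real = rng.normal(3.0, 1.5, n_participants)
shuffled = rng.normal(0.0, 1.5, (n_participants, n_shuffles))

# One-sample, one-tailed t statistic on the real-minus-null difference
diff = real - shuffled.mean(axis=1)
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_participants))  # df = 31
```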

Next, and more importantly, we assessed which of the 3 candidate sets of regions (i.e., (1) [O∩Y], the conjunction of activations in older and younger; (2) [O>Y], activation increases in older relative to younger adults; or (3) [O>Y] ∪ [O∩Y], the union of sets 1 and 2) is the best model or predictor for each of the target variables, separately for the older and younger groups, by performing Bayesian model selection at the random effects (group) level, as implemented in SPM12 [90]. We report log model evidence values, as well as the protected exceedance probability that a given model is better than any of the other candidate models beyond chance [91]. If the regions with greater activations in older (relative to younger) adults make critical contributions to encoding the task-relevant target variable, we would expect the model evidence for the union [O>Y] ∪ [O∩Y] to exceed that of the conjunction model [O∩Y]. Further, we formally assessed whether the frequency with which each model “won” differed between age groups using a χ2 test of association (1 test per target variable). We report p values after Bonferroni correction for multiple (i.e., 4 target variables) comparisons.
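The χ2 test of association between age group and winning model can be illustrated with a hand-computed Pearson statistic (the win counts below are invented for illustration):

```python
import numpy as np

# Hypothetical counts of the winning model per participant:
# rows = age group (younger, older);
# columns = [O∩Y], [O>Y], [O>Y] ∪ [O∩Y]
wins = np.array([[10, 2, 4],
                 [ 9, 3, 4]])

# Pearson chi-square test of association between age group and winner
row = wins.sum(axis=1, keepdims=True)
col = wins.sum(axis=0, keepdims=True)
expected = row @ col / wins.sum()
chi2 = ((wins - expected) ** 2 / expected).sum()
dof = (wins.shape[0] - 1) * (wins.shape[1] - 1)  # 2
```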

Finally, we investigated whether the set of regions with greater activations for older participants (i.e., the [O>Y] set) contributes more to the encoding of the critical target variables in older adults. To do so, we compared the difference in log model evidence for the union [O>Y] ∪ [O∩Y] set relative to the conjunction [O∩Y] set between older and younger adults using nonparametric Mann–Whitney U tests, separately for each of the 4 target variables (VisL ≠ VisR, AudL ≠ AudR, Incong5 ≠ Cong5, and Incong15 ≠ Cong15). We report p values after Bonferroni correction for multiple (i.e., 4 target variables) comparisons. Full output from these tests, as well as corresponding Bayesian statistics [75], is available in Table N in S1 Text.
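This between-group comparison reduces to a rank-based U statistic on the evidence gains. A minimal hand-rolled sketch with simulated gains (the real analysis used standard statistical software [75]; values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-participant gains in log model evidence for the union
# [O>Y] ∪ [O∩Y] set over the conjunction [O∩Y] set, by age group
gain_older = rng.normal(0.5, 1.0, 16)
gain_younger = rng.normal(0.5, 1.0, 16)

# Mann-Whitney U: count (older, younger) pairs where the older gain
# exceeds the younger gain, with ties counted as one half.
diff = gain_older[:, None] - gain_younger[None, :]
u = (diff > 0).sum() + 0.5 * (diff == 0).sum()

# Bonferroni correction then multiplies each test's p value by 4
# (one test per target variable), capped at 1.
```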

Supporting information

S1 Data. Excel spreadsheet with individual numerical data organised into separate sheets corresponding to the following: Figs 1C, 2A, 2B, 2C, 5, and ABCI in S1 Text; and Tables 1, 2, and A-N in S1 Text.

https://doi.org/10.1371/journal.pbio.3002494.s001

(XLSX)

S2 Data. ZIP file containing the second-level general linear model from the mass-univariate analysis, including values underlying the following: Figs 3, 4, and D-H in S1 Text; and Table 3.

The data are stored in MATLAB structures and NIfTI files and are best viewed using the SPM12 toolbox.

https://doi.org/10.1371/journal.pbio.3002494.s002

(ZIP)

S1 Text. PDF document containing supporting results and methods.

https://doi.org/10.1371/journal.pbio.3002494.s003

(PDF)

Acknowledgments

The authors wish to thank Stephen Mayhew for helpful discussions and support during the design of this research.

References

  1. Alais D, Burr D. The Ventriloquist Effect Results from Near-Optimal Bimodal Integration. Curr Biol. 2004;14:257–262. pmid:14761661
  2. Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. pmid:11807554
  3. Fetsch CR, Pouget A, DeAngelis GC, Angelaki DE. Neural correlates of reliability-based cue weighting during multisensory integration. Nat Neurosci. 2012;15:146–154. pmid:22101645
  4. Helbig HB, Ernst MO, Ricciardi E, Pietrini P, Thielscher A, Mayer KM, et al. The neural mechanisms of reliability weighted integration of shape information from vision and touch. NeuroImage. 2012;60:1063–1072. pmid:22001262
  5. Rohe T, Noppeney U. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Control. eNeuro. 2018:ENEURO.0315-17.2018. pmid:29527567
  6. Battaglia PW, Jacobs RA, Aslin RN. Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A. 2003;20:1391. pmid:12868643
  7. Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex. 2019;119:74–88. pmid:31082680
  8. Beierholm U, Shams L, Ma WJ, Koerding K. Comparing Bayesian models for multisensory cue combination without mandatory integration. Advances in neural information processing systems. 2007. pp. 81–88. Available: http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2007_368.pdf
  9. Rohe T, Noppeney U. Sensory reliability shapes perceptual inference via two mechanisms. J Vis. 2015;15:22. pmid:26067540
  10. Rohe T, Noppeney U. Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception. Kayser C, editor. PLoS Biol. 2015;13:e1002073. pmid:25710328
  11. Shams L, Beierholm UR. Causal inference in perception. Trends Cogn Sci. 2010;14:425–432. pmid:20705502
  12. Wozny DR, Beierholm UR, Shams L. Probability Matching as a Computational Strategy Used in Perception. Maloney LT, editor. PLoS Comput Biol. 2010;6:e1000871. pmid:20700493
  13. Rohe T, Noppeney U. Distinct Computational Principles Govern Multisensory Integration in Primary Sensory and Association Cortices. Curr Biol. 2016;26:509–514. pmid:26853368
  14. Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol. 2021;19:e3001465. pmid:34793436
  15. Odegaard B, Wozny DR, Shams L. The effects of selective and divided attention on sensory precision and integration. Neurosci Lett. 2016;614:24–28. pmid:26742638
  16. Talsma D, Senkowski D, Soto-Faraco S, Woldorff MG. The multifaceted interplay between attention and multisensory integration. Trends Cogn Sci. 2010;14:400–410. pmid:20675182
  17. Vercillo T, Gori M. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration. Front Integr Neurosci. 2015:9. pmid:25999825
  18. Zuanazzi A, Noppeney U. Additive and interactive effects of spatial attention and expectation on perceptual decisions. Sci Rep. 2018;8:6732. pmid:29712941
  19. Zuanazzi A, Noppeney U. Distinct Neural Mechanisms of Spatial Attention and Expectation Guide Perceptual Inference in a Multisensory World. J Neurosci. 2019;39:2301–2312. pmid:30659086
  20. Dobreva MS, O’Neill WE, Paige GD. Influence of aging on human sound localization. J Neurophysiol. 2011;105:2471–2486. pmid:21368004
  21. Li KZH, Lindenberger U. Relations between aging sensory/sensorimotor and cognitive functions. Neurosci Biobehav Rev. 2002;26:777–783. pmid:12470689
  22. Salthouse TA, Hancock HE, Meinz EJ, Hambrick DZ. Interrelations of Age, Visual Acuity, and Cognitive Functioning. J Gerontol B Psychol Sci Soc Sci. 1996;51B:P317–P330. pmid:8931619
  23. Salthouse TA. Aging and measures of processing speed. Biol Psychol. 2000;54:35–54. pmid:11035219
  24. Bugg JM, DeLosh EL, Davalos DB, Davis HP. Age Differences in Stroop Interference: Contributions of General Slowing and Task-Specific Deficits. Aging Neuropsychol Cogn. 2007;14:155–167. pmid:17364378
  25. Tsvetanov KA, Mevorach C, Allen H, Humphreys GW. Age-related differences in selection by visual saliency. Atten Percept Psychophysiol. 2013;75:1382–1394. pmid:23812959
  26. DeLoss DJ, Pierce RS, Andersen GJ. Multisensory Integration, Aging, and the Sound-Induced Flash Illusion. Psychol Aging. 2013;28:802–812. pmid:23978009
  27. McGovern DP, Roudaia E, Stapleton J, McGinnity TM, Newell FN. The sound-induced flash illusion reveals dissociable age-related effects in multisensory integration. Front Aging Neurosci. 2014:6. pmid:25309430
  28. Sekiyama K, Soshi T, Sakamoto S. Enhanced audiovisual integration with aging in speech perception: a heightened McGurk effect in older adults. Front Psychol. 2014:5. pmid:24782815
  29. Setti A, Burke KE, Kenny RA, Newell FN. Is inefficient multisensory processing associated with falls in older people? Exp Brain Res. 2011;209:375–384. pmid:21293851
  30. Setti A, Burke KE, Kenny R, Newell FN. Susceptibility to a multisensory speech illusion in older persons is driven by perceptual processes. Front Psychol. 2013:4. pmid:24027544
  31. Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex. 2021;138:1–23. pmid:33676086
  32. Jones SA, Beierholm U, Meijer D, Noppeney U. Older adults sacrifice response speed to preserve multisensory integration performance. Neurobiol Aging. 2019;84:148–157. pmid:31586863
  33. Park H, Nannt J, Kayser C. Sensory- and memory-related drivers for altered ventriloquism effects and aftereffects in older adults. Cortex. 2021;135:298–310. pmid:33422888
  34. Cabeza R, Anderson ND, Locantore JK, McIntosh AR. Aging Gracefully: Compensatory Brain Activity in High-Performing Older Adults. NeuroImage. 2002;17:1394–1402. pmid:12414279
  35. Davis SW, Dennis NA, Daselaar SM, Fleck MS, Cabeza R. Que PASA? The posterior-anterior shift in aging. Cereb Cortex. 2008;18:1201–1209. pmid:17925295
  36. Reuter-Lorenz PA, Park DC. How Does it STAC Up? Revisiting the Scaffolding Theory of Aging and Cognition. Neuropsychol Rev. 2014;24:355–370. pmid:25143069
  37. Morcom AM, Henson RNA. Increased Prefrontal Activity with Aging Reflects Nonspecific Neural Responses Rather than Compensation. J Neurosci. 2018;38:7303–7313. pmid:30037829
  38. Knights E, Morcom AM, Henson RN. Does Hemispheric Asymmetry Reduction in Older Adults in Motor Cortex Reflect Compensation? J Neurosci. 2021;41:9361–9373. pmid:34580164
  39. DeVries L, Anderson S, Goupell MJ, Smith E, Gordon-Salant S. Effects of aging and hearing loss on perceptual and electrophysiological measures of pulse-rate discrimination. J Acoust Soc Am. 2022;151:1639–1650. pmid:35364956
  40. Noppeney U, Ostwald D, Werner S. Perceptual Decisions Formed by Accumulation of Audiovisual Evidence in Prefrontal Cortex. J Neurosci. 2010;30:7434–7446. pmid:20505110
  41. Rey-Mermet A, Gade M. Inhibition in aging: What is preserved? What declines? A meta-analysis. Psychon Bull Rev. 2018;25:1695–1716. pmid:29019064
  42. Friston K, Chu C, Mourão-Miranda J, Hulme O, Rees G, Penny W, et al. Bayesian decoding of brain images. NeuroImage. 2008;39:181–205. pmid:17919928
  43. Mihalik A, Noppeney U. Causal Inference in Audiovisual Perception. J Neurosci. 2020;40:6600–6612. pmid:32669354
  44. Wang L, Mruczek REB, Arcaro MJ, Kastner S. Probabilistic Maps of Visual Topography in Human Cortex. Cereb Cortex. 2015;25:3911–3931. pmid:25452571
  45. Stecker GC, Middlebrooks JC. Distributed coding of sound locations in the auditory cortex. Biol Cybern. 2003;89:341–349. pmid:14669014
  46. Nichols T, Brett M, Andersson J, Wager T, Poline J-B. Valid conjunction inference with the minimum statistic. NeuroImage. 2005;25:653–660. pmid:15808966
  47. Friston KJ, Penny WD, Glaser DE. Conjunction revisited. NeuroImage. 2005;25:661–667. pmid:15808967
  48. Gau R, Noppeney U. How prior expectations shape multisensory perception. NeuroImage. 2016;124(Part A):876–886. pmid:26419391
  49. Werner S, Noppeney U. Distinct Functional Contributions of Primary Sensory and Association Areas to Audiovisual Integration in Object Categorization. J Neurosci. 2010;30:2662–2675. pmid:20164350
  50. Aller M, Mihalik A, Noppeney U. Audiovisual adaptation is expressed in spatial and decisional codes. Nat Commun. 2022;13:3924. pmid:35798733
  51. Park H, Nannt J, Kayser C. Diversification of perceptual mechanisms underlying preserved multisensory behavior in healthy aging. Neuroscience. 2020 Feb.
  52. Dobreva MS, O’Neill WE, Paige GD. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects. Exp Brain Res. 2012;223:441–455. pmid:23076429
  53. Barrett MM, Newell FN. Task-Specific, Age Related Effects in the Cross-Modal Identification and Localisation of Objects. Multisens Res. 2015;28:111–151. pmid:26152055
  54. Furman JM, Müller MLTM, Redfern MS, Jennings JR. Visual–vestibular stimulation interferes with information processing in young and older humans. Exp Brain Res. 2003;152:383–392. pmid:12920495
  55. Mevorach C, Spaniol MM, Soden M, Galea JM. Age-dependent distractor suppression across the vision and motor domain. J Vis. 2016;16:27. pmid:27690167
  56. Rimmele JM, Sussman E, Poeppel D. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: A healthy-aging perspective. Int J Psychophysiol. 2015;95:175–183. pmid:24956028
  57. Cao Y, Summerfield C, Park H, Giordano BL, Kayser C. Causal Inference in the Multisensory Brain. Neuron. 2019;102:1076–1087.e8. pmid:31047778
  58. Dahl CD, Logothetis NK, Kayser C. Spatial organization of multisensory responses in temporal association cortex. J Neurosci. 2009;29:11924–11932. pmid:19776278
  59. Rohe T, Ehlis A-C, Noppeney U. The neural dynamics of hierarchical Bayesian causal inference in multisensory perception. Nat Commun. 2019;10:1907. pmid:31015423
  60. Besle J, Fischer C, Bidet-Caulet A, Lecaignard F, Bertrand O, Giard M-H. Visual Activation and Audiovisual Interactions in the Auditory Cortex during Speech Perception: Intracranial Recordings in Humans. J Neurosci. 2008;28:14301–14310. pmid:19109511
  61. Gau R, Bazin P-L, Trampel R, Turner R, Noppeney U. Resolving multisensory and attentional influences across cortical depth in sensory cortices. eLife. 2020;9:e46856. pmid:31913119
  62. Iurilli G, Ghezzi D, Olcese U, Lassi G, Nazzaro C, Tonini R, et al. Sound-driven synaptic inhibition in primary visual cortex. Neuron. 2012;73:814–828. pmid:22365553
  63. Martuzzi R, Murray MM, Michel CM, Thiran J-P, Maeder PP, Clarke S, et al. Multisensory Interactions within Human Primary Cortices Revealed by BOLD Dynamics. Cereb Cortex. 2007;17:1672–1679. pmid:16968869
  64. Jimura K, Braver TS. Age-Related Shifts in Brain Activity Dynamics during Task Switching. Cereb Cortex. 2010;20:1420–1431. pmid:19805420
  65. Velanova K, Lustig C, Jacoby LL, Buckner RL. Evidence for Frontally Mediated Controlled Processing Differences in Older Adults. Cereb Cortex. 2007;17:1033–1046. pmid:16774962
  66. Townsend J, Adamo M, Haist F. Changing channels: An fMRI study of aging and cross-modal attention shifts. NeuroImage. 2006;31:1682–1692. pmid:16549368
  67. Grady C. The cognitive neuroscience of ageing. Nat Rev Neurosci. 2012;13:491. pmid:22714020
  68. Reuter-Lorenz PA, Cappell KA. Neurocognitive Aging and the Compensation Hypothesis. Curr Dir Psychol Sci. 2008;17:177–182.
  69. Morcom AM, Johnson W. Neural Reorganization and Compensation in Aging. J Cogn Neurosci. 2015;27:1275–1285. pmid:25603025
  70. Cabeza R, Albert M, Belleville S, Craik FIM, Duarte A, Grady CL, et al. Maintenance, reserve and compensation: the cognitive neuroscience of healthy ageing. Nat Rev Neurosci. 2018;19:701–710. pmid:30305711
  71. Porges EC, Jensen G, Foster B, Edden RA, Puts NA. The trajectory of cortical GABA across the lifespan, an individual participant data meta-analysis of edited MRS studies. Baker CI, Clarke W, editors. eLife. 2021;10:e62575. pmid:34061022
  72. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53:695–699. pmid:15817019
  73. Kleiner M, Brainard D, Pelli D. What’s new in Psychtoolbox-3? 30th European Conference on Visual Perception. 2007.
  74. Gardner B, Martin K. HRTF Measurements of a KEMAR Dummy Head Microphone. 1994. Report No.: 280.
  75. JASP Team. JASP (Version 0.16.4). 2022.
  76. Friston KJ, Holmes AP, Worsley KJ, Poline J-P, Frith CD, Frackowiak RSJ. Statistical parametric maps in functional imaging: A general linear approach. Hum Brain Mapp. 1994;2:189–210.
  77. Ashburner J, Friston KJ. Unified segmentation. NeuroImage. 2005;26:839–851. pmid:15955494
  78. D’Esposito M, Zarahn E, Aguirre GK, Rypma B. The Effect of Normal Aging on the Coupling of Neural Activity to the Bold Hemodynamic Response. NeuroImage. 1999;10:6–14. pmid:10385577
  79. Kannurpatti SS, Biswal BB. Detection and scaling of task-induced fMRI-BOLD response using resting state fluctuations. NeuroImage. 2008;40:1567–1574. pmid:18343159
  80. Tsvetanov KA, Henson RNA, Tyler LK, Davis SW, Shafto MA, Taylor JR, et al. The effect of ageing on fMRI: Correction for the confounding effects of vascular reactivity evaluated by joint fMRI and MEG in 335 adults. Hum Brain Mapp. 2015;36:2248–2269. pmid:25727740
  81. Patel AX, Kundu P, Rubinov M, Jones PS, Vértes PE, Ersche KD, et al. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series. NeuroImage. 2014;95:287–304. pmid:24657353
  82. Chang C, Lin C. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011;2:27:1–27:27.
  83. Hebart MN, Görgen K, Haynes J-D. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data. Front Neuroinformatics. 2015;8:88. pmid:25610393
  84. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, Amunts K, et al. A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage. 2005;25:1325–1335. pmid:15850749
  85. Dale AM, Fischl B, Sereno MI. Cortical Surface-Based Analysis: I. Segmentation and Surface Reconstruction. NeuroImage. 1999;9:179–194. pmid:9931268
  86. Destrieux C, Fischl B, Dale A, Halgren E. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. NeuroImage. 2010;53:1–15. pmid:20547229
  87. Fischl B. FreeSurfer. NeuroImage. 2012;62:774–781. pmid:22248573
  88. Friston KJ, Holmes AP, Price CJ, Büchel C, Worsley KJ. Multisubject fMRI studies and conjunction analyses. NeuroImage. 1999;10:385–396. pmid:10493897
  89. Morcom AM, Friston KJ. Decoding episodic memory in ageing: A Bayesian analysis of activity patterns predicting memory. NeuroImage. 2012;59:1772–1782. pmid:21907810
  90. Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ. Bayesian model selection for group studies. NeuroImage. 2009;46:1004–1017. pmid:19306932
  91. Rigoux L, Stephan KE, Friston KJ, Daunizeau J. Bayesian model selection for group studies—Revisited. NeuroImage. 2014;84:971–985. pmid:24018303