The ability to synthesize information across multiple senses is known as multisensory integration and is essential to our understanding of the world around us. Sensory stimuli that occur close in time are likely to be integrated, and the accuracy of this integration is dependent on our ability to precisely discriminate the relative timing of unisensory stimuli (crossmodal temporal acuity). Previous research has shown that multisensory integration is modulated by both bottom-up stimulus features, such as the temporal structure of unisensory stimuli, and top-down processes such as attention. However, it is currently uncertain how attention alters crossmodal temporal acuity. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity by utilizing a dual-task paradigm. In this study, participants were asked to judge the temporal order of a flash and beep presented at various temporal offsets (crossmodal temporal order judgment (CTOJ) task) while also directing their attention to a secondary distractor task in which they detected a target stimulus within a stream of visual or auditory distractors. We found decreased performance on the CTOJ task as well as increases in both the positive and negative just noticeable differences with increasing load for both the auditory and visual distractor tasks. This strongly suggests that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity to process multisensory stimuli results in detriments to multisensory temporal processing. Our study is the first to demonstrate changes in multisensory temporal processing with decreased attentional capacity using a dual-task paradigm and has strong implications for developmental disorders such as autism spectrum disorders and developmental dyslexia, which are associated with alterations in both multisensory temporal processing and attention.
Citation: Dean CL, Eggleston BA, Gibney KD, Aligbe E, Blackwell M, Kwakye LD (2017) Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task. PLoS ONE 12(7): e0179564. https://doi.org/10.1371/journal.pone.0179564
Editor: Krish Sathian, Emory University, UNITED STATES
Received: January 13, 2017; Accepted: May 30, 2017; Published: July 19, 2017
Copyright: © 2017 Dean et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All "Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task" files are available from the Inter-university Consortium for Political and Social Research database (http://doi.org/10.3886/E100708V1).
Funding: We would like to thank the Oberlin College Research Fellowship and the Oberlin College Office of Foundation, Government, and Corporate Grants for their support of this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Temporal influences on multisensory integration
As we interact with the world around us, we encounter many stimuli that are perceptible to multiple senses. The field of multisensory integration studies the neurological processes that combine these disparate unisensory stimuli into one unified perception of the world and the resulting changes in perception and behavior. Several stimulus features modulate the likelihood and strength of multisensory integration and have been termed the principles of multisensory integration. For example, unisensory stimuli that share a close temporal and spatial correspondence are more likely to be integrated [2,3]. Additionally, greater integration has been observed in response to stimuli that are relatively less salient. Evidence for the importance of the temporal principle was first established in multisensory neurons in the superior colliculus (SC) of anesthetized cats. Two unimodal stimuli presented closely in time were more likely to produce a response that was superadditive relative to the sum of both unisensory components. Furthermore, the magnitude of the multisensory enhancement decreased as the paired stimuli were presented at larger temporal asynchronies, although some neurons respond most strongly to particular temporal offsets between unisensory stimuli. This effect has been demonstrated for audiovisual, visual-somatosensory, and auditory-somatosensory stimulus pairs. The temporal principle has also been shown to apply to human perception, and several constructs have been developed to quantify differences in multisensory temporal processing [6,7]. The temporal window of integration describes the interval of time over which two stimuli may be perceptually bound into a unified percept, and this window has been shown to differ across individuals, recalibrate based on task demands [9–11], and narrow due to training [6,12–14].
Closely related to the temporal window of integration is the concept of crossmodal temporal acuity which describes the amount of time necessary for a participant to distinguish temporal features across sensory modalities [8,15]. Importantly, disruptions in the temporal processing of multisensory information have been strongly linked to several developmental disorders including autism spectrum disorder, dyslexia, and schizophrenia [16–19]. Multisensory temporal processing is also known to develop across childhood and reach adult-like levels in adolescence [20,21].
Top-down and attentional influences on multisensory integration
In addition to the bottom-up stimulus features discussed in the previous section, several top-down processes such as attention also interact with and modulate multisensory integration. In crossmodal attentional cuing, a stimulus in one sensory modality can spatially direct attention to benefit the processing of a target in a different modality [23–26]. Similarly, attentional resources that are captured by a stimulus in one modality can spread to an unattended stimulus in another modality as long as they share a high temporal correspondence [27–30]. Lastly, a non-spatial, task-irrelevant auditory or tactile stimulus can direct attention to a visual target in a complex, dynamic environment [31,32].
Several studies have also investigated whether multisensory integration can occur pre-attentively or is dependent on top-down attentional processes. While some studies suggest that attention is necessary for the integration of multisensory stimuli [33–38], other studies provide evidence that integration is independent of the effects of attention [39–42]. Aspects of the multisensory stimulus may modulate whether attention is necessary for multisensory integration. For example, multisensory speech integration has been consistently shown to lessen under high attentional demands [36–38]; however, emotional multisensory stimuli may be integrated pre-attentively. Additionally, multisensory stimuli of varying modalities are more effective at capturing exogenous attention, particularly in highly distracting circumstances [43,44]. However, a recently published study has shown that attention is necessary for multisensory integration regardless of the complexity of the multisensory information being integrated.
Interaction between multisensory attention and temporal processing
As discussed above, both bottom-up features, such as the temporal relationship between unisensory stimuli, and top-down processes such as attention influence the likelihood that unisensory stimuli will be perceptually combined. A growing number of studies have begun to explore how multisensory temporal processing and attention interact to inform our understanding of multisensory events in our environment. A group of studies have found that the crossmodal effects of attention decrease with increasing temporal disparity between the unisensory subcomponents [30,31,45]. For example, the crossmodal spread of attention between an attended stimulus of one modality to an unattended stimulus of another modality decreases as the two stimuli are separated in time.
Attention also alters the speed of processing of stimuli such that attended objects come to our conscious awareness earlier than unattended objects. This phenomenon is described by the law of prior entry. In a multisensory context, when attention is directed to a single modality, objects in that modality will be perceived earlier than objects in another modality. This prior entry effect has been observed across several modality pairings [47–52]. Prior entry in a crossmodal context is usually assessed using crossmodal temporal order judgment (CTOJ) or simultaneity judgment (SJ) tasks. In these tasks, participants either judge the temporal order (CTOJ) or simultaneity (SJ) of stimuli across two modalities that are separated by varied stimulus onset asynchronies (SOAs). For both CTOJ and SJ tasks, a point of subjective simultaneity (PSS) can be determined that represents the temporal relationship between the two unimodal stimuli that is perceived as simultaneous by the participant. If a participant is directed to specifically attend to one modality, the PSS will shift toward the participant perceiving the attended modality earlier.
Multisensory researchers have begun to explore how attention may alter multisensory temporal processing by changing the temporal window of integration or crossmodal temporal acuity. A previous study conducted by Vatakis and Spence (2006) presented paired visual and auditory stimuli at various SOAs within a stream of unimodal or multimodal distractors to investigate temporal crowding in a CTOJ experiment. They observed changes in crossmodal temporal acuity (increases in the just noticeable difference (JND)) as a function of position in the distractor stream and the modality of the distractor stream, with audiovisual distractors disrupting TOJ performance the most. The results of this study demonstrate that temporal crowding may decrease crossmodal temporal acuity. Alternatively, Van der Burg et al. investigated the effects of spatial crowding on crossmodal temporal acuity in a novel synchrony judgment task. Participants viewed complex and dynamic stimuli, 19 discs uniquely modulating in luminance, one of which matched an amplitude-modulated tone, while judging which visual stimulus was synchronous with the tone. Synchrony judgment performance was unchanged by the number of discs, indicating that visual spatial crowding does not significantly alter crossmodal temporal acuity. Donohue et al. sought to determine whether attention would influence the size of the temporal window of integration. They used a selective attention paradigm in which attention was directed to the left or right hemifield, and stimuli could be attended or unattended (i.e. occurring in the attended or unattended hemifield). Three distinct behavioral tasks gave three different patterns of interactions between attention and the temporal window of integration, indicating that the effect of attention on multisensory temporal processing is complex.
Current study questions and hypotheses
Although a handful of studies have investigated the links between attention and multisensory temporal processing, their lack of consistency suggests that we are far from a complete understanding. Thus far, no studies have investigated changes in crossmodal temporal acuity while increasing attentional load. Similar dual-task study designs have revealed that an attentionally demanding secondary task can decrease multisensory integration [37,38,56]. Additionally, only one study has investigated whether distractor modality differentially impacts multisensory temporal processing. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity in a CTOJ task by utilizing a dual-task paradigm. Participants were asked to judge the temporal order of a flash and a beep presented at various SOAs while also directing their attention to a secondary distractor task, in which the subject must detect a target stimulus within a stream of visual or auditory distractors. We hypothesized that crossmodal temporal acuity would decrease with increasing load and that the modality of the distractor would modulate the extent of the effect for visual-leading versus auditory-leading stimulus pairs. We did find decreases in crossmodal temporal acuity with increasing attentional load; however, these effects were indistinguishable across distractor modalities.
Materials and methods
Participants
A total of 88 (55 females, 18–38 years of age, mean age of 22) typically developing adults were included in the data analysis for this study. 73 (44 females, 18–38 years of age, mean age of 22) participants completed the CTOJ task along with visual distractors (RSVP experiment), and 29 (17 females, 18–28 years of age, mean age of 21.5) completed the CTOJ task along with auditory distractors (RSAP experiment). 14 participants completed both experiments in separate sessions. Some participants completed additional experimental tasks while completing the current study procedures. Participants were excluded from final analysis if they did not complete all load conditions for either the RSVP or RSAP experiment [RSVP: 9 participants (7 females, mean age of 20.0); RSAP: 0 participants] or did not have a total accuracy of at least 70% on the distractor task for both load conditions [RSVP: 4 participants (3 females, mean age of 20.8); RSAP: 19 participants (14 females, mean age of 21.1)]. Participants reported normal or corrected-to-normal hearing and vision and no history of developmental disorders or seizures. Participants gave written informed consent and were compensated for their time. Study procedures were approved by the Oberlin College Institutional Review Board and were conducted under the guidelines of the Declaration of Helsinki. Data were collected for the RSVP experiment from June 2013 through July 2014 and for the RSAP experiment from June 2014 through January 2015. Participants were recruited through flyers distributed across the Oberlin College campus and online for the Oberlin community. Potential participants contacted the lab through email or phone to receive more information about study participation and to schedule an appointment if interested in participating.
Experimental design overview
All study procedures were completed in a dimly lit, sound-attenuated room. Participants were monitored via closed-circuit cameras for safety and to ensure on-task behavior. All visual stimuli were presented on a 24” Asus VG 248 LCD monitor at a screen resolution of 1920 x 1080 and a refresh rate of 144Hz that was set at a viewing distance of 50cm from the participant. All auditory stimuli were presented from Dual LU43PB speakers which were powered by a Lepai LP-2020A+ 2-Ch digital amplifier and were located to the right and left of the participant. Stimulus and SOA durations were confirmed prior to data collection using an oscilloscope and photodiode to measure visual stimuli. SuperLab 4.5 software was used for stimulus presentation and participant response collection. Participants indicated their responses on a Cedrus RB-834 response box, and responses were saved to a text file.
This study employed a dual-task design to determine whether distracting attention from a multisensory task would alter crossmodal temporal acuity and whether this effect depended on the modality of the distractor. Similar dual-task designs have been shown to reduce attentional capacity [57–59]. Participants completed a primary crossmodal temporal order judgment (CTOJ) task and were also presented with either a rapid serial visual presentation (RSVP) stream or a rapid serial auditory presentation (RSAP) stream in three conditions of increasing perceptual load. Participants were asked to detect a target stimulus within the RSVP or RSAP stream while they completed the CTOJ task. Perceptual load was varied for the distractor tasks to titrate the attentional resources diverted from the CTOJ task. All study procedures related to each distractor modality were completed together. Participants completed the CTOJ task at varying perceptual loads of the distractor task, and each load condition was separated into blocks. Further, the order of the load condition blocks was randomized across participants. Thus, each block tested a particular distractor modality by perceptual load condition. For each block, participants first practiced the CTOJ task without any distracting stimuli. They then practiced the CTOJ task with the additional instructions for that perceptual load.
Crossmodal temporal order judgment task (Fig 1A)
A: Participants completed a CTOJ task during which they determined whether a flash (gray border at the edge of the screen) or a beep occurred first. SOAs ranged from -500 to 500 ms, with negative SOAs indicating that the beep occurred first. B: Some participants completed the CTOJ task while completing a secondary task with visual distractors. Participants were instructed to either ignore the distractors (NL), report a yellow letter (LL), or report a number (HL). C: The remaining participants completed the CTOJ task while completing a secondary task with auditory distractors. Participants were instructed to either ignore the distractors (NL), report a tone that was two octaves above the standard tones (LL), or report a tone that was twice the length of the standard tone (HL).
Visual stimuli consisted of a gray flash subtending 1.7° at the border of the screen. (Fig 1A) The flash was presented 28.1° horizontally and 15.9° vertically from central fixation for 21ms. Auditory stimuli consisted of a 3500Hz pure tone beep presented centrally for 21ms at 70dB SPL. For each trial, there was a 500ms pre-stimulus interval during which either an RSVP or RSAP stream was presented. For negative SOA trials, the beep was presented first, followed by the flash at varying SOAs. For positive SOA trials, the flash was presented before the beep at varying SOAs. The SOA increments were: -500, -400, -300, -200, -150, -100, -50, 0, 50, 100, 150, 200, 300, 400, and 500 ms. The SOA of 0ms indicates that the auditory and visual stimuli were presented simultaneously. Each positive and negative SOA was repeated eight times per block across two blocks for a total of 16 trials per SOA. Simultaneous trials were repeated 16 times per block across two blocks for a total of 32 trials. The RSVP or RSAP stream continued during the presentation of the CTOJ stimuli and for 500ms after. Then, a response screen was presented that asked “which came first?” Participants indicated their response with a “flash” or “beep” button press. Once participants responded to the CTOJ task, they were asked to report with a “yes” or “no” button press whether they detected a target in the RSVP or RSAP streams in the LL and HL blocks. In the NL block, the next trial started after the participant responded to the CTOJ task. Participants first completed a practice round to establish baseline accuracy for each block. In the practice round, each trial was repeated until participants could correctly identify whether the flash or beep came first. The practice round included -500, -400, -300, 300, 400, and 500 ms SOAs. After completing the practice, participants completed two identical blocks and were given the opportunity to take a short break between blocks. The trials within blocks were presented in random order.
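The trial structure above (eight repetitions per nonzero SOA and sixteen simultaneous trials per block, shuffled within block) can be sketched as follows; this is an illustrative reconstruction of the design arithmetic, not the study's actual SuperLab presentation script:

```python
import random

# SOA increments in ms; negative = beep first, positive = flash first
soas = [-500, -400, -300, -200, -150, -100, -50, 0,
        50, 100, 150, 200, 300, 400, 500]

def build_block():
    """Build one randomized CTOJ block: 8 trials per nonzero SOA, 16 at 0 ms."""
    trials = []
    for soa in soas:
        reps = 16 if soa == 0 else 8
        trials.extend([soa] * reps)
    random.shuffle(trials)  # trials within a block were presented in random order
    return trials

block = build_block()
# 14 nonzero SOAs x 8 + 16 simultaneous trials = 128 trials per block;
# two identical blocks give 16 trials per nonzero SOA and 32 simultaneous trials.
```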
Visual distractor task (Fig 1B)
This visual distractor task was similar to the previously reported methods in Gibney et al. (2017). (Fig 1B) Stimuli consisted of rapid serial visual presentations (RSVP) of white and yellow letters and white numbers subtending a 3.5° visual angle and presented at center. Some letters (I, B, O) and numbers (1, 8, 0) did not appear in the RSVP streams because the visual similarity between these letters and numbers would be confusing for participants. The RSVP stream was presented continuously before and after the CTOJ stimuli. Each letter/number in the RSVP stream was presented for 100ms with 20ms between letters/numbers. The distractor task included three condition types: no perceptual load (NL), low perceptual load (LL), and high perceptual load (HL). The participant was presented with an RSVP stream and either asked to ignore it (NL), detect infrequent yellow letters (LL), or detect infrequent white numbers (HL). Previously published dual task studies have utilized similar RSVP streams composed of letters and numbers with a color change representing a low load target and/or a number representing a high load target because a color difference is easier to detect than a graphemic difference and would thus require fewer attentional resources to process [60–63]. Each RSVP stream had a 25% probability of containing no numbers or yellow letters, a yellow letter only, a number only, or a yellow letter and a number, resulting in a 50% probability of a target being present for the LL and HL conditions. After each trial, participants were asked to respond first to the CTOJ task and then report with a “yes” or “no” button press whether they observed a target for that trial. Each load condition was completed in a separate block, and participants were able to take breaks between blocks. The order in which participants completed the load condition blocks was randomized and counterbalanced across participants.
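The four equiprobable stream types described above yield the 50% target rate arithmetically; a minimal sketch (the condition labels and helper function are hypothetical, for illustration only):

```python
# Each RSVP stream type occurs with 25% probability.
stream_types = ["no_target", "yellow_letter", "number", "both"]

def target_present(stream_type, load):
    """Hypothetical helper: is a target present for the given load condition?"""
    if load == "LL":  # low load: detect yellow letters
        return stream_type in ("yellow_letter", "both")
    if load == "HL":  # high load: detect white numbers
        return stream_type in ("number", "both")
    return False      # NL: the stream is ignored entirely

# With equiprobable stream types, a target is present in 2 of 4 types,
# i.e. on 50% of LL and HL trials.
p_ll = sum(target_present(s, "LL") for s in stream_types) / len(stream_types)
p_hl = sum(target_present(s, "HL") for s in stream_types) / len(stream_types)
```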
Auditory distractor task (Fig 1C)
Stimuli consisted of rapid serial auditory presentations (RSAP) of musical notes presented centrally at 60dB SPL. (Fig 1C) The musical notes were pure tones whose frequencies fell on accepted musical notes of the twelve-tone scale within the C4–C5 octave (262–523 Hz) range. The RSAP stream was presented continuously before and after the CTOJ stimuli. Each musical note in the RSAP stream was presented for 100ms (25ms rise and fall time) with 20ms between notes. The distractor task included three condition types: no perceptual load (NL), low perceptual load (LL), and high perceptual load (HL). The participant was presented with an RSAP stream and either asked to ignore it (NL), detect infrequent notes of a much higher frequency (two octaves above the frequency range used for non-targets: 1046–2093 Hz) (LL), or detect infrequent tones that were double the duration (200ms) of non-target tones (HL). Previously published dual task studies have utilized similar RSAP streams with frequency and duration changes identifying targets [64–67]. Preliminary data in the lab confirmed that the duration change was more difficult to detect than the frequency/pitch change and was thus assumed to require more attentional resources to detect. Each RSAP stream had a 25% probability of containing no frequency or duration targets, a frequency target only, a duration target only, or both a frequency and duration target, resulting in a 50% probability of a target being present for the LL and HL conditions. After each trial, participants were asked to respond first to the CTOJ task and then report with a “yes” or “no” button press whether they observed a target for that trial. Each load condition was completed in a separate block, and participants were able to take breaks between blocks. The order in which participants completed the load condition blocks was randomized and counterbalanced across participants.
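The quoted non-target range (262–523 Hz) is consistent with twelve-tone equal temperament anchored at C4 ≈ 261.63 Hz; the sketch below assumes standard A440 tuning, which the text does not explicitly state:

```python
# Equal-tempered semitones within the C4-C5 octave (A440 tuning assumed).
C4 = 261.63  # Hz

# C4 through C5 inclusive: each semitone multiplies frequency by 2**(1/12).
notes = [C4 * 2 ** (n / 12) for n in range(13)]

# LL targets were two octaves higher, i.e. frequency x 2**2 = 4.
ll_targets = [f * 4 for f in notes]
# notes span ~262-523 Hz; ll_targets span ~1046-2093 Hz, matching the text.
```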
Crossmodal temporal order judgment task.
Participants who completed both RSVP and RSAP experiments were included in the analysis with participants who completed one experiment because the experimental effects did not differ in this subgroup. Percent flash first reports were calculated for each SOA within load condition for each participant. Percent flash first reports were then averaged across participants. All statistical analyses were completed using SPSS software. We conducted a Repeated Measures Analysis of Variance (RMANOVA) on percent flash first reports with SOA and perceptual load as within-subjects factors separately for the RSVP and RSAP experiments. We also calculated the partial η2 for the perceptual load main effect, SOA main effect, and the SOA by load interaction to determine whether auditory and visual distractors had similar effect sizes on CTOJ performance. The effect size was calculated post-data collection and was not used to determine sample size for the experiment. We then conducted paired sample t-tests between NL and LL/HL to compare differences in percent flash first reports across perceptual loads for each SOA. Alpha error was controlled by adjusting the alpha level to p = .0017 (.05/30 comparisons). To compare across the RSVP and RSAP experiments, we calculated difference scores (HL-NL and LL-NL) in accuracy for each SOA excluding 0ms since there is no correct answer. We then conducted a RMANOVA on the difference scores with SOA, sign (positive versus negative SOA), and perceptual load as within-subjects factors and distractor modality as a between-subjects factor because few participants completed both experiments. Significant effects were explored using post-hoc paired sample t-tests and a Bonferroni-adjusted alpha level of p = .0021 (.05/24 comparisons).
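The corrected alpha levels quoted above follow directly from dividing the family-wise alpha by the number of comparisons; as a quick check:

```python
# Bonferroni correction: family-wise alpha divided by the number of comparisons.
def bonferroni_alpha(alpha, n_comparisons):
    return alpha / n_comparisons

alpha_soa_tests = bonferroni_alpha(0.05, 30)  # per-SOA t-tests -> ~.0017
alpha_posthoc = bonferroni_alpha(0.05, 24)    # difference-score post-hocs -> ~.0021
```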
Calculation of the psychometric function.
We fit each participant’s percent flash first reports across SOAs to a psychometric function, separately for each perceptual load, using the curve fitting toolbox in Matlab and the following four-factor sigmoidal function [68,69]:
We used the following starting values for each of the four factors: A (upper asymptote) = 100, B (slope) = 5, C (inflection point) = 0, D (lower asymptote) = 0. Furthermore, A was restricted to a range of 75–100, and D was restricted to a range of 0–25. Participants were excluded from this component of the data analysis if the r2 value of their psychometric function was less than 75% for any perceptual load. We then determined the point of subjective simultaneity (PSS) as the inflection point (factor C in the above equation), which indicates the point on the curve at which participants are equally likely to report that the flash or beep occurred first. We calculated the negative just noticeable difference (nJND) as the difference in SOA between 25% and 50% flash first reports and the positive JND (pJND) as the difference in SOA between 50% and 75% flash first reports. We conducted RMANOVAs on the PSS, nJND, and pJND values separately with load as a within-subjects factor and distractor modality as a between-subjects factor. We then conducted paired-sample t-tests for the PSS, nJND, and pJND between NL and LL/HL separately for the visual and auditory distractor versions of the task. Alpha error was controlled by adjusting the alpha level to p = .0125 (.05/4 comparisons). We determined the effect size of the influence of load on the positive and negative JNDs by calculating Cohen’s d for the NL/HL difference scores for both auditory and visual distractors to determine whether the effect sizes were equivalent across distractor modalities.
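The equation itself does not survive in this text. One common four-parameter sigmoid consistent with the factor descriptions (A: upper asymptote, B: slope/scale, C: inflection point, D: lower asymptote) is the four-parameter logistic; under that assumed parameterization, the PSS and the signed JNDs can be read off a fit by inverting the curve. Parameter values below are illustrative, not fitted values from the study:

```python
import math

def logistic(x, A, B, C, D):
    """Assumed four-parameter logistic: approaches D at -inf, A at +inf, inflects at C."""
    return D + (A - D) / (1.0 + math.exp(-(x - C) / B))

def inverse_logistic(y, A, B, C, D):
    """SOA (ms) at which the curve predicts y percent flash-first reports."""
    return C - B * math.log((A - D) / (y - D) - 1.0)

# Illustrative parameters (A and D within the 75-100 and 0-25 bounds above).
A, B, C, D = 98.0, 60.0, -5.0, 2.0

pss = C  # point of subjective simultaneity: the inflection point
soa25 = inverse_logistic(25.0, A, B, C, D)
soa50 = inverse_logistic(50.0, A, B, C, D)
soa75 = inverse_logistic(75.0, A, B, C, D)

# Sign convention inferred from the reported values: nJND comes out negative.
njnd = soa25 - soa50  # SOA difference between 25% and 50% flash-first reports
pjnd = soa75 - soa50  # SOA difference between 50% and 75% flash-first reports
```

Note that in this parameterization a larger B gives a shallower curve, whereas the Matlab starting value of B = 5 suggests the study's exact functional form may differ; the inversion logic is the same for any monotone sigmoid.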
Performance on the distractor task.
We calculated percent accuracy on the distractor task for each participant across SOAs separately for each load and distractor modality. We then conducted a RMANOVA on accuracy with perceptual load as a within-subjects factor and distractor modality as a between-subjects factor.
Results
Participants completed a dual-task paradigm that included a CTOJ task and a distractor task that was composed of either visual or auditory distractors and varied in perceptual load (NL, LL, HL). These tasks were used to determine whether directing attention away from the CTOJ task would decrease crossmodal temporal acuity and whether the modality of the distractor modulated this effect. Participants judged the relative order of a visual flash and auditory beep separated by varying SOAs and reported which they perceived as coming first. Average percent visual first reports were calculated for each SOA and load condition separately for the visual and auditory distractors.
Performance on the crossmodal temporal order judgment task
We conducted a RMANOVA on percent flash first reports for the visual distractor version of the task with perceptual load and SOA as within-subjects factors. We found a significant main effect of SOA [F(14,1008) = 583.44, p < .001; partial η2 = .890], indicating that our CTOJ task was successful in testing crossmodal temporal performance. (Fig 2) Perceptual load did not significantly influence percent flash first reports [F(2,144) = 0.44, p = .643; partial η2 = .006]; however, the SOA by perceptual load interaction was significant [F(28,2016) = 8.30, p < .001; partial η2 = .103], indicating that perceptual load altered percent flash first reports differently across SOAs. We next conducted paired-sample t-tests between loads at each SOA. The following SOAs were significant after correcting for multiple comparisons: NL/LL [no SOAs] and NL/HL [-500 (t(72) = 3.37, p = .001); -400 (t(72) = 4.11, p = 1.04×10−4); -300 (t(72) = 4.11, p = 1.04×10−4); -200 (t(72) = 7.15, p<10−5); -150 (t(72) = 5.28, p<10−5); -100 (t(72) = 4.09, p = 1.11×10−4); 200 (t(72) = 4.00, p = 1.52×10−4); 300 (t(72) = 4.08, p = 1.15×10−4); 400 (t(72) = 3.43, p = .001); 500 (t(72) = 3.57, p = .001)].
SOA significantly influenced the percent of flash-first reports, with positive SOAs (visual leading) resulting in more visual first reports. SOA and perceptual load significantly interacted for both distractor modalities, indicating that perceptual load modulates performance on the CTOJ task. Error bars represent the SEM. * indicates significant differences between NL and HL and/or NL and LL at the Bonferroni-corrected alpha level of p < .0017.
We conducted a RMANOVA on percent flash first reports for the auditory distractor version of the task with perceptual load and SOA as within-subjects factors. We found a significant main effect of SOA [F(14,392) = 273.29, p < .001; partial η2 = .907], indicating that our CTOJ task was successful in testing crossmodal temporal performance. (Fig 2) The main effect of perceptual load approached significance [F(2,56) = 3.12, p = .052; partial η2 = .100]; however, the SOA by perceptual load interaction was significant [F(28,784) = 3.79, p < .001; partial η2 = .119], indicating that perceptual load altered percent flash first reports more strongly at particular SOAs. We next conducted paired-sample t-tests between loads at each SOA. The following SOAs were significant after correcting for multiple comparisons: NL/LL [-100 (t(28) = 3.85, p = .001)] and NL/HL [-200 (t(28) = 4.32, p = 1.77×10−4)]. Taken together, our results clearly demonstrate that increasing perceptual load in both the visual and auditory modalities interferes with performance on the CTOJ task.
Comparisons of crossmodal temporal order judgment performance across distractor modalities
Because both visual and auditory distractors disrupted CTOJ performance, difference scores (HL-NL or LL-NL) in percent accuracy were calculated for both the visual and auditory distractor versions of the CTOJ task to compare across distractor modality. We conducted a RMANOVA on the difference scores with perceptual load, SOA, and sign (positive versus negative SOAs) as within-subjects factors and distractor modality as a between-subjects factor. The main effect of load was significant [F(1,100) = 5.337, p = .023], indicating that difference scores were larger overall for HL (difference of 6.2) than LL (difference of 4.2). The main effect of distractor modality was not significant [F(1,100) = .040, p = .841], indicating that visual and auditory distractors led to similar effects on CTOJ performance. The interaction between SOA and sign [F(6,600) = 2.839, p = .010] was significant, indicating that difference scores were larger for auditory-leading trials (mean difference of 6.6 for auditory-leading and 3.8 for visual-leading) but only at particular SOAs. However, post-hoc comparisons between positive and negative SOAs were not significant for visual or auditory distractors at the Bonferroni-corrected alpha level of p = .0021. The interaction between SOA and load [F(6,600) = 2.181, p = .043] was also significant, indicating that the effect of load on difference scores depended on the SOA. However, post-hoc comparisons between HL and LL difference scores were only significant for the -200ms SOA for visual distractors once correcting for multiple comparisons [t(72) = 6.59, p<10−5]. Taken together, these results indicate that the strongest modulators of difference scores were the perceptual load of the distractors and the SOA of the CTOJ stimuli and that the modality of the distractors did not have a significant influence.
Average visual-first reports for each SOA were fit to a sigmoid curve for each participant separately for each load. The PSS (representing the inflection point of the sigmoid) and positive and negative JNDs (representing temporal acuity) were calculated for each load and participant. (Fig 3) A RMANOVA of the PSS with load and modality as factors revealed no significant main effects, indicating that the PSS did not change across load [F(2,186) = 0.83, p = .439] or distractor modality [F(1,93) = 1.07, p = .304], nor did they interact [F(2,186) = 0.46, p = .631]. (Fig 3A) Perceptual load did significantly influence both the negative (Fig 3B) [F(2,170) = 17.37, p < .001] and positive (Fig 3C) [F(2,166) = 12.65, p < .001] JNDs. Neither the distractor modality nor the interaction between modality and load was significant for positive [main effect of modality: F(1,83) = 0.08, p = .780; interaction: F(2,166) = 1.09, p = .340] or negative [main effect of modality: F(1,85) = 0.04, p = .834; interaction: F(2,170) = 1.751, p = .177] JNDs. Taken together, this indicates that while increasing perceptual load led to decreased crossmodal temporal acuity, the modality of the distractor did not influence this effect. Paired-samples t-tests demonstrated that the negative JND for the visual distractor version of the task [NL: -72.1, LL: -77.8, HL: -105.4] was significantly larger for HL than NL [t(59) = 4.82, p < .001; Cohen’s d = .62] when correcting for multiple comparisons but did not differ between NL and LL [t(59) = 1.05, p = .296]. On the auditory distractor version of the task [NL: -69.7, LL: -94.4, HL: -103.4], the negative JND was significantly larger for HL than NL [t(26) = 3.12, p = .004; Cohen’s d = .41] when correcting for multiple comparisons but did not differ between NL and LL [t(26) = 2.44, p = .021].
Positive JNDs significantly increased between the NL and HL conditions but not between the NL and LL conditions for the visual and auditory distractor versions of the task [Visual Means: NL: 86.6, LL: 93.1, HL: 134.9] [Visual: NL/LL: t(59) = 1.11, p = .296; NL/HL: t(59) = 3.94, p < .001; Cohen’s d = .48] and [Auditory Means: NL: 86.9, LL: 105.0, HL: 116.7] [Auditory: NL/LL: t(26) = 2.14, p = .042; NL/HL: t(26) = 2.92, p = .007; Cohen’s d = .33].
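The per-participant fitting procedure described above can be sketched as follows. This is a minimal sketch on simulated data: the logistic parameterization and the 25%/75% crossing criterion for the JNDs are our assumptions for illustration, not necessarily the study's exact fitting method.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(soa, pss, slope):
    """Logistic psychometric function: P(report 'visual first') as a function of SOA (ms)."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

# Hypothetical SOAs (negative = auditory leading) and simulated mean visual-first reports
soas = np.array([-300, -200, -150, -100, -50, 0, 50, 100, 150, 200, 300], float)
reports = sigmoid(soas, 10, 60) + np.random.default_rng(1).normal(0, 0.02, soas.size)

# One illustrative fit; in the study this is done per participant and load condition
(pss, slope), _ = curve_fit(sigmoid, soas, reports, p0=[0.0, 50.0])

# JNDs taken where the fitted curve crosses 25% and 75% visual-first reports:
# solving sigmoid(x) = 0.75 gives x = pss + slope * ln(3); 0.25 gives the mirror point
njnd = pss - slope * np.log(3)  # negative (auditory-leading) JND
pjnd = pss + slope * np.log(3)  # positive (visual-leading) JND
print(f"PSS = {pss:.1f} ms, nJND = {njnd:.1f} ms, pJND = {pjnd:.1f} ms")
```

Under this parameterization the PSS is the curve's inflection point and the two JNDs index temporal acuity separately on the auditory-leading and visual-leading sides, mirroring how the negative and positive JNDs are reported here.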
Individual participant data were fit with a psychometric function for each perceptual load. The resulting mean PSS (A), nJND (B), and pJND (C) are shown grouped by the modality of the distractor task. Both the nJND and pJND, but not the PSS, increased with increasing load. No significant effects of distractor modality were found. Error bars represent SEM. * Indicates significant differences (p < .0125) as compared to NL.
Distractor task performance (Fig 4)
Accuracy was lower for HL compared to LL for both visual and auditory distractors. Additionally, accuracy was higher for the visual distractor task than the auditory distractor task. Error bars represent SEM. * Indicates significant differences between LL and HL.
Concurrent with the CTOJ task, participants viewed a rapid serial visual presentation (RSVP) or a rapid serial auditory presentation (RSAP). Targets were present in 50% of the trials. We conducted a RMANOVA with response accuracy on the distractor tasks as the dependent variable and perceptual load (LL or HL) as a within-subjects factor and modality of the distractor task as a between-subjects factor. Response accuracy was significantly influenced by perceptual load [F(1,98) = 66.74, p < .001] and was higher for LL than HL for both distractor modalities [overall mean accuracy of 95.83 for LL and 88.43 for HL], indicating that the high load versions of the distractor task were more difficult. (Fig 4) This suggests that the HL versions of the distractor tasks drew more attention away from the CTOJ task. The modality of the distractors also significantly influenced response accuracy [F(1,98) = 29.21, p < .001], with the visual distractors leading to greater accuracy as compared to auditory distractors [overall mean accuracy of 94.33 for visual distractors and 89.93 for auditory distractors], indicating that the auditory distractor task was more difficult than the visual distractor task. The interaction between distractor modality and load was not significant [F(1,98) = 0.12, p = .734].
The present study investigated the interactions between attention and multisensory temporal processing by utilizing a dual-task paradigm to reduce the attentional capacity available to process multisensory temporal information. Participants completed a CTOJ task for which they were asked to judge the temporal order of a flash and beep presented at various SOAs while also directing their attention to a secondary distractor task for which they detected a target stimulus within a stream of visual or auditory distractors. We also tested whether the modality of the distractor task would differentially affect performance on the primary CTOJ task. We found decreases in performance on the CTOJ task with increasing visual and auditory perceptual load. Specifically, we found a significant SOA by load interaction in the RMANOVA for visual-first reports and a significant main effect of load in the RMANOVA for accuracy difference scores. Additionally, both the negative and positive JND increased with increasing visual and auditory load. Taken together, these results strongly suggest that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity available to process multisensory stimuli is detrimental to multisensory temporal processing.
Interestingly, the effect of the distractor task was not uniform across SOAs, as evidenced by the significant SOA by sign and SOA by load interactions for difference scores. Participants maintained a relatively high accuracy at the longest SOAs, suggesting that the distractor task did not simply affect the overall performance level. We also found that participants’ performance was more strongly affected for negative (auditory-leading) SOAs. Because of the relative differences in the speed of light versus sound, light from an audiovisual event will often reach our eyes before the corresponding sound reaches our ears. Thus, the most commonly encountered SOAs in natural environments are visual leading. Given the greater detriments to crossmodal temporal performance for auditory-leading SOAs, our results indicate that this less encountered temporal relationship may rely more heavily on attentional resources to be discernable.
Although our study provides strong evidence that attention promotes more accurate crossmodal temporal processing, the observed interaction between multisensory temporal processing and attention may differ depending on the experimental manipulation of attention. For example, temporal but not spatial crowding appears to disrupt crossmodal temporal acuity [53,54]. Additionally, attention may interact differently with multisensory temporal processing when unisensory stimuli are to be integrated (temporal window of integration) versus when the temporal relationship between unisensory stimuli is being actively compared (crossmodal temporal acuity). Thus, a reduced attentional capacity may have a different impact on the temporal window of integration than what is predicted by the current study findings. Unfortunately, investigating the effects of perceptual load on the temporal window of integration is problematic given that increased perceptual load has been linked to decreases in multisensory integration. Future studies will also need to investigate whether increased perceptual load equally disrupts crossmodal temporal acuity for higher order multisensory stimuli, since attention may interact differently with multisensory temporal processing depending on task-specific features and because complex stimuli have larger temporal windows of integration.
Contrary to our hypothesis, we found no differences between the auditory and visual distractor tasks in their effects on crossmodal temporal acuity. Both distractor tasks resulted in poorer CTOJ performance and increases in the JND that were statistically indistinguishable across modalities. We also did not observe changes in the PSS with increasing load when the CTOJ task was accompanied by either the visual or auditory distractor task. Notably, our results stand in contrast to the theory of prior entry, which predicts that modality-specific attention should alter the speed of neural processing of stimuli in the corresponding modality. For example, if the visual distractor task reduced the capacity to process visual stimuli, participants should show greater accuracy for auditory-leading and poorer accuracy for visual-leading pairs of stimuli. Prior entry has been established in selective attention paradigms and may function differently in dual task paradigms. Alternatively, it is possible that in our dual task paradigm, the visual and auditory distractor tasks reduce capacity in a supramodal rather than modality-specific manner. Previous studies have investigated whether attentional capacity is supramodal or modality-specific (for a general discussion see ). However, these studies have generated conflicting results, with some studies finding that attentional capacity is independent across sensory modalities and others finding that perceptual load in one modality interferes with performance and neural processing in a different modality [71,72].
We did find differences in performance on the distractor task as a function of perceptual load and distractor modality. For both the auditory and visual distractors, we found decreases in accuracy with increasing perceptual load. This indicates that for both modalities, the high load feature that participants were instructed to detect (numbers for the RSVP and longer duration for the RSAP) was more difficult and thus likely demanded more attentional resources. We also found that the auditory distractor task resulted in lower accuracy than the visual distractor task and far more participant exclusions due to poor performance. This suggests that the visual and auditory distractor tasks may have demanded unequal attentional resources. Additionally, this study had approximately twice the number of participants in the RSVP versus RSAP experiments. These differences between the two versions of the task may have acted as a confound and masked some relevant differences between the effects of distractor modality. However, these differences across the visual and auditory distractor tasks are unlikely to be major confounds because of the almost indistinguishable effects of increasing load on CTOJ performance across the two distractor modalities. Additionally, measures of effect size for load and SOA were similar across distractor modalities, suggesting that increasing visual versus auditory load had similar effects on CTOJ performance. However, future studies utilizing visual and auditory distractor tasks that are more equivalent in their difficulty and sample sizes are needed to confirm whether auditory and visual distractors equally affect crossmodal temporal acuity. We also noted a slightly unequal gender ratio in our included participants and a very unequal gender ratio in our excluded participants.
Very little is known about the potential influence of gender on multisensory integration; thus, future research may be needed to evaluate potential gender differences in the effects of attention on multisensory processing.
Potential neural mechanisms
Much is known about how the brain represents the temporal relationships between unisensory events. This knowledge can help frame our understanding of how attention alters multisensory temporal processing. One such framework is the temporal window of integration (TWIN) model proposed by Colonius and Diederich [10,73]. In this model, unisensory information is initially processed independently and is thought to be engaged in a “race.” The next stage of the model, the integration stage, includes all processes after the initial unisensory “race.” If multiple unisensory signals enter the integration stage within the same window of time, they will be integrated into a multisensory percept. In this framework, a reduced attentional capacity could alter the initial processing of unisensory signals such that the initial signal is delayed and initiates the integration stage abnormally late, thus increasing the interval during which the second signal could reach the integration stage. Additionally, attention could alter the length of the integration stage such that reducing the attentional capacity for either modality elongates the integration stage for signals from either modality. Our data more strongly support attention acting at the integration stage because the auditory and visual distractor tasks had equivalent effects on CTOJ performance.
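The two candidate attentional loci described above (a delayed peripheral race versus an altered integration window) can be illustrated with a toy TWIN-style simulation. This is purely illustrative: the exponential processing-time distributions follow the general form of the TWIN framework, but all parameter values and the function itself are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def integration_prob(soa_ms, window_ms, visual_mean=60.0, n_trials=100_000):
    """Estimate P(both unisensory signals reach the integration stage within
    the window) under a simple race with exponential peripheral times."""
    visual = rng.exponential(visual_mean, n_trials)       # visual peripheral time (ms)
    auditory = soa_ms + rng.exponential(40.0, n_trials)   # auditory time, offset by SOA
    return float(np.mean(np.abs(visual - auditory) < window_ms))

# Delaying one unisensory signal (larger visual_mean) or widening the window
# both change integration probability, mimicking the two candidate loci
baseline = integration_prob(soa_ms=0, window_ms=100)
delayed = integration_prob(soa_ms=0, window_ms=100, visual_mean=120.0)
widened = integration_prob(soa_ms=0, window_ms=200)
print(baseline, delayed, widened)
```

In this sketch, slowing the visual peripheral stage lowers the probability that the two signals coincide within the window, whereas lengthening the window raises it, which is one way to make the two hypothesized attentional mechanisms concrete and separable.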
Although the TWIN model is helpful in our understanding of the neural mechanisms of the effects of attention on multisensory temporal processing, it may not apply in the case of crossmodal temporal acuity when participants are actively contrasting (as opposed to integrating) multisensory temporal information. Thus, instead of unisensory signals needing to arrive within a set time-period of each other, the temporal pattern of these signals may be compared in multimodal brain areas or across unimodal areas. For example, the superior temporal sulcus (STS) has been demonstrated to differ in its activity depending on the temporal structure of multisensory events [74–77]. Areas such as the STS may compare the relative onsets or temporal profiles of each unisensory signal arriving from their respective sensory cortices to determine their temporal relationship. A reduced attentional capacity could delay the onset of unisensory information reaching STS or enlarge the time over which unisensory information feeds into STS. Any of these potential mechanisms would lead to a decreased ability to discern temporal order. Additionally, alterations in synchronous oscillatory activity between unisensory and multisensory areas may be the underlying mechanism for the effects of attention on crossmodal temporal acuity. Synchronous coupling of ongoing oscillations across unisensory and multisensory areas has been shown to be important for multisensory integration, and audiovisual synchrony has been demonstrated to influence oscillations at gamma frequencies. The relative contributions of all the aforementioned potential neural mechanisms could be assessed using electroencephalography (EEG) and comparing changes in neural activity with increasing load to the corresponding changes in performance on the CTOJ task.
Specifically, changes in oscillatory amplitude, phase locking, and coherence either from trial to trial or between groups of electrodes could be used to assess the role of synchrony and power in ongoing oscillatory activity. Additionally, changes in behavior with increasing load could be linked to changes in peak amplitude, width, and latency to assess the role of changes in the onset or temporal signature of unisensory signals.
Implications for developmental disorders
In characterizing the relationship between attention and temporal multisensory processing, the present study may shed light on the multisensory deficits present in many neurodevelopmental disorders. Enlargements in the temporal window of integration have been found for autism spectrum disorders (ASD), dyslexia, and schizophrenia [6,16,17,19]. These disorders also show alterations in the control of top-down attentional functions [80–82]. Although the present study investigates the effect of attention on crossmodal temporal acuity in typically developed adults, our findings raise important questions as to the cause of the enlarged temporal window in developmental disorders. The enlarged temporal window could result from an alteration in sensory functioning and/or differences in top-down attention. Future studies comparing differences in performance between neurotypicals and those with developmental disorders on this CTOJ task with increasing load may cast light upon this question. Additionally, future studies could compare individual differences in attentional capacity with measures of the temporal window of integration or crossmodal temporal acuity to determine whether participants with relatively limited attention resources are likely to have greater difficulty distinguishing temporal information across modalities.
Knowing whether changes in sensory versus attentional processes have a stronger impact on multisensory temporal processing could help in the advancement of potential remediation strategies for developmental disorders. For example, differences in the temporal window of integration have been linked to speech deficits in ASD, and the relative timing between auditory and visual signals influences the effects of visual speech on auditory speech perception. Additionally, the temporal window can be narrowed through perceptual training [13,14,77]; however, it has not yet been demonstrated whether narrowing the temporal window leads to improved speech perception in ASD. Training individuals with developmental disorders in attentional control to improve attentional capacity, either alone or in conjunction with temporal perceptual training, may lead to greater improvements in speech perception. Overall, the results of this study add to our understanding of how attention interacts with multisensory integration. Importantly, we have provided a clear link between attentional capacity in both the visual and auditory modalities and a person’s ability to discriminate small temporal differences. The findings of this study have important implications not only for our understanding of developmental disorders but also for the design of multisensory warning signals and other multisensory stimuli for entertainment purposes that are increasingly being incorporated into our technology.
We would like to acknowledge the contributions of Susan Russ to some of the data collection and analysis for this study. We would also like to thank the Oberlin College Research Fellowship and the Oberlin College Office of Foundation, Government, and Corporate Grants for their support of this study.
- Conceptualization: LDK.
- Formal analysis: LDK EA BAE KDG CLD.
- Funding acquisition: LDK.
- Investigation: LDK EA BAE CLD KDG MB.
- Methodology: LDK EA CLD BAE.
- Project administration: LDK.
- Resources: LDK.
- Software: LDK EA CLD BAE KDG.
- Supervision: LDK.
- Validation: LDK CLD KDG.
- Visualization: LDK.
- Writing – original draft: LDK KDG BAE CLD.
- Writing – review & editing: LDK CLD BAE KDG EA MB.
- 1. Stein BE, Huneycutt WS, Meredith MA (1988) Neurons and behavior: the same rules of multisensory integration apply. Brain Res 448: 355–358. pmid:3378157
- 2. Meredith MA, Stein BE (1996) Spatial determinants of multisensory integration in cat superior colliculus neurons. J Neurophysiol 75: 1843–1857. pmid:8734584
- 3. Meredith MA, Nemitz JW, Stein BE (1987) Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. J Neurosci 7: 3215–3229. pmid:3668625
- 4. Meredith MA, Stein BE (1986) Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol 56: 640–662. pmid:3537225
- 5. Wallace MT, Meredith MA, Stein BE (1998) Multisensory integration in the superior colliculus of the alert cat. J Neurophysiol 80: 1006–1010. pmid:9705489
- 6. Wallace MT, Stevenson RA (2014) The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 64: 105–123. pmid:25128432
- 7. Slutsky DA, Recanzone GH (2001) Temporal and spatial dependency of the ventriloquism effect. Neuroreport 12: 7–10. pmid:11201094
- 8. Noel J-P, Wallace M (2016) Relative contributions of visual and auditory spatial representations to tactile localization. Neuropsychologia 82: 84–90. pmid:26768124
- 9. Mégevand P, Molholm S, Nayak A, Foxe JJ (2013) Recalibration of the multisensory temporal window of integration results from changing task demands. PLoS ONE 8: e71608. pmid:23951203
- 10. Diederich A, Colonius H (2015) The time window of multisensory integration: relating reaction times and judgments of temporal order. Psychol Rev 122: 232–241. pmid:25706404
- 11. Vroomen J, Keetels M (2010) Perception of intersensory synchrony: a tutorial review. Atten Percept Psychophys 72: 871–884. pmid:20436185
- 12. Stevenson RA, Ghose D, Fister JK, Sarko DK, Altieri NA, Nidiffer AR, et al. (2014) Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 27: 707–730. pmid:24722880
- 13. Stevenson RA, Wilson MM, Powers AR, Wallace MT (2013) The effects of visual training on multisensory temporal processing. Exp Brain Res 225: 479–489. pmid:23307155
- 14. Powers AR, Hillock AR, Wallace MT (2009) Perceptual training narrows the temporal window of multisensory binding. J Neurosci 29: 12265–12274. pmid:19793985
- 15. Kostaki M, Vatakis A (2016) Crossmodal binding rivalry: A “race” for integration between unequal sensory inputs. Vision Res 127: 165–176. pmid:27591367
- 16. Foss-Feig JH, Kwakye LD, Cascio CJ, Burnette CP, Kadivar H, Stone WL, et al. (2010) An extended multisensory temporal binding window in autism spectrum disorders. Exp Brain Res 203: 381–389. pmid:20390256
- 17. Kwakye LD, Foss-Feig JH, Cascio CJ, Stone WL, Wallace MT (2011) Altered auditory and multisensory temporal processing in autism spectrum disorders. Front Integr Neurosci 4: 129. pmid:21258617
- 18. Woynaroski TG, Kwakye LD, Foss-Feig JH, Stevenson RA, Stone WL, Wallace MT (2013) Multisensory speech perception in children with autism spectrum disorders. J Autism Dev Disord 43: 2891–2902. pmid:23624833
- 19. Hairston WD, Burdette JH, Flowers DL, Wood FB, Wallace MT (2005) Altered temporal profile of visual-auditory multisensory interactions in dyslexia. Exp Brain Res 166: 474–480. pmid:16028030
- 20. Hillock AR, Powers AR, Wallace MT (2011) Binding of sights and sounds: age-related changes in multisensory temporal processing. Neuropsychologia 49: 461–467. pmid:21134385
- 21. Hillock-Dunn A, Grantham DW, Wallace MT (2016) The temporal binding window for audiovisual speech: Children are like little adults. Neuropsychologia. pmid:26920938
- 22. Talsma D, Senkowski D, Soto-Faraco S, Woldorff MG (2010) The multifaceted interplay between attention and multisensory integration. Trends Cogn Sci (Regul Ed) 14: 400–410. pmid:20675182
- 23. Driver J, Spence C (1998) Attention and the crossmodal construction of space. Trends Cogn Sci (Regul Ed) 2: 254–262.
- 24. Pierno AC, Caria A, Glover S, Castiello U (2005) Effects of increasing visual load on aurally and visually guided target acquisition in a virtual environment. Appl Ergon 36: 335–343. pmid:15854577
- 25. Mazza V, Turatto M, Rossi M, Umiltà C (2007) How automatic are audiovisual links in exogenous spatial attention? Neuropsychologia 45: 514–522. pmid:16581094
- 26. Holmes NP, Sanabria D, Calvert GA, Spence C (2007) Tool-use: capturing multisensory spatial attention or extending multisensory peripersonal space? Cortex 43: 469–489. pmid:17533769
- 27. Zimmer U, Roberts KC, Harshbarger TB, Woldorff MG (2010) Multisensory conflict modulates the spread of visual attention across a multisensory object. Neuroimage 52: 606–616. pmid:20420924
- 28. Molholm S, Martinez A, Shpaner M, Foxe JJ (2007) Object-based attention is multisensory: co-activation of an object’s representations in ignored sensory modalities. Eur J Neurosci 26: 499–509. pmid:17650120
- 29. Busse L, Roberts KC, Crist RE, Weissman DH, Woldorff MG (2005) The spread of attention across modalities and space in a multisensory object. Proc Natl Acad Sci U S A 102: 18751–18756. pmid:16339900
- 30. Donohue SE, Roberts KC, Grent-’t-Jong T, Woldorff MG (2011) The cross-modal spread of attention reveals differential constraints for the temporal and spatial linking of visual and auditory stimulus events. J Neurosci 31: 7982–7990. pmid:21632920
- 31. Van der Burg E, Olivers CNL, Bronkhorst AW, Theeuwes J (2008) Pip and pop: nonspatial auditory signals improve spatial visual search. J Exp Psychol Hum Percept Perform 34: 1053–1065. pmid:18823194
- 32. Van der Burg E, Olivers CNL, Bronkhorst AW, Theeuwes J (2009) Poke and pop: tactile-visual synchrony increases visual saliency. Neurosci Lett 450: 60–64. pmid:19013216
- 33. Tang X, Wu J, Shen Y (2016) The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 61: 208–224. pmid:26546734
- 34. Talsma D, Doty TJ, Woldorff MG (2007) Selective attention and audiovisual integration: is attending to both modalities a prerequisite for early integration? Cereb Cortex 17: 679–690. pmid:16707740
- 35. Mozolic JL, Hugenschmidt CE, Peiffer AM, Laurienti PJ (2008) Modality-specific selective attention attenuates multisensory integration. Exp Brain Res 184: 39–52. pmid:17684735
- 36. Alsius A, Navarra J, Soto-Faraco S (2007) Attention to touch weakens audiovisual speech integration. Exp Brain Res 183: 399–404. pmid:17899043
- 37. Alsius A, Navarra J, Campbell R, Soto-Faraco S (2005) Audiovisual integration of speech falters under high attention demands. Curr Biol 15: 839–843. pmid:15886102
- 38. Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, et al. (2017) Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci. Jan 20;11:1. http://journal.frontiersin.org/article/10.3389/fnint.2017.00001/abstract. pmid:28163675
- 39. Wahn B, König P (2015) Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration. Front Psychol 6: 1084. pmid:26284008
- 40. Vroomen J, Bertelson P, de Gelder B (2001) The ventriloquist effect does not depend on the direction of automatic visual attention. Percept Psychophys 63: 651–659. pmid:11436735
- 41. Vroomen J, Driver J, de Gelder B (2001) Is cross-modal integration of emotional expressions independent of attentional resources? Cogn Affect Behav Neurosci 1: 382–387. pmid:12467089
- 42. Bertelson P, Vroomen J, de Gelder B, Driver J (2000) The ventriloquist effect does not depend on the direction of deliberate visual attention. Percept Psychophys 62: 321–332. pmid:10723211
- 43. Santangelo V, Spence C (2008) Is the exogenous orienting of spatial attention truly automatic? Evidence from unimodal and multisensory studies. Conscious Cogn 17: 989–1015. pmid:18472279
- 44. Spence C, Santangelo V (2009) Capturing spatial attention with multisensory cues: a review. Hear Res 258: 134–142. pmid:19409472
- 45. Talsma D, Senkowski D, Woldorff MG (2009) Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli. Exp Brain Res 198: 313–328. pmid:19495733
- 46. Spence C, Parise C (2010) Prior-entry: a review. Conscious Cogn 19: 364–379. pmid:20056554
- 47. Yates MJ, Nicholls MER (2011) Somatosensory prior entry assessed with temporal order judgments and simultaneity judgments. Atten Percept Psychophys 73: 1586–1603. pmid:21487928
- 48. Barrett DJK, Krumbholz K (2012) Evidence for multisensory integration in the elicitation of prior entry by bimodal cues. Exp Brain Res 222: 11–20. pmid:22975896
- 49. Zampini M, Bird KS, Bentley DE, Watson A, Barrett G, Jones AK, et al. (2007) ‘Prior entry’ for pain: attention speeds the perceptual processing of painful stimuli. Neurosci Lett 414: 75–79. pmid:17197082
- 50. Zampini M, Shore DI, Spence C (2005) Audiovisual prior entry. Neurosci Lett 381: 217–222. pmid:15896473
- 51. Shore DI, Spence C, Klein RM (2001) Visual prior entry. Psychol Sci 12: 205–212. pmid:11437302
- 52. Vibell J, Klinge C, Zampini M, Spence C, Nobre AC (2007) Temporal order is coded temporally in the brain: early event-related potential latency shifts underlying prior entry in a cross-modal temporal order judgment task. J Cogn Neurosci 19: 109–120. pmid:17214568
- 53. Vatakis A, Spence C (2006) Temporal order judgments for audiovisual targets embedded in unimodal and bimodal distractor streams. Neurosci Lett 408: 5–9. pmid:17010520
- 54. Van der Burg E, Cass J, Alais D (2014) Window of audio-visual simultaneity is unaffected by spatio-temporal visual clutter. Sci Rep 4: 5098. pmid:24872325
- 55. Donohue SE, Green JJ, Woldorff MG (2015) The effects of attention on the temporal integration of multisensory stimuli. Front Integr Neurosci 9: 32. pmid:25954167
- 56. Alsius A, Möttönen R, Sams ME, Soto-Faraco S, Tiippana K (2014) Effect of attentional load on audiovisual speech perception: evidence from ERPs. Front Psychol 5: 727. pmid:25076922
- 57. Bonato M, Spironelli C, Lisi M, Priftis K, Zorzi M (2015) Effects of Multimodal Load on Spatial Monitoring as Revealed by ERPs. PLoS ONE 10: e0136719. pmid:26335779
- 58. Lavie N, Ro T, Russell C (2003) The role of perceptual load in processing distractor faces. Psychol Sci 14: 510–515. pmid:12930485
- 59. Stolte M, Bahrami B, Lavie N (2014) High perceptual load leads to both reduced gain and broader orientation tuning. J Vis 14: 9. pmid:24610952
- 60. Santangelo V, Spence C (2007) Multisensory cues capture spatial attention regardless of perceptual load. J Exp Psychol Hum Percept Perform 33: 1311–1321. pmid:18085945
- 61. Santangelo V, Ho C, Spence C (2008) Capturing spatial attention with multisensory cues. Psychon Bull Rev 15: 398–403. pmid:18488658
- 62. Parks NA, Hilimire MR, Corballis PM (2011) Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition. J Cogn Neurosci 23: 1113–1124. pmid:20146614
- 63. Asanowicz D, Smigasiewicz K, Verleger R (2013) Differences between visual hemifields in identifying rapidly presented target stimuli: letters and digits, faces, and shapes. Front Psychol 4: 452. pmid:23882249
- 64. Duncan J, Martens S, Ward R (1997) Restricted attentional capacity within but not between sensory modalities. Nature 387: 808–810. pmid:9194561
- 65. Alain C, Izenberg A (2003) Effects of attentional load on auditory scene analysis. J Cogn Neurosci 15: 1063–1073. pmid:14614816
- 66. Rees G, Frith C, Lavie N (2001) Processing of irrelevant visual motion during performance of an auditory attention task. Neuropsychologia 39: 937–949. pmid:11516446
- 67. Jacoby O, Hall SE, Mattingley JB (2012) A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli. Neuroimage 61: 1050–1058. pmid:22465299
- 68. Treutwein B, Strasburger H (1999) Fitting the psychometric function. Percept Psychophys 61: 87–106. pmid:10070202
- 69. Wichmann FA, Hill NJ (2001) The psychometric function: I. Fitting, sampling, and goodness of fit. Percept Psychophys 63: 1293–1313. pmid:11800458
- 70. Stevenson RA, Wallace MT (2013) Multisensory temporal integration: task and stimulus dependencies. Exp Brain Res 227: 249–261. pmid:23604624
- 71. Klemen J, Büchel C, Rose M (2009) Perceptual load interacts with stimulus processing across sensory modalities. Eur J Neurosci 29: 2426–2434. pmid:19490081
- 72. Molloy K, Griffiths TD, Chait M, Lavie N (2015) Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. J Neurosci 35: 16046–16054. pmid:26658858
- 73. Colonius H, Diederich A (2004) Multisensory interaction in saccadic reaction time: a time-window-of-integration model. J Cogn Neurosci 16: 1000–1009. pmid:15298787
- 74. Noesselt T, Rieger JW, Schoenfeld MA, Kanowski M, Hinrichs H, Heinze HG, et al. (2007) Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. J Neurosci 27: 11431–11441. pmid:17942738
- 75. Stevenson RA, Altieri NA, Kim S, Pisoni DB, James TW (2010) Neural processing of asynchronous audiovisual speech perception. Neuroimage 49: 3308–3318. pmid:20004723
- 76. Marchant JL, Ruff CC, Driver J (2012) Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities. Hum Brain Mapp 33: 1212–1224. pmid:21953980
- 77. Powers AR, Hevey MA, Wallace MT (2012) Neural correlates of multisensory perceptual learning. J Neurosci 32: 6263–6274. pmid:22553032
- 78. Senkowski D, Schneider TR, Foxe JJ, Engel AK (2008) Crossmodal binding through neural coherence: implications for multisensory processing. Trends Neurosci 31: 401–409. pmid:18602171
- 79. Senkowski D, Talsma D, Grigutsch M, Herrmann CS, Woldorff MG (2007) Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma-band oscillations. Neuropsychologia 45: 561–571. pmid:16542688
- 80. Greenaway R, Plaisted K (2005) Top-down attentional modulation in autistic spectrum disorders is stimulus-specific. Psychol Sci 16: 987–994. pmid:16313664
- 81. Krause MB (2015) Pay Attention!: Sluggish Multisensory Attentional Shifting as a Core Deficit in Developmental Dyslexia. Dyslexia 21: 285–303. pmid:26338085
- 82. Facoetti A, Trussardi AN, Ruffino M, Lorusso ML, Cattaneo C, Galli R et al. (2010) Multisensory spatial attention deficits are predictive of phonological decoding skills in developmental dyslexia. J Cogn Neurosci 22: 1011–1025. pmid:19366290
- 83. Stevenson RA, Siemann JK, Schneider BC, Eberly HE, Woynaroski TG, Camarata SM, et al. (2014) Multisensory temporal integration in autism spectrum disorders. J Neurosci 34: 691–697. pmid:24431427
- 84. Venezia JH, Thurman SM, Matchin W, George SE, Hickok G (2016) Timing in audiovisual speech perception: A mini review and new psychophysical data. Atten Percept Psychophys 78: 583–601. pmid:26669309
- 85. Murata A, Kuroda T, Karwowski W (2017) Effects of auditory and tactile warning on response to visual hazards under a noisy environment. Appl Ergon 60: 58–67. pmid:28166900