Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.
Citation: Wiggins IM, Hartley DEH (2015) A Synchrony-Dependent Influence of Sounds on Activity in Visual Cortex Measured Using Functional Near-Infrared Spectroscopy (fNIRS). PLoS ONE 10(3): e0122862. https://doi.org/10.1371/journal.pone.0122862
Academic Editor: Virginie van Wassenhove, CEA.DSV.I2BM.NeuroSpin, FRANCE
Received: November 21, 2014; Accepted: February 15, 2015; Published: March 31, 2015
Copyright: © 2015 Wiggins, Hartley. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported through intramural funding from the National Institute for Health Research and the University of Nottingham. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Perceptual judgments often reflect a combination of inputs from multiple senses. The neural mechanisms that support this combining of information across the sensory modalities are not fully understood. Over recent years, converging evidence from human neuroimaging studies and single-unit recordings in animals has revealed that interactions among the senses arise early in cortical processing, including in primary sensory cortices, which were traditionally considered to be strictly unisensory (for reviews see [2–5]). Our focus here is on the human neuroimaging results. Whilst the most appropriate statistical criterion for identifying multisensory interactions in neuroimaging data is a subject of ongoing debate [6–8], it is widely accepted that multisensory effects can modulate both the amplitude [9, 10] and temporal dynamics of the blood oxygenation level-dependent (BOLD) fMRI signal in primary sensory cortices. Other studies have exploited the temporal resolution of electroencephalography (EEG) and magnetoencephalography (MEG) to confirm that multisensory interactions occur early in cortical processing [12–15], and, more recently, to study the putative role of synchronized oscillatory brain activity in mediating these effects [16, 17].
In the present study, we investigated whether functional near-infrared spectroscopy (fNIRS) is capable of measuring multisensory interactions in what was traditionally considered ‘sensory-specific’ cortex. As a relatively quiet, non-invasive, and low-cost technique, fNIRS continues to gain popularity as a neuroimaging tool with a number of practical advantages [18, 19]. Although a recent fNIRS study reported multisensory interactions in 3-month-old infants, to our knowledge this is the first application of fNIRS to the study of multisensory processing in adults. Specifically, we investigated a modulatory influence of sounds on activity in visual cortex. Visual cortex, located in the occipital lobe, is relatively accessible using fNIRS, and has been the target of several fNIRS studies that used unisensory visual stimulation [22–28]. Furthermore, robust auditory influences on activity in visual cortex have been well documented, both in animal electrophysiological studies [29–31], and using a variety of imaging techniques in humans [10–12, 14, 32–34]. It has been suggested that the primary role of such auditory influences in non-auditory cortices may be to modulate response gain within the relevant modality, in this case vision.
Temporal, spatial, and semantic congruency are known to be key factors in determining whether the brain integrates multimodal sensory inputs [36, 37]. We studied temporal congruency, minimizing semantic and spatial influences through the use of simple, stationary stimuli. Specifically, we investigated the role of temporal synchrony between trains of transient auditory and visual events, following the example of several previous fMRI studies [9, 38–40]. A consistent finding across these fMRI studies was that synchronous audiovisual stimulation generally enhanced the strength of stimulus-related cortical activations, while asynchronous stimulation generally had a suppressive effect. This pattern of enhancement and suppression was observed not only in established multisensory areas (e.g., superior temporal sulcus, STS), but also in primary auditory and visual cortices, using a variety of experimental paradigms. Enhancement of cortical responses to synchronous versus asynchronous audiovisual stimulation was typically also associated with improved behavioral performance, even when task-relevant information was provided in only one of the stimulated modalities [39, 40].
In the present study, we adapted the fMRI paradigms employed by Noesselt et al. and Marchant et al. to suit the fNIRS imaging modality. Specifically, we used visual stimuli that have been shown in previous studies to produce robust fNIRS responses from visual cortex [24, 25]. We measured visual cortical responses using fNIRS while participants were presented with trains of unpredictably timed, transient auditory and visual events that were either synchronous or asynchronous. We also measured responses to matching unisensory baseline conditions. Following Marchant et al., we simultaneously measured behavioral performance in detecting occasional higher-intensity targets. In short, we aimed to measure a synchrony-dependent influence of sounds on activity in visual cortex using the emerging brain-imaging technique fNIRS. Based on the aforementioned fMRI studies, we anticipated enhancement of responses by synchronous sounds and suppression by asynchronous sounds.
Materials and Methods
Functional NIRS measurements were collected simultaneously while participants performed a behavioral target-detection task. Trains of transient auditory and/or visual events with unpredictable, arrhythmic timing were presented in four stimulation conditions (Fig 1): auditory-only (A-ONLY), visual-only (V-ONLY), audiovisual with synchronous events (AV-SYNC), and audiovisual with asynchronous events (AV-ASYNC). These four conditions were each presented five times in random order in a block design. Our motivation for using a block design was twofold: Firstly, following previous fMRI studies [9, 38–40], we aimed to assess the influence of temporal synchrony on the aggregate response to trains of auditory and visual events, rather than the response to isolated individual events; secondly, given that fNIRS generally has significantly poorer signal-to-noise ratio than fMRI [41, 42], we wished to maximize our ability to measure robust cortical responses, which is facilitated by the high efficiency of a block design. Stimulation blocks were 20 s in duration and the intervening rest periods had random durations in the range 20–40 s. The total measurement time was approximately 17 minutes. Participants were given the opportunity to practice the behavioral task before measurements began, experiencing two repetitions of each stimulation condition in random order. Testing took place in a double-walled sound booth with dimmed lighting.
The exact timing of events was in all cases arrhythmic and unpredictable. Events marked with a “T” represent higher-intensity (A-ONLY condition) or higher-contrast (V-ONLY, AV-SYNC, and AV-ASYNC) targets, which occurred every 2–5 s. In the AV-ASYNC condition, onsets between adjacent auditory and visual events were separated by a minimum of 200 ms.
Participants and ethical approval
Twenty-four participants (mean age 27 years, 9 males) with no history of any neurological disorder took part in the study after giving written informed consent. All participants had normal hearing (≤ 25 dB HL at audiometric test frequencies between 500 Hz and 4 kHz) and normal or corrected-to-normal vision. Most participants (N = 21) were right-handed as assessed using the Edinburgh Handedness Inventory. The study was approved by the Nottingham 1 Research Ethics Committee.
Care was taken to ensure that the temporal statistics of the event sequences were closely matched across all stimulation conditions. In the AV-SYNC condition, each visual event was accompanied by an auditory event with a synchronous onset. In the AV-ASYNC condition, the timings of visual and auditory events were derived independently, with an additional constraint that all visual event onsets were separated from the nearest auditory event onset by at least 200 ms. Stimulus timing was confirmed by measurement using a response-time box that provided inputs for a photodiode and a direct audio connection (RTBox).
The precise method used to generate the event timings was as follows: First, a random sequence of visual event times was generated, in which the stimulus onset asynchrony (SOA) between successive events was between 100 and 500 ms (randomly selected from a uniform distribution). The event times were then quantized so as to coincide exactly with the refresh rate (60 Hz) of the visual display unit (VDU), to ensure precise timing of visual stimuli. A sequence of auditory event times was generated in a similar manner, but without quantization to the VDU refresh rate. The following procedure was then used to remove any cases in which, by chance, visual and auditory events fell within 200 ms of one another. The visual event times were stepped through one by one, and for each visual event the nearest auditory event was determined. If the SOA between these was less than 200 ms, then either the visual event or the auditory event was removed from the corresponding sequence with equal probability. The scan was then restarted from the first remaining visual event. This procedure was repeated until no visual and auditory events had onsets within 200 ms of one another. In the resulting sequences, events occurred at a mean rate of 1.68 Hz (SD = 0.25) in each modality. To enforce synchronicity in the AV-SYNC condition, the sequence of auditory event times was set equal to the sequence of visual event times. In the AV-ASYNC condition, we did not actively constrain the number of consecutive events that could occur in one modality before the next event in the other modality.
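The sequence-generation and pruning procedure described above can be sketched in Python (the original analysis pipeline used MATLAB; function names here are illustrative, and the rescan restarts from the first remaining visual event as described):

```python
import random

FRAME_MS = 1000.0 / 60.0  # VDU refresh period (60 Hz)

def make_sequence(duration_ms, quantize=False):
    """Event onsets with SOAs drawn uniformly from 100-500 ms.
    Visual sequences are quantized to the display refresh; auditory are not."""
    times, t = [], 0.0
    while True:
        t += random.uniform(100.0, 500.0)
        if t >= duration_ms:
            return times
        times.append(round(t / FRAME_MS) * FRAME_MS if quantize else t)

def enforce_min_gap(visual, auditory, min_gap_ms=200.0):
    """For the AV-ASYNC condition: whenever a visual and an auditory onset
    fall within min_gap_ms of each other, drop one of the pair with equal
    probability and rescan from the first remaining visual event."""
    v, a = list(visual), list(auditory)
    changed = True
    while changed and v and a:
        changed = False
        for vt in v:
            nearest = min(a, key=lambda at: abs(at - vt))
            if abs(nearest - vt) < min_gap_ms:
                if random.random() < 0.5:
                    v.remove(vt)
                else:
                    a.remove(nearest)
                changed = True
                break
    return v, a
```

With initial SOAs averaging ~300 ms per modality, the pruning removes roughly half the events, consistent with the reported final mean rate of 1.68 Hz.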
Following Marchant et al., we used a suprathreshold target-detection task, in which participants responded as quickly as possible to occasional “target” events presented within a train of “standard” events. Randomly selected events from the pre-calculated sequences were designated as targets, with a target occurring every 2–5 s. In total, each participant was presented with an average of 19.7 (SD = 1.59) targets in each stimulation condition, with no systematic difference in the number of targets between conditions. Because our fNIRS measurements targeted visual cortex, the behavioral task also focused on the visual domain. Thus, in all conditions that included visual stimulation (V-ONLY, AV-SYNC, and AV-ASYNC), the task was to identify occasional higher-contrast visual targets. Participants were instructed to press a button as quickly as possible whenever they saw a higher-contrast checkerboard, while maintaining accuracy. In the same conditions, all auditory events were presented at an identical sound pressure level, and so provided no task-relevant information. Participants were advised that they could ignore any sounds that happened to be presented alongside the visual stimuli. To ensure that participants remained attentive and to control for any effect of motor responses, a control task was implemented in the A-ONLY condition: auditory targets were distinguished from the standard events by an increase in sound pressure level, and participants were instructed to respond as quickly as possible whenever they heard a louder sound.
Prior to commencing the main task, the increase in contrast ratio that differentiated visual targets from standard events (V-ONLY, AV-SYNC, and AV-ASYNC conditions), and the increase in sound pressure level that served an equivalent role in the A-ONLY condition, were set on an individual basis to ensure approximately 70% target-detection accuracy. Individual thresholds for these target intensity increments were determined under unimodal conditions using a two-down, one-up adaptive staircase procedure, first for the auditory modality and then for vision. A correct response was defined as a button press occurring within a 1-s period following presentation of a target; a button press at any other time was interpreted as an incorrect response, as was a missed target. Before each adaptive procedure began, participants were given a few minutes practice in a condition in which the target events could easily be distinguished from the standard events.
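A two-down, one-up staircase of the kind described above converges on ~70.7% correct (close to the ~70% accuracy targeted here). A minimal sketch, with illustrative starting level and step size:

```python
class TwoDownOneUp:
    """Two-down, one-up adaptive staircase: the tracked increment is reduced
    after two consecutive correct responses and increased after any incorrect
    response, converging on ~70.7% correct. Units and step sizes are
    illustrative, not those used in the study."""
    def __init__(self, start_level, step, floor=0.0):
        self.level = start_level
        self.step = step
        self.floor = floor
        self._run = 0  # consecutive-correct counter

    def update(self, correct):
        if correct:
            self._run += 1
            if self._run == 2:
                self._run = 0
                self.level = max(self.floor, self.level - self.step)
        else:
            self._run = 0
            self.level += self.step
        return self.level
```

In practice the staircase is run until a fixed number of reversals, and the threshold is estimated from the levels at the final reversals.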
Responses were collected using a dedicated button box that contained its own microprocessor and high-resolution clock to ensure accurate timing (RTBox). Behavioral performance on the main task was quantified by accuracy and response-time measures. A button press occurring between 150 and 1000 ms after the onset of a target was considered a hit. Button presses at other times, including multiple presses following a single target, were classified as false alarms. A target which was not followed by a button press within the valid time window was classified as a miss. Response time was assessed for hits only. Similarly to Johnson and Zatorre [46], we calculated accuracy using the formula: 100 × [(hits − false alarms) ÷ (hits + misses)].
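The scoring rules above can be expressed compactly; the following sketch (function names are illustrative) classifies presses relative to target onsets and applies the accuracy formula:

```python
def classify_presses(target_onsets, press_times, lo=150.0, hi=1000.0):
    """Classify button presses relative to target onsets (times in ms).
    The first unused press 150-1000 ms after a target is a hit (one press per
    target); all other presses, including extra presses after a hit, are
    false alarms; unanswered targets are misses."""
    hits, rts = 0, []
    unused = list(press_times)
    for t in target_onsets:
        match = next((p for p in unused if lo <= p - t <= hi), None)
        if match is not None:
            hits += 1
            rts.append(match - t)
            unused.remove(match)
    false_alarms = len(unused)
    misses = len(target_onsets) - hits
    return hits, false_alarms, misses, rts

def accuracy(hits, false_alarms, misses):
    """Accuracy per Johnson and Zatorre [46]:
    100 * (hits - false_alarms) / (hits + misses).
    The denominator equals the number of targets presented, so false alarms
    can push accuracy below zero."""
    return 100.0 * (hits - false_alarms) / (hits + misses)
```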
Each visual event was a transiently presented polar checkerboard (nominal duration 33.3 ms). The checkerboard contrast was reversed between successive presentations. The centrally located checkerboards subtended a visual angle of 16° and were divided into 8 rings and 24 wedges. The light and dark elements of the standard checkerboards had luminance 69 cd/m² and 51 cd/m², respectively (measured with a MAVO-SPOT 2 USB meter, Gossen, Germany), giving a Michelson contrast of 15%. The target stimuli in the V-ONLY, AV-SYNC, and AV-ASYNC conditions had a higher contrast achieved by simultaneously increasing the luminance of the light elements and decreasing the luminance of the dark elements, according to the prior measurement of individual target-detection sensitivity. The checkerboards were presented against a uniform gray background (58 cd/m²). A central white cross subtending a visual angle of 1° was presented continuously throughout the experiment. Participants were instructed to maintain fixation on this central cross at all times. Visual stimuli were presented on a 22” liquid crystal display viewed from a distance of 75 cm.
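The 15% contrast figure follows directly from the quoted luminances via the standard Michelson definition:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a periodic pattern such as a checkerboard:
    (Lmax - Lmin) / (Lmax + Lmin), with luminances in cd/m^2."""
    return (l_max - l_min) / (l_max + l_min)

# Standard checkerboards: (69 - 51) / (69 + 51) = 18 / 120 = 0.15
```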
Note that our use of centrally located visual stimuli covering a relatively large proportion of the visual field was at odds with the studies of Noesselt et al. and Marchant et al., in which smaller, peripherally presented (~8–18° eccentricity) stimuli were used (simple colored shapes and rectangular checkerboards, respectively). While numerous studies have demonstrated that multisensory enhancement can occur for centrally presented stimuli [38, 47–50], it has been suggested that the use of peripheral visual stimuli might maximize the opportunity for interaction between the auditory and visual modalities. These interactions could be mediated by direct cortico-cortical connections between early auditory and visual cortices, which neuroanatomical studies in non-human primates have found to terminate predominantly in areas that represent the peripheral visual field [52, 53]. In the present study, we used larger, centrally located visual stimuli based on previous fNIRS studies that reported robust responses from visual cortex [24, 25]. Pilot testing conducted in our laboratory suggested that we could not measure robust responses to peripheral visual stimuli, possibly owing to the limited depth penetration of fNIRS, combined with the fact that the peripheral visual field maps to anterior areas of visual cortex that are further from the surface of the head. Indeed, measuring responses to stimulation in the peripheral visual field using optical imaging can be challenging, even using a state-of-the-art high-density diffuse optical tomography system.
Each auditory event was a brief (10-ms duration including 1-ms linear ramps) 1-kHz tone-pip. The level of the standard events was 76 dB SPL. The level of the target stimuli in the A-ONLY condition was increased from this baseline according to the prior measurement of individual target-detection sensitivity. Auditory stimuli were presented from a pair of small loudspeakers positioned on either side of, and vertically centered with, the VDU. Site constraints meant that it was not possible to house the fNIRS equipment outside the sound booth during testing, although a dense sound-absorbing screen was placed between the equipment and the area in which participants were seated. The steady ambient noise level at the participant’s position was 38 dB SPL (A-weighted), dominated by low-to-mid-frequency fan noise from the fNIRS equipment.
fNIRS measurements and analyses
Measurements were made using a continuous-wave fNIRS system (ETG-4000, Hitachi Medical Co., Japan). A 3 x 5 optode array (comprising 8 emitters and 7 detectors) was used, giving 22 measurement channels in total. The inter-optode spacing was 30 mm. The ETG-4000 measures simultaneously at wavelengths of 695 nm and 830 nm (sampling rate 10 Hz), and uses frequency modulation to minimize crosstalk between wavelengths and optodes. For a comprehensive review of the principles and practicalities of continuous-wave fNIRS, see.
The optode array was initially placed over the occipital lobe with the center optode aligned roughly over position Oz of the international 10–20 coordinate system. To ensure (as far as possible) that the measurement channels were positioned consistently across individuals, we followed the approach of Plichta et al. and conducted a short functional localizer experiment. This comprised three cycles of high-contrast checkerboard stimulation (96% Michelson contrast, 16-Hz reversal rate, 10-s stimulation blocks, 20-s rest periods). The resulting activation pattern was viewed as a topographic 2D map using the ETG-4000’s built-in analysis software. If necessary, the position of the array was adjusted and the localizer experiment repeated until the activation pattern was centered and symmetrical. Fig 2A shows the final optode positions on one participant, who provided written informed consent for publication of this image. Once the position of the array was finalized, an elastic cotton bandage was gently wrapped around the participant’s head to help maintain secure contact between the optodes and the scalp. Participants were asked to sit as still as possible to reduce motion artifacts, although for comfort no head restraint was used.
(a) Photograph of the optode array on one participant and corresponding optode positions registered to an atlas brain. (b) Representative examples of the effect of two key signal processing stages: (upper panel) wavelet filtering successfully removes spikes from the optical signal, while leaving portions of the signal that are not contaminated by motion artifacts unchanged; (lower panel) for a hemodynamic response function showing substantial contamination by physiological noise, the hemodynamic signal separation technique successfully recovers a plausible functional response, plus an estimate of the systemic interference. (c) Group-average patterns of activation (normalized mean change in HbO between 6 and 20 s post onset) across the array for each stimulation condition. Channels within the predefined ROI (Ch# 6, 8, 10–13, 15, and 17) are highlighted. (d) Group-average hemodynamic response functions within the ROI for each stimulation condition. The shaded gray area shows the stimulation period. (e) Mean beta weights from the GLM analysis. Error bars show ±1 standard error, corrected to account for the repeated-measures nature of the design. Asterisks denote a statistically significant difference between the conditions indicated (* p < .05; ** p < .01; *** p < .001; Bonferroni corrected). (f) Assessment of a possible relationship between the suppressive effect of synchronous sounds on the fNIRS response from visual cortex (BetaV-ONLY − BetaAV-SYNC) and a corresponding reduction in the accuracy of visual target detection (AccuracyV-ONLY − AccuracyAV-SYNC). While Pearson’s correlation suggested a significant linear relationship (r = .46, p = .023, two-tailed), this result could not be confirmed by a robust regression analysis (p = .136).
Analysis of the fNIRS data was performed in MATLAB (MathWorks, Natick, MA). First, we aimed to exclude any “bad channels” that were clearly influenced by unstable or weak optode contact. Guided by Umeyama and Yamada, we identified channels suffering from “unstable” optode contact by unusually high variance in the signal baseline (low-pass filtered at 0.1 Hz), and channels suffering from “weak” optode contact by unusually high variance due to high-frequency noise (high-pass filtered at 1.0 Hz). We calculated the long-term variance in these two frequency regions and assessed each measurement in relation to the pooled distribution across both wavelengths and all participants. By visual inspection of the distributions, we set as a threshold for exclusion a variance further than one standard deviation from the mean. A channel was excluded if this threshold was exceeded at either wavelength and in either frequency region. For 19 of the 24 participants, no channels were excluded. For the remaining five participants, between 4 and 12 channels (out of a total of 22) were excluded. The excluded channels generally corresponded to ones that had been noted as problematic at the time of testing, usually because of issues with hair obscuring good optode contact. We separately confirmed that excluding these five participants outright, instead of excluding only the problematic channels, did not alter the conclusions reported in this manuscript.
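The bad-channel criterion above can be sketched as follows (a Python sketch, not the original MATLAB; the Butterworth filter order and the one-sided high-variance cut are assumptions, since the text specifies only the 0.1 Hz / 1.0 Hz cutoffs and the one-SD threshold):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # ETG-4000 sampling rate, Hz

def band_variances(signal, fs=FS):
    """Long-term variance of the slow baseline (< 0.1 Hz, 'unstable' contact)
    and of high-frequency noise (> 1.0 Hz, 'weak' contact) for one
    channel/wavelength. Third-order zero-phase Butterworth filters are an
    illustrative choice."""
    b_lo, a_lo = butter(3, 0.1 / (fs / 2), btype="low")
    b_hi, a_hi = butter(3, 1.0 / (fs / 2), btype="high")
    return (np.var(filtfilt(b_lo, a_lo, signal)),
            np.var(filtfilt(b_hi, a_hi, signal)))

def flag_bad(variances):
    """Flag measurements whose variance lies more than one standard deviation
    above the mean of the pooled distribution (pooled across wavelengths and
    participants). A channel is excluded if flagged at either wavelength in
    either frequency region."""
    v = np.asarray(variances, dtype=float)
    return v > v.mean() + v.std()
```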
After excluding bad channels, the raw intensity signals were converted to changes in optical density. We then applied wavelet filtering to correct for motion artifacts. Motion artifacts are frequently encountered in fNIRS data, usually resulting from differential movement between the optical fibres and the scalp, and typically manifesting as abrupt spikes or jumps in intensity in the optical signal. Wavelet filtering has emerged in recent years as a promising approach to correcting for these artifacts [60, 61]. We used the hmrMotionCorrectWavelet function included in the HOMER2 fNIRS processing package, which performs a simplified version of the algorithm described by Molavi and Dumont. The algorithm applies a probability threshold to remove outlying wavelet coefficients, which are assumed to correspond to motion artifacts. We used a threshold of 1.219 times the inter-quartile range, equivalent (assuming a Gaussian distribution of wavelet coefficients) to the α = 0.1 threshold adopted in past studies [60, 61]. Fig 2B shows a representative example of how this approach removed motion artifacts without affecting uncontaminated portions of the signal.
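The 1.219 × IQR figure is the Gaussian-equivalent restatement of the α = 0.1 criterion, which can be checked directly from normal quantiles:

```python
from scipy.stats import norm

# For Gaussian-distributed wavelet coefficients, the inter-quartile range is
# 2 * Phi^{-1}(0.75) ≈ 1.349 standard deviations, while a two-tailed
# alpha = 0.1 outlier criterion rejects |coefficient| > 1.645 SD.
iqr_in_sd = 2.0 * norm.ppf(0.75)              # ≈ 1.349
alpha_cut_in_sd = norm.ppf(1.0 - 0.1 / 2.0)   # ≈ 1.645
factor = alpha_cut_in_sd / iqr_in_sd          # ≈ 1.219, as used in the text
```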
Following motion-artifact correction, the optical density signals were band-pass filtered between 0.01 and 0.5 Hz to attenuate low-frequency drift and cardiac oscillations, and then converted into estimates of changes in the concentration of oxygenated (HbO) and de-oxygenated (HbR) hemoglobin using the modified Beer-Lambert law. We used a default value of 6 for the differential path-length factor at both wavelengths, noting that this may diminish the accuracy of the estimated absolute concentration changes because it does not account for the partial volume effect associated with focal cortical activation. Because band-pass filtering is only partially effective in removing physiological noise from fNIRS measurements, we additionally employed the hemodynamic signal separation method described by Yamada et al. This algorithm separates the hemodynamic signal into estimated functional and systemic components based on the assumption that changes in HbO and HbR will be negatively correlated in the functional component, but positively correlated in the systemic component (see Fig 2B for a representative example). Only the functional component was retained for further analysis.
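The modified Beer-Lambert law step amounts to inverting a 2 × 2 linear system at each time point. A minimal sketch using the 30 mm separation and DPF of 6 from the text; the extinction coefficients are rough illustrative values (tabulated coefficients vary between sources and are not the ETG-4000's internal ones):

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for [HbO, HbR].
EXT = np.array([[0.35, 2.05],   # 695 nm: HbR dominates
                [2.30, 0.80]])  # 830 nm: HbO dominates

def mbll(delta_od, separation_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: invert optical-density changes at the two
    wavelengths into concentration changes [mM] of HbO and HbR.
    delta_od has shape (2, n_samples), one row per wavelength."""
    pathlength_cm = separation_cm * dpf  # effective optical path length
    return np.linalg.solve(EXT * pathlength_cm, delta_od)
```

The functional/systemic separation applied afterwards then operates on the resulting HbO/HbR time courses, exploiting their anticorrelation in the functional component.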
We defined an a priori region of interest (ROI) based on the results of past studies that used the same fNIRS system, similar visual stimuli, and similar optode-array placement [24, 25]. The ROI comprised eight measurement channels (four on the left side and four on the right) covering the areas in which we expected to observe visual activation (see Fig 2C). We did not define a non-ROI, since our 3 x 5 array did not offer any measurement channels that were sufficiently far from the expected areas of activation. The more distant non-ROI channels of the 3 x 11 optode array used by Plichta et al. [24, 25] were not available in our setup. While the lack of an anatomical image and limited spatial resolution of fNIRS preclude the accurate localization of responses to specific sub-areas of visual cortex, the available evidence from combined fNIRS–fMRI studies [42, 66] suggests that our measurements most probably sampled the superficial portion of primary visual cortex (area V1) and the surrounding extrastriate cortex (areas V2/V3).
For visualizing hemodynamic responses (Fig 2C and 2D), the time course of HbO and HbR concentration changes in each measurement channel was block-averaged using the HOMER2 hmrBlockAvg function. We used a general linear model (GLM) to quantify the strength of the response to each stimulation condition. The design matrix included four boxcar regressors (one for each condition), which were convolved with the canonical hemodynamic response function provided in SPM8 [http://www.fil.ion.ucl.ac.uk/spm]. Beta weights were extracted and averaged across the measurement channels included in the predefined ROI, before being subjected to statistical analysis. We quantified the magnitude of the fNIRS response across the ROI as a whole, rather than on a channel-by-channel basis, as we did not anticipate substantial spatial variation amongst the channels included in the ROI; nor did we predict any substantial difference between left and right hemispheres. Preliminary analysis of our data (not shown) supported these predictions. Although we present results primarily in terms of the HbO parameter for simplicity, because the hemodynamic signal separation method assumes a fixed linear relationship between HbO and HbR in the functional response, the results of all statistical analyses were identical regardless of whether conducted on the HbO or HbR data.
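The GLM step can be sketched as follows (a Python stand-in for the SPM8/MATLAB pipeline: boxcars convolved with a double-gamma HRF, betas by least squares; the HRF parameters are the conventional defaults, used here illustratively):

```python
import numpy as np
from scipy.stats import gamma

FS = 10.0  # fNIRS sampling rate, Hz

def canonical_hrf(fs=FS, length_s=30.0):
    """SPM-style double-gamma HRF (peak ~5-6 s, undershoot ~16 s),
    normalized to unit sum."""
    t = np.arange(0, length_s, 1 / fs)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def design_matrix(block_onsets_s, n, block_dur_s=20.0, fs=FS):
    """One HRF-convolved boxcar column per condition, plus an intercept.
    block_onsets_s: dict mapping condition name -> list of onsets (s)."""
    names = sorted(block_onsets_s)
    X = np.ones((n, len(names) + 1))  # last column = intercept
    for j, cond in enumerate(names):
        box = np.zeros(n)
        for onset in block_onsets_s[cond]:
            i0 = int(onset * fs)
            box[i0:i0 + int(block_dur_s * fs)] = 1.0
        X[:, j] = np.convolve(box, canonical_hrf(fs))[:n]
    return X, names

def glm_betas(y, block_onsets_s, block_dur_s=20.0, fs=FS):
    """Ordinary least-squares beta weights for one channel's time course."""
    X, names = design_matrix(block_onsets_s, y.size, block_dur_s, fs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(names, beta[:-1]))
```

In the study, the resulting betas were then averaged over the eight ROI channels per participant before the repeated-measures ANOVA.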
Activation of visual cortex measured using fNIRS
We first assessed the spatial distribution of activation across the optode array for each stimulation condition to confirm the suitability of the predefined ROI (Fig 2C). Group-average activation patterns were derived from the mean change in HbO concentration from 6 to 20 s post stimulation-block onset. The activation patterns were normalized independently for each condition to better illustrate the spatial distribution of activation, irrespective of overall response strength. The activation pattern was similar in the three conditions that included visual stimulation (V-ONLY, AV-SYNC, and AV-ASYNC). The pattern exhibited two clear lobes, one to the left and the other to the right of midline, which aligned well with the predefined ROI. Activation in channels outside the ROI was markedly weaker, though not qualitatively different from activation within the ROI, presumably because of the close spatial proximity of all channels to the ROI, the limited spatial resolution of fNIRS, and differences in optode-array placement relative to underlying anatomy between individuals. In the A-ONLY condition, there was some evidence of positive activation in the ROI, while an isolated channel located in the top-right corner of the array (Ch# 4) appeared to show deactivation compared to baseline. Overall, these results satisfied us that the predefined ROI was appropriately located in order to calculate summary measures of visual-cortex activation by averaging across the included channels.
To confirm that plausible hemodynamic responses had been obtained, we plotted the group-average time course of concentration changes in HbO and HbR within the ROI for each stimulation condition (Fig 2D). The conditions that included visual stimulation clearly exhibited the archetypal pattern of a stimulus-locked increase in HbO and a corresponding decrease in HbR, with sluggish dynamics characteristic of neurovascular coupling. The response peaked approximately 13 s into the stimulation block, with no significant difference in peak response latency between these three conditions (repeated-measures ANOVA, F(2, 46) = 0.20, p = .823). During the return to baseline after cessation of stimulation, the HbO/HbR traces exhibited a small under/overshoot, reminiscent of the BOLD post-stimulus undershoot commonly reported in fMRI studies. The group-average response in the A-ONLY condition was much weaker, and did not obviously follow the shape of a canonical hemodynamic response.
We used the GLM beta weights to quantify differences in fNIRS response strength between stimulation conditions (Fig 2E). A repeated-measures ANOVA, with Greenhouse-Geisser correction for non-sphericity, confirmed a significant effect of condition (F(1.64, 37.66) = 31.50, p < .001). As expected, the fNIRS response from visual cortex was significantly stronger in all conditions that included visual stimulation, compared to the auditory-only condition (all p < .001, Bonferroni-corrected pairwise comparisons). Nonetheless, a single-sample t-test on the A-ONLY beta weights indicated that the auditory-only condition did result in significant activation in the visual ROI compared to rest (t(23) = 3.26, p = .003). Based on the findings of related fMRI studies [9, 38–40], our a priori hypothesis was that, compared to visual-only stimulation, activity in visual cortex would be enhanced by synchronous sounds and suppressed by asynchronous sounds. The data did not support this prediction. The fNIRS response from visual cortex was, in fact, weaker in the AV-SYNC condition than in both the V-ONLY (p = .001) and AV-ASYNC (p = .030) conditions (Bonferroni-corrected pairwise comparisons). That is, the presence of synchronous sounds led to a suppression of the fNIRS response from visual cortex. In contrast, the presence of asynchronous sounds had little effect (AV-ASYNC versus V-ONLY, n.s., p = .439 uncorrected).
Behavioral performance and its relationship with fNIRS response strength
Fig 3 shows mean accuracy and response time for each stimulation condition. Accuracy and response time were analyzed using separate repeated-measures ANOVAs, with the Greenhouse-Geisser correction applied to account for non-sphericity. There was no significant effect of stimulation condition on either accuracy (F(1.46, 33.60) = 2.35, p > .05) or response time (F(1.49, 34.35) = 1.04, p > .05). These data contrast with the results of Marchant et al., who found a robust behavioral advantage of audiovisual synchrony in a similar type of target-detection task. While no firm conclusions can be drawn from the present null result, our accuracy data actually suggest, if anything, a trend in the opposite direction, i.e., towards poorer performance in the presence of synchronous sounds. Further interrogation of the data revealed that this was due to small (not statistically significant) increases in both the mean number of missed targets and the mean number of false alarms in the AV-SYNC condition. The mean number of button presses (hits + false alarms) was consistent across the four stimulation conditions (A-ONLY: 20.9; V-ONLY: 20.9; AV-SYNC: 21.7; AV-ASYNC: 21.1; RM-ANOVA F(1.75, 40.15) = 0.33, p > .05).
Mean accuracy (upper panel) and response time (lower panel) for detecting occasional higher-intensity (A-ONLY condition) or higher-contrast (V-ONLY, AV-SYNC, and AV-ASYNC conditions) targets embedded within a train of standard events. Error bars show ±1 standard error, corrected to account for the repeated-measures nature of the design. There was no significant effect of stimulation condition on either accuracy or response time.
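One common way to compute such repeated-measures error bars is Cousineau normalization with Morey's bias correction. Whether this is the exact method used here is not restated in this excerpt, so the sketch below should be read as one plausible implementation, run on toy data:

```python
import numpy as np

def within_subject_sem(data):
    """Cousineau-normalized SEMs with Morey's bias correction.

    data: (n_subjects, n_conditions) array of per-subject condition means.
    """
    n_subj, n_cond = data.shape
    # Remove each subject's overall level, then restore the grand mean.
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    sem = normalized.std(axis=0, ddof=1) / np.sqrt(n_subj)
    # Morey's correction for the reduced number of independent values.
    return sem * np.sqrt(n_cond / (n_cond - 1))

# Toy example: 24 subjects x 4 conditions, with large between-subject
# offsets that a naive SEM would (misleadingly) count as noise.
rng = np.random.default_rng(1)
offsets = rng.normal(0.0, 5.0, (24, 1))
data = offsets + rng.normal([0.9, 0.9, 0.8, 0.9], 0.05, (24, 4))

naive = data.std(axis=0, ddof=1) / np.sqrt(24)
corrected = within_subject_sem(data)
print("naive SEM:          ", naive)
print("within-subject SEM: ", corrected)
```

Because between-subject offsets are irrelevant to within-subject comparisons, the corrected error bars are much smaller than the naive ones in this example.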
Interestingly, the pattern of accuracy scores for visual target detection (Fig 3, upper panel) mirrored that of the fNIRS beta weights (Fig 2E). That is, for both accuracy scores and beta weights, mean values were similar in the V-ONLY and AV-ASYNC conditions, but lower in the AV-SYNC condition. Based on the findings of combined fMRI–behavioral studies into the effects of audiovisual timing [39, 40, 68], our a priori expectation was that behavioral performance would be positively related to the strength of the fNIRS response from visual cortex. To test for a relationship between fNIRS response strength and target-detection accuracy, while accounting for individual differences in the global strength of the fNIRS response, we calculated the Pearson correlation coefficient between the magnitude of the suppressive effect of synchronous sounds on the fNIRS response (Beta(V-ONLY) − Beta(AV-SYNC)) and any corresponding reduction in target-detection accuracy (Accuracy(V-ONLY) − Accuracy(AV-SYNC)). A moderate, positive correlation was found (r = .46, p = .023, two-tailed). However, this correlation may have been driven primarily by a handful of participants who showed a relatively large effect in one direction or the other (Fig 2F). A robust regression analysis (MATLAB robustfit function, default ‘bisquare’ weighting function), which protects against an undue influence of potentially unrepresentative outlying data points, failed to confirm a significant relationship (p = .136). This null result, together with the absence of any correlation between the magnitude of fNIRS response suppression and a slowing of behavioral response time (r = .02, p > .05), suggests that further study is needed to confirm whether there is a direct correspondence between fNIRS response strength and behavioral performance in this task.
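The robust-regression safeguard described above can be illustrated with a simplified version of the iteratively reweighted least squares (IRLS) procedure underlying MATLAB's robustfit, using Tukey's bisquare weight function. This sketch omits robustfit's leverage adjustments and standard errors, and the data are fabricated for illustration:

```python
import numpy as np

def robust_fit(x, y, tune=4.685, n_iter=50):
    """Line fit by IRLS with Tukey's bisquare weights (simplified sketch;
    no leverage adjustment, no standard errors)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        # Robust scale: median absolute deviation, normalized for
        # consistency with the standard deviation under normality.
        s = np.median(np.abs(r - np.median(r))) / 0.6745
        u = r / (tune * max(s, 1e-12))
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # bisquare
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Fabricated data: nine points on a shallow line plus one extreme point,
# mimicking a correlation driven by a single unrepresentative participant.
x = np.arange(10.0)
y = 0.1 * x
y[9] = 10.0

ols_slope = np.polyfit(x, y, 1)[0]
robust_slope = robust_fit(x, y)[1]
print(f"OLS slope: {ols_slope:.2f}, robust slope: {robust_slope:.2f}")
```

Here the single extreme point inflates the ordinary least-squares slope, while the bisquare weights drive its influence toward zero and recover the shallow slope of the remaining points.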
To our knowledge, this is the first fNIRS study to examine multisensory interactions in ‘sensory-specific’ adult cortex. We showed that fNIRS is capable of measuring a modulatory influence of sounds on activity in visual cortex. This modulatory effect depended critically on the relative timing of auditory and visual events: only synchronous sounds modulated the visual response compared to a visual-only baseline; asynchronous sounds did not. At the group level, sounds had no significant effect on either the speed or accuracy with which participants were able to detect occasional higher-contrast visual targets.
An unexpected suppressive effect of audiovisual synchrony
Previous fMRI studies have consistently reported stronger cortical activation to synchronous than to asynchronous trains of auditory and visual events [9, 38–40]. Across studies, this pattern has been observed in a range of cortical areas, including low-level auditory and visual cortices. Thus, the suppressive effect of synchronous sounds on visual-cortex activation (compared to both a visual-only baseline and to asynchronous audiovisual stimulation) that we observed here using fNIRS was contrary to our predictions. While sub-additive multisensory interactions (a response to bimodal stimulation that is smaller than the sum of the responses to the constituent unisensory stimuli) have often been reported in human neuroimaging studies [6, 11, 14, 47], we had not expected synchronous sounds to have a truly suppressive effect (a response to bimodal stimulation that is smaller than the dominant unisensory response). Naturally, the question arises as to whether differences between the imaging techniques used might have been critical. Previous studies that directly compared fNIRS and fMRI measurements in visual cortex have consistently found fNIRS responses to be strongly correlated with the fMRI BOLD signal, both temporally and spatially, albeit with poorer signal-to-noise ratio and a bias towards superficial cortical regions in fNIRS data [42, 66, 69, 70]. Thus, notwithstanding that our current fNIRS responses cannot be accurately localized to specific sub-areas of visual cortex, it seems unlikely that the discrepant result between the present fNIRS study and previous fMRI studies can be attributed to inherent differences between the two imaging modalities. Furthermore, contrary to previous studies [39, 40], our behavioral data suggested, if anything, a trend towards poorer performance in the presence of synchronous sounds, consistent in direction with the suppressive effect on the fNIRS response. 
This suggests that stimulus and/or procedural differences between studies may have been critical.
Considering stimulus differences first, it seems unlikely that our use of centrally, as opposed to peripherally, presented visual stimuli can directly account for the current findings, given that numerous other studies have demonstrated multisensory enhancement for centrally presented stimuli [38, 47–50]. However, our data could plausibly reflect an influence of stimulus salience, potentially exacerbated by the central presentation. Multisensory integration typically obeys the principle of inverse effectiveness, which was derived from single-unit studies in animals, and which states that multisensory enhancement is strongest when responses to the constituent unimodal stimuli are weak. This principle has also been found to apply in human neuroimaging studies of multisensory integration [33, 50, 71], although some statistical caveats have been raised. There is some evidence to suggest that multisensory enhancement not only becomes weaker when the constituent unimodal stimuli are made more salient, but that it might even reverse direction and become suppressive. For instance, Noesselt et al. demonstrated significant enhancement of BOLD activation when a lower-intensity visual target was paired with a co-occurring sound, but noted that, when the same sound was added to a higher-intensity visual target, any trends were, if anything, suppressive instead. This observation based on human fMRI data is consistent with animal electrophysiology data from Kayser et al., who found that synchronous visual stimulation tends to enhance neuronal firing rates in auditory cortex when the auditory input is weak, but to suppress activity when the auditory input is more efficacious in driving neuronal responses.
Thus, the suppressive impact of synchronous sounds on visual processing observed in the present study could conceivably reflect that our visual stimulus was capable of eliciting an overly strong unisensory response, especially in the superficial cortical regions (which represent the central visual field) presumed to have contributed dominantly to our fNIRS measurements. Indeed, although our standard visual stimulus had a moderate contrast ratio of 15%, reversing checkerboards with contrast as low as 8% have been found to elicit an fNIRS response from visual cortex that already reaches about two-thirds the amplitude of the maximal response to high-contrast stimuli. As in past fMRI and EEG studies [33, 50, 71], future fNIRS studies should consider parametrically varying stimulus salience/intensity, in order to determine how this affects the magnitude and direction of any multisensory interactions.
An emerging theme in the multisensory literature is that, rather than following a strict set of rules and principles, multisensory integration is in fact highly flexible and adaptive to stimulus context and behavioral goal (as discussed in a recent review). Although the behavioral task used in the present study was similar to that used by Marchant et al., procedural differences may have contributed to the conflicting results between studies. One potentially important difference is that, in that study, targets could occur in either the auditory or visual modality with equal probability, and so participants had to continuously divide their attention between both modalities. In contrast, in the present study, targets occurred in only one modality at a time (dependent on the stimulation condition), and so, in effect, participants were encouraged to selectively attend to a single modality. It has previously been demonstrated that selectively attending to a single modality can, in some circumstances, prevent the integration of congruent multisensory stimuli [49, 75]. However, it is worth noting that the study of Lewis and Noppeney also required participants to attend only to the visual modality, and yet they found synchronous, task-irrelevant sounds to enhance BOLD activation in visual and auditory areas, and to improve behavioral performance in a visual motion discrimination task. As such, differences in how attention was deployed between the auditory and visual modalities may be insufficient to fully explain the present results. Nonetheless, future fNIRS studies of multisensory processing may wish to explicitly assess the influence of selective versus divided attention across sensory modalities.
A further possibility is that the differences in fNIRS response strength between stimulation conditions might have been driven by a residual task-related systemic effect. For example, differences in behavioral performance could have led to changes in heart rate, which could in turn have affected the magnitude of the fNIRS response. To assess this possibility, we ran a series of supplementary analyses (see S1 Appendix) based on the marginal linear model, testing for differences in fNIRS response strength after controlling for accuracy, response time, and the total number of button presses in each condition. Significant differences between stimulation conditions remained after controlling for these behavioral measures, suggesting that the differences in fNIRS response strength are unlikely to have been mediated by changes in behavior. Interestingly, these analyses did reveal weak, yet statistically significant, overall effects of accuracy and the total number of button presses on the fNIRS response: the strength of the fNIRS response generally increased with increasing accuracy, and decreased with an increasing number of button presses. However, rather than indicating causal effects of behavior on the fNIRS response, these relationships may simply reflect the influence of an unobserved common factor that affected both imaging and behavioral outcomes similarly, e.g., differences in overall attentiveness between participants.
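The logic of testing for condition differences after controlling for behavioral covariates can be sketched with a simplified fixed-effects regression and an incremental F-test. Unlike the marginal linear model used in the study, this sketch ignores within-subject correlation, and all data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subj, n_cond = 24, 4
n = n_subj * n_cond

# Synthetic long-format data (illustrative only): one fNIRS beta and one
# behavioral accuracy score per subject x condition cell.
cond = np.tile(np.arange(n_cond), n_subj)
accuracy = rng.uniform(0.7, 1.0, n)
cond_means = np.array([0.2, 1.0, 0.7, 1.0])  # built-in condition effect
beta = cond_means[cond] + 0.3 * accuracy + rng.normal(0.0, 0.3, n)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ coef) ** 2)

ones = np.ones(n)
dummies = (cond[:, None] == np.arange(1, n_cond)).astype(float)  # ref: cond 0

# Reduced model: intercept + accuracy.  Full model: + condition dummies.
X_red = np.column_stack([ones, accuracy])
X_full = np.column_stack([ones, accuracy, dummies])

q = n_cond - 1                  # extra parameters in the full model
df2 = n - X_full.shape[1]
F = ((rss(X_red, beta) - rss(X_full, beta)) / q) / (rss(X_full, beta) / df2)
p = stats.f.sf(F, q, df2)
print(f"condition effect after controlling for accuracy: "
      f"F({q}, {df2}) = {F:.1f}, p = {p:.2g}")
```

A significant incremental F here means the condition differences survive after the behavioral covariate has been regressed out, which is the inference the supplementary analyses draw.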
Cross-modal activation of visual cortex by sounds
The principal finding of the present study was of a robust modulatory effect of synchronous sounds on visually evoked activation in visual cortex. However, it is interesting to note that we also observed significant, albeit relatively weak, activation in the visual ROI in response to auditory stimulation alone. Unfortunately, the experimental design does not allow us to say for certain whether this reflects genuine sound-evoked activation of visual cortex, or rather a general arousal effect associated with performing a task versus resting. Functional NIRS may be particularly susceptible to such non-specific effects because of its high sensitivity to systemic hemodynamic fluctuations in the scalp. Nevertheless, our study joins a collection of human neuroimaging studies that have reported positive sound-evoked activation in visual cortex, e.g., [11, 15, 78]. However, other studies have shown deactivation of visual cortex during auditory-only stimulation compared to rest [46, 79, 80], with the strength of the deactivation thought to increase with auditory task difficulty. The positive activation in visual cortex in response to auditory-only stimulation observed in the present study could conceivably have been influenced by experimental context: the auditory-only blocks were interleaved with blocks in which auditory and visual stimuli were paired, which may have set up an expectancy in participants regarding an association between events in the two modalities [82, 83].
A role for fNIRS in multisensory research?
We have shown that fNIRS is capable of measuring a modulatory influence of sounds on activity in visual cortex, suggesting that the technique may find useful application in the field of multisensory research. Specific contexts in which fNIRS might offer practical advantages over alternative neuroimaging modalities include studying the development of multisensory processing in infants [18, 20], and exploring the consequences for multisensory processing of neurological damage and/or sensory deprivation in clinical populations. For instance, we are exploiting the compatibility of fNIRS with cochlear implantation to study cross-modal reorganization associated with deafness and subsequent restoration of hearing. The practical advantages of fNIRS may open further avenues in multisensory research, as the technique is well suited for use outside the traditional laboratory environment [87, 88] and for performing brain imaging during natural social interactions.
Functional NIRS is not, however, without its challenges. Obtaining reliable contact between the optodes and the scalp can be problematic in participants with thick or dark hair, although recent developments in optode design may help to mitigate this issue. Even when good optode contact has been achieved, the limited depth penetration, moderate spatial resolution, and lack of an anatomical image can pose challenges in fNIRS. With current technology, fNIRS is not suitable for imaging brain areas deeper than the outermost 10–15 mm of intracranial space, nor for differentiating activation in cortical areas separated by less than the source–detector optode spacing, typically on the order of a few centimeters. In the context of multisensory research, then, fNIRS may be best suited to studying the hemodynamic consequences of effects that are thought to occur in a fairly widespread manner across a given sensory cortex, such as cross-modal phase reset of ongoing oscillatory activity [17, 92]. Such studies would stand to benefit from the simultaneous measurement of neuronal phase dynamics, given the relative ease with which fNIRS can be combined with EEG. However, fNIRS is unlikely to be suitable for addressing research questions that require fine-grained spatial resolution, for example, in the study of distinct sub-regions of multisensory STS that preferentially respond to multimodal stimuli with specific timing relationships.
It is worth reiterating that fNIRS is a rapidly developing technique, with continued improvements in image quality and spatial resolution being achieved through advancements in system design [27, 95], methods for spatial registration of fNIRS data that facilitate accurate image reconstruction on anatomical brain models, and signal processing algorithms that specifically target the various sources of noise encountered in fNIRS recordings. In the present study, we successfully took advantage of two such algorithms: a wavelet filtering approach to motion artifact correction, and a correlation-based approach to extracting estimates of the functional and systemic components of the hemodynamic response. Repeating the analysis with these two processing stages bypassed did not alter the overall pattern of results, but did yield more variable responses and, consequently, reduced statistical power: the significant difference between the synchronous and asynchronous audiovisual conditions, a critical finding in this experiment, no longer reached statistical significance when these two steps were omitted. The present study therefore demonstrates the practical benefit that these recently developed signal processing algorithms can provide when applied to a real fNIRS dataset.
The present study demonstrated that fNIRS is capable of measuring a modulatory influence of sounds on activity in adult visual cortex. The data suggested a positive relationship between the strength of the fNIRS response and visual target-detection accuracy, although this requires confirmation in future studies. Contrary to previous fMRI studies, we found synchronous sounds to have a suppressive effect on the visual response. Given the known sensitivity of multisensory interactions to contextual factors, this discrepancy may be attributable to stimulus-related and/or procedural differences between studies, possibly related to adaptations made here to suit the fNIRS imaging modality (e.g., the use of efficacious, centrally presented visual stimuli). As the technique continues to develop, fNIRS may open new avenues in multisensory research, particularly in relation to testing in naturalistic environments and with infant and clinical populations.
The authors thank Dr Toru Yamada and Dr Shinji Umeyama for kindly providing the code for the hemodynamic signal separation algorithm, the developers of the open-source fNIRS analysis package HOMER2, and two anonymous reviewers for their insightful comments.
Conceived and designed the experiments: IW DH. Performed the experiments: IW. Analyzed the data: IW DH. Wrote the paper: IW DH.
- 1. Welch RB, Warren DH. Intersensory interactions. In: Boff KR, Kaufmann L, Thomas JP, editors. Handbook of perception and human performance. 1. New York: Wiley; 1986. p. 25.1–36.
- 2. Shimojo S, Shams L. Sensory modalities are not separate modalities: plasticity and interactions. Curr Opin Neurobiol. 2001;11(4):505–9. pmid:11502399
- 3. Schroeder CE, Foxe J. Multisensory contributions to low-level, 'unisensory' processing. Curr Opin Neurobiol. 2005;15(4):454–8. pmid:16019202
- 4. Ghazanfar AA, Schroeder CE. Is neocortex essentially multisensory? Trends Cogn Sci. 2006;10(6):278–85. pmid:16713325
- 5. Driver J, Noesselt T. Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, neural responses, and judgments. Neuron. 2008;57(1):11–23. pmid:18184561
- 6. Beauchamp M. Statistical criteria in fMRI studies of multisensory integration. Neuroinform. 2005;3(2):93–113.
- 7. Laurienti P, Perrault T, Stanford T, Wallace M, Stein B. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Exp Brain Res. 2005;166(3–4):289–97. pmid:16151776
- 8. James TW, Stevenson RA. The Use of fMRI to Assess Multisensory Integration. In: Murray MM, Wallace MT, editors. The Neural Bases of Multisensory Processes. Boca Raton (FL): CRC Press; 2012.
- 9. Noesselt T, Rieger JW, Schoenfeld MA, Kanowski M, Hinrichs H, Heinze HJ, et al. Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. J Neurosci. 2007;27(42):11431–41. pmid:17942738
- 10. Watkins S, Shams L, Tanaka S, Haynes JD, Rees G. Sound alters activity in human V1 in association with illusory visual perception. Neuroimage. 2006;31(3):1247–56. pmid:16556505
- 11. Martuzzi R, Murray MM, Michel CM, Thiran JP, Maeder PP, Clarke S, et al. Multisensory interactions within human primary cortices revealed by BOLD dynamics. Cereb Cortex. 2007;17(7):1672–9. pmid:16968869
- 12. Giard MH, Peronnet F. Auditory-visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. J Cognitive Neurosci. 1999;11(5):473–90. pmid:10511637
- 13. Molholm S, Ritter W, Murray MM, Javitt DC, Schroeder CE, Foxe JJ. Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study. Cognitive Brain Res. 2002;14(1):115–28. pmid:12063135
- 14. Cappe C, Thut G, Romei V, Murray MM. Auditory–Visual Multisensory Interactions in Humans: Timing, Topography, Directionality, and Sources. J Neurosci. 2010;30(38):12572–80. pmid:20861363
- 15. Raij T, Ahveninen J, Lin FH, Witzel T, Jaaskelainen IP, Letham B, et al. Onset timing of cross-sensory activations and multisensory interactions in auditory and visual sensory cortices. Eur J Neurosci. 2010;31(10):1772–82. pmid:20584181
- 16. Naue N, Rach S, Struber D, Huster RJ, Zaehle T, Korner U, et al. Auditory Event-Related Response in Visual Cortex Modulates Subsequent Visual Responses in Humans. J Neurosci. 2011;31(21):7729–36. pmid:21613485
- 17. Romei V, Gross J, Thut G. Sounds Reset Rhythms of Visual Cortex and Corresponding Human Visual Perception. Curr Biol. 2012;22(9):807–13. pmid:22503499
- 18. Lloyd-Fox S, Blasi A, Elwell CE. Illuminating the developing brain: The past, present and future of functional near infrared spectroscopy. Neurosci Biobehav R. 2010;34(3):269–84. pmid:19632270
- 19. Boas DA, Elwell CE, Ferrari M, Taga G. Twenty years of functional near-infrared spectroscopy: introduction for the special issue. Neuroimage. 2014;85:1–5. pmid:24321364
- 20. Watanabe H, Homae F, Nakano T, Tsuzuki D, Enkhtur L, Nemoto K, et al. Effect of auditory input on activations in infant diverse cortical regions during audiovisual processing. Hum Brain Mapp. 2013;34(3):543–65. pmid:22102331
- 21. Strangman GE, Zhang Q, Li Z. Scalp and skull influence on near infrared photon propagation in the Colin27 brain template. Neuroimage. 2014;85:136–49. pmid:23660029
- 22. Colier WN, Quaresima V, Wenzel R, van der Sluijs MC, Oeseburg B, Ferrari M, et al. Simultaneous near-infrared spectroscopy monitoring of left and right occipital areas reveals contra-lateral hemodynamic changes upon hemi-field paradigm. Vision Res. 2001;41(1):97–102. pmid:11163619
- 23. Schroeter ML, Bucheler MM, Muller K, Uludag K, Obrig H, Lohmann G, et al. Towards a standard analysis for functional near-infrared imaging. Neuroimage. 2004;21(1):283–90. pmid:14741666
- 24. Plichta MM, Herrmann MJ, Baehne CG, Ehlis AC, Richter MM, Pauli P, et al. Event-related functional near-infrared spectroscopy (fNIRS): are the measurements reliable? Neuroimage. 2006;31(1):116–24. pmid:16446104
- 25. Plichta MM, Heinzel S, Ehlis AC, Pauli P, Fallgatter AJ. Model-based analysis of rapid event-related functional near-infrared spectroscopy (NIRS) data: a parametric validation study. Neuroimage. 2007;35(2):625–34. pmid:17258472
- 26. Zeff BW, White BR, Dehghani H, Schlaggar BL, Culver JP. Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography. P Natl Acad Sci USA. 2007;104(29):12169–74. pmid:17616584
- 27. White BR, Culver JP. Phase-encoded retinotopy as an evaluation of diffuse optical neuroimaging. Neuroimage. 2010;49(1):568–77. pmid:19631755
- 28. Bastien D, Gallagher A, Tremblay J, Vannasing P, Theriault M, Lassonde M, et al. Specific functional asymmetries of the human visual cortex revealed by functional near-infrared spectroscopy. Brain Res. 2012;1431:62–8. pmid:22137561
- 29. Allman BL, Meredith MA. Multisensory processing in "unimodal" neurons: Cross-modal subthreshold auditory effects in cat extrastriate visual cortex. J Neurophysiol. 2007;98(1):545–9. pmid:17475717
- 30. Wang Y, Celebrini S, Trotter Y, Barone P. Visuo-auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neurosci. 2008;9. pmid:18215277
- 31. Iurilli G, Ghezzi D, Olcese U, Lassi G, Nazzaro C, Tonini R, et al. Sound-driven synaptic inhibition in primary visual cortex. Neuron. 2012;73(4):814–28. pmid:22365553
- 32. Shams L, Iwaki S, Chawla A, Bhattacharya J. Early modulation of visual cortex by sound: an MEG study. Neurosci Lett. 2005;378(2):76–81. pmid:15774261
- 33. Noesselt T, Tyll S, Boehler CN, Budinger E, Heinze HJ, Driver J. Sound-induced enhancement of low-intensity vision: multisensory influences on human sensory-specific cortices and thalamic bodies relate to perceptual enhancement of visual detection sensitivity. J Neurosci. 2010;30(41):13609–23. pmid:20943902
- 34. Romei V, Murray MM, Cappe C, Thut G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Curr Biol. 2009;19(21):1799–805. pmid:19836243
- 35. Meredith MA, Allman BL, Keniston LP, Clemo HR. Auditory influences on non-auditory cortices. Hearing Res. 2009;258(1–2):64–71.
- 36. Stein BE, Meredith A. The Merging of the Senses: Mit Press; 1993.
- 37. Doehrmann O, Naumer MJ. Semantics and the multisensory brain: How meaning modulates processes of audio-visual integration. Brain Res. 2008;1242:136–50. pmid:18479672
- 38. Calvert GA, Hansen PC, Iversen SD, Brammer MJ. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. Neuroimage. 2001;14(2):427–38. pmid:11467916
- 39. Lewis R, Noppeney U. Audiovisual synchrony improves motion discrimination via enhanced connectivity between early visual and auditory areas. J Neurosci. 2010;30(37):12329–39. pmid:20844129
- 40. Marchant JL, Ruff CC, Driver J. Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities. Hum Brain Mapp. 2012;33(5):1212–24. pmid:21953980
- 41. Cui X, Bray S, Bryant DM, Glover GH, Reiss AL. A quantitative comparison of NIRS and fMRI across multiple cognitive tasks. Neuroimage. 2011;54(4):2808–21. pmid:21047559
- 42. Toronov VY, Zhang X, Webb AG. A spatial and temporal comparison of hemodynamic signals measured using optical and functional magnetic resonance imaging during activation in the human primary visual cortex. Neuroimage. 2007;34(3):1136–48. pmid:17134913
- 43. Friston KJ, Zarahn E, Josephs O, Henson RN, Dale AM. Stochastic designs in event-related fMRI. Neuroimage. 1999;10(5):607–19. pmid:10547338
- 44. Oldfield RC. The Assessment and Analysis of Handedness: The Edinburgh Inventory. Neuropsychologia. 1971;9(1):97–113. pmid:5146491
- 45. Li X, Liang Z, Kleiner M, Lu ZL. RTbox: a device for highly accurate response time measurements. Behav Res Methods. 2010;42(1):212–25. pmid:20160301
- 46. Johnson JA, Zatorre RJ. Attention to simultaneous unrelated auditory and visual events: Behavioral and neural correlates. Cereb Cortex. 2005;15(10):1609–20. pmid:15716469
- 47. Cappe C, Thelen A, Romei V, Thut G, Murray MM. Looming signals reveal synergistic principles of multisensory integration. J Neurosci. 2012;32(4):1171–82. pmid:22279203
- 48. Fiebelkorn IC, Foxe JJ, Butler JS, Molholm S. Auditory facilitation of visual-target detection persists regardless of retinal eccentricity and despite wide audiovisual misalignments. Exp Brain Res. 2011;213(2–3):167–74. pmid:21800256
- 49. Talsma D, Doty TJ, Woldorff MG. Selective attention and audiovisual integration: is attending to both modalities a prerequisite for early integration? Cereb Cortex. 2007;17(3):679–90. pmid:16707740
- 50. Senkowski D, Saint-Amour D, Höfle M, Foxe JJ. Multisensory interactions in early evoked brain activity follow the principle of inverse effectiveness. Neuroimage. 2011;56(4):2200–8. pmid:21497200
- 51. Gleiss S, Kayser C. Eccentricity dependent auditory enhancement of visual stimulus detection but not discrimination. Front Integrative Neurosci. 2013;7:52. pmid:23882195
- 52. Falchier A, Clavagnier S, Barone P, Kennedy H. Anatomical evidence of Multimodal integration in primate striate cortex. J Neurosci. 2002;22(13):5749–59. pmid:12097528
- 53. Rockland KS, Ojima H. Multisensory convergence in calcarine visual areas in macaque monkey. Int J Psychophysiol. 2003;50(1–2):19–26. pmid:14585488
- 54. Wandell BA, Winawer J. Imaging retinotopic maps in the human brain. Vision Res. 2011;51(7):718–37. pmid:20692278
- 55. Scholkmann F, Kleiser S, Metz AJ, Zimmermann R, Pavia JM, Wolf U, et al. A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. Neuroimage. 2014;85:6–27. pmid:23684868
- 56. Jasper HH. The ten twenty electrode system of the international federation. Electroen Clin Neuro. 1958;10:371–5.
- 57. Field A. Discovering Statistics Using SPSS. 3rd ed. London: SAGE Publications; 2009.
- 58. Umeyama S, Yamada T. Detection of an unstable and/or a weak probe contact in a multichannel functional near-infrared spectroscopy measurement. J Biomed Opt. 2013;18(4):047003. pmid:23552638
- 59. Huppert TJ, Diamond SG, Franceschini MA, Boas DA. HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain. Appl Optics. 2009;48(10):D280–98. pmid:19340120
- 60. Cooper RJ, Selb J, Gagnon L, Phillip D, Schytz HW, Iversen HK, et al. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy. Front Neurosci. 2012;6:147. pmid:23087603
- 61. Brigadoi S, Ceccherini L, Cutini S, Scarpa F, Scatturin P, Selb J, et al. Motion artifacts in functional near-infrared spectroscopy: A comparison of motion correction techniques applied to real cognitive data. Neuroimage. 2014;85(1):181–91.
- 62. Molavi B, Dumont GA. Wavelet-based motion artifact removal for functional near-infrared spectroscopy. Physiol Meas. 2012;33(2):259–70. pmid:22273765
- 63. Delpy DT, Cope M, van der Zee P, Arridge S, Wray S, Wyatt J. Estimation of optical pathlength through tissue from direct time of flight measurement. Phys Med Biol. 1988;33(12):1433–42. pmid:3237772
- 64. Boas DA, Gaudette T, Strangman G, Cheng X, Marota JJ, Mandeville JB. The accuracy of near infrared spectroscopy and imaging during focal changes in cerebral hemodynamics. Neuroimage. 2001;13(1):76–90. pmid:11133311
- 65. Yamada T, Umeyama S, Matsuda K. Separation of fNIRS signals into functional and systemic components based on differences in hemodynamic modalities. PLoS One. 2012;7(11):e50271. pmid:23185590
- 66. Minati L, Visani E, Dowell NG, Medford N, Critchley HD. Variability comparison of simultaneous brain near-infrared spectroscopy and functional magnetic resonance imaging during visual stimulation. J Med Eng Technol. 2011;35(6–7):370–6. pmid:22059799
- 67. van Zijl PC, Hua J, Lu H. The BOLD post-stimulus undershoot, one of the most debated issues in fMRI. Neuroimage. 2012;62(2):1092–102. pmid:22248572
- 68. Marchant JL, Driver J. Visual and Audiovisual Effects of Isochronous Timing on Visual Perception and Brain Activity. Cereb Cortex. 2013;23(6):1290–8. pmid:22508766
- 69. Eggebrecht AT, White BR, Ferradal SL, Chen CX, Zhan YX, Snyder AZ, et al. A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping. Neuroimage. 2012;61(4):1120–8. pmid:22330315
- 70. Ferradal SL, Eggebrecht AT, Hassanpour M, Snyder AZ, Culver JP. Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: In vivo validation against fMRI. Neuroimage. 2014;85:117–26. pmid:23578579
- 71. Stevenson RA, James TW. Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. Neuroimage. 2009;44(3):1210–23. pmid:18973818
- 72. Holmes NP. The Principle of Inverse Effectiveness in Multisensory Integration: Some Statistical Considerations. Brain Topogr. 2009;21(3–4):168–76. pmid:19404728
- 73. Kayser C, Petkov CI, Logothetis NK. Visual Modulation of Neurons in Auditory Cortex. Cereb Cortex. 2008;18(7):1560–74. pmid:18180245
- 74. van Atteveldt N, Murray MM, Thut G, Schroeder CE. Multisensory Integration: Flexible Use of General Operations. Neuron. 2014;81(6):1240–53. pmid:24656248
- 75. Mozolic J, Hugenschmidt C, Peiffer A, Laurienti P. Modality-specific selective attention attenuates multisensory integration. Exp Brain Res. 2008;184(1):39–52. pmid:17684735
- 76. West BT, Welch KB, Galecki AT. Linear Mixed Models: A Practical Guide Using Statistical Software. Boca Raton, FL: CRC Press; 2006.
- 77. Kirilina E, Jelzow A, Heine A, Niessing M, Wabnitz H, Bruhl R, et al. The physiological origin of task-evoked systemic artefacts in functional near infrared spectroscopy. Neuroimage. 2012;61(1):70–81. pmid:22426347
- 78. McDonald JJ, Stormer VS, Martinez A, Feng WF, Hillyard SA. Salient Sounds Activate Human Visual Cortex Automatically. J Neurosci. 2013;33(21):9194–201. pmid:23699530
- 79. Laurienti PJ, Burdette JH, Wallace MT, Yen YF, Field AS, Stein BE. Deactivation of sensory-specific cortex by cross-modal stimuli. J Cogn Neurosci. 2002;14(3):420–9. pmid:11970801
- 80. Mozolic JL, Joyner D, Hugenschmidt CE, Peiffer AM, Kraft RA, Maldjian JA, et al. Cross-modal deactivations during modality-specific selective attention. BMC Neurol. 2008;8:35. pmid:18817554
- 81. Hairston WD, Hodges DA, Casanova R, Hayasaka S, Kraft R, Maldjian JA, et al. Closing the mind's eye: deactivation of visual cortex related to auditory task difficulty. Neuroreport. 2008;19(2):151–4. pmid:18185099
- 82. Baier B, Kleinschmidt A, Muller NG. Cross-modal processing in early visual and auditory cortices depends on expected statistical relationship of multisensory information. J Neurosci. 2006;26(47):12260–5. pmid:17122051
- 83. Zangenehpour S, Zatorre RJ. Crossmodal recruitment of primary visual cortex following brief exposure to bimodal audiovisual stimuli. Neuropsychologia. 2010;48(2):591–600. pmid:19883668
- 84. Obrig H. NIRS in clinical neurology - a 'promising' tool? Neuroimage. 2014;85:535–46. pmid:23558099
- 85. Sevy AB, Bortfeld H, Huppert TJ, Beauchamp MS, Tonini RE, Oghalai JS. Neuroimaging with near-infrared spectroscopy demonstrates speech-evoked activity in the auditory cortex of deaf children following cochlear implantation. Hear Res. 2010;270(1–2):39–47.
- 86. Lawler CA, Wiggins IM, Dewey RS, Hartley DEH. The use of functional near-infrared spectroscopy for measuring cortical reorganisation in cochlear implant users: A possible predictor of variable speech outcomes? Cochlear Implants Int. 2015;16(S1):S30–S32.
- 87. Yoshino K, Oka N, Yamamoto K, Takahashi H, Kato T. Functional brain imaging using near-infrared spectroscopy during actual driving on an expressway. Front Hum Neurosci. 2013;7:882. pmid:24399949
- 88. Piper SK, Krueger A, Koch SP, Mehnert J, Habermehl C, Steinbrink J, et al. A wearable multi-channel fNIRS system for brain imaging in freely moving subjects. Neuroimage. 2014;85:64–71. pmid:23810973
- 89. Cui X, Bryant DM, Reiss AL. NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage. 2012;59(3):2430–7. pmid:21933717
- 90. Khan B, Wildey C, Francis R, Tian FH, Delgado MR, Liu HL, et al. Improving optical contact for functional near-infrared brain spectroscopy and imaging with brush optodes. Biomed Opt Express. 2012;3(5):878–98. pmid:22567582
- 91. Strangman GE, Li Z, Zhang Q. Depth Sensitivity and Source-Detector Separations for Near Infrared Spectroscopy Based on the Colin27 Brain Template. PLoS One. 2013;8(8).
- 92. Mercier MR, Foxe JJ, Fiebelkorn IC, Butler JS, Schwartz TH, Molholm S. Auditory-driven phase reset in visual cortex: Human electrocorticography reveals mechanisms of early multisensory integration. Neuroimage. 2013;79:19–29. pmid:23624493
- 93. Moosmann M, Ritter P, Krastel I, Brink A, Thees S, Blankenburg F, et al. Correlates of alpha rhythm in functional magnetic resonance imaging and near infrared spectroscopy. Neuroimage. 2003;20(1):145–58. pmid:14527577
- 94. Noesselt T, Bergmann D, Heinze HJ, Munte T, Spence C. Coding of multisensory temporal patterns in human superior temporal sulcus. Front Integr Neurosci. 2012;6:64.
- 95. Torricelli A, Contini D, Pifferi A, Caffini M, Re R, Zucchelli L, et al. Time domain functional NIRS imaging for human brain mapping. Neuroimage. 2014;85(1):28–50.
- 96. Tsuzuki D, Dan I. Spatial registration for functional near-infrared spectroscopy: from channel position on the scalp to cortical location in individual and group analyses. Neuroimage. 2014;85(1):92–103.
- 97. Tak S, Ye JC. Statistical analysis of fNIRS data: A comprehensive review. Neuroimage. 2014;85(1):72–91.