Abstract
Perception of rhythm significantly impacts various aspects of daily life, including engaging with music, discerning speech prosody nuances, and coordinating physical activities like walking and sports. Numerous studies in cognitive sciences have highlighted that human rhythmic synchronization is more precise when responding to auditory rhythmic stimuli than to visual ones when the timing cues are identical. However, deaf individuals were shown to display a heightened proficiency in synchronizing their movements with visual timing cues, outperforming hearing controls (HC). Furthermore, it was demonstrated that cochlear implant (CI) users can synchronize their movements with the rhythm of unpitched drum tones. These findings raise an important question: do CI users possess a visual synchronization advantage from their pre-implant deafness, while maintaining auditory synchronization skills comparable to those of HC? Alternatively, does the neural reorganization post-implantation negate the visual synchronization advantage acquired before the implant? This study aims to answer these questions by using a sensorimotor synchronization task to probe multisensory processing abilities in CI users. Specifically, we assessed unimodal and multimodal auditory and visual abilities in CI users compared to HC using a finger tapping synchrony task with four isochronous stimulus conditions: an auditory metronome, a visual metronome, a synchronous presentation of both the auditory and visual metronomes at the same tempo, and an asynchronous presentation of the auditory and visual stimuli at differing tempos. Synchronization to auditory stimuli surpassed synchronization to visual stimuli in both groups. CI users and HC demonstrated similar unisensory synchronization consistency within the visual and auditory conditions. 
While HC enhanced their consistency in the audio-visual synchronous condition compared to the unisensory visual condition, CI users did not display the same improvement. Furthermore, the interference from incongruent auditory information in the asynchronous condition was comparable in HC and CI users. This study highlights that, although pitch processing is known to be impaired in CI users, our findings suggest that rhythm processing remains relatively spared. As anticipated, CI users demonstrate similar auditory rhythmic synchronization skills to those of HC, in line with existing research. Moreover, we find that, unlike deaf individuals, CI users do not exhibit an advantage in visual rhythmic synchronization, which may be due to the relatively few CI users in the study who had early prolonged pre-implantation deafness. The observed shift in audio-visual integration among CI users suggests that post-deafness or post-implantation reorganization of their auditory cortex may impede the effective integration of temporal auditory stimulation from the implant and visual information.
Citation: Valentin O, Foster NEV, Intartaglia B, Prud’homme M-A, Schönwiesner M, Nozaradan S, et al. (2025) Intact sensorimotor rhythm abilities but altered audiovisual integration in cochlear implant users. PLoS ONE 20(4): e0320815. https://doi.org/10.1371/journal.pone.0320815
Editor: Patrick Bruns, University of Hamburg, GERMANY
Received: February 5, 2024; Accepted: February 24, 2025; Published: April 2, 2025
Copyright: © 2025 Valentin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data that support the findings of this study are not openly available because the participants of this study did not give written consent for their data to be shared publicly. Data are available upon reasonable request by contacting the corresponding author, the Principal Investigator, or the Research Ethics Board of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal (CRIR), which approved this study, at cer.rdp.ccsmtl@ssss.gouv.qc.ca.
Funding: Dr. Alexandre Lehmann received funding from the Natural Sciences and Engineering Research Council of Canada (Discovery Grant RGPIN-2016-04721) and the Fonds de Recherche du Québec - Santé (Financement Chercheur boursier Junior 1). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: No authors have competing interests.
Introduction
In music, rhythm provides a structural foundation that guides both musicians and listeners through the progression of a composition [1]. Rhythm is also a powerful tool for evoking emotions and engaging the audience, as rhythmic patterns can convey a wide range of emotions [2]. Rhythm in music also instills in people the desire to synchronize their movements with the beat, adding a physical dimension to the musical experience [3].
In speech, rhythm patterns facilitate the communication and comprehension of language [4]. Rhythm patterns play a crucial role in phonological awareness, helping individuals to distinguish speech sounds and syllables, which is fundamental for language acquisition and reading development, especially in early childhood [5]. Rhythm variations in speech allow individuals to convey nuanced meaning and emotions effectively [6].
Beyond music and speech, rhythm is also integral to many daily physical activities. Rhythm provides a sense of timing and coordination that is essential for reaction time and decision-making in fast-paced sports and dynamic situations [7]. Additionally, rhythm plays a key role in motor skill development, especially during the early acquisition of walking, thanks to its ability to enhance balance and stability [8].
Rhythmic synchronization has been extensively studied in the past [7]. Interestingly, a consistent finding emerges from these studies: humans exhibit greater precision in synchronizing their movements with auditory rhythmic stimuli than with visual ones. This advantage is attributed to the superior temporal processing capabilities inherent to the auditory system, along with the distinct coupling that exists between the auditory and motor systems [9]. However, this auditory advantage is not absolute and can be influenced by experience and stimulus characteristics [10–11]. Moreover, a study conducted by Iversen et al. in 2015 revealed that deaf individuals outperformed hearing controls (HC) in synchronizing their movements with visual timing cues [12]. Concurrently, another study performed by Phillips-Silver et al. in 2015 showed that cochlear implant (CI) users were able to move in time to the beat of music, although not as well as HC [13].
The current study aims to investigate whether CI users exhibit a visual synchronization advantage akin to that observed in deaf individuals, while also maintaining auditory synchronization skills comparable to those of HC. Specifically, we assessed unimodal and multimodal auditory and visual abilities in CI users compared to HC using a standard paradigm of one-to-one sensorimotor synchronization to periodic inputs. To assess the simultaneous integration of sensory inputs, temporally congruent audio-visual stimuli were used to determine how CI users and HC process information presented concurrently, while incongruent audio-visual stimuli were used to explore potential distinctions between CI users and HC in resolving interference arising from conflicting sensory inputs [14]. We hypothesized that, due to the hearing restoration from their implants, CI users would demonstrate synchronization consistency in the unisensory auditory condition comparable to that of HC. We postulated that CI users would not exhibit a synchronization advantage in the unisensory visual condition, either due to insufficient pre-implantation deafness duration, or as a result of post-implantation neural plasticity reversal.
Methods
Participants
The data presented in this study were collected between July 21, 2015, and December 9, 2017. Twenty adult cochlear implant (CI) users with a mean age of 43.2 years (SD 15.0; 15 females) and seventeen hearing controls (HC) who were age- and gender-matched (mean age of 41.1 years; SD 15.5 years; 12 females) were recruited for this study. Three CI users did not have a pairwise HC match; there were no group differences in age or gender balance. CI users were recruited through the Raymond-Dewar Institute (Montreal, QC, Canada) and the MAB-MacKay Rehabilitation Center (Montreal, QC, Canada), two centers offering rehabilitation programs for hard-of-hearing individuals. Table 1 details the clinical characteristics of the CI participants. All participants provided written informed consent to take part in the study and were compensated for their participation. The study was approved by the Research Ethics Board of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal (CRIR-985–0714).
Stimuli
Auditory stimulation.
The auditory stimulus consisted of a metronome sequence containing a repeated 6 ms broadband percussive sound (200 Hz - 10 kHz) with a total sequence duration of 39.5 sec. The rate of the metronome was 2.4 Hz. The stimulus was generated using Matlab R2007a (MathWorks, Natick, MA, USA) and presented with two 8040A bi-amplified loudspeakers (Genelec, Natick, MA, USA) located at 1.5m on each side of the participant at a global sound pressure level (SPL) of 70 dB.
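The described click train is straightforward to reproduce. The following is a minimal Python sketch, not the original Matlab R2007a code: the sampling rate and the white-noise-with-decay synthesis of the 6 ms broadband click are assumptions; only the 2.4 Hz rate, 6 ms click duration, and 39.5 s sequence length come from the text.

```python
import numpy as np

def make_metronome(rate_hz=2.4, click_dur_s=0.006, total_dur_s=39.5, fs=44100, seed=0):
    """Build an isochronous click-train stimulus.

    Sketch only: fs and the click synthesis (noise burst with a fast
    exponential decay) are illustrative assumptions, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    signal = np.zeros(int(total_dur_s * fs))
    click_len = int(click_dur_s * fs)
    # Broadband percussive click: white noise shaped by a decaying envelope.
    click = rng.standard_normal(click_len) * np.exp(-np.linspace(0, 5, click_len))
    onsets = np.arange(0, total_dur_s, 1.0 / rate_hz)
    for t in onsets:
        i = int(t * fs)
        signal[i:i + click_len] = click[: len(signal) - i]
    return signal, onsets

signal, onsets = make_metronome()
```

At 2.4 Hz over 39.5 s this yields 95 click onsets, i.e., enough beats for the tap-every-second-beat task described below.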
Visual stimulation.
The visual stimulus was produced by a square matrix (3.7 cm x 3.7 cm) of blue light-emitting diodes (LEDs) located in front of the subjects at a distance of one meter. The LED square produced flashes at a frequency of 2.4 Hz (15 ms ON time) for the visual-only and synchronous audio-visual conditions. In the asynchronous audio-visual condition, the flash frequency was 2.6 Hz. An RX6 signal processing system (Tucker Davis Technologies, Alachua, FL, USA) was used to achieve sub-millisecond precision of stimulus presentation.
Task procedure
The experiment was divided into four blocks, each consisting of 20 trials. The first two blocks contained detection tasks that will not be discussed in this paper. The third block included 10 trials from each of the two single-modality synchronization conditions, and the fourth included 10 trials from each of the two multi-modal synchronization conditions. There were four different conditions: auditory-only (condition A), visual-only (condition B), synchronous audio-visual (condition C) and asynchronous audio-visual (condition D). Trials within each block were either A and B trials in a randomized order, or C and D trials in a randomized order. See Fig 1 for a detailed condition matrix. The presentation order of the trials was randomized within each block. Participants were instructed to tap their finger once every two beats while synchronizing with the sequence. They were specifically instructed to initiate their taps on the 3rd beat, providing time for participants to begin perceptually tracking the beat before they started tapping. The experiment was self-paced, and participants had to press the ‘enter’ key on a keyboard to launch the next trial. Throughout the task, regardless of stimulus modality, participants were instructed to maintain their gaze on the center of the LED square designed for presentation of the visual stimuli. Participants were also instructed to lift their index finger from the sensor between each tap, and to use the pad of their finger rather than the nail when tapping. There was a mandatory break after the 10th trial. Participants sat comfortably inside a double-walled audiometric booth, and their tap responses were recorded using a force-sensitive resistor (FSR) interfaced with the RX6 signal processing system.
Data preprocessing
Tap response latencies were determined from the recorded FSR signal using a threshold-crossing algorithm (RLW_events_level_trigger function from the LetsWave 6 Matlab toolbox [15]). A technical malfunction in the FSR signal recording code resulted in a single 1.46 s period of signal dropout at a variable location in most trials. To guard against incorrect tap detection times at the end of this period, any detected tap within a fixed time window after the dropout was discarded; this window was 75% of the instructed inter-tap interval, i.e., 0.75 × (2/2.4 s) = 625 ms. Additionally, each trial was manually inspected to remove any double taps or noise that had been flagged as taps. These countermeasures mitigated the impact of the signal dropout and ensured the reliability of the detected tap times.
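The guard-window rule can be stated compactly. Below is a minimal sketch, assuming the function name and argument layout (neither is from the paper); only the 75% window and the 2/2.4 s instructed inter-tap interval are taken from the text.

```python
def clean_taps(tap_times, dropout_end, stim_rate_hz=2.4, taps_per=2):
    """Discard any detected tap inside the guard window following a signal
    dropout. Sketch of the paper's rule; names/arguments are illustrative."""
    # 75% of the instructed inter-tap interval: 0.75 * (2 / 2.4) = 0.625 s.
    guard = 0.75 * (taps_per / stim_rate_hz)
    return [t for t in tap_times if not (dropout_end <= t < dropout_end + guard)]

# A tap 0.1 s after the dropout end falls inside the 625 ms window and is dropped.
kept = clean_taps([10.0, 10.3, 10.9, 11.5], dropout_end=10.2)
```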
Although participants were instructed to begin tapping on the third stimulus and then continue tapping every other stimulus (i.e., on the odd stimuli), in practice the participants often ended up tapping on the even stimuli, and on occasional trials tapped at intervals of 1× or 3× the stimulus interval. For this reason, on each trial the overall tapping interval was detected based on the nearest stimulus interval to the trial’s median inter-tap interval, and trials in which participants tapped every stimulus or every third stimulus were excluded from analysis (number of trials excluded for HC: auditory-only 6, visual-only 18, synchronous audio-visual 17, asynchronous audio-visual 4; for CI users: auditory-only 25, visual-only 13, synchronous audio-visual 33, asynchronous audio-visual 17). The overall tapping phase (tapping on the odd or even stimuli) was detected by convolving the stimulus times (odd or even) and response times with a Gaussian function, calculating the resulting correlation between the stimulus and response convolved series, and selecting the phase (even or odd) having the greater correlation. To allow participants to reach a stable state of synchronization, the first 5 response taps were discarded. Additionally, pauses during tapping were identified by calculating inter-tap intervals and detecting outliers (third quartile plus 3x the inter-quartile range); the tap at the end of any such outlier interval was discarded.
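The phase-detection step can be sketched as follows. This is an illustrative Python version of the Gaussian-convolution idea, assuming a kernel width and time-grid resolution that the paper does not specify.

```python
import numpy as np

def detect_phase(stim_times, tap_times, sigma=0.05, fs=100):
    """Decide whether taps align with the even- or odd-indexed stimuli by
    smoothing each event train with a Gaussian and picking the phase with
    the higher correlation. Sketch only: sigma (s) and the grid resolution
    fs (Hz) are assumptions, not the paper's values."""
    stim_times = np.asarray(stim_times, float)
    tap_times = np.asarray(tap_times, float)
    grid = np.arange(0.0, max(stim_times.max(), tap_times.max()) + 1.0, 1.0 / fs)

    def smooth(times):
        # Sum of Gaussians centered on each event time.
        return np.exp(-0.5 * ((grid[:, None] - times[None, :]) / sigma) ** 2).sum(axis=1)

    taps = smooth(tap_times)
    corrs = [np.corrcoef(smooth(stim_times[p::2]), taps)[0, 1] for p in (0, 1)]
    return int(np.argmax(corrs))  # 0 = taps on even stimuli, 1 = taps on odd stimuli
```

For slightly jittered taps on the odd stimuli of a 2.4 Hz train, the odd-phase correlation wins, and vice versa for even-stimulus taps.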
Data scoring
Synchronization consistency was assessed from tap latencies using circular statistics [16], providing a measure of synchronization consistency/precision (vector length) and synchronization accuracy (vector direction) for each trial, using a methodology comparable to that described in previous work by Iversen et al. [12] and Nozaradan et al. [17]. The choice to use circular statistics allowed us to focus on synchronization consistency for analysis, in contrast to asynchrony-based measures that conflate consistency and accuracy (i.e., phase preference). Tap latencies were converted to vector angles on a unit circle. The full circumference of the unit circle corresponds to one “beat”, i.e., twice the inter-stimulus interval with respect to the even or odd stimuli as detected in the previous step. Tapping consistency is measured by calculating the mean resultant vector length. The vector length varies between 0 and 1, with lower scores (near 0) representing lower precision and higher scores (near 1) representing higher precision. Prior to analysis, vector length values underwent a logit transformation to reduce data skewness, which is typical of synchronization data [18,19].
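The scoring above reduces to a few lines. The sketch below is not the authors' code: phases are measured relative to t = 0 for simplicity, whereas the paper references the detected even or odd stimulus set, and the epsilon guard in the logit is an added assumption.

```python
import numpy as np

def consistency(tap_times, stim_times, beats_per_tap=2):
    """Mean resultant vector length of tap phases on a unit circle whose
    full circumference spans `beats_per_tap` stimulus intervals.
    Sketch: phase is taken relative to t = 0 here."""
    tap_times = np.asarray(tap_times, float)
    period = beats_per_tap * np.median(np.diff(stim_times))
    angles = 2 * np.pi * np.mod(tap_times, period) / period
    # Length of the mean unit vector: 1 = perfectly consistent phase, 0 = uniform.
    return float(np.abs(np.mean(np.exp(1j * angles))))

def logit(r, eps=1e-6):
    """Logit transform applied to vector lengths before analysis;
    eps guards against r = 0 or 1 (an assumption, not from the paper)."""
    r = np.clip(r, eps, 1 - eps)
    return float(np.log(r / (1 - r)))
```

Taps placed exactly on every second stimulus give a vector length near 1, and the logit maps r = 0.5 to 0, stretching the compressed upper end of the scale.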
Analyses
Statistical analysis was performed using R Statistical Software (version 3.6.1; R Core Team, 2019; RRID:SCR_001905) [20]. A mixed-effects model was used to test the effects of group and task condition on tapping consistency scores (logit vector length values), using lmerTest (Kuznetsova et al., 2017; RRID:SCR_015656) [21] with lme4 (Bates et al., 2015; RRID:SCR_015654) [22]. Group and condition were parameterized using sum-to-zero effects coding, and model terms were entered for their main effects and interaction. A random effect of participant was added to account for the hierarchical structure of the data, and a random slope for condition was added to account for unequal variance in scores across conditions. Omnibus tests of the main effects and their interaction were performed on this model via the car package [23], yielding F and p values calculated from Wald tests using Kenward-Roger approximated degrees of freedom [24] and type 3 sums of squares. Following a significant interaction effect, the emmeans R package [25] was used to test post-hoc contrasts for the following comparisons of interest: unisensory visual synchronization between groups (VCI – VHC); unisensory auditory synchronization between groups (ACI – AHC); the difference between unisensory auditory and visual synchronization within each group (i.e., a potential auditory advantage; ACI – VCI and AHC – VHC); the difference in this auditory advantage between groups ((ACI – VCI) - (AHC – VHC)); the multisensory congruence effect of a synchronous auditory stimulus relative to the unisensory visual stimulus within each group (AVsyncCI – VCI and AVsyncHC – VHC); the difference between groups in this congruence effect ((AVsyncCI – VCI) - (AVsyncHC – VHC)); the multisensory effect of an interfering asynchronous auditory stimulus relative to the unisensory visual stimulus within each group (AVasyncCI – VCI and AVasyncHC – VHC); and the difference between groups in this interference effect ((AVasyncCI – VCI) - (AVasyncHC – VHC)). A Bonferroni correction for 11 multiple comparisons was applied to the results of these contrasts.
To explore the contribution of the onset and duration of pre-implantation deafness to task consistency among CI users, linear regressions were performed with the dependent variables of unisensory visual consistency (VCI), unisensory auditory consistency (ACI), and the multisensory congruence effect of synchronous auditory stimulus compared to unisensory visual stimulus (AVsyncCI – VCI). For each of these dependent variables, separate regressions were tested for the following predictors: age of pre-implantation deafness in years (for participants who were recorded as “pre-linguistic”, the value of 2 years was entered in the regression), and duration of deafness in years.
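Each of these per-predictor regressions is an ordinary least-squares fit with one predictor. The following is a minimal Python sketch (the paper used R); the data values shown are hypothetical illustrations, not study data.

```python
import numpy as np

def simple_regression(x, y):
    """Ordinary least-squares fit with a single predictor.
    Returns (slope, intercept). Sketch of the per-predictor regressions."""
    slope, intercept = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)
    return float(slope), float(intercept)

# Pre-linguistic deafness onsets are coded as 2 years, per the paper.
onset_age = [2, 2, 5, 12, 30, 45]                          # hypothetical values
visual_consistency = [0.40, 0.35, 0.30, 0.20, 0.10, 0.05]  # hypothetical values
slope, intercept = simple_regression(onset_age, visual_consistency)
```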
Results
A summary of performance by condition and group is provided in Table 2. Synchronization consistency varied by group and task condition (group x task interaction, F(4, 32.923) = 4.2137, p = .013). The results are presented on the basis of post-hoc contrasts examining specific comparisons of interest. Fig 2 shows the unisensory synchronization consistency results obtained with auditory and visual stimuli.
Thick bars indicate the consistency difference between auditory and visual conditions for each group. Analyses were performed on logit-transformed circular vector length; untransformed vector length values are indicated on the right axis for comparison. Higher values represent greater synchronization consistency.
Both CI users and HC exhibited superior synchronization to auditory stimuli compared to visual stimuli (CI: t(34.83) = 4.67, p < .001; HC: t(35.18) = 7.99, p < .001). Consistency on the visual condition did not differ significantly between the two groups (t(35.01) = 1.04, p ≅ 1); nor did consistency differ between groups in the auditory condition (t(35.01) = -1.97, p = .630). A trend towards greater advantage for auditory over visual synchronization was observed in HC compared to CI, although this trend did not reach statistical significance following Bonferroni correction (t(35.02) = -2.72, p = .110).
Fig 3 presents the multisensory consistency results obtained in synchronous and asynchronous conditions. When the auditory timing was congruent with visual timing (i.e., synchronous condition), HC demonstrated improved consistency compared to unisensory visual timing (t(35.14) = 6.56, p < .001), while CI users did not exhibit a better consistency in this condition (t(34.72) = 2.14, p = .438); in a direct comparison, this multisensory congruence effect was greater in HC than CI users (t(34.95) = -3.39, p = .019). In the asynchronous condition, the impact of incongruent auditory information was similar for both groups (t(34.95) = 0.28, p ≅ 1).
The left pair of thick bars indicates the increase in consistency when synchronous (congruent) auditory stimulation was added, compared to visual-only, in each group. The right pair of thick bars indicates the decrease in consistency when asynchronous (incongruent) auditory stimulation was added. Analyses were performed on logit-transformed circular vector length; untransformed vector length values are indicated on the right axis for comparison. Higher values represent greater synchronization consistency. Bars represent 95% confidence intervals of the mean.
Regression analyses within the CI user group found a marginal, non-significant relation between unisensory visual consistency and the age of deafness onset (b = -0.04, t(18) = -1.97, p = .06). As shown in Fig 4, although CI users whose deafness began after the age of 25 generally exhibited low visual synchronization consistency regardless of deafness duration, those with a combination of very early and prolonged deafness were among those with the highest consistency in the unisensory visual condition. No relation was found between consistency in the unisensory visual condition and years of pre-implantation deafness (t(18) = -0.02, p = .986), between consistency in the unisensory auditory condition and deafness age of onset (t(18) = 0.30, p = .769) or duration (t(18) = -1.27, p = .219), or between the multisensory congruence effect and deafness age of onset (t(18) = 1.16, p = .262) or duration (t(18) = -0.74, p = .467).
Consistency values below ~ -0.8 can be considered chance performance (i.e., non-significant circular Rayleigh test). Regression of consistency vs onset age: b = -0.04, t(18) = -1.97, p = .06. Bars represent 95% confidence intervals of the mean.
Discussion
This study investigated the integration of auditory and visual sensory modalities in CI users and HC during rhythmic synchronization tasks. The existing literature has emphasized the advantage of auditory stimuli over visual stimuli in rhythmic synchronization [26–29]. Our results find that this advantage holds in CI users for unisensory synchronization, but also reveal a nuanced pattern of cross-modal congruence and incongruence effects on synchronization consistency between CI users and HC.
As anticipated, both CI users and HC demonstrated superior consistency in auditory synchronization compared to visual synchronization. This result is in line with previous research reporting weaker consistency when synchronizing movement to visual flash-like stimuli [9]. We note that performance in the HC group was lower than in studies using a typical tapping synchronization interval of 600 ms with auditory tone or visual flash stimuli [12,30], most likely because of our longer tapping interval and the additional requirement to tap every two stimuli; however, those details were common across all of our synchronization conditions. The difference in performance between the unisensory visual and auditory conditions suggests that the visual synchronization advantage that deaf individuals typically develop in response to hearing deprivation, owing to their heavy reliance on the visual system in daily life [12], is generally absent in the CI participants of this study. However, the trend we observed for higher unisensory visual consistency in CI users with an early deafness onset suggests that this improved visual synchronization consistency may be limited to those whose pre-implantation deafness was both early and prolonged. Any underlying neural reorganization could also be restricted to this specific group, but a larger sample size is required to assess this trend more robustly. Additionally, most participants in this study can be considered late-deafened. In contrast, the participants in the cited study [12] were either born deaf or became deaf before the age of three years, and used American Sign Language as their primary language; both sign language use and early deafness have been shown to affect visual system synchronization with visual information [31] and reaction time to visual stimuli [32].
Furthermore, our results contribute to a more nuanced understanding of the impact of cross-modal congruence and incongruence on rhythmic synchronization consistency. HC displayed improved consistency when presented with audio-visual synchronous stimuli, consistent with the idea that cross-modal timing congruence enhances sensory integration and temporal processing [33,34]. In contrast, CI users did not exhibit a significant improvement in the audio-visual synchronous condition. Additionally, CI users exhibited a significantly higher rate of unsuccessful synchronization in both the unisensory auditory and multisensory synchronous conditions, while achieving a lower rate of unsuccessful synchronization in the unisensory visual and multisensory asynchronous conditions. This finding suggests that, while auditory integration poses challenges, visual processing may still confer advantages for CI users.
Previous research demonstrated that visual stimulation elicits activation in the auditory cortex of deaf individuals [35,36]. This phenomenon appears to extend to CI users, with several studies employing fMRI and EEG suggesting an incomplete reversal of the cortical reorganization induced by deafness [37–42]. Collectively, our findings, along with those from the literature, suggest that the incomplete reversal of neural organization after implantation alters the integration of auditory sensory information with visual information. This impairment may contribute to the diminished advantage of cross-modal congruence observed in CI users. However, evidence from animal models suggests that the presence of cross-modal plasticity in higher-order auditory areas does not compromise auditory responsiveness, indicating that cross-modal reorganization may be less detrimental to neurosensory restoration than previously believed [43].
The similarity in the interference from incongruent auditory information in the asynchronous condition between CI users and HC suggests that both groups are similarly susceptible to cross-modal incongruence, which can disrupt the precision of synchronization. This result underscores that the post-implantation neural reorganization does not confer immunity to incongruent auditory information but instead affects the prioritization and integration of sensory cues.
Since the CI electrodes can only stimulate a limited number of auditory nerve fibers, the spectral resolution of the information transmitted by the CI is limited [44]. However, as already reported in the literature, CI users’ temporal auditory perception remains in the same range as that of HC [45]. The relatively spared rhythm processing in CI users, as demonstrated by their auditory synchronization skills being comparable to those of HC, suggests that fine-grained timing is among the aspects of auditory processing that may remain intact, even following auditory deprivation and subsequent implantation. Furthermore, the observed shift in audio-visual integration among CI users, with reduced cross-modal congruence benefits, suggests that the auditory cortex may reallocate resources to prioritize auditory cues and may not efficiently integrate temporal information from visual stimuli.
Although the number of participants is typical for studies of CI users, the sample size may limit the sensitivity to determine, for example, whether auditory synchronization consistency is diminished in CI users, or whether there is an improvement in visually-paced synchronization of CI users with the addition of congruent auditory information, where both effects were non-significant in the present analyses. These open questions may be resolved in a larger study.
Conclusion
This research aimed to determine whether CI users retain a visual synchronization advantage from their pre-implant deafness, while maintaining auditory synchronization skills comparable to those of HC, or whether the neural reorganization post-implantation negates the visual synchronization advantage acquired pre-implantation. Despite impaired pitch processing [46], CI users exhibit relatively preserved rhythm processing. As expected, the auditory rhythmic synchronization abilities of CI users were found to be comparable to those of HC, corroborating existing research. Interestingly, unlike deaf individuals, CI users did not demonstrate a superior ability in synchronizing with visual rhythms, likely due to the relatively small number of participants who had early, prolonged pre-implantation deafness. This shift in audio-visual integration among CI users suggests that the post-deafness or post-implant reorganization of their auditory cortex might hinder the effective integration of temporal auditory input from the implant with visual information. Overall, this research provides valuable insights into the impact of cochlear implants on audio-visual synchronization abilities, suggesting a need for further investigation into the neural mechanisms involved in post-implantation reorganization and its effects on sensory integration. Despite changes in audio-visual integration following implantation, the broader perspective is that individuals with cochlear implants have access to the beautiful world of dance and rhythm, affirming their ability to engage and appreciate various forms of sensory experience.
Acknowledgments
The authors would like to thank all the participants for giving their time to take part in this study.
References
- 1. Lerdahl F, Jackendoff R. An overview of hierarchical structure in music. Music Percept. 1983;1(2):229–52.
- 2. Schaefer H-E. Music-evoked emotions-current studies. Front Neurosci. 2017;11:600. pmid:29225563
- 3. Marinberg N, Aviv V. Dancers’ somatic of musicality. Front Psychol. 2019;10:2681. pmid:31866897
- 4. Cumming R, Wilson A, Leong V, Colling LJ, Goswami U. Awareness of rhythm patterns in speech and music in children with specific language impairments. Front Hum Neurosci. 2015;9:672. pmid:26733848
- 5. Bonacina S, Krizman J, White-Schwoch T, Nicol T, Kraus N. Distinct rhythmic abilities align with phonological awareness and rapid naming in school-age children. Cogn Process. 2020;21(4):575–81. pmid:32607802
- 6. Kamiloğlu RG, Fischer AH, Sauter DA. Good vibrations: a review of vocal expressions of positive emotions. Psychon Bull Rev. 2020;27(2):237–65. pmid:31898261
- 7. Repp BH, Su Y-H. Sensorimotor synchronization: a review of recent research (2006-2012). Psychon Bull Rev. 2013;20(3):403–52. pmid:23397235
- 8. Thelen E. Rhythmical stereotypies in normal human infants. Anim Behav. 1979;27(Pt 3):699–715. pmid:556122
- 9. Patel AD, Iversen JR, Chen Y, Repp BH. The influence of metricality and modality on synchronization with a beat. Exp Brain Res. 2005;163(2):226–38. pmid:15654589
- 10. Whitton SA, Jiang F. Sensorimotor synchronization with visual, auditory, and tactile modalities. Psychol Res. 2023;87(7):2204–17. pmid:36773102
- 11. Braun Janzen T, Koshimori Y, Richard NM, Thaut MH. Rhythm and music-based interventions in motor rehabilitation: current evidence and future perspectives. Front Hum Neurosci. 2022;15:789467. pmid:35111007
- 12. Iversen JR, Patel AD, Nicodemus B, Emmorey K. Synchronization to auditory and visual rhythms in hearing and deaf individuals. Cognition. 2015;134:232–44.
- 13. Phillips-Silver J, Toiviainen P, Gosselin N, Turgeon C, Lepore F, Peretz I. Cochlear implant users move in time to the beat of drum music. Hear Res. 2015;321:25–34. pmid:25575604
- 14. Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008;9(4):255–66. pmid:18354398
- 15. Mouraux A, Iannetti GD. Across-trial averaging of event-related EEG responses and beyond. Magn Reson Imaging. 2008;26(7):1041–54. pmid:18479877
- 16. Fisher N. Statistical analysis of circular data. Cambridge: Cambridge University Press; 1993. https://doi.org/10.1017/CBO9780511564345
- 17. Nozaradan S, Peretz I, Keller PE. Individual differences in rhythmic cortical entrainment correlate with predictive behavior in sensorimotor synchronization. Sci Rep. 2016;6:20612. pmid:26847160
- 18. Kirschner S, Tomasello M. Joint drumming: social context facilitates synchronization in preschool children. J Exp Child Psychol. 2009;102(3):299–314. pmid:18789454
- 19. Sowiński J, Dalla Bella S. Poor synchronization to the beat may result from deficient auditory-motor mapping. Neuropsychologia. 2013;51(10):1952–63. pmid:23838002
- 20. R Development Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014 [cited 2023 Nov 8]. Available from: http://www.R-project.org/
- 21. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest package: tests in linear mixed effects models. J Stat Soft. 2017;82(13):1–26.
- 22. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Soft. 2015;67(1):1–48.
- 23. Fox J, Weisberg S. An R companion to applied regression. 3rd ed. Thousand Oaks (CA): Sage; 2019. Available from: https://socialsciences.mcmaster.ca/jfox/Books/Companion/
- 24. Halekoh U, Højsgaard S. A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models - the R package pbkrtest. J Stat Soft. 2014;59(9):1–32.
- 25. Lenth R, Singmann H, Love J, Buerkner P, Herve M. emmeans: estimated marginal means, aka least-squares means. R package version 4.0-3; 2019. Available from: http://cran.r-project.org/package=emmeans
- 26. Chen Y, Repp BH, Patel AD. Spectral decomposition of variability in synchronization and continuation tapping: comparisons between auditory and visual pacing and feedback conditions. Hum Mov Sci. 2002;21(4):515–32. pmid:12450682
- 27. Repp BH. Rate limits in sensorimotor synchronization with auditory and visual sequences: the synchronization threshold and the benefits and costs of interval subdivision. J Mot Behav. 2003;35(4):355–70. pmid:14607773
- 28. Repp BH, Penel A. Rhythmic movement is attracted more strongly to auditory than to visual rhythms. Psychol Res. 2004;68(4):252–70. pmid:12955504
- 29. Lorås H, Sigmundsson H, Talcott JB, Öhberg F, Stensdotter AK. Timing continuous or discontinuous movements across effectors specified by different pacing modalities and intervals. Exp Brain Res. 2012;220(3–4):335–47. pmid:22710620
- 30. Dalla Bella S, Foster NEV, Laflamme H, Zagala A, Melissa K, Komeilipoor N, et al. Mobile version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA): implementation and adult norms. Behav Res Methods. 2024;56(4):3737–56. pmid:38459221
- 31. Stroh A-L, Grin K, Rösler F, Bottari D, Ossandón J, Rossion B, et al. Developmental experiences alter the temporal processing characteristics of the visual cortex: evidence from deaf and hearing native signers. Eur J Neurosci. 2022;55(6):1629–44. pmid:35193156
- 32. Codina CJ, Pascalis O, Baseler HA, Levine AT, Buckley D. Peripheral visual reaction time is faster in deaf adults and British Sign Language interpreters than in hearing adults. Front Psychol. 2017;8:50. pmid:28220085
- 33. Hove MJ, Iversen JR, Zhang A, Repp BH. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome. Psychol Res. 2013;77(4):388–98. pmid:22638726
- 34. Barutchu A, Crewther DP, Crewther SG. The race that precedes coactivation: development of multisensory facilitation in children. Dev Sci. 2009;12(3):464–73. pmid:19371371
- 35. Bavelier D, Neville HJ. Cross-modal plasticity: where and how? Nat Rev Neurosci. 2002;3(6):443–52. pmid:12042879
- 36. Merabet LB, Pascual-Leone A. Neural reorganization following sensory loss: the opportunity of change. Nat Rev Neurosci. 2010;11(1):44–52. pmid:19935836
- 37. Sandmann P, Dillier N, Eichele T, Meyer M, Kegel A, Pascual-Marqui RD, et al. Visual activation of auditory cortex reflects maladaptive plasticity in cochlear implant users. Brain. 2012;135(Pt 2):555–68. pmid:22232592
- 38. Neville HJ, Schmidt A, Kutas M. Altered visual-evoked potentials in congenitally deaf adults. Brain Res. 1983;266(1):127–32. pmid:6850339
- 39. Finney EM, Fine I, Dobkins KR. Visual stimuli activate auditory cortex in the deaf. Nat Neurosci. 2001;4(12):1171–3. pmid:11704763
- 40. Bavelier D, Dye MWG, Hauser PC. Do deaf individuals see better? Trends Cogn Sci. 2006;10(11):512–8. pmid:17015029
- 41. Karns CM, Dow MW, Neville HJ. Altered cross-modal processing in the primary auditory cortex of congenitally deaf adults: a visual-somatosensory fMRI study with a double-flash illusion. J Neurosci. 2012;32(28):9626–38. pmid:22787048
- 42. Fine I, Finney EM, Boynton GM, Dobkins KR. Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. J Cogn Neurosci. 2005;17(10):1621–37. pmid:16269101
- 43. Land R, Baumhoff P, Tillein J, Lomber SG, Hubka P, Kral A. Cross-modal plasticity in higher-order auditory cortex of congenitally deaf cats does not limit auditory responsiveness to cochlear implants. J Neurosci. 2016;36(23):6175–85. pmid:27277796
- 44. Nguyen DL, Valentin O. Hearing loss: etiology, diagnosis and interventions. Can Acoust. 2023;50(4). Available from: https://jcaa.caa-aca.ca/index.php/jcaa/article/view/3553
- 45. Cazals Y, Pelizzone M, Saudan O, Boex C. Low-pass filtering in amplitude modulation detection associated with vowel and consonant identification in subjects with cochlear implants. J Acoust Soc Am. 1994;96(4):2048–54. pmid:7963020
- 46. Zhou N, Mathews J, Dong L. Pulse-rate discrimination deficit in cochlear implant users: is the upper limit of pitch peripheral or central? Hear Res. 2019;371:1–10. pmid:30423498