Repetition Suppression for Speech Processing in the Associative Occipital and Parietal Cortex of Congenitally Blind Adults

  • Laureline Arnaud,

    Affiliation Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada

  • Marc Sato,

    Affiliation Centre for Research on Brain, Language and Music, and GIPSA-lab, Centre national de la recherche scientifique and Grenoble Université, Grenoble, France

  • Lucie Ménard,

    Affiliations Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada, Département de Linguistique, Université du Québec à Montréal, Montréal, Canada

  • Vincent L. Gracco

    Affiliations Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada, School of Communication Sciences and Disorders, McGill University, Montréal, Canada, Haskins Laboratories, New Haven, Connecticut, United States of America

Abstract
In the congenitally blind (CB), sensory deprivation results in cross-modal plasticity, with visual cortical activity observed for various auditory tasks. This reorganization has been associated with enhanced auditory abilities and the recruitment of visual brain areas during sound and language processing. The questions we addressed are whether visual cortical activity might also be observed in CB during passive listening to auditory speech and whether cross-modal plasticity is associated with adaptive differences in neuronal populations compared to sighted individuals (SI). We focused on the neural substrate of vowel processing in CB and SI adults using a repetition suppression (RS) paradigm. RS has been associated with enhanced or accelerated neural processing efficiency and synchronous activity between interacting brain regions. We evaluated whether cortical areas in CB were sensitive to RS during repeated vowel processing and whether there were differences across the two groups. In accordance with previous studies, both groups displayed an RS effect in the posterior temporal cortex. In the blind, however, additional occipital, temporal and parietal cortical regions were associated with predictive processing of repeated vowel sounds. The findings suggest a more expanded role for cross-modal compensatory effects in blind persons during sound and speech processing and a functional transfer of specific adaptive properties across neural regions as a consequence of sensory deprivation at birth.

Introduction
In the congenitally blind (CB), numerous neuroimaging studies have demonstrated visual cortical activation to a wide range of sensory processing tasks including auditory change detection [1], spatial sound localization and discrimination [2][3], spoken language processing [4][5] and Braille reading [6]. Evidence for the functional nature of cross-modal activation of visual cortex in the blind comes from three different but related sources. Studies using transcranial magnetic stimulation of the visual cortex have demonstrated a causal link between occipital cortex activation and language tasks [7][8]. Studies of language processing have demonstrated graded activation patterns in response to increasing processing complexity [4], [6], [9], and behavioural results have yielded evidence of enhanced performance in tasks involving dichotic listening and attention [10], pitch detection [11], auditory localization [12], and speech perception [13][17]. From these results, it is possible, although speculative, that enhanced performance relative to sighted controls is partly linked to cross-modal differences in the CB and early blind (EB) compared to sighted individuals (SI). One issue not previously addressed in studies of cross-modal plasticity is whether visual cortical activity might also be recruited in the CB during passive auditory speech listening and whether cross-modal plasticity in the CB is associated with enhanced or expanded adaptive properties of the neuronal populations underlying the expanded activation. To this end, we used a repetition suppression (RS) paradigm to identify the neural substrate associated with passive listening to repeated vowels in CB and SI adults. Repetition suppression, the attenuation of neural response to repeated stimuli, has been observed in single-unit recordings in non-human primates [18] and in functional neuroimaging studies in humans [19].
Repetition suppression is associated with response priming and is used as a metric to examine the processing characteristics of neuronal populations [20][22]. Recent data [23] and theory [24] suggest that RS reflects a combination of attentional and predictive mechanisms (predictive coding) integrating top-down expectations with bottom-up sensory input [25]. While a number of theoretical models have been proposed to explain RS [20], [22], [26][27], all are associated with increased processing and information-encoding efficiencies related to repeated stimulus properties. Here we were interested in the extent to which within-modal and cross-modal activation during passive vowel processing would result in RS effects in the CB. Two groups of ten congenitally blind and ten sighted adults participated in a functional magnetic resonance imaging (fMRI) study. A sparse sampling acquisition technique was used in which participants passively listened to single vowel repetitions during the silent interval between successive image acquisitions. While both groups demonstrated RS effects to the passive vowel presentations in the temporal cortex, extended RS effects were observed in visual and parietal cortical regions in the CB. Together with the enhanced performance for sound processing reported in the literature, it appears that the expansion of cortical representation for speech, and increased processing efficiency within those recruited cortical areas, may be a hallmark of functional cross-modal reorganization in the CB.

Materials and Methods

Participants
Ten congenitally blind participants (4 females, mean age = 39 years, age range = 20–59 years) and ten sighted healthy adults (5 females, mean age = 35 years, age range = 22–59 years) participated in the study. There was no significant difference in age between the two groups (t = 0.356, p = 0.726). The CB participants had complete congenital visual impairment, classified as category 5 (no light perception), except for one participant who was classified as category 4 (light perception), with distance visual acuity worse than 20/1200. The cause of blindness was not obtained; however, during recruitment all blind participants reported that they had never been able to see.

All participants were native Canadian French speakers, right-handed, with no history of speech or hearing disorders. The experiment was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and the requirements of the Faculty of Medicine, McGill University. The experimental and consent procedures were approved by the Research Ethics Board of the Montreal Neurological Institute. All sighted subjects provided written consent. Blind subjects were presented with a Braille copy of the consent form and, after reading it, gave verbal consent.

Stimuli
The stimuli were multiple utterances of the French vowels /i/ and /y/, individually recorded from a native French Canadian male speaker in a sound-attenuated room. Seven clearly articulated tokens of each vowel were selected and digitized at a sampling rate of 44.1 kHz with 16-bit quantization. Using Praat software (Institute of Phonetic Sciences, University of Amsterdam, NL), the fundamental frequency (F0) and the first, second, and third formant frequencies (F1, F2, F3) were calculated for each vowel from a section of the vowel within ±25 ms of the maximum peak intensity. For the /i/ vowels, the mean F0, F1, F2, F3, peak intensity and duration values were 155 Hz (±8), 299 Hz (±9), 2287 Hz (±56), 3166 Hz (±35), 70 dB (±2) and 357 ms (±49), respectively. For the /y/ vowels, the mean F0, F1, F2, F3, intensity and duration values were 156 Hz (±8), 301 Hz (±7), 2061 Hz (±61), 2982 Hz (±103), 73 dB (±2) and 322 ms (±52), respectively.
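The acoustic measurements above were made in Praat; purely as an illustration of the underlying idea, the sketch below estimates F0 from the autocorrelation peak of a synthetic vowel-like signal. This is a minimal, standard-library toy, not the authors' pipeline; `estimate_f0` and the test signal are hypothetical.

```python
import math

def estimate_f0(samples, sr, fmin=110.0, fmax=300.0):
    """Estimate the fundamental frequency (Hz) by locating the
    autocorrelation peak within the lag range implied by [fmin, fmax]."""
    lag_min = int(sr / fmax)          # shortest candidate period (samples)
    lag_max = int(sr / fmin)          # longest candidate period (samples)
    n = len(samples) - lag_max        # fixed summation window
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(samples[i] * samples[i + lag] for i in range(n))
        if r > best_r:
            best_r, best_lag = r, lag
    return sr / best_lag

# Synthetic vowel-like signal: 155 Hz fundamental plus two weaker harmonics,
# matching the mean F0 reported for the /i/ tokens.
sr = 44100
x = [math.sin(2 * math.pi * 155 * i / sr)
     + 0.5 * math.sin(2 * math.pi * 310 * i / sr)
     + 0.25 * math.sin(2 * math.pi * 465 * i / sr)
     for i in range(int(0.1 * sr))]
f0_est = estimate_f0(x, sr)           # close to 155 Hz
```

Praat's own pitch and formant trackers are considerably more robust (e.g., candidate weighting and LPC-based formant analysis); the toy above only conveys why a periodic vowel yields a strong autocorrelation peak at its fundamental period.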

Procedure
The fMRI experiment consisted of two functional runs (63 trials per run) in which participants passively listened to French steady-state vowels (/i/ and /y/). A sparse sampling acquisition paradigm was used (e.g., [28][29]), with the speech stimuli or the resting condition presented in the silent interval (7 s) between volume acquisitions. In each run, the same vowel (/i/ or /y/) or the resting condition was presented in three sets of seven consecutive trials (see Figure 1 for details). This procedure allowed us to measure changes in the BOLD signal during repeated vowel processing. Blind and sighted participants were instructed to close their eyes, to pay attention to the auditory stimuli, and not to move during the experimental session.

Figure 1. Schematic of the experimental runs.

Each run lasted 10.5 minutes and included 63 trials (TR = 10 s; 7 s of silence). The /y/ and /i/ vowels, or rest, were presented in three sets of 7 consecutive presentations (one vowel or rest per TR, repeated 7 times), e.g., 3 repetitions of the sequence (i i i i i i i y y y y y y y baseline baseline baseline baseline baseline baseline baseline).
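The run structure described above can be expressed programmatically; a minimal sketch, assuming the block order shown in the caption and per-trial jitter drawn from {4, 5, 6} s (the `build_run` helper is hypothetical, not the authors' presentation code):

```python
import random

def build_run(seed=1):
    """One 63-trial run: three repetitions of the block sequence
    (7 x /i/, 7 x /y/, 7 x rest), one trial per 10-s TR.  Each vowel
    onset is jittered to fall 4, 5 or 6 s before the midpoint of the
    next volume acquisition; rest trials carry no stimulus."""
    rng = random.Random(seed)
    trials = []
    for _ in range(3):                           # three sets per run
        for condition in ("i", "y", "rest"):
            for _ in range(7):                   # seven consecutive trials
                jitter = rng.choice([4, 5, 6]) if condition != "rest" else None
                trials.append({"condition": condition,
                               "onset_to_midpoint_s": jitter})
    return trials

run = build_run()                                # 63 trials, 9 blocks of 7
```

The fixed block order makes each upcoming stimulus predictable within a block, which is exactly the property the repetition suppression analysis exploits.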

Data acquisition

Magnetic resonance images were acquired with a 1.5T whole-body MRI scanner (Siemens Sonata) and a standard head coil in the Brain Imaging Centre (BIC) at the Montreal Neurological Institute. Auditory stimuli were amplified (Rolls RA53b headphone amplifier) and presented to participants through MRI-compatible insert earphones (Sensimetrics S14) at a comfortable sound pressure level.

Functional images were obtained using a T2*-weighted echoplanar imaging (EPI) sequence with whole-brain coverage (TR = 10 s, acquisition time = 3000 ms, TE = 51 ms, flip angle = 90°). Each functional scan comprised thirty-five axial slices parallel to the anteroposterior commissural plane acquired in interleaved order (64×64 matrix; field of view: 256 mm²; 4×4 mm² in-plane resolution with a slice thickness of 4 mm without gap). A high-resolution T1-weighted whole-brain structural image was acquired for each participant after the second functional run (256×256 matrix; field of view: 256 mm²; sagittal volume of 256×256×176 mm³ with a 1 mm isotropic resolution; TR/TE = 22/9.2 ms with 30% partial echo; flip angle = 30°).

In each functional run and for each TR, the timing between the vowel onset and the midpoint of the following functional scan acquisition was randomly varied between 4 s, 5 s or 6 s. Each functional run was 10.5 minutes in length.

Data analyses

Data were analysed using the SPM5 software package (Wellcome Department of Imaging Neuroscience, Institute of Neurology, London, UK) running on Matlab (Mathworks, Natick, MA, USA). The maximum activation peaks for each cluster were labelled according to probabilistic cytoarchitectonic maps in the SPM Anatomy toolbox [30]. If a brain region was assigned a probability less than 50% or unspecified in the toolbox, the peak coordinates were converted from MNI space to Talairach space and the brain region identified with the Talairach Daemon [31].

The functional series was realigned for head movement. After segmentation of the T1 structural image and coregistration to the mean functional image, all functional images were spatially normalized into standard stereotaxic space of the Montreal Neurological Institute. All functional images were smoothed using an 8 mm FWHM Gaussian kernel.

A General Linear Model was used to analyse BOLD activity with regressors of interest related to the seven vowel repetitions and six realignment parameters with the silent trials forming an implicit baseline. The BOLD response for each event was modelled using a single-bin finite impulse response (FIR) basis function spanning the time of acquisition (3 s). Before estimation, high-pass filtering (cutoff of 128 s) was applied. Beta weights associated with the modelled FIR responses were then computed to fit the observed BOLD signal time course in each voxel for each condition. Individual statistical maps were calculated for each vowel repetition with the related baseline and subsequently used for group statistics.
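Because each event is modelled with a single-bin FIR basis, the design matrix effectively reduces to one indicator column per repetition position plus the six realignment parameters. The schematic sketch below illustrates that structure (the `design_matrix` helper is hypothetical, not SPM5 internals; high-pass filtering and beta estimation are omitted):

```python
def design_matrix(rep_positions, motion):
    """Build one design-matrix row per functional scan.

    rep_positions: repetition position (1-7) of the vowel preceding each
        scan, or None for rest scans (rest stays in the implicit baseline).
    motion: six realignment parameters per scan.

    Columns: 7 single-bin FIR indicators (R1..R7) followed by the 6
    motion regressors, i.e. 13 columns in total."""
    rows = []
    for pos, params in zip(rep_positions, motion):
        fir = [1.0 if pos == k else 0.0 for k in range(1, 8)]
        rows.append(fir + list(params))
    return rows

# Toy example: three scans following repetition 1, rest, and repetition 3.
X = design_matrix([1, None, 3], [[0.0] * 6] * 3)
```

Leaving rest scans out of the regressors of interest is what makes them the implicit baseline: their rows contribute no FIR activation, so each beta estimates activity for one repetition position relative to rest.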

A second-level random-effects group analysis was carried out. A mixed analysis of variance (ANOVA) was performed, with group (2 levels: blind and sighted participants) as a between-subjects factor and vowel repetition (7 levels: R1 to R7) as a within-subjects factor.

First, two t-contrasts were calculated to determine brain activity averaged across the seven vowel repetitions (i.e., irrespective of the RS) compared to the resting condition (mean effect of vowel perception: blind>rest and sighted>rest; false discovery rate corrected cluster and voxel levels of p<.001 and cluster extent of at least 30 voxels). To identify specific activity differences between the two groups, two t-contrasts were then calculated (main effect of group: blind>sighted participants and sighted>blind participants; corrected level of p≤.01 at the cluster level and uncorrected level of p<.001 at the voxel level, cluster extent of at least 30 voxels).

Second, in order to identify brain regions showing RS for repeated vowel processing, two t-contrasts were calculated to determine brain regions that showed a significant linear decrease in activity across the 7 vowel repetitions (RS effect: blind and sighted participants; corrected level of p≤.01 at the cluster level and uncorrected level of p<.001 at the voxel level, cluster extent of at least 30 voxels). Exclusive masking was used to identify voxels for which RS effects were not shared between the two groups. The SPM constituting the exclusive mask was thresholded at p<.05, whereas the contrast to be masked was thresholded at an uncorrected level of p<.001 at the voxel level and at a corrected level of p≤.01 at the cluster level, with a cluster extent of at least 30 voxels.
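The linear-decrease t-contrast corresponds to zero-sum weights that fall off across the seven repetitions, so a positive contrast value flags a voxel whose response attenuates with repetition. A schematic sketch (the weight scaling is illustrative and may differ from the exact SPM contrast vector):

```python
def linear_decrease_weights(n=7):
    """Zero-sum weights, largest for the first repetition.
    For n = 7 this yields [3, 2, 1, 0, -1, -2, -3]."""
    return [(n - 1) / 2 - k for k in range(n)]

def contrast_value(betas, weights):
    """Weighted sum of the per-repetition beta estimates for one voxel;
    positive values indicate a linear decrease (repetition suppression)."""
    return sum(b * w for b, w in zip(betas, weights))

w = linear_decrease_weights()
suppressing = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4]   # RS-like betas
flat = [1.0] * 7                                     # no adaptation
```

Because the weights sum to zero, a voxel with a constant response across repetitions produces a contrast value of zero and cannot drive the RS contrast.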

Results
Mean effect of vowel processing

Surface rendering of brain activity and maximum activation peaks of the mean effect of vowel processing (compared to the resting condition) for the blind and sighted participants are provided in Figure 2A and Tables 1 & 2. For both blind and sighted participants, auditory vowel processing induced large bilateral activation of the auditory cortex, including activity in the transverse temporal gyrus (primary/secondary auditory cortex) and in the posterior part of the superior temporal gyrus/sulcus. For blind participants, additional bilateral occipital activation was observed in the extrastriate visual cortex with maximum activation peaks located in the left middle occipital gyrus, in the right lingual and parahippocampal gyri and in the cuneus, bilaterally.

Figure 2. Surface rendering of brain activity for vowel processing.

2A: Surface rendering of brain activity for the mean effect of vowel processing for the blind (blue) and sighted (red) participants compared to rest (mean effect: false discovery rate corrected level of p<.001 and cluster extent of at least 30 voxels); horizontal sections show areas of activation from z = 0 to z = 20 in 5 mm increments. 2B: Surface rendering of brain activity for the main effect of group (group effect: corrected level of p≤.01 at the cluster level and uncorrected level of p<.001 at the voxel level, cluster extent of at least 30 voxels); horizontal sections show areas of activation from z = 0 to z = 20 in 5 mm increments.

Table 1. Mean effect of vowel processing compared to rest for blind participants (coordinates in MNI space).

Table 2. Mean effect of vowel processing compared to rest for sighted participants (coordinates in MNI space).

Main effect of group

Surface rendering of brain activity and maximum activation peaks of the main effect of group (blind vs. sighted participants) are provided in Figure 2B and Tables 3 & 4. The main effect of group revealed significant activation differences between blind and sighted participants during auditory vowel processing, with stronger neural responses for sighted participants in the right transverse and posterior superior temporal gyri as well as specific activity of the left extrastriate cortex (cuneus and middle occipital gyrus) for blind participants.

Table 3. Main effect of vowel processing – blind>sighted participants (coordinates in MNI space).

Table 4. Main effect of vowel processing – sighted>blind participants (coordinates in MNI space).

Repetition suppression effect

Surface rendering of brain activity and maximum activation peaks of the RS effect for the blind and sighted participants are provided in Figure 3A and Tables 5 & 6. As expected, RS was observed in the auditory cortex during repeated vowel processing. For sighted participants, a BOLD decrease across the 7 consecutive vowels was observed bilaterally in the posterior part of the superior temporal gyrus/sulcus and in the right posterior part of the middle temporal gyrus. For blind participants, RS was also observed in the right posterior part of the superior and middle temporal gyri, with RS activity extending dorsally to the ventral part of the supramarginal gyrus. Although no RS was observed in the left posterior superior temporal gyrus for blind participants with an extent threshold of 30 voxels, it should be noted that this region appears sensitive to RS with a lower threshold of 10 voxels (p<.001 uncorrected at the voxel level but not surviving a corrected threshold at the cluster level). Additional RS was observed for blind participants in the left fusiform gyrus and in the bilateral extrastriate visual cortex (with maximum activation peaks located in the cuneus), with RS activity extending into the supramarginal gyrus, the intraparietal sulcus and the superior parietal lobule.

Figure 3. Surface rendering of brain activity for the repetition suppression effect and related contrast estimates.

3A: Surface rendering of brain activity for the repetition suppression effect and related contrast estimates reflecting percentage BOLD signal decrease for the seven vowel repetitions in auditory, visual and parietal regions (RS effect: corrected level of p≤.01 at the cluster level and uncorrected level of p<.001 at the voxel level, cluster extent of at least 30 voxels); 3B: Surface rendering of the group effect and horizontal sections showing areas of activation for slices from z = 10 to z = 30 in 5 mm increments. Abbreviations: pSTG (posterior superior temporal gyrus); IPS (intraparietal sulcus); SMG (supramarginal gyrus).

Table 5. RS effect for blind participants (coordinates in MNI space).

Table 6. RS effect for sighted participants (coordinates in MNI space).

Exclusive masking was used to identify voxels for which RS effects were not shared between the two groups. This analysis confirmed a stronger RS effect in the occipital and parietal cortices in the blind than in the sighted (see Figure 3B). However, the left fusiform gyrus did not survive this masking procedure. Finally, no voxels survived the inverse masking (RS effects in the control subjects masked by RS effects in the blind subjects) at the same threshold.

Discussion
In the present study, CB and SI participants listened passively to short, repeated vowel sounds. The auditory stimulation resulted in bilateral activation in the transverse temporal and superior temporal gyri for both groups, consistent with speech processing. Consistent with previous studies on auditory, speech and language processing in the blind [1],[4][5],[9], passive vowel processing activated bilateral primary and associative extrastriate visual cortex in the CB participants but not in the SI.

Our main purpose, however, was to investigate the presence and extent of RS effects in the CB during passive listening. Repetition suppression effects were observed as a linear BOLD signal decrease across the 7 consecutive vowels in auditory processing regions along the posterior part of the superior and middle temporal gyri; these effects overlapped across groups in the right hemisphere but were present in the left hemisphere only for the SI. Previous studies in sighted participants have consistently shown RS sensitivity of similar posterior auditory brain areas classically involved in speech and phonological processing [32][33]. For blind participants, however, a more extensive distribution of RS effects was found. The expanded regions of suppression were observed in extrastriate regions including the left fusiform gyrus, as well as the bilateral intraparietal sulcus (IPS) and supramarginal gyrus (SMG), the latter area associated with phonological processing [34] and visual word recognition [35][36] in SI and with Braille reading in the blind [6]. The IPS, on the other hand, is involved in cross-modal interactions in SI, including cross-modal links in attention [37].

For the CB, enhanced performance relative to SI has been reported for a wide range of behaviours, from the ability to recognize rapid speech [13][14], [16][17] to detecting pitch change direction [11] to enhanced tactile acuity [38][40]. From these results, one possibility is that cross-modal plasticity in the CB is associated with more sensitive and/or more efficient processing of sensory signals, including speech. The more extensive RS effects in the CB compared to the SI are consistent with this interpretation. However, RS effects have been shown to be sensitive to attention, which can also enhance or accentuate perceptual expectation (prediction), yielding neural response attenuation [23]. In the current study, we did not control for attention but assume that attentional factors, given the task, were minimal. In addition, attributing the RS effects to attentional differences between the two groups is difficult to support given the known differences in behavioural performance in a range of auditory-based tasks in the CB compared to SI (see above). What can be suggested is that the RS differences during passive listening reflect an enhanced (more spatially extensive) obligatory predictive coding of sensory (auditory) input and cortico-cortical feedback [24][25], [27] in the CB. The enhanced neural processing would increase the sensitivity of the activated neurons by increasing their dynamic range, preventing saturation and increasing information-encoding efficiency [41].

Whether the RS results are more directly attributable to differences in sensory processing, to attention, or to a basic mechanism that underlies both (predictive coding), it is clear that the response of the CB participants differed from that of their sighted controls. Assuming that the current RS effects are representative of other auditory tasks for which CB perform better than SI, the current results suggest that cortical processing in CB may be optimized for auditory features, speech or otherwise. Moreover, enhanced RS effects or predictive coding may be a neural property differentiating processing in multiple cortical regions in the CB relative to SI. However, it should also be noted that previous reports of enhanced abilities in the CB have been based on active task performance. Since the present study used only passive listening, drawing a direct connection between the enhanced RS effects and behaviour is tenuous.

Interestingly, activation of the parietal cortex, especially in the area of the IPS, suggests that unimodal sensory input in the CB activates multimodal cortex. It has been suggested that the sensitivity of multisensory heteromodal cortex such as the IPS is modulated by back-projections from sensory cortices [37]. In the context of speech perception, for example, multimodal sensory input (auditory and visual) converges on heteromodal cortex, which then projects back to the sensory cortices to modulate their sensitivity. In the CB, however, this kind of interaction and modulation in heteromodal cortex during speech processing does not come from two different sensory inputs (auditory and visual) but from a single input (auditory). The speech input activates both the temporal cortex and the extrastriate cortex, and both project through the auditory and visual dorsal streams to the heteromodal parietal cortex. That is, cross-modal plasticity in the CB is used to support auditory and visual convergence in the absence of direct visual receptor input. Through bi-directional projections, the visual and auditory areas, driven by auditory input, could reinforce the predictive coding of stimulus input characteristics as a means to further enhance processing efficiency.

In summary, using the repetition of predictable speech stimuli, we were able to observe cross-modal neural processing differences in the congenitally blind that have not been reported previously. The present results, coupled with findings in the literature of superior processing and increased perceptual sensitivity in the CB, suggest that sensory deprivation from birth may be responsible for a cascade of compensatory effects engaging cross-modal integration in the absence of multimodal sensory input, resulting in enhanced and optimized predictive coding of sensory input. This computational framework is then used to enhance sensory processing and increase the sensitivity and capacity for encoding stimulus features in the CB.

Author Contributions

Conceived and designed the experiments: LA MS LM VG. Performed the experiments: LA MS. Analyzed the data: LA MS LM VG. Wrote the paper: VG MS LA LM.

References
  1. Kujala T, Huotilainen M, Sinkkonen J, Ahonen AI, Alho K, et al. (1995) Visual cortex activation in blind humans during sound discrimination. Neuroscience Letters 183: 143–146.
  2. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, et al. (2011) Functional specialization for auditory-spatial processing in the occipital cortex of congenitally blind humans. Proceedings of the National Academy of Sciences USA 108: 4435–4440.
  3. Gougoux F, Zatorre RJ, Lassonde M, Voss P, Lepore F (2005) A functional neuroimaging study of sound localization: Visual cortex activity predicts performance in early-blind individuals. PLoS Biology 3: e27.
  4. Burton H, Diamond JB, McDermott KB (2003) Dissociating cortical regions activated by semantic and phonological tasks: an fMRI study in blind and sighted people. Journal of Neurophysiology 90: 1965–1982.
  5. Röder B, Stock O, Bien S, Neville H, Rösler F (2002) Speech processing activates visual cortex in congenitally blind humans. European Journal of Neuroscience 16 (5): 930–936.
  6. Büchel C, Price C, Frackowiak RS, Friston K (1998) Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain 121: 409–419.
  7. Amedi A, Floel A, Knecht S, Zohary E, Cohen LG (2004) Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects. Nature Neuroscience 7: 1266–1270.
  8. Cohen LG, Celnik P, Pascual-Leone A, Corwell B, et al. (1997) Functional relevance of cross-modal plasticity in blind humans. Nature 389: 180–183.
  9. Bedny M, Pascual-Leone A, Dodell-Feder D, Fedorenko E, Saxe R (2011) Language processing in the occipital cortex of congenitally blind adults. Proceedings of the National Academy of Sciences USA 108: 4429–4434.
  10. Hugdahl K, Ek M, Takio F, Rintee T, Tuomainen J, et al. (2004) Blind individuals show enhanced perceptual and attentional sensitivity for identification of speech sounds. Cognitive Brain Research 19: 28–32.
  11. Gougoux F, Lepore F, Lassonde M, Voss P, Zatorre RJ, et al. (2004) Neuropsychology: Pitch discrimination in the early blind. Nature 430: 309.
  12. Weeks R, Horwitz B, Aziz-Sultan A, Tian B, Wessinger CM, et al. (2000) A positron emission tomographic study of auditory localization in the congenitally blind. Journal of Neuroscience 20: 2664–2672.
  13. Hertrich I, Dietrich S, Moss A, Trouvain J, Ackermann H (2009) Enhanced speech perception capabilities in a blind listener are associated with activation of fusiform gyrus and primary visual cortex. Neurocase 15: 163–170.
  14. Gordon-Salant S, Friedman SA (2011) Recognition of rapid speech by blind and sighted older adults. Journal of Speech, Language, and Hearing Research 54: 622–631.
  15. Ménard L, Dupont S, Baum SR, Aubin J (2009) Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America 126: 1406–1414.
  16. Trouvain J (2007) On the comprehension of extremely fast synthetic speech. Saarland Working Papers in Linguistics 1: 5–13.
  17. Hertrich I, Dietrich S, Ackermann H (2013) Tracking the speech signal – Time-locked MEG signals during perception of ultra-fast and moderately fast speech in blind and in sighted listeners. Brain & Language 124: 9–21.
  18. Desimone R (1996) Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences USA 93: 13494–13499.
  19. Henson RN, Rugg MD (2003) Neural response suppression, haemodynamic repetition effects, and behavioural priming. Neuropsychologia 41: 263–270.
  20. Gotts SJ, Chow CC, Martin A (2012) Repetition priming and repetition suppression: A case for enhanced efficiency through neural synchronization. Cognitive Neuroscience 3 (3–4): 227–259.
  21. Naccache L, Dehaene S (2001) The priming method: imaging unconscious repetition priming reveals an abstract representation of number in the parietal lobes. Cerebral Cortex 11: 966–974.
  22. Grill-Spector K, Henson R, Martin A (2006) Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Sciences 10 (1): 14–23.
  23. Larsson J, Smith AT (2011) fMRI repetition suppression: Neuronal adaptation or stimulus expectation? Cerebral Cortex.
  24. Friston K (2005) A theory of cortical responses. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences 360: 815–836.
  25. Rao RP, Ballard DH (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2: 79–87.
  26. Henson RN (2012) Repetition accelerates neural dynamics: In defense of facilitation models. Cognitive Neuroscience 3 (3–4): 240–241.
  27. Friston K (2012) Predictive coding, precision and synchrony. Cognitive Neuroscience 3 (3–4): 238–239.
  28. Gracco VL, Tremblay P, Pike GB (2005) Imaging speech production using fMRI. NeuroImage 26: 294–301.
  29. Grabski K, Schwartz JL, Lamalle L, Vilain C, Vallée N, et al. (2013) Shared and distinct neural correlates of vowel perception and production. Journal of Neurolinguistics 26 (3): 384–408.
  30. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, et al. (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage 25: 1325–1335.
  31. Lancaster JL, Woldorff MG, Parsons LM, Liotti M, Freitas CS, et al. (2000) Automated Talairach atlas labels for functional brain mapping. Human Brain Mapping 10 (3): 120–131.
  32. Hasson U, Nusbaum HC, Small SL (2006) Repetition suppression for spoken sentences and the effect of task demands. Journal of Cognitive Neuroscience 18 (12): 2013–2029.
  33. Vaden KI Jr, Muftuler LT, Hickok G (2010) Phonological repetition-suppression in bilateral superior temporal sulci. NeuroImage 49 (1): 1018–1023.
  34. Hartwigsen G, Baumgaertner A, Price CJ, Koehnke M, Ulmer S, et al. (2010) Phonological decisions require both the left and right supramarginal gyri. Proceedings of the National Academy of Sciences USA 107 (38): 16494–16499.
  35. Price CJ, Moore CJ, Humphreys GW, Wise RJ (1997) Segmenting semantic from phonological processes during reading. Journal of Cognitive Neuroscience 9: 727–733.
  36. Stoeckel C, Gough PM, Watkins KE, Devlin JT (2009) Supramarginal gyrus involvement in visual word recognition. Cortex 45 (9): 1091–1096.
  37. Calvert GA (2001) Cross-modal processing in the human brain: Insights from functional neuroimaging studies. Cerebral Cortex 11: 1110–1123.
  38. Bhattacharjee A, Ye AJ, Lisak JA, Vargas MG, Goldreich D (2010) Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind Braille readers. Journal of Neuroscience 30: 14288–14298.
  39. Norman JF, Bartholomew AN (2011) Blindness enhances tactile acuity and haptic 3-D shape discrimination. Attention, Perception & Psychophysics 73: 2323–2331.
  40. Alary F, Duquette M, Goldstein R, Elaine Chapman C, Voss P, et al. (2009) Tactile acuity in the blind: a closer look reveals superiority over the sighted in some but not all cutaneous tasks. Neuropsychologia 47: 2037–2043.
  41. Muller JR, Metha AB, Krauskopf J, Lennie P (1999) Rapid adaptation in visual cortex to the structure of images. Science 285: 1405–1408.