
Event-Related Alpha Suppression in Response to Facial Motion


  • Christine Girges, 
  • Michael J. Wright, 
  • Janine V. Spencer, 
  • Justin M. D. O’Brien

Abstract

While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in alpha power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, so participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting that early visual processing remains unaffected by manipulation paradigms. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect greater attentional reorienting to natural facial motion, but also involvement of areas associated with the visual control of body effectors.

Introduction

The visual system can reconstruct a perceptual scene from motion cues alone. For example, a human walker can be detected from just a dozen moving dots [1]. This perception of biological motion (BM) may underlie many aspects of social cognition [2]–[4]. Indeed, individuals can categorize genders, identify different people and recognize emotions from point-light walkers [5]–[8].

Neuroimaging studies suggest that BM is processed within the superior temporal sulcus (STS) [9]–[12]. This substrate is a convergence point for dorsal and ventral pathways, and has multimodal associations with the amygdala, fusiform gyrus, MT+/V5 and cerebellum [10], [13]–[16]. Electrophysiological investigations support these data, in addition to providing a timeline of processing events [17]–[21]. Jokisch et al. [22] compared event-related potentials (ERPs) evoked by upright, inverted and scrambled point-light walkers. The amplitudes of the early N170 component only differed between upright and inverted BM. This difference was not observed for the second negative peak (N300), which appeared to occur over the STS complex. The authors suggest that this late processing stage is specific to the visual analysis of BM. Krakowski et al. [23] report similar findings. Compared to scrambled motion, viewing BM produced a positive shift of the ERP between 100 and 200 ms. This was followed by negativity from 200 to 350 ms over posterior middle temporal regions. Source analysis indicated that the first phase was generated by substrates involved in motion processing (MT+/V5), whilst neuronal populations within and around the STS evoked the negative-going ERP [23].

Whilst BM refers to both body and facial movement, there is little research concerning the latter. It is crucial however to extend our investigations to include this stimulus class, especially if we consider its prominent role in socio-emotional communication [2]. The present study thus examined facial motion perception via analysis of changes within the EEG alpha band (8–12 Hz). Originating from occipital regions, alpha waves are suppressed during active visual perception [24], [25]. They appear to be synchronized with cyclic activity of the visual thalamic relay neurons, modulating signal transmission during early input stages [26]. Occipital alpha may also index memory processes, including those related to working memory loads and long-term stores [27]–[29]. Parieto-occipital alpha is also influenced by visual attention [30]–[33]. For example, alpha power is larger over visual cortices when attention is focused on the auditory part of an auditory-visual stimulus [34]. Participants also show an interhemispheric difference in alpha amplitudes during the Posner cueing paradigm [35]. The increase on the unattended side suggests alpha waves have a ‘gating mechanism’ [36]. Such a function may inhibit incoming sensory information in terms of its behavioral relevance [37], [38]. Others note that prestimulus alpha fluctuates with the excitability of the visual cortex and is predictive of an imminent perception [39], [40].

Many studies have investigated static face perception by observing alpha oscillations [41]–[44]. Emotional faces increase alpha amplitudes at posterior occipital locations, whilst angry face stimulation specifically activates substrates over electrodes T5, P3 and O2 [45]. Frontal alpha activity is also associated with previously formed concepts concerning negative facial expressions, suggesting that the fronto-thalamic system is involved in the perception and evaluation of facial expressions [46], [47]. Regarding general face perception, Hsiao et al. [48] found 4–25 Hz activity in the middle occipital and occipitotemporal areas when participants viewed upright faces. Inverted faces, however, produced the most alpha enhancement in the right occipitotemporal area, indicating additional attentional requirements and increased synchrony between neuronal populations. Sakihara et al. [49] also found alpha, theta and beta suppression occurring over occipitotemporal areas during familiar, unfamiliar and own-face perception. Such activity may illustrate the structural and semantic encoding of facial information [49], [50].

To our knowledge, no published EEG study has directly examined posterior alpha suppression in response to whole-face human motion. Participants therefore observed CGI averaged faces animated with human motion sequences. These exhibited both rigid (head rotations and translations) and non-rigid (facial expressions) motion, as well as speech expressions. Their task was to discriminate between successive motion captures during EEG recordings. Changes in alpha power were measured over parieto-occipital and occipital regions.

Further, the current study did not use inanimate (object) motion as a control. Such comparisons involve many unrestrained differences in low-level stimulus properties [51]. Instead, we used orientation-inverted and luminance-inverted facial stimuli, as these manipulations are known to affect face recognition. Inversion paradigms impair static face perception by disrupting configural processing [52]. Moreover, the brain may treat orientation-inverted faces as objects, considering the involvement of the lateral occipital area here [53]. Luminance-inverted faces also affect processing despite preserving normal face structure and spatial frequencies [54], [55]. These negative images disrupt the N170 face-selective component and therefore early structural encoding [56], [57]. Together, these measures comprise an effective tool in evaluating facial motion perception [58], [59].

Materials and Methods

Ethics Statement

Ethical approval was obtained from Brunel University (Psychology Ethics Committee). Participants were given a description of the study and written informed consent was obtained.

Participants

Nineteen individuals (9 male, 10 female, mean age = 28.53 years, range = 22–54 years) with normal or corrected-to-normal vision participated in this study. None of the sample had any history of neurological or psychological disorders.

Stimuli

The stimuli were taken from a video database developed by Hill and Johnston [59] using motion capture technology. Using markers placed on major facial landmarks, motion was captured from 12 actors reciting simple question-and-answer jokes. These jokes allowed natural facial expressions (non-rigid motion), speech and head movements (rigid motion) to be captured. The motion sequences were then applied to a three-dimensional computer-generated averaged head (derived from 100 men and 100 women) and output as 640×480 pixel, 25 frames-per-second movies (Figure 1; see Video S1 for a dynamic example). By using an average face for all sequences, facial motion could be measured independently of structural and surface-based facial cues. The appearances of all motion capture faces were therefore identical; they differed only in the way they moved. An orientation-inverted and a luminance-inverted version of each stimulus was generated in Matlab (MATLAB 7.10.0, Natick, Massachusetts: The MathWorks Inc., 2010) by manipulating and re-encoding the original stimulus video file.
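
The two control conditions amount to simple frame-by-frame image operations. The authors generated them in MATLAB; the sketch below is a NumPy equivalent, assuming 8-bit RGB frames (the example frame is hypothetical):

```python
import numpy as np

def orientation_invert(frame):
    """Vertical flip: turns the face upside down while preserving luminance."""
    return frame[::-1, :, :]

def luminance_invert(frame):
    """Contrast-polarity reversal (photographic negative) of an 8-bit frame;
    face structure and spatial frequencies are preserved."""
    return 255 - frame

# hypothetical 640x480 RGB frame, matching the stimulus movie resolution
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
upside_down = orientation_invert(frame)
negative = luminance_invert(frame)
```

Applying either operation twice recovers the original frame, which makes the manipulations easy to verify against the source video.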

Figure 1. Snapshot series demonstrating part of a facial motion sequence.

Procedure

Observers viewed the dynamic stimuli on a computer screen. Viewing distance was 80 cm, at which the 38 cm×30 cm display subtended approximately 28°×22° of visual angle. The experiment consisted of 3 blocks of 50 trials each: upright facial motion, orientation-inverted facial motion and luminance-inverted facial motion. Blocks were repeated three times in a counterbalanced order to avoid practice effects, fatigue or decreasing vigilance influencing the EEG waveform.
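
The quoted stimulus size follows from the standard visual-angle formula; a quick check (exact trigonometry gives values slightly below the rounded figures reported, consistent with the paper's "approximately"):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle subtended by a stimulus of a given size
    viewed at a given distance (both in the same units)."""
    return 2 * math.degrees(math.atan((size_cm / 2) / distance_cm))

width_deg = visual_angle_deg(38, 80)   # ~26.8 degrees
height_deg = visual_angle_deg(30, 80)  # ~21.2 degrees
```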

Participants completed a sequence discrimination task during the EEG recordings. This tested their ability to differentiate between facial motion sequences presented in a continuous series. All animations were presented for 3000 ms. A single animation was presented and, after an interstimulus interval (ISI) of 1000 ms, another animation appeared. During a second ISI, participants indicated via the keypad (according to pre-task instructions) whether the two animations were the same (press 1) or different (press 2). This process continued throughout the testing period, such that they always judged whether the current animation was the same as or different from the previous animation. The same format was used for all three conditions (upright, orientation-inverted and luminance-inverted).
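
The task is effectively a one-back comparison: every clip after the first is judged against its predecessor. A minimal sketch of the correct-response sequence (the 1 = same / 2 = different key mapping is from the text; the clip identifiers are hypothetical):

```python
def correct_keys(clip_ids):
    """Expected keypad responses for a continuous one-back series:
    1 = same as the previous animation, 2 = different."""
    return [1 if cur == prev else 2
            for prev, cur in zip(clip_ids, clip_ids[1:])]

keys = correct_keys(['m03', 'm03', 'm07', 'm01', 'm01'])
# -> [1, 2, 2, 1]
```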

ERP Recording and Analysis

EEGs were recorded with an average common reference from 64 Ag-AgCl electrodes. These were placed according to the International 10/20 system. A horizontal electrooculogram (EOG) was recorded from electrodes placed on the outer canthi of both eyes. Vertical EOG electrodes were placed above and below the middle of the left eye. Impedances did not exceed 10 kΩ. The EEG was amplified at a gain of 1000 and bandpass filtered at 0.1–100 Hz. It was digitized at 1000 Hz via a Synamps2 amplifier and Scan 4.4 acquisition and analysis software (Compumedics Neuroscan Ltd.).

Offline, a DC offset correction was applied to the raw waveform, and the time series was bandpass filtered at 0.1–128 Hz (24 dB/octave). A visual scan was conducted to mark ‘bad’ blocks, and eye blink artifacts were removed by a principal components procedure. Using the cleaned EEG, an event file was created and used to epoch the data for each condition from −100 to 923 ms (0 ms = stimulus onset). Sweeps were baseline corrected (entire sweep) and amplitudes greater than ±75 µV were rejected. An event-related band power analysis detected event-related frequencies within the alpha band. The data were band-pass filtered with a centre frequency of 10 Hz and a half bandwidth of 2 Hz (12 dB/octave) within a moving 100 ms window. The baseline, mid-point maximum and late-minimum amplitudes (and the latencies at which they occurred) were detected and analyzed.
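
The event-related band-power computation (band-pass around 10 Hz ± 2 Hz, squaring, 100 ms moving window) was performed in the Scan software. A minimal NumPy sketch of the same idea, substituting an FFT-mask band-pass for the 12 dB/octave IIR filter and using a hypothetical test epoch, might look like:

```python
import numpy as np

FS = 1000  # Hz, the digitization rate used in the study

def alpha_band_power(epoch, fs=FS, centre=10.0, half_bw=2.0, win_ms=100.0):
    """Band-pass the epoch around `centre` Hz, square it, then smooth the
    instantaneous power with a moving `win_ms` window."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    spectrum = np.fft.rfft(epoch)
    # zero all bins outside centre +/- half_bw (an idealized band-pass)
    spectrum[(freqs < centre - half_bw) | (freqs > centre + half_bw)] = 0.0
    filtered = np.fft.irfft(spectrum, n=epoch.size)
    power = filtered ** 2
    win = max(1, int(round(fs * win_ms / 1000.0)))
    return np.convolve(power, np.ones(win) / win, mode='same')

# hypothetical epoch spanning -100 to 923 ms: a unit 10 Hz oscillation + noise
rng = np.random.default_rng(0)
t = np.arange(-0.100, 0.923, 1.0 / FS)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
bp = alpha_band_power(epoch)
```

For a unit-amplitude 10 Hz sine, the smoothed band power away from the epoch edges settles near 0.5 (the mean of sin²), which is a useful sanity check for any reimplementation.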

Statistical Analysis

A repeated-measures ANOVA was used to test for differences between the alpha amplitudes elicited by each facial motion. Time sample (baseline, mid-point, late-minimum), sequence type (same, different), face type (upright, orientation-inverted, luminance-inverted), hemisphere (left, right) and electrode site were the within-participant factors. A Bonferroni correction was applied to post-hoc contrasts. A repeated-measures ANOVA (face type×hemisphere×electrode site) was used to analyze differences in the latencies of alpha amplitudes produced by each face type. As sequence type did not yield any significant main effects, data were collapsed across these levels.
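
The full design crossed five within-participant factors. As an illustration of the underlying computation only, a one-way repeated-measures F ratio (one factor, e.g. face type, measured in every participant; the data values are hypothetical) can be sketched as:

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, n_conditions) array of amplitudes.
    Returns (F, df_effect, df_error). Subject variance is removed from
    the error term, which is what makes the design repeated-measures."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_error = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_error / df_error), df_cond, df_error

# hypothetical alpha amplitudes: 3 participants x 3 face types
data = np.array([[1.0, 2.0, 3.0],
                 [2.0, 3.0, 5.0],
                 [1.0, 3.0, 3.0]])
F, df1, df2 = rm_anova_oneway(data)   # F(2, 4) = 14.8
```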

Results

The strongest alpha power was observed at parieto-occipital (PO7, PO5, PO3, PO8, PO6, PO4) and occipital (CB1, O1, O2, CB2) scalp locations. Amplitudes were observed at three time samples (baseline −100–0 ms, mid-point 300–500 ms, and late-minimum 600–823 ms). Observing data at three time samples allowed a more detailed analysis with regard to patterns of alpha activity post motion onset.

Grand Average Data

Data from one participant were excluded from statistical analysis due to technical faults with the EEG recording system. At parieto-occipital (PO) and occipital (O) sites, upright facial motion increased alpha power before suppressing it. This pattern of alpha activity did not occur for other stimuli (Table 1).

Table 1. Grand averaged amplitude and latency data for facial motion at PO and O sites.

Amplitude Data

Facial motion (regardless of type) suppressed alpha power, as indicated by significant differences between the time-sample amplitudes (O sites: F(2, 16) = 32.45, p<0.01 and PO sites: F(2, 16) = 52.95, p<0.01). For PO data, simple contrasts indicated a significant difference between the late-minimum interval and baseline for all facial motion (Upright F(1, 17) = 41.68, p<0.005; Luminance-inverted F(1, 17) = 90.71, p<0.005; Orientation-inverted F(1, 17) = 39.43, p<0.005). The difference between the recovery interval and baseline was significant for orientation-inverted faces only (F(1, 17) = 9.88, p<0.05). At PO and O electrodes, there was a significant main effect of face type on overall alpha power across the three time-samples (F(2, 16) = 3.97, p<0.05 and F(2, 16) = 4.67, p<0.05, respectively).

The amount of alpha suppression evoked by each facial motion only differed at PO sites, as revealed by a significant time-sample×face type interaction (F(4, 14) = 6.39, p<0.01). Simple contrasts showed that this interaction was driven by a significant difference between upright and orientation-inverted faces in the mid-point time interval only (F(1, 17) = 11.64, p<0.05). See Table 2 for a summary of significant main effects and interactions.

Table 2. Significant main effects and interactions at PO and O electrodes.

Latency Data

Differences in the latency of the peak alpha amplitudes were observed amongst the facial motion types (Table 3). Face type had a significant effect on the latency of the late-minimum amplitudes at PO sites (F(2, 34) = 3.44, p<0.05). Simple contrasts indicated that this was driven by a significant difference between upright and orientation-inverted facial motion (F(1, 17) = 6.27, p<0.05). Compared with other types, upright motion suppressed alpha at earlier latencies (733 ms vs. 755 ms for orientation-inverted stimuli). At O sites, the latencies of the mid-point amplitudes were significantly affected by face type (F(2, 34) = 4.57, p<0.05). Simple contrasts revealed a significant difference between the mid-point latencies for upright (443 ms) and orientation-inverted (467 ms) facial motion (F(1, 17) = 7.20, p<0.05).

Table 3. Latency of mid-point peak and minimum amplitudes at PO and O electrodes.

Discussion

Transient Alpha Increase

Unexpectedly, upright facial motion initially evoked an increase in alpha over parieto-occipital and occipital regions. The neural efficiency argument [60] provides one interpretation for this: given our perceptual expertise, less information processing is required for upright face perception [61]. This would certainly explain why the control faces evoked alpha suppression almost instantly; unfamiliar stimuli would require increased attentional effort and involvement of high-level cognitive resources [54], [62]. Yet this argument does not explain why upright facial motion subsequently suppressed alpha after this time point. Alternatively, the initial high alpha amplitude could reflect a ‘gating’ mechanism used to filter out irrelevant visual inputs [37], [38]. In this case, form cues provided no additional information and were thus ignored. The subsequent suppression would therefore correlate with attention to motion cues when engaging in facial motion tasks. It is important to note, however, that this transient increase in alpha following video onset could be due to the motion-onset ERP. As our data analysis was conducted using induced event-related bandpower measures, we cannot fully address this point. Future studies could potentially utilize evoked synchronization measures in order to observe a more distinct emergence of face-selective ERP components.

Posterior Activation in Facial Motion Perception

With reference to the amount of suppression evoked by each facial motion, no difference emerged at occipital locations. This suggests that early visual processing occurs irrespective of orientation or luminance-reversal [47], [63]. This finding is in contrast to studies of static face perception. For example, Itier and Taylor [56] report that inverted and negative faces affected early encoding, as demonstrated by a reduced N170 response. Further, in the context of encoding and retrieval mechanisms, occipital alpha is suppressed when participants perceive famous (and thus familiar) faces compared to non-famous faces [64]. The authors suggest face perception evokes interplay between semantic knowledge and episodic memory formation. In the case of biologically unfamiliar faces (e.g., orientation-inverted or luminance-inverted stimuli), we may expect less occipital alpha activity to occur. Yet, this effect was not found here. It is possible that early encoding processes remain unaffected by such visual manipulations, perhaps due to the detailed three-dimensional representation facial motion provides [65]. We are disinclined to accept this view however as many studies do report a disruption in perceiving inverted point-light figures [22], [66], [67]. To our knowledge, only one study has found a comparable response to upright and inverted walkers over the left occipital cortex [68].

By contrast, upright facial motion reduced alpha more than control stimuli at parieto-occipital regions. A study comparing the ERP response to upright and scrambled point-light walkers also reports differences emerging over this region [23]. In addition, stronger alpha suppression following BM perception has been noted over the parieto-occipital cortex [69]. This enhanced activity may reflect a number of significant underlying processes. First, the medial portion of the parieto-occipital cortex has been associated with attentional reorienting during cognitive-motor tasks [70]. Others extend this finding to the dorsal aspect of the parieto-occipital sulcus [71]. An increase in parieto-occipital activity thus suggests higher attentional effort allocated to perceiving upright facial motion. Second, the parieto-occipital cortex contains functional areas associated with the visual control of body effectors [72]. Regions of the superior and medial portions play a critical role in proximal and distal aspects of reaching/grasping movements, pointing gestures, head movements and eye-gaze shifts [73]–[75]. Motion selectivity has also been observed within this region [76], indicating dorsal visual stream involvement [77]. Perhaps observing upright facial motion, which included head and eye translations, activated a portion of these substrates. Further, it is possible that parieto-occipital electrodes are indirectly recording activity occurring within the posterior superior temporal sulcus (pSTS). One study which found face-selective ERPs occurring over the parieto-occipital cortex to facial motion supports this idea [78]. There is also evidence that the STS may actually extend into the parieto-occipital and occipital regions [79].

The larger amount of suppression evoked by upright facial motion also occurred within the shortest latency at parieto-occipital sites. Such early processing could reflect a pop-out effect caused by familiar orientations [22]. If this were the case, though, luminance-inverted faces would have also been processed just as quickly. Instead, automated feedforward systems may in part be responsible for the efficient processing of upright motion [9], [80]–[82]. Yet top-down computations should not be completely disregarded [83]. For example, body motion perception utilizes a feedforward and feedback functional loop between the right pSTS and left lateral cerebellum [15].

Implications of the Current Data

The results may have been influenced by sensorimotor alpha (mu rhythms) recorded over central electrodes. Mu rhythms index action planning and preparation within the somatosensory cortex [84], [85]. Their power is attenuated when one performs an action [86], but also during the observation of biological movements [87]–[89]. In the current study, participants responded via a button press after observing facial motion sequences. Such an experimental paradigm may have activated anterior systems. It should be noted, however, that central electrodes were analyzed and no significant effects were found.

In addition, we did not use inanimate or scrambled motion as a control. Thus, it remains unknown whether differential activations would have occurred for any stimuli presented in unfamiliar contexts. However, the manipulations we used are known to disrupt configural and holistic processing, meaning that control stimuli may be processed in a manner similar to objects [53]. We are disinclined then to suggest a role of familiarity, but instead that the parieto-occipital region shows selectivity to upright facial motion.

Supporting Information

Video S1.

Example of the facial motion animations used in this study. Please see Hill and Johnston [59] for further examples and a full description of how the stimuli were made.



Acknowledgments

We would like to thank Fabiola Kajo for her help with data collection.

Author Contributions

Conceived and designed the experiments: CG MJW JS JOB. Performed the experiments: CG. Analyzed the data: CG MJW JS JOB. Wrote the paper: CG MJW. Produced stimuli manipulations: JOB. Drafted the paper: CG MJW JS JOB.

References

  1. 1. Johansson G (1973) Visual perception of biological motion and a model for its analysis. Percept Psychophys 14: 201–11.
  2. 2. Pavlova MA (2012) Biological motion processing as a hallmark of social cognition. Cereb cortex 22(5): 981–95.
  3. 3. Miller LE, Saygin AP (2013) Individual differences in the perception of biological motion: Links to social cognition and motor imagery. Cognition 128(2): 140–148.
  4. 4. Blake R, Shiffrar M (2007) Perception of human motion. Annu Rev Psychology 58: 47–73.
  5. 5. Schouten B, Davila A, Verfaillie K (2013) Further explorations of the facing bias in biological motion perception: perspective cues, observer sex, and response times. PloS One 8(2): e56978.
  6. 6. Pollick FE, Kay JW, Heim K, Stringer R (2005) Gender recognition from point-light walkers. J Exp Psychol Hum Percept Perform 31(6): 1247–65.
  7. 7. Mcglothlin B, Jiacoletti D, Yandell L (2012) The Inversion Effect: Biological Motion and Gender Recognition. Psychol Res 17(2): 68–73.
  8. 8. Parron C, Da Fonseca D, Moore DG, Santos A, Monfardini E, et al. (2008) Recognition of biological motion in children with autistic spectrum disorders. Autism 12(3): 261–74.
  9. 9. Allison T, Puce A, McCarthy G (2000) Social perception from visual cues: role of the STS region. Trends Cogn Sci 4(7): 267–78.
  10. 10. Grossman ED, Jardine NL, Pyles JA (2010) fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS. Front Hum Neurosci, 4 (15): 1–16.
  11. 11. Vander Wyk BC, Voos A, Pelphrey KA (2012) Action representation in the superior temporal sulcus in children and adults: an fMRI study. Dev Cogn Neurosci 2(4): 409–16.
  12. 12. Vander Wyk BC, Hudac CM, Carter EJ, Sobel DM, Pelphrey KA (2009) Action understanding in the superior temporal sulcus region. Psychol Sci, 20(6), 771–7.
  13. 13. Herrington JD, Nymberg C, Schultz RT (2011) Biological motion task performance predicts superior temporal sulcus activity. Brain Cogn 77(3): 372–81.
  14. 14. Sadeh B, Podlipsky I, Zhdanov A, Yovel G (2010) Event-related potential and functional MRI measures of face-selectivity are highly correlated: a simultaneous ERP- fMRI investigation. Hum Brain Mapp 31(10): 1490–501.
  15. 15. Sokolov AA, Gharabaghi A, Tatagiba MS, Pavlova MA (2010) Cerebellar engagement in an action observation network. Cereb Cortex 20: 486–91.
  16. 16. Sokolov AA, Erb M, Gharabaghi A, Grodd W, Tatagiba MS, et al. (2012) Biological motion processing: the left cerebellum communicates with the right superior temporal sulcus. NeuroImage 59(3): 2824–30.
  17. 17. Hirai M, Fukushima H, Hiraki K (2003) An event-related potentials study of biological motion perception in humans. Neurosci Lett 344(1): 41–44.
  18. 18. Hirai M, Senju A, Fukushima H (2005) Hiraki K (2005) Active processing of biological motion perception: an ERP study. Cogn Brain Res 23: 387–96.
  19. 19. Hirai M, Hiraki K (2006a) The relative importance of spatial versus temporal structure in the perception of biological motion: an event-related potential study. Cognition, 99 (1): B15–B29.
  20. 20. Hirai M, Hiraki K (2006b) Visual search for biological motion: an event-related potential study. Neurosci Lett, 403 (3): 299–304.
  21. 21. Saunier G, Martins EF, Dias EC, de Oliveira JM, Pozzo T, et al. (2013) Electrophysiological correlates of biological motion permanence in humans. Behav Brain Res 236(1): 166–74.
  22. 22. Jokisch D, Troje NF, Koch B, Schwarz M, Daum I (2005) Differential involvement of the cerebellum in biological and coherent motion perception. Eur J Neurosci 21(12): 3439–46.
  23. 23. Krakowski AI, Ross LA, Snyder AC, Sehatpour P, Kelly SP, et al. (2011) The neurophysiology of human biological motion processing: A high-density electrical mapping study. NeuroImage 56(1): 373–83.
  24. 24. Berger H (1929) Uber das Elektrenkephalogramm des Menschen. Arch. Psychiatr. Nervenkr 87: 527–70.
  25. 25. Toscani M, Marzi T, Righi S, Viggiano MP, Baldassi S (2010) Alpha waves: a neural signature of visual suppression. Exp Brain Res 207(3–4): 213–9.
  26. 26. Lorincz ML, Kekesi KA, Juhasz G, Crunelli V, Hughes SW (2009) Temporal framing of thalamic relay-mode firing by phasic inhibition during the alpha rhythm. Neuron 63: 683–96.
  27. 27. Tuladhar AM, ter Huurne N, Schoffelen JM, Maris E, Oostenveld R, et al. (2007) Parieto-occipital sources account for the increase in alpha activity with working memory load. Hum Brain Mapp 28(8): 785–92.
  28. 28. Jokisch D, Jensen O (2007) Modulation of gamma and alpha activity during a working memory task engaging the dorsal or ventral stream. J Neurosci 27: 3244–3251.
  29. 29. Klimesch W (1997) EEG-alpha rhythms and memory processes. Int J Psychophysiol 26: 319–340.
  30. 30. Thut G, Nietzel A, Brandt SA, Pascual-Leone A (2006) Alpha-band electroencephalographic activity over occipital cortex indexes visuospatial attention bias and predicts visual target detection. J Neurosci 26(37): 9494–502.
  31. 31. Rihs TA, Michel CM, Thut G (2007) Mechanisms of selective inhibition in visual spatial attention are indexed by alpha-band EEG synchronization. Eur J Neurosci 25: 603–10.
  32. 32. Belyusar D, Snyder AC, Frey HP, Harwood MR, Wallman J, et al. (2013) Oscillatory alpha-band suppression mechanisms during the rapid attentional shifts required to perform an anti-saccade task. NeuroImage 65: 395–407.
  33. 33. Capotosto P, Babiloni C, Romani GL, Corbetta M (2009) Frontoparietal cortex controls spatial attention through modulation of anticipatory alpha rhythms. J Neurosci 29: 5863–872.
  34. 34. Foxe JJ, Simpson GV, Ahlfors SP (1998) Parieto-occipital approximately 10 Hz activity reflects anticipatory stat of visual attention mechanisms. NeuroReport 9: 3929–3933.
  35. 35. Kelly SP, Lalor EC, Reilly RB, Foxe JJ (2006) Increases in alpha oscillatory power reflect an active retinotopic mechanism for distracter suppression during sustained visuo- spatial attention. J Neurophysiol, 95 (6): 3844–51.
  36. 36. Pfurtscheller G, Lopes da Silva FH (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 110(11): 1842–1857.
  37. 37. May ES, Butz M, Kahlbrock N, Hoogenboom N, Brenner M, et al. (2012) Pre- and post-stimulus alpha activity shows differential modulation with spatial attention during the processing of pain. NeuroImage 62(3): 1965–74.
  38. 38. Snyder AC, Foxe JJ (2010) Anticipatory attentional suppression of visual features indexed by oscillatory alpha-band power increases: a high-density electrical mapping study. J Neurosci 30(11): 4024–32.
  39. 39. Romei V, Brodbeck V, Michel C, Amedi A, Pascual-Leone A, et al. (2008) Spontaneous fluctuations in posterior alpha-band EEG activity reflect variability in excitability of human visual areas. Cereb Cortex 18: 2010–2018.
  40. 40. Van Dijk H, Schoffelen JM, Oostenveld R, Jensen O (2008) Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability. J Neurosci 28: 1816–1823.
  41. 41. Başar E, Güntekin B, Öniz A (2006) Principles of oscillatory brain dynamics and a treatise of recognition of faces and facial expressions. Prog Brain Res. 159, 43–62.
  42. 42. Başar E, Schmiedt-Fehr C, Öniz A, Başar-Eroğlu C (2008) Brain oscillations evoked by the face of a loved person. Brain Res 1214: 105–15.
  43. Balconi M, Pozzoli U (2008) Event-related oscillations (ERO) and event-related potentials (ERP) in emotional face recognition. Int J Neurosci 118(10): 1412–24.
  44. Balconi M, Mazza G (2009) Brain oscillations and BIS/BAS (behavioral inhibition/activation system) effects on processing masked emotional cues. ERS/ERD and coherence measures of alpha band. Int J Psychophysiol 74(2): 158–65.
  45. Güntekin B, Basar E (2007) Emotional face expressions are differentiated with brain oscillations. Int J Psychophysiol 64(1): 91–100.
  46. Kostandov EA, Kurova NS, Cheremushkin EA, Petrenko NE (2007) Dynamics of the spatial organization of cortical electrical activity during the formation and actualization of a cognitive set to facial expressions. Zh Vyssh Nerv Deyat 57(1): 33–42.
  47. Kostandov ÉA, Kurova NS, Cheremushkin EA, Petrenko NE, Ashkinazi ML (2010) Spatial synchronization of EEG theta and alpha rhythms in an unconscious set to the perception of an emotional facial expression. Neurosci Behav Physiol 40(2): 197–204.
  48. Hsiao FJ, Lin YY, Hsieh JC, Wu ZA, Ho LT, et al. (2006) Oscillatory characteristics of face-evoked neuromagnetic responses. Int J Psychophysiol 61(2): 113–20.
  49. Sakihara K, Gunji A, Furushima W, Inagaki M (2012) Event-related oscillations in structural and semantic encoding of faces. Clin Neurophysiol 123(2): 270–7.
  50. Gunji A, Inagaki M, Inoue Y, Takeshima Y, Kaga M (2009) Event-related potentials of self-face recognition in children with pervasive developmental disorders. Brain Dev 31: 139–47.
  51. George N, Dolan RJ, Fink GR, Baylis GC, Russell C, et al. (1999) Contrast polarity and face recognition in the human fusiform gyrus. Nat Neurosci 2(6): 574–80.
  52. Valentine T (1988) Upside-down faces: a review of the effect of inversion on face recognition. Br J Psychol 79: 471–491.
  53. Pitcher D, Duchaine B, Walsh V, Yovel G, Kanwisher N (2011) The role of lateral occipital face and object areas in the face inversion effect. Neuropsychologia 49(12): 3448–53.
  54. Kemp R, Pike G, White P, Musselman A (1996) Perception and recognition of normal and negative faces: the role of shape from shading and pigmentation cues. Perception 25: 37–52.
  55. Taubert J, Alais D (2011) Identity aftereffects, but not composite effects, are contingent on contrast polarity. Perception 40(4): 422–436.
  56. Itier RJ, Taylor MJ (2002) Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs. NeuroImage 15: 353–372.
  57. Tomalski P, Johnson MH (2012) Cortical sensitivity to contrast polarity and orientation of faces is modulated by temporal-nasal hemifield asymmetry. Brain Imaging Behav 6(1): 88–101.
  58. Thornton IM, Mullins E, Banahan K (2011) Motion can amplify the face-inversion effect. Psihologija 44: 5–22.
  59. Hill H, Johnston A (2001) Categorizing sex and identity from the biological motion of faces. Curr Biol 11(11): 880–5.
  60. Gauthier I, Tarr MJ (1997) Becoming a “Greeble” expert: exploring mechanisms for face recognition. Vision Res 37(12): 1673–82.
  61. Diamond R, Carey S (1986) Why faces are and are not special: an effect of expertise. J Exp Psychol Gen 115: 107–17.
  62. Cole HW, Ray WJ (1985) EEG correlates of emotional tasks related to attentional demands. Int J Psychophysiol 3: 33–41.
  63. Goffaux V, Gauthier I, Rossion B (2003) Spatial scale contribution to early visual differences between face and object processing. Cogn Brain Res 16: 416–424.
  64. Zion-Golumbic E, Kutas M, Bentin S (2010) Neural dynamics associated with semantic and episodic memory for faces: evidence from multiple frequency bands. J Cogn Neurosci 22(2): 263–77.
  65. O’Toole AJ, Roark DA, Abdi H (2002) Recognizing moving faces: a psychological and neural synthesis. Trends Cogn Sci 6(6): 261–66.
  66. Grossman ED, Blake R (2001) Brain activity evoked by inverted and imagined biological motion. Vision Res 41(10–11): 1475–82.
  67. Hirai M, Chang DHF, Saunders DR, Troje NF (2011) Body configuration modulates the usage of local cues to direction in biological-motion perception. Psychol Sci 22(12): 1543–9.
  68. Pavlova M, Lutzenberger W, Sokolov A, Birbaumer N (2004) Dissociable cortical processing of recognizable and non-recognizable biological movement: analysing gamma MEG activity. Cereb Cortex 14(2): 181–188.
  69. Perry A, Bentin S, Shalev I, Israel S, Uzefovsky F, et al. (2010) Intranasal oxytocin modulates EEG mu/alpha and beta rhythms during perception of biological motion. Psychoneuroendocrinology 35(10): 1446–53.
  70. Ciavarro M, Ambrosini E, Tosoni A, Committeri G, Fattori P, et al. (2013) rTMS of medial parieto-occipital cortex interferes with attentional reorienting during attention and reaching tasks. J Cogn Neurosci 25(9): 1453–62.
  71. Tosoni A, Shulman GL, Pope AL, McAvoy MP, Corbetta M (2013) Distinct representations for shifts of spatial attention and changes of reward contingencies in the human brain. Cortex 49: 1733–1749.
  72. Monaco S, Cavina-Pratesi C, Sedda A, Fattori P, Galletti C, et al. (2011) Functional magnetic resonance adaptation reveals the involvement of the dorsomedial stream in hand orientation for grasping. J Neurophysiol 106: 2248–63.
  73. Rossit S, Fraser JA, Teasell R, Malhotra PA, Goodale MA (2011) Impaired delayed but preserved immediate grasping in a neglect patient with parieto-occipital lesions. Neuropsychologia 49(9): 2498–504.
  74. Fattori P, Raos V, Breveglieri R, Bosco A, Marzocchi N, et al. (2010) The dorsomedial pathway is not just for reaching: grasping neurons in the medial parieto-occipital cortex of the macaque monkey. J Neurosci 30(1): 342–9.
  75. Tikhonov A, Haarmeier T, Thier P, Braun C, Lutzenberger W (2004) Neuromagnetic activity in medial parieto-occipital cortex reflects the perception of visual motion during eye movements. NeuroImage 21(2): 593–600.
  76. Stiers P, Peeters R, Lagae L, Van Hecke P, Sunaert S (2006) Mapping multiple visual areas in the human brain with a short fMRI sequence. NeuroImage 29(1): 74–89.
  77. Blanke O, Landis T, Safran AB, Seeck M (2002) Direction-specific motion blindness induced by focal stimulation of human extrastriate cortex. Eur J Neurosci 15(12): 2043–48.
  78. Puce A, Smith A, Allison T (2000) ERPs evoked by viewing facial movements. Cogn Neuropsychol 17(1): 221–39.
  79. Matsumoto R, Ikeda A, Nagamine T, Matsuhashi M, Ohara S, et al. (2004) Subregions of human MT complex revealed by comparative MEG and direct electrocorticographic recordings. Clin Neurophysiol 115(9): 2056–65.
  80. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, et al. (2000) Brain areas involved in perception of biological motion. J Cogn Neurosci 12: 711–20.
  81. Lehky SR (2000) Fine discrimination of faces can be performed rapidly. J Cogn Neurosci 12: 848–855.
  82. Kawasaki H, Tsuchiya N, Kovach CK, Nourski KV, Oya H, et al. (2012) Processing of facial emotion in the human fusiform gyrus. J Cogn Neurosci 24(6): 1358–70.
  83. Grinter EJ, Maybery MT, Badcock DR (2010) Vision in developmental disorders: is there a dorsal stream deficit? Brain Res Bull 82(3–4): 147–60.
  84. Pineda JA (2005) The functional significance of mu rhythms: translating “seeing” and “hearing” into “doing”. Brain Res Rev 50(1): 57–68.
  85. Mizuhara H (2012) Cortical dynamics of human scalp EEG origins in a visually guided motor execution. NeuroImage 62(3): 1884–95.
  86. Gastaut H (1952) Etude électrocorticographique de la réactivité des rythmes rolandiques [Electrocorticographic study of the reactivity of rolandic rhythms]. Revue Neurologique 87(2): 176–182.
  87. Ulloa ER, Pineda JA (2007) Recognition of point-light biological motion: mu rhythms and mirror neuron activity. Behav Brain Res 183(2): 188–94.
  88. Di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G (1992) Understanding motor events: a neurophysiological study. Exp Brain Res 91: 176–80.
  89. Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27: 169–192.