
Neural correlates of facial expression processing during a detection task: An ERP study

  • Luxi Sun ,

    Contributed equally to this work with: Luxi Sun, Jie Ren

    Affiliation School of Economics and Business Administration, Chongqing University, Chongqing, China

  • Jie Ren ,

    Contributed equally to this work with: Luxi Sun, Jie Ren

    Affiliation Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China

  • Weijie He

    hweijie@foxmail.com

    Affiliation Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China

Abstract

Given finite attentional resources, the extent to which emotional aspects of stimuli are processed automatically is controversial. The present study examined the time course of automatic facial expression processing by assessing the P1, N170, and late positive potential (LPP) components of event-related potentials (ERPs) using a modified rapid serial visual presentation (RSVP) paradigm. Observers were required to identify which house image had been shown and to detect whether a face image was presented at the end of a series of pictures. There was no significant main effect of emotional type on P1 amplitudes, whereas happy and fearful expressions elicited larger N170 amplitudes than neutral expressions. LPP amplitudes differed significantly across the three types of emotional facial expressions (fear > happy > neutral). These results indicate that a threat-processing priority was absent, but expressive faces were discriminated from neutral faces in this implicit emotional task at approximately 250 ms post-stimulus; the three types of expressions were further discriminated during later stages of processing. Encoding of the emotional information in faces can therefore be automatic to a relatively high degree when attentional resources are largely allocated to superficial analysis.

Introduction

Facial expression processing occurs in many situations, and people are both highly efficient and fast at identifying emotional information from others’ expressions [1]. Furthermore, this processing also occurs when emotion-related content is not required by the task. Incoming expression signals elicit unintentional responses; hence, processing results in automatic extraction of emotional information [2–4].

Electrophysiological data reveal that visual evoked potentials are sensitive to the emotional content of facial expressions at very early stages of processing [5, 6]. Additionally, emotional stimuli elicit greater visual cortex activation than neutral stimuli during passive viewing [7, 8]. This activation is associated with “emotional attention,” defined as a predisposition to spontaneously devote all processing resources to emotional information [9, 10].

Rellecke and colleagues [11] used a face–word discrimination task to show that emotionality in faces, but not in words, can induce activity indexed by the early posterior negativity (EPN). The same group assessed the automaticity of facial expression processing by systematically comparing ERP differences elicited by three kinds of faces under various depth-of-processing conditions [12]. They found that enhanced automatic encoding of angry faces was indexed by the P1, N170, and EPN during early processing stages, whereas enhanced encoding of happy expressions occurred later and depended on participants’ intentions. These findings contribute to our understanding of the cognitive processing of expressions.

However, we aimed to create experimental conditions that minimize observers’ intentional allocation of attentional resources to each expression, because expressions are usually perceived rapidly and unconsciously. We therefore designed an experimental task requiring rapid face detection: a target stimulus is briefly presented as a component of a stimulus stream, and observers determine only whether it was presented, thereby enabling more automatic access to the expressive content of the target face. A rapid serial visual presentation (RSVP) paradigm met our criteria, as stimuli are presented sequentially and rapidly (approximately 100 ms/item). When the interval between two targets is short (200–500 ms), detection of the second target is impaired by the first target, a phenomenon called the attentional blink [13, 14]. During the attentional blink period, processing resources are inadequate for recognizing all facial expression information.
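
To make the timing concrete, the brief sketch below (Python; the 100 ms/item rate and the lag values are illustrative numbers taken from the description above, not parameters of the present experiment) computes the T1–T2 onset asynchrony for a given lag and checks whether it falls inside the 200–500 ms attentional blink window.

    # Illustrative only: relate the RSVP item duration and the T1-T2 lag to the
    # attentional-blink window described in the text.
    ITEM_DURATION_MS = 100          # approximately 100 ms per RSVP item
    AB_WINDOW_MS = (200, 500)       # attentional-blink range cited above

    def t1_t2_soa(lag, item_ms=ITEM_DURATION_MS):
        """Onset asynchrony (ms) between T1 and T2 when T2 appears `lag` items after T1."""
        return lag * item_ms

    for lag in range(1, 6):
        soa = t1_t2_soa(lag)
        inside = AB_WINDOW_MS[0] <= soa <= AB_WINDOW_MS[1]
        print(f"lag {lag}: SOA = {soa} ms, within attentional blink: {inside}")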

Facial expression categorization has been assessed [15] using an RSVP paradigm, revealing three processing stages: fear pop-up, emotional/non-emotional discrimination, and complete separation. A subsequent study described similar phases in emotional adjective processing [16]. Both studies indicated that emotion processing is spontaneous, is not significantly affected by competing stimuli, and does not require attentional resources.

The current study extended these findings by testing, in healthy participants, whether the stages identified with the RSVP paradigm emerge even when the task requires no emotion categorization. The present study differs from previous ones in that observers were instructed only to detect whether an upright face appeared in the stimulus sequence. The N1, P1, N170, vertex positive potential (VPP), N3, and P3 components have been used in previous experiments to index successive stages of activity. In the present study, we used a simplified set of indices for decoding facial expression information; only P1, N170, and LPP were analyzed. P1 was retained rather than N1 because N1 is more sensitive to attentional manipulations demanding feature discrimination rather than detection [17, 18]; N170 and VPP are two poles of the same dipole, so we chose only N170 to represent the next processing stage. Additionally, the LPP reflects brain electrical activity during both automatic and controlled attentional processing of emotional information.

P1 is a positive-going ERP component that occurs primarily over bilateral occipital areas, with an onset latency between 60 and 90 ms and a peak at approximately 100–130 ms post-stimulus. An early effect of facial expression has been reported, with distinct P1 responses to fearful compared with neutral faces [2, 19, 20]; this effect is interpreted as reflecting processing of the coarse features of stimuli. Moreover, magnetoencephalographic and electrophysiological evidence have demonstrated that P1 is affected by facial emotion, providing evidence for rapid facial emotion processing [5, 21, 22].

N170 is a negative-going component detected at 120–220 ms, peaking at approximately 170 ms post-stimulus at lateral occipito-temporal electrodes. N170 components elicited over the right hemisphere have larger amplitudes than those over the left hemisphere [23]. The main feature of this electrophysiological component is a greater response to faces than to other kinds of stimuli [24, 25]. Some studies have found that facial expressions modulate the activity indexed by N170 [2, 5].

The LPP is a sustained positivity occurring at approximately 400–600 ms after stimulus onset that is larger for emotional than for neutral images [26, 27]. Because such stimuli automatically capture attention and are preferentially processed by the brain, LPPs are considered an index of motivated attention [28, 29]. Furthermore, LPPs indicate more elaborate emotion-related processing, such as conscious evaluation [30] and perceptual analysis [31], and are also related to high-level recognition processing. For example, Langeslag and colleagues [32] proposed that LPPs are related to an individual’s experiences, as they were larger in response to the observer’s beloved than to a friend or an unfamiliar person. Electro-cortical responses to emotional stimuli were not influenced by task difficulty [29], providing evidence that emotional stimuli automatically receive attentional processing resources. Furthermore, LPPs may also be related to a rapid and dynamic course of attention to emotional stimuli [33].

The automaticity of emotional processing has often been examined by manipulating concurrent task demands. Erk, Abler, and Walter [34] reported that a demanding, distracting task decreased neural activity related to emotional involvement but did not affect neural activity related to processing of the emotional stimuli themselves, indicating that participants could process emotional stimuli to some extent while distracted. Therefore, in the current study, we hypothesized that if different facial expressions elicited different spontaneous processing that was unaffected by a competing target and did not require attentional resources, this processing should be classified as automatic. Participants would then automatically process more information than the pre-experiment instructions required. Such automatic processing may confer a highly efficient advantage in acquiring valuable information for judging emotional content.

Materials and methods

Participants

Sixteen undergraduates (8 men and 8 women; 19–24 years old) participated in the experiment; none dropped out once it began. All were right-handed, had normal or corrected-to-normal vision, and reported no history of neurological disease and no structural brain abnormality. All participants provided written informed consent prior to the study. The study was approved by the Chongqing University Human Research Institutional Review Board in accordance with the Declaration of Helsinki (1991).

Stimuli

Experimental materials comprised 30 facial expression images and 3 upright house images as visual stimuli. Face pictures were selected from the native Chinese Facial Affective Picture System (CFAPS): 18 upright faces (6 happy, 6 neutral, and 6 fearful, evenly divided between male and female) served as emotional targets, and 12 inverted neutral faces served as distractors. The upright face pictures differed markedly in valence, F(2,15) = 338.03, p < 0.001 (M ± SD: happy 6.90 ± 0.22, neutral 4.70 ± 0.31, fearful 2.75 ± 0.29), but were comparable in arousal, F(2,15) = 0.66, p > 0.05 (happy 5.55 ± 0.40, neutral 5.28 ± 0.44, fearful 5.28 ± 0.57). Male and female faces appeared equally often in the picture sequences. The pictures were similar in size, background, spatial frequency, contrast, brightness, and other physical properties. Hair and facial hair were removed so that only the internal features of each face were retained, and every face was cropped to an oval using Adobe Photoshop CS4. Each stimulus subtended 5.7 × 4.6° of visual angle, and the screen resolution was 72 pixels per inch. All stimuli were displayed in the center of the screen.
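
As an illustration of the stimulus-matching check reported above, the sketch below runs one-way ANOVAs on valence and arousal ratings for the three face sets using SciPy; the rating arrays are randomly generated placeholders around the reported means, not the actual CFAPS norms.

    # Hypothetical check of stimulus matching: valence should differ across the
    # three expression sets while arousal should not. Values are placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ratings = {
        "valence": {
            "happy":   rng.normal(6.90, 0.22, 6),   # 6 images per expression
            "neutral": rng.normal(4.70, 0.31, 6),
            "fearful": rng.normal(2.75, 0.29, 6),
        },
        "arousal": {
            "happy":   rng.normal(5.55, 0.40, 6),
            "neutral": rng.normal(5.28, 0.44, 6),
            "fearful": rng.normal(5.28, 0.57, 6),
        },
    }

    for dimension, groups in ratings.items():
        f, p = stats.f_oneway(*groups.values())
        print(f"{dimension}: F = {f:.2f}, p = {p:.3f}")
    # A well-matched set differs in valence (large F, small p) but not in arousal.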

Procedure

Participants were seated at a viewing distance of 70 cm from a computer screen in a dimly lit, sound-attenuated room. Each trial started with a white fixation point (500 ms) followed by a blue fixation point (300 ms), presented sequentially in the center of the screen. A stream of 14 images comprising distractor and target stimuli then followed, each presented for only 117 ms. Distractors were the 12 inverted neutral faces. T1 was chosen randomly from the three upright house images and T2 from the three kinds of upright faces, each with equal probability; each T2 condition (happy, fearful, neutral, and absent) appeared with the same probability of 25%. T1 appeared in the third position in the stimulus series, and T2 appeared two distractor images after T1. In previous experiments [15, 16], subjects responded to one or two questions after the picture stream; in this study, they answered both questions. The first task was to judge which house they had just seen by pressing one of three buttons on the response box with the right hand. The second task was to report whether they had seen a face, as instructed beforehand; observers did not have to identify the specific expression type (Fig 1). These judgments were to be made as accurately as possible, and there was no time limit for responses.

Fig 1. Experimental procedure and stimulus illustration. Participants decided which house had been presented and whether a hairless face had appeared, by pressing ‘1’ (house of one type / face present) or ‘2’ (house of another type / face absent).

https://doi.org/10.1371/journal.pone.0174016.g001

The inter-stimulus interval lasted 500 ms, during which the screen remained blank. The experiment was divided into 3 identical blocks, and observers were permitted a short rest between consecutive blocks. Each subject completed a total of 240 trials; the study included four conditions (three facial expressions and ‘T2 absent’), with 60 trials per condition. The experimental procedure was run in E-Prime 1.2.
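
A minimal sketch of how one RSVP trial described above could be assembled is shown below; it is written in Python for illustration (the original experiment was run in E-Prime), and the item labels, the handling of ‘T2 absent’ trials (an extra distractor in place of T2), and the function names are assumptions rather than details taken from the paper.

    # Assemble one 14-item RSVP stream: T1 (a house) in the third position and the
    # T2 face two distractors later, giving a T1-T2 onset asynchrony of ~351 ms
    # (3 x 117 ms), inside the 200-500 ms attentional-blink window.
    import random

    ITEM_MS = 117                                    # duration of each RSVP item (ms)
    SOA_T1_T2 = 3 * ITEM_MS                          # T1 onset to T2 onset
    HOUSES = ["house_1", "house_2", "house_3"]       # T1 candidates
    T2_CONDITIONS = ["happy", "fearful", "neutral", "absent"]   # each 25%
    DISTRACTORS = [f"inverted_neutral_{i:02d}" for i in range(1, 13)]

    def build_trial(rng):
        """Return one 14-item stream as a list of stimulus labels."""
        t1 = rng.choice(HOUSES)
        t2 = rng.choice(T2_CONDITIONS)
        stream = rng.sample(DISTRACTORS, 12)         # shuffled distractors
        stream.insert(2, t1)                         # T1 at the third position
        # On 'absent' trials a further distractor fills the T2 slot (an assumption).
        filler = rng.choice(DISTRACTORS) if t2 == "absent" else f"face_{t2}"
        stream.insert(5, filler)                     # two distractors separate T1 and T2
        return stream

    trial = build_trial(random.Random(1))
    print(len(trial), "items; T1-T2 SOA =", SOA_T1_T2, "ms")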

ERP recording

Scalp EEG was recorded from tin electrodes mounted in a 64-electrode elastic cap (Brain Products), initially referenced to the bilateral mastoids. Vertical and horizontal electro-oculograms (EOG) were measured by additional electrodes placed around the eyes. All electrode impedances were kept below 5 kΩ throughout recording. The EEG and EOG were amplified using a 0.01–100 Hz band-pass filter and continuously sampled at 500 Hz per channel. Trials with EOG artifacts (mean EOG voltage exceeding ±80 μV) or with artifacts due to amplifier clipping (peak-to-peak deflection exceeding ±80 μV) were excluded from averaging.
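
A sketch of the acquisition settings and artifact criteria described above is given below, using MNE-Python as the analysis toolchain (an assumption; the paper does not name its analysis software). The file name is a placeholder for a Brain Products recording.

    # Load a Brain Products recording and apply the reported band-pass filter.
    import mne

    raw = mne.io.read_raw_brainvision("sub01.vhdr", preload=True)  # placeholder file
    raw.filter(l_freq=0.01, h_freq=100.0)     # 0.01-100 Hz band-pass; sampled at 500 Hz

    # +/- 80 uV rejection criteria, applied when epochs are built (see next sketch).
    REJECT = dict(eeg=80e-6, eog=80e-6)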

Data measure and analysis

A one-way repeated-measures ANOVA was used to compare response accuracy across emotional types (three levels: happy, neutral, and fearful). Mean ERP amplitudes were then re-referenced to the average reference and analyzed; epochs were generated off-line, beginning 200 ms before T2 onset and lasting 1200 ms. Importantly, trials were accepted only when subjects responded correctly to both T1 and T2, and only T2 served as the analyzed target stimulus.
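
Continuing the preprocessing sketch above, the snippet below epochs the data around T2 onset (−200 to +1000 ms), applies the rejection criteria, re-references to the common average, and keeps only correctly answered trials; the trigger codes and the correct-response mask are assumptions, since those details are not specified at this level in the paper.

    import numpy as np
    import mne

    # `raw` and `REJECT` come from the previous sketch.
    events, _ = mne.events_from_annotations(raw)
    event_id = {"happy": 1, "fearful": 2, "neutral": 3}      # hypothetical trigger codes
    epochs = mne.Epochs(raw, events, event_id,
                        tmin=-0.2, tmax=1.0,                 # 200 ms before T2, 1200 ms total
                        baseline=(-0.2, 0.0),
                        reject=REJECT, preload=True)
    epochs.set_eeg_reference("average")                      # average reference

    # Keep only trials with correct responses to both T1 and T2 (mask taken from
    # the behavioral log; here a placeholder accepting every epoch).
    correct_both = np.ones(len(epochs), dtype=bool)
    epochs = epochs[np.where(correct_both)[0]]
    evokeds = {cond: epochs[cond].average() for cond in event_id}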

In the present study, the amplitudes of the P1, N170, and LPP components were measured and analyzed, with time windows and electrode sites based on their scalp topography and previous findings [12, 35]. Nine electrode sites (Pz, P3, P4, POz, PO3, PO4, Oz, O1, and O2) were selected for statistical analysis of the P1 component (140–220 ms), and four electrode sites (P7, P8, PO7, and PO8) for the N170 (220–320 ms). Mean LPP amplitudes (400–500 ms) were calculated at 21 electrode sites (Fz, F3, F4, FCz, FC3, FC4, Cz, C3, C4, CPz, CP3, CP4, Pz, P3, P4, POz, PO3, PO4, Oz, O1, and O2). Three-way repeated-measures analyses of variance (ANOVAs) were conducted on the amplitudes of the P1, N170, and LPP components, with expression, hemisphere (left, medial, and right for P1 and LPP; left and right for N170), and electrode (the sites listed above) as within-subject factors. Effects with two or more degrees of freedom were adjusted for violations of sphericity using the Greenhouse-Geisser correction.
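
The sketch below illustrates how mean amplitudes could be extracted in the component windows and electrode clusters listed above and arranged for a repeated-measures ANOVA, using the N170 window and its left/right clusters as the example. `all_evokeds` is an assumed per-subject container, and statsmodels’ AnovaRM is one possible test; it does not itself apply the Greenhouse-Geisser correction, which would need to be added separately.

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    WINDOWS = {                      # component: (tmin, tmax) in seconds, per the text
        "P1":   (0.140, 0.220),
        "N170": (0.220, 0.320),
        "LPP":  (0.400, 0.500),
    }
    N170_CLUSTERS = {"left": ["P7", "PO7"], "right": ["P8", "PO8"]}

    def mean_amplitude(evoked, picks, tmin, tmax):
        """Mean voltage (in uV) over the given channels and time window."""
        data = evoked.copy().pick(picks).crop(tmin, tmax).data   # volts
        return float(data.mean() * 1e6)

    all_evokeds = {}                 # placeholder: {subject_id: {condition: mne.Evoked}}
    rows = []
    for subject, conds in all_evokeds.items():
        for emotion, evoked in conds.items():
            for hemi, sites in N170_CLUSTERS.items():
                rows.append(dict(subject=subject, emotion=emotion, hemisphere=hemi,
                                 amp=mean_amplitude(evoked, sites, *WINDOWS["N170"])))

    df = pd.DataFrame(rows)
    if not df.empty:
        res = AnovaRM(df, depvar="amp", subject="subject",
                      within=["emotion", "hemisphere"]).fit()
        print(res.anova_table)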

Results

Behavioral performance

The results showed a marginally significant main effect of emotional type (F(2,30) = 2.94, p = 0.083, ηp² = 0.164). Pairwise comparisons indicated that happy expressions (94.61 ± 4.71%) elicited marginally higher accuracy than neutral faces (92.02 ± 4.52%, p = 0.077), whereas accuracy did not differ significantly between fearful (94.07 ± 5.50%) and happy faces (p = 1.000) or between fearful and neutral faces (p = 0.474).

ERP data analysis

P1.

P1 amplitudes showed significant main effects of hemisphere (F(2,30) = 5.65, p = 0.011, ηp² = 0.273) and electrode (F(2,30) = 22.29, p < 0.001, ηp² = 0.598). The right hemisphere (2.66 μV) elicited larger P1 amplitudes than the medial sites (1.79 μV, p = 0.004), whereas the left hemisphere (2.07 μV) did not differ significantly from the medial sites (p = 0.938) or the right hemisphere (p = 0.196). The O1/Oz/O2 electrodes (3.08 μV) elicited larger P1 amplitudes than the PO3/POz/PO4 (2.31 μV, p = 0.022) and P3/Pz/P4 (1.14 μV, p = 0.001) electrodes, and the PO3/POz/PO4 electrodes elicited larger P1 amplitudes than the P3/Pz/P4 electrodes (p < 0.001). P1 amplitudes showed no significant main effect of emotional type (F(2,30) = 0.86, p = 0.432, ηp² = 0.054; happy 2.03 μV, neutral 2.19 μV, fearful 2.30 μV).

N170.

N170 amplitudes showed a significant main effect of emotional type (F(2,30) = 11.13, p < 0.001, ηp² = 0.426) and a marginally significant main effect of hemisphere (F(1,15) = 3.52, p = 0.080, ηp² = 0.190). Pairwise comparisons indicated that the right hemisphere (−4.25 μV) elicited larger N170 amplitudes than the left hemisphere (−3.10 μV; Fig 2). Happy expressions (−4.09 μV, p = 0.002) and fearful expressions (−3.85 μV, p = 0.010) elicited larger N170 amplitudes than neutral expressions (−3.09 μV), whereas the two emotional conditions did not differ significantly from each other (p = 0.758).

Fig 2. Grand average ERP component of N170 at the indicated sites and corresponding topography for each condition.

https://doi.org/10.1371/journal.pone.0174016.g002

LPP.

LPP amplitudes showed significant main effects of emotional type (F(2,30) = 11.17, p = 0.002, ηp² = 0.427) and hemisphere (F(2,30) = 3.55, p = 0.043, ηp² = 0.191). Pairwise comparisons indicated that fearful expressions (0.84 μV; Figs 3 and 4) elicited larger LPP amplitudes than happy (0.54 μV, p = 0.047) and neutral expressions (0.26 μV, p = 0.007), and happy expressions elicited larger LPP amplitudes than neutral ones (p = 0.017). Medial sites (0.71 μV) elicited marginally larger LPP amplitudes than the left hemisphere (0.18 μV, p = 0.056).

Fig 3. Grand average ERP component of LPP at the indicated sites and corresponding topography for each condition.

https://doi.org/10.1371/journal.pone.0174016.g003

Fig 4. N170 and LPP amplitudes during the corresponding time window for 3 types of emotion.

*, p < 0.05; **, p < 0.01. Y axis, amplitude (μV, mean ± SE).

https://doi.org/10.1371/journal.pone.0174016.g004

Discussion

Response accuracy was higher for happy-face stimuli than for neutral-face stimuli, whereas the other conditions did not differ significantly. This finding suggests that happy facial expressions are more easily perceived in a rapid stimulus stream, which is inconsistent with the results of previous studies [15]. Happy facial expressions showed a processing advantage [36] in the current study compared with fearful facial expressions. Happy faces also differ from the other two kinds of expressions in low-level physical properties, which may explain the advantage in processing this type of facial expression when distinguishing the expression type is not required. No other behavioral differences were observed during the attentional blink period, including a preference effect for threatening stimuli; the remaining accuracy differences between expression types were not significant and therefore are unlikely to reflect differences in internal processing.

Consistent with previous findings [15, 16], our findings support an automatic three-stage model of perceiving facial expressions. Enhanced processing was observed at the first stage, reflecting rapid neural processing of the target face; however, rapid and distinct perception of fearful facial expressions did not occur soon after target onset. The latter two stages resembled the corresponding stages of the original model: emotional expressions were preferentially processed in the second stage, and all expressions were differentiated in the third stage.

Earlier investigations indicated that responses to negative stimuli are enhanced relative to positive or neutral stimuli, indicating a negativity bias in attention allocation (for example, [19]). This process is characterized by speed, coarseness, and automaticity. For fearful expressions in particular, larger P1 amplitude effects have been found regardless of spatial location [5], intentional state [12], and visual cues [37]. In an affective priming task [38], larger occipital P1 potentials occurred in response to fearful faces than to happy faces, indicating that recognition of ambiguous faces (such as a surprised face) is susceptible to threat information in visual stimuli, reflected in millisecond-scale differences in responses. However, our results showed that this difference did not appear during a rapid sequential stimulus stream.

The P1 effect of encoding fearful expressions was absent in the current study. We consider the difference in experimental design to be the main reason. In previous work, participants were required to judge the emotional type of the presented facial stimulus, a procedure that demands elaborate analysis to extract the relevant information [12], whereas detecting whether a facial stimulus was presented required only superficial analysis in the current study. The present procedure therefore approximates a default mode of processing emotional stimuli, unconstrained by additional task-imposed requirements. The two procedures differ in depth of processing and can be classified as two types of intentional states, one explicit and one implicit [39]. This result suggests that, when resources are under competition, humans may ignore detailed information at very early stages of perceiving incoming faces; at the very least, emotion encoding was not pronounced.

Vuilleumier and Pourtois [40] demonstrated that the P1 advantage for fearful expressions results from processing of low-level visual features. The absence of a P1 amplitude effect for emotional expressions may indicate that top-down searching of the visual system for threatening cues also relies on bottom-up mechanisms; here, participants did not intentionally allocate adequate attentional resources. As predicted, during the attentional blink period, limited attentional resources considerably modified the actual performance of threat perception.

However, an overt second stage was found, in accordance with previous expression-processing studies [15, 35]. The N170 component may reflect processing at this stage. Fearful and happy expressions represent two typical types of emotional expression. The larger N170 amplitudes elicited by emotional expressions [35, 41] suggest sensitivity to emotional facial expressions in general but not to specific expressions; this emotional preference in facial processing may arise in or near the fusiform gyrus [15, 42].

A more automatic expression-processing task produced similar results, suggesting that the neural processes that distinguish emotional from non-emotional faces resemble those observed in the previous study. In the present study, we chose lag 2 to focus on processing during the attentional blink period, and the results are consistent with those of many discrimination tasks [43, 44]. Although processing fearful faces is advantageous under limited attentional resources, fearful faces may not attract attention when distinguishing faces is not required; the details of facial expression may be ignored or processed later. More conclusive evidence emerges in the next stage of processing.

Processing of emotional facial expressions, as reflected by LPPs, differed across expressions in the third stage. Previous studies have demonstrated that LPP amplitudes reflect the motivational significance of affective stimuli, which elicit larger LPP amplitudes than neutral stimuli [45, 46]. Enhanced P300 (280–450 ms) amplitudes occur for fearful facial expressions [35], suggesting that signals containing potential danger can enhance elaborate processing of the stimulus and its context. Since distinguishing among the expressions was not part of our instructions, it is not known whether participants completed that process; nevertheless, their neural activity indicates that some categorization did occur. In the present results, face-processing-related potentials (LPPs) increased over parietal regions, and previous ERP studies [47] have shown that this reflects further evaluation of information related to the affective valence of a face, a stage of expression processing that depends on the observer’s own intentions.

In addition, the LPP is a very sensitive index of attentional manipulations that modulate limited processing resources [48]. Participants quickly focused their attention on the task-relevant stimuli rather than on the inverted distractors. The differences between LPP amplitudes indicate an elaborate processing stage for emotional meaning. Moreover, LPPs reflect the gateway to conscious processing [30, 49]. In light of our findings, the facial expression information indexed by enhanced LPPs may be linked to distinct representations in working memory during this stage, consistent with previous studies (for example, [4]). Enhanced LPPs may also reflect that participants recognized intrinsic meaning in the distinct face representations. Therefore, although the detection task in our RSVP paradigm may have induced only coarse processing of the stimuli, the meaning conveyed by the faces appears to have been extracted automatically.

In the present study, the peak latencies of P1 and N170 were approximately 180 and 270 ms, respectively, roughly 50 and 100 ms later than typically reported [50]. The reasons for such differences have been discussed in previous studies [15, 16]; the component designations are based on their scalp distributions, which were consistent with the expected parieto-occipital (P1) and occipito-temporal (N170) regions. In particular, the N170 was recorded at the same scalp electrode sites as in a previous study, so the N170 results can be directly compared with results obtained with a different task [15]. The latency delays for P1 and N170 are most likely attributable to the RSVP paradigm used in this experiment, as completing two tasks during a rapidly presented stimulus stream is relatively difficult for participants; the delays relative to the target stimulus are therefore considered minor.

P1 amplitudes elicited in the parietal and parietal-occipital regions of the right hemisphere were greater than those elicited in the left hemisphere. The present data indicated right hemisphere dominance for the N170 amplitude, which is consistent with increased N170 over the right hemisphere observed for both real and schematic faces [51]. Previous studies have shown that the right hemisphere may be preferentially involved in processing that occurs later than 200 ms after stimulus onset [52, 53].

Finally, a possible limitation is that the enhanced processing at the neural level indexed by the ERP components (P1, N170, LPP) might be expected to be accompanied by increased accuracy. However, increased neural processing of emotional expressions is not necessarily associated with improved performance [54]. Automatic processing at the neural level may be driven by the stimuli’s affective relevance, ensuring behavioral adaptability in the real world, but more research is needed to account for the disparity between behavioral and neural activity.

Conclusions

We tested the automaticity of facial expression processing when no intentional categorization task was required; our procedure therefore reflected relatively automatic encoding of facial expressions. The amplitudes of the P1, N170, and LPP components supported three stages of facial expression processing. There was no processing preference for fearful faces, and early threatening information appeared to be ignored. Nevertheless, our findings still support automatic multi-stage processing of facial expressions: participants automatically distinguished emotional from non-emotional expressions and automatically explored the specific features of the three types of facial expressions later after stimulus onset. We examined automatic facial expression processing under relatively limited attentional resources, which may provide a foundation for considering how humans’ automatic perception of the outside world might be improved.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (31600917).

Author Contributions

  1. Conceptualization: LS WH.
  2. Data curation: JR.
  3. Formal analysis: JR.
  4. Funding acquisition: WH.
  5. Investigation: LS JR WH.
  6. Methodology: LS WH.
  7. Project administration: LS.
  8. Resources: WH.
  9. Supervision: WH.
  10. Validation: LS WH.
  11. Visualization: JR.
  12. Writing – original draft: LS JR WH.
  13. Writing – review & editing: LS JR WH.

References

  1. Ekman P. Emotions revealed: understanding faces and feelings. London: Weidenfeld & Nicolson; 2003.
  2. Batty M, Taylor MJ. Early processing of the six basic facial emotional expressions. Cognitive Brain Research. 2003;17(3):613–20. pmid:14561449
  3. Kissler J, Herbert C, Winkler I, Junghofer M. Emotion and attention in visual word processing: an ERP study. Biol Psychol. 2009;80(1):75–83. pmid:18439739
  4. Schupp HT, Flaisch T, Stockburger J, Junghöfer M. Emotion and attention: event-related brain potential studies. Prog Brain Res. 2006;156:31–51. pmid:17015073
  5. Pourtois G, Grandjean D, Sander D, Vuilleumier P. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cerebral Cortex. 2004;14(6):619–33. pmid:15054077
  6. Eger E, Jedynak A, Iwaki T, Skrandies W. Rapid extraction of emotional expression: Evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia. 2003;41(7):808–17. pmid:12631531
  7. Bradley MM, Sabatinelli D, Lang PJ, Fitzsimmons JR, King W, Desai P. Activation of the visual cortex in motivated attention. Behavioral Neuroscience. 2003;117(2):369–80. pmid:12708533
  8. Lane RD, Chua PM, Dolan RJ. Common effects of emotional valence, arousal and attention on neural activation during visual processing of pictures. Neuropsychologia. 1999;37(9):989–97. pmid:10468363
  9. Pessoa L, Kastner S, Ungerleider LG. Attentional control of the processing of neutral and emotional stimuli. Brain Research Cognitive Brain Research. 2002;15(1):31–45. pmid:12433381
  10. Posner MI, Dehaene S. Attentional networks. Trends in Neurosciences. 1994;17(2):75–9. pmid:7512772
  11. Rellecke J, Palazova M, Sommer W, Schacht A. On the automaticity of emotion processing in words and faces: event-related brain potentials evidence from a superficial task. Brain and Cognition. 2011;77(1):23–32. pmid:21794970
  12. Rellecke J, Sommer W, Schacht A. Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biological Psychology. 2012;90(1):23–32. pmid:22361274
  13. Chun MM, Potter MC. A two-stage model for multiple target detection in rapid serial visual presentation. Journal of Experimental Psychology: Human Perception and Performance. 1995;21(1):109–27. pmid:7707027
  14. Raymond JE, Shapiro KL, Arnell KM. Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance. 1992;18(3):849–60.
  15. Luo W, Feng W, He W, Wang NY, Luo YJ. Three stages of facial expression processing: ERP study with rapid serial visual presentation. NeuroImage. 2010;49(2):1857–67. pmid:19770052
  16. Zhang D, He W, Wang T, Luo W, Zhu X, Gu R, et al. Three stages of emotional word processing: An ERP study with rapid serial visual presentation. Social Cognitive and Affective Neuroscience. 2014;9(12):1897–903. pmid:24526185
  17. Vogel EK, Luck SJ. The visual N1 component as an index of a discrimination process. Psychophysiology. 2000;37(2):190–203. pmid:10731769
  18. Hopf J-M, Vogel E, Woodman G, Heinze H-J, Luck SJ. Localizing visual discrimination processes in time and space. Journal of Neurophysiology. 2002;88(4):2088–95. pmid:12364530
  19. Smith NK, Cacioppo JT, Larsen JT, Chartrand TL. May I have your attention, please: Electrocortical responses to positive and negative stimuli. Neuropsychologia. 2003;41(2):171–83. pmid:12459215
  20. Williams LM, Liddell BJ, Rathjen J, Brown KJ, Gray J, Phillips M, et al. Mapping the time course of nonconscious and conscious perception of fear: An integration of central and peripheral measures. Human Brain Mapping. 2004;21(2):64–74. pmid:14755594
  21. Liu J, Harris A, Kanwisher N. Stages of processing in face perception: an MEG study. Nature Neuroscience. 2002;5(9):910–6. pmid:12195430
  22. Brosch T, Sander D, Pourtois G, Scherer KR. Beyond fear: rapid spatial orienting toward positive emotional stimuli. Psychological Science. 2008;19(4):362–70. pmid:18399889
  23. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience. 1996;8(6):551–65. pmid:20740065
  24. Rossion B, Joyce CA, Cottrell GW, Tarr MJ. Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. NeuroImage. 2003;20(3):1609–24. pmid:14642472
  25. Itier RJ, Taylor MJ. N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex. 2004;14(2):132–42. pmid:14704210
  26. Cuthbert BN, Schupp HT, Bradley MM, Birbaumer N, Lang PJ. Brain potentials in affective picture processing: covariation with autonomic arousal and affective report. Biological Psychology. 2000;52(2):95–111. pmid:10699350
  27. Schutter DJ, de Haan EH, van Honk J. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat. International Journal of Psychophysiology. 2004;53(1):29–36. pmid:15172133
  28. Schupp HT, Junghofer M, Weike AI, Hamm AO. The selective processing of briefly presented affective pictures: an ERP analysis. Psychophysiology. 2004a;41(3):441–9.
  29. Hajcak G, Dunning JP, Foti D. Neural response to emotional pictures is unaffected by concurrent task difficulty: An event-related potential study. Behavioral Neuroscience. 2007;121(6):1156–62. pmid:18085868
  30. Kranczioch C, Debener S, Engel AK. Event-related potential correlates of the attentional blink phenomenon. Cognitive Brain Research. 2003;17(1):177–87. pmid:12763203
  31. Schupp HT, Ohman A, Junghofer M, Weike AI, Stockburger J, Hamm AO. The facilitated processing of threatening faces: An ERP analysis. Emotion. 2004b;4(2):189–200.
  32. Langeslag SJ, Jansma BM, Franken IH, Van Strien JW. Event-related potential responses to love-related facial stimuli. Biological Psychology. 2007;76(1–2):109–15. pmid:17681417
  33. Hajcak G, Dunning JP, Foti D. Motivated and controlled attention to emotion: Time-course of the late positive potential. Clinical Neurophysiology. 2009;120(3):505–10. pmid:19157974
  34. Erk S, Abler B, Walter H. Cognitive modulation of emotion anticipation. European Journal of Neuroscience. 2006;24(4):1227–36. pmid:16930447
  35. Williams LM, Palmer D, Liddell BJ, Song L, Gordon E. The 'when' and 'where' of perceiving signals of threat versus non-threat. NeuroImage. 2006;31(1):458–67. pmid:16460966
  36. Leppänen JM, Hietanen JK. Positive facial expressions are recognized faster than negative facial expressions, but why? Psychological Research. 2004;69(1):22–9.
  37. Bar-Haim Y, Lamy D, Glickman S. Attentional bias in anxiety: A behavioral and ERP study. Brain and Cognition. 2005;59(1):11–22. pmid:15919145
  38. Li W, Zinbarg RE, Boehm SG, Paller KA. Neural and behavioral evidence for affective priming from unconsciously perceived emotional facial expressions and the influence of trait anxiety. Journal of Cognitive Neuroscience. 2008;20(1):95–107. pmid:17919076
  39. Knyazev GG, Slobodskoj-Plusnin JY, Bocharov AV. Event-related delta and theta synchronization during explicit and implicit emotion processing. Neuroscience. 2009;164(4):1588–600. pmid:19796666
  40. Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia. 2007;45(1):174–94. pmid:16854439
  41. Wild-Wall N, Dimigen O, Sommer W. Interaction of facial expressions and familiarity: ERP evidence. Biological Psychology. 2008;77(2):138–49. pmid:17997008
  42. Deffke I, Sander T, Heidenreich J, Sommer W, Curio G, Trahms L, et al. MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform gyrus. NeuroImage. 2007;35(4):1495–501. pmid:17363282
  43. Blau VC, Maurer U, Tottenham N, McCandliss BD. The face-specific N170 component is modulated by emotional facial expression. Behavioral and Brain Functions. 2007;3:7. pmid:17244356
  44. Eimer M, Holmes A, McGlone FP. The role of spatial attention in the processing of facial expression: An ERP study of rapid brain responses to six basic emotions. Cognitive, Affective, & Behavioral Neuroscience. 2003;3(2):97–110.
  45. Campanella S, Quinet P, Bruyer R, Crommelinck M, Guerit J-M. Categorical perception of happiness and fear facial expressions: an ERP study. Journal of Cognitive Neuroscience. 2002;14(2):210–27. pmid:11970787
  46. Miltner WH, Trippe RH, Krieschel S, Gutberlet I, Hecht H, Weiss T. Event-related brain potentials and affective responses to threat in spider/snake-phobic and non-phobic subjects. International Journal of Psychophysiology. 2005;57(1):43–52. pmid:15896860
  47. Van Strien JW, De Sonneville LM, Franken IH. The late positive potential and explicit versus implicit processing of facial valence. Neuroreport. 2010;21(9):656–61. pmid:20453693
  48. Donchin E, Kramer A, Wickens C. Applications of event-related brain potentials to problems in engineering psychology. In: Coles MGH, Donchin E, Porges S, editors. Psychophysiology: systems, processes, and applications. New York: Guilford Press; 1986.
  49. Luck SJ, Hillyard SA. The operation of selective attention at multiple stages of processing: Evidence from human and monkey electrophysiology. In: The new cognitive neurosciences. Cambridge, MA: MIT Press; 2000.
  50. Luck SJ. An introduction to the event-related potential technique. Cambridge, MA: MIT Press; 2005.
  51. Krombholz A, Schaefer F, Boucsein W. Modification of N170 by different emotional expression of schematic faces. Biological Psychology. 2007;76(3):156–62. pmid:17764809
  52. Kawasaki H, Adolphs R, Kaufman O, Damasio H, Damasio AR, Granner M, et al. Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex. Nature Neuroscience. 2001;4(1):15–6. pmid:11135639
  53. Pizzagalli DA, Lehmann D, Hendrick AM, Regard M, Pascual-Marqui RD, Davidson RJ. Affective judgments of faces modulate early activity (∼160 ms) within the fusiform gyri. NeuroImage. 2002;16(3):663–77.
  54. Pourtois G, Vuilleumier P. Dynamics of emotional effects on spatial attention in the human visual cortex. Prog Brain Res. 2006;156:67–91. pmid:17015075