Different Hemispheric Roles in Recognition of Happy Expressions

  • Akinori Nakamura ,

    nakamura@ncgg.go.jp

    Affiliations Department of Clinical and Experimental Neuroimaging, National Center for Geriatrics and Gerontology, Obu, Japan, Method and Developmental Group “MEG and EEG: Signal Analysis and Modelling”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

  • Burkhard Maess,

    Affiliation Method and Developmental Group “MEG and EEG: Signal Analysis and Modelling”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

  • Thomas R. Knösche,

    Affiliation Method and Developmental Group “Cortical Networks and Cognitive Functions”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

  • Angela D. Friederici

    Affiliation Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Abstract

The emotional expression of the face provides an important social signal that allows humans to make inferences about other people's state of mind. However, the underlying brain mechanisms are complex and still not completely understood. Using magnetoencephalography (MEG), we analyzed the spatiotemporal structure of regional electrical brain activity in human adults during a categorization task (faces or hands) and an emotion discrimination task (happy faces or neutral faces). Brain regions that are specifically important for different aspects of processing emotional facial expressions showed interesting hemispheric dominance patterns. The dorsal brain regions showed a right predominance when participants paid attention to facial expressions: The right parietofrontal regions, including the somatosensory, motor/premotor, and inferior frontal cortices, showed significantly increased activation in the emotion discrimination task compared to the categorization task at latencies of 350 to 550 ms, while no activation was found in their left hemispheric counterparts. Furthermore, the ventral brain regions showed a left predominance for happy faces compared to neutral faces at latencies of 350 to 550 ms within the emotion discrimination task. Thus, the present data suggest that the right and left hemispheres play different roles in the recognition of facial expressions depending on cognitive context.

Introduction

The capacity to recognize facial expressions is one of the most important abilities in human social interaction. It is well known that humans can easily discriminate at least six emotional expressions: happiness, surprise, fear, sadness, anger, and disgust [1], [2] (also see [3] for review). Information from facial expressions helps us to make inferences about another individual's state of mind and facilitates communication. Several lines of evidence strongly suggest that information about personal identity and emotion is, to a considerable degree, processed separately in the human brain. Likewise, different neuronal populations that are tuned to selectively respond to either identity or expression have been found in monkeys [4], [5]. Some patients with impairment of facial recognition (prosopagnosia) have shown normal performance in recognizing facial expressions [6], [7], while others have shown the opposite pattern: impaired recognition of facial affect but preserved facial identification [8], [9]. Behavioral studies have also shown dissociation between the processing of facial identity and facial expression [10], [11]. These observations support Bruce and Young's (1986) [12] cognitive model of face recognition, which proposed distinct module-based processing pathways for facial identification, emotional expression, and speech-related facial movements. Haxby et al. (2000, 2002) [13], [14] proposed a neural model for face perception based on Bruce and Young's model, which, in principle, can explain the findings of various previous studies. The model assumes two major neural pathways, one to process invariant aspects of faces leading to facial identification, and another to process changeable aspects of faces such as eye gaze, expression, and lip movements.

Concerning the recognition of the invariant aspects of faces, various functional neuroimaging studies have shown converging results indicating that the ventral occipitotemporal region, namely the fusiform face area (FFA), plays an essential role in facial recognition [15]–[20]. In contrast, findings of studies on emotional expressions of faces appear to be more complicated. To date, several brain regions have been reported to serve important functions in recognizing facial expressions. The amygdala is one of the most important and well-known structures to be involved in the recognition of emotions, especially because of its role in fear perception, which is well established in both clinical [21], [22] and neuroimaging studies [23]–[25]. The activation of the insula in response to the disgust expression has been repeatedly reported [26], [27]. In addition, involvement of the right primary and secondary somatosensory cortices in judging emotional facial expression has been demonstrated by lesion studies [28], [29]. The inferior frontal cortex [30], [31] and the orbitofrontal cortex [32], [33] have also been reported to contribute to the processing of facial emotion. Moreover, the superior temporal sulcus (STS) region is considered important because neurons in this region are tuned to respond to social signals including facial expression, eye gaze, lip movements, and gestures [34]–[40] (also, see [41] for review). Most of these findings are supported by an extensive voxel-based meta-analysis of functional MRI studies [42]. However, because a number of distributed brain areas appear to have important roles, as reviewed above, it is still difficult to disentangle their different contributions and draw a concise and detailed picture of the brain mechanisms underlying the recognition of facial expression.

Electrophysiological studies have, to an extent, helped unravel the complex human recognition system for facial expression by providing time-series information. Various electrophysiological studies have found a prominent electromagnetic component related to face perception with peak latency around 160–200 ms (commonly known as the N170), which originates in or around the FFA [43]–[48]. Several event-related potential (ERP) studies have demonstrated that the N170 is not affected by facial expressions, but that later ERP components show expression-related changes [49]–[54]. These results suggest that the FFA, at least in a time range of around 170 ms, is not an important contributor to the recognition of facial expression, and expression-related processing takes place later than 200 ms after stimulus onset. Magnetoencephalography (MEG) can provide further useful information because of its better spatial resolution compared to electroencephalography (EEG). Using MEG, the Ioannides and Streit group demonstrated the time courses of the electrical activity elicited by emotional faces in several brain regions, including the primary visual cortex, the FFA, and the amygdala [55]–[58]. Kujala et al. (2009) [59] showed the importance of the STS in processing painful expressions. However, these sets of results are still insufficient to gain a full understanding of the brain mechanisms underlying the recognition of facial expressions. The first objective of the present study was to provide detailed spatiotemporal information related to facial expression recognition. The second objective was to analyze the possible hemispheric roles for different subprocesses of this ability. Two conflicting theories have been proposed for hemispheric lateralization in emotional processing. The right hemisphere hypothesis posits a right-hemispheric predominance for recognizing and expressing emotion, irrespective of emotional valence. This theory is supported by many previous studies [6], [28]–[31], [59], [60]. The valence hypothesis states that the hemispheric roles differ depending on the type of emotion: the left hemisphere is dominant for positive emotions and the right hemisphere is dominant for negative emotions. The valence hypothesis is also supported by a considerable number of studies [61]–[64]. The detailed spatiotemporal information provided by the current study is expected to shed further light on this issue.

Methods

Subjects

The study was approved by the Ethics Committee of the University of Leipzig, and all subjects gave written informed consent prior to participation. Twenty right-handed volunteers (21 to 30 years old, mean age 25.3 years, 10 males) with normal or corrected-to-normal visual acuity (greater than 25/25) participated in the MEG measurements. We excluded four subjects because of noisy MEG signals or large head movements during the measurements, and analyzed data sets from the remaining 16 subjects. An additional reaction time study was conducted with ten of these 16 subjects at a later date.

Stimuli and tasks

Visual stimuli were digitized gray-scale photos (400×400 pixels) of faces with happy and neutral expressions, both taken from 66 individuals (altogether 132 pictures), provided by the AR Face Database [65] and 66 pictures of hands with 11 different postures taken from 6 individuals [36]. The stimuli were controlled by ERTS-VIPL (BeriSoft Co., Frankfurt, Germany) and projected onto a screen using a liquid crystal video projector. The visual angle was 13°×13°, and the viewing distance was 70 cm. A small fixation point was placed at the center of the screen.

There were two different tasks. The emotion task involved discrimination of happy faces (EmoFH) from neutral faces (EmoFN), and the categorization task involved discrimination of faces (CatFH) from hands (CatHG). The CatFH picture set contained the same happy faces as EmoFH. Therefore, the contrast between EmoFH and CatFH was expected to demonstrate the differences between active and passive processing of emotional faces and reflect the influence of attention on processing facial expressions, whereas the contrast between EmoFH and EmoFN was expected to reflect the emotion-specific processing of a happy expression. Visual event-related MEG responses to the emotion and categorization tasks were recorded in different measurement blocks, and the order of the blocks was counterbalanced across subjects. The detailed experimental procedures are shown in Fig. 1. First, a randomly selected picture was displayed for 700 ms. Selections were taken from either the set of happy or neutral faces (emotion task), or the set of happy faces or hand gestures (categorization task). Next, a blank screen appeared for 300 ms and was followed by a “Go” signal. Then, with their right hand, participants had to indicate via button press (pre-assigned 1 or 2) whether the facial expression was happy or neutral (emotion task), or whether the picture was a face or hand (categorization task). Finally, a blank screen was presented for a random time interval (1000 ms±300 ms) and then the next trial started. The assignment of the response buttons (1 or 2) was counterbalanced across participants. In each task block, 396 pictures (198 per condition) were randomly presented and one measurement block took about 20 min.

Figure 1. Experimental procedures.

During the measurements, participants were asked to gaze at the fixation point in the center of the screen (+). First, a randomly selected picture of a face with a happy or neutral expression (emotion task), or of a face or a hand (categorization task), was presented for 700 ms. Next, a blank screen appeared for 300 ms and was followed by a “Go” signal. Then the participants had to indicate whether the face was happy or neutral (or whether the picture was a face or a hand) by pressing the pre-assigned button 1 or 2 with the right hand. Finally, a blank screen was presented for a random time interval (1000 ms±300 ms). Then the next trial started. The assignment of the response buttons (1 or 2) was counterbalanced across participants. The photographs in the figure are not the original images used in the study, but similar images used for illustrative purposes only. The subject of the photographs has given informed consent to publication, as outlined in the PLOS consent form.

https://doi.org/10.1371/journal.pone.0088628.g001

Behavioral responses were recorded during the MEG measurements. Additionally, we conducted a separate reaction time study with 10 of the 16 subjects using the same task sets but without the “Go” signal. Subjects were instructed to press the appropriate response button immediately after the judgment.
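For illustration, the trial structure described above (and in Fig. 1) can be sketched as follows. This is a minimal Python reconstruction using only the timing parameters given in the text; the function and variable names (e.g., build_block) are ours, and the original ERTS-VIPL script is not reproduced here.

```python
import random

# Timing constants from the Methods (ms). The original experiment was run with
# ERTS-VIPL; this reconstruction only makes the trial structure explicit.
STIM_MS = 700        # picture presentation
BLANK_MS = 300       # blank screen preceding the "Go" signal
ITI_MEAN_MS = 1000   # mean inter-trial interval after the response
ITI_JITTER_MS = 300  # +/- random jitter of the inter-trial interval

def build_block(conditions=("EmoFH", "EmoFN"), n_per_condition=198, seed=0):
    """Return a randomized trial schedule for one task block (here: the emotion task)."""
    rng = random.Random(seed)
    trials = [cond for cond in conditions for _ in range(n_per_condition)]
    rng.shuffle(trials)
    return [{"condition": cond,
             "stim_ms": STIM_MS,
             "blank_ms": BLANK_MS,
             "iti_ms": ITI_MEAN_MS + rng.uniform(-ITI_JITTER_MS, ITI_JITTER_MS)}
            for cond in trials]

block = build_block()
print(len(block), "trials;", block[0])
```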

MEG recordings

A 148-channel whole-head system consisting of magnetometer sensors (WHS2500, 4D-Neuroimaging, San Diego, CA, USA) was used for the MEG measurements. Eye movements were monitored by vertical and horizontal electrooculograms (EOGs). Signals were recorded with a bandwidth of 0.1 to 100 Hz and digitized at a sampling rate of 508.6 Hz. The continuous MEG data were filtered off-line with a 0.5–30 Hz band-pass filter. In each session, 196 epochs per condition were collected and averaged in a time window from −100 ms to +700 ms relative to stimulus onset. The direct current (DC) offset was subtracted using the pre-stimulus period (100 ms) as the baseline. Epochs with motion or eye movement artifacts (more than 30 µV in horizontal or 50 µV in vertical eye movements) or with incorrect responses were excluded from averaging.
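The preprocessing chain just described (off-line band-pass filtering, epoching, baseline correction, and artifact rejection) can be sketched as follows. This is a schematic re-implementation under stated assumptions, not the original analysis code: the filter order and the peak-to-peak reading of the EOG criterion are our choices, and the function name average_epochs is hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 508.6  # sampling rate (Hz)

def average_epochs(raw_meg, raw_eog, onsets, fs=FS):
    """Band-pass filter, epoch, artifact-reject, and average the continuous MEG data.

    raw_meg : (n_channels, n_samples) continuous MEG recording
    raw_eog : (2, n_samples) horizontal and vertical EOG traces (uV)
    onsets  : sample indices of stimulus onsets for one condition
              (correct-response trials only)
    """
    # 0.5-30 Hz zero-phase band-pass; the filter order (4) is an assumption,
    # only the pass band is given in the Methods.
    b, a = butter(4, [0.5, 30.0], btype="bandpass", fs=fs)
    meg = filtfilt(b, a, raw_meg, axis=-1)

    pre = int(round(0.100 * fs))      # 100-ms pre-stimulus baseline
    post = int(round(0.700 * fs))     # 700-ms post-stimulus window
    kept = []
    for onset in onsets:
        epoch = meg[:, onset - pre:onset + post]
        eog = raw_eog[:, onset - pre:onset + post]
        # Reject epochs with eye movements exceeding 30 uV (horizontal) or
        # 50 uV (vertical); peak-to-peak amplitude is one reading of the criterion.
        if np.ptp(eog[0]) > 30.0 or np.ptp(eog[1]) > 50.0:
            continue
        # Subtract the DC offset estimated from the pre-stimulus baseline.
        epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
        kept.append(epoch)
    return np.mean(kept, axis=0)      # averaged evoked response
```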

Data Analysis

MEG data were analyzed using a region-of-interest (ROI) analysis based on principal component analysis (PCA) [36], which consisted of the following four steps: First, individual brain current source density (CSD) maps were calculated by L2-minimum-norm estimates [66]–[68] using ASA software (ANT Software BV, Enschede, Netherlands). The linear inverse was regularized using Tikhonov regularization. The regularization factor was computed according to the estimated signal-to-noise ratio [69]. For the CSD calculation, we used realistically shaped volume conductors constructed from individual MRIs. The source reconstruction surfaces were located 1 cm below the individual brain envelope and meshed with 1222 nodes. The orientation of the current dipole at each node was free but restricted to lie within the surface (i.e., tangential sources).
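The essence of this first step is the regularized L2-minimum-norm inverse. The sketch below illustrates it under stated assumptions: the SNR-based rule for the regularization factor is one common heuristic in the spirit of [69], not necessarily the exact rule implemented in the ASA software, and the tangential orientation constraint is assumed to be built into the leadfield.

```python
import numpy as np

def l2_minimum_norm(leadfield, data, snr=3.0):
    """Tikhonov-regularized L2-minimum-norm estimate of the current source density.

    leadfield : (n_sensors, n_sources) forward matrix for the tangential dipoles
                on the reconstruction surface (1 cm below the brain envelope)
    data      : (n_sensors, n_times) averaged MEG signals
    snr       : assumed signal-to-noise ratio used to set the regularization
                factor (a common heuristic; the exact rule may differ from [69])
    """
    gram = leadfield @ leadfield.T
    lam2 = np.trace(gram) / (gram.shape[0] * snr ** 2)   # regularization strength
    inverse_operator = leadfield.T @ np.linalg.inv(gram + lam2 * np.eye(gram.shape[0]))
    return inverse_operator @ data                       # (n_sources, n_times) CSD
```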

Second, each individual MRI was spatially normalized onto Talairach and Tournoux [70] standard brain space by linear transformation. Each subject's CSD data were also spatially normalized using the identical transformation parameters. Further analyses were done within this normalized space.
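In practice, this second step amounts to applying the individually estimated linear (affine) transformation to the source-node coordinates as well as to the MRI. A minimal sketch; the function name to_talairach is ours, and the affine matrix itself is assumed to come from the MRI normalization.

```python
import numpy as np

def to_talairach(node_xyz, affine):
    """Map source-node coordinates into Talairach space with a linear transformation.

    node_xyz : (n_nodes, 3) coordinates in individual head space (mm)
    affine   : (4, 4) transformation matrix estimated during normalization of the
               individual MRI to the Talairach and Tournoux standard space
    """
    homogeneous = np.hstack([node_xyz, np.ones((node_xyz.shape[0], 1))])
    return (homogeneous @ affine.T)[:, :3]
```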

Third, spatio-temporally separable independent factors in the time series were extracted from the CSD data sets using a spatial PCA [36], [71], [72]. For the PCA, we constructed a rectangular matrix that had a column for each CSD node (1222 data points) and a row for each time instant (−100 to +660 ms, 379 points) of the 3 conditions (EmoFH, EmoFN, and CatFH) and 16 subjects. The covariance matrix of the CSD was then submitted to a PCA. After extraction of the spatial factors, these factors were rotated using the VARIMAX method [73]. We analyzed factors that showed an explained variance of more than 1.5% (PCA factor score) and a maximal correlation value larger than 0.5. In addition, factors that showed poor signal quality with a large amount of noise contamination were eliminated from the analyses. For this, we analyzed each PCA factor score [72] and computed the S/N ratio as (the maximal deflection in the post-stimulus time window)/(the standard deviation of the pre-stimulus baseline). Any factor whose S/N ratio remained below 10 for all conditions was eliminated from the analyses.
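The sketch below illustrates this third step: a spatial PCA of the stacked CSD matrix, VARIMAX rotation of the retained loadings, and the S/N criterion. It is a schematic re-implementation under assumptions (the factor scores are computed as simple projections, and the additional correlation criterion of 0.5 is omitted); only the thresholds follow the Methods.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """VARIMAX rotation of PCA loadings (standard SVD-based algorithm, cf. [73])."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
        rotation = u @ vt
        new_objective = s.sum()
        if objective != 0.0 and new_objective / objective < 1.0 + tol:
            break
        objective = new_objective
    return loadings @ rotation

def spatial_pca(csd, var_threshold=0.015):
    """Spatial PCA of the stacked CSD matrix, followed by VARIMAX rotation.

    csd : (n_rows, n_nodes) matrix whose rows stack the time samples of the
          3 conditions and 16 subjects and whose 1222 columns are the source nodes.
    Returns the rotated spatial loadings of the factors explaining > 1.5 % of the
    variance and simple projection time courses used here as factor scores.
    """
    centered = csd - csd.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)          # spatial covariance (nodes x nodes)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = (eigval / eigval.sum()) > var_threshold
    loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
    rotated = varimax(loadings)
    scores = centered @ rotated                   # per-factor time courses
    return rotated, scores

def passes_snr(score, n_baseline, min_snr=10.0):
    """Keep a factor only if (peak deflection)/(baseline SD) reaches 10 in some condition."""
    return np.max(np.abs(score[n_baseline:])) / score[:n_baseline].std() >= min_snr
```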

Fourth, an ROI analysis was performed using the spatial information from the PCA factors in order to obtain time courses of the electric activity in certain brain regions. The ROIs, each of which consisted of 10 data points, were defined by thresholding as follows: To start with, we determined the data point that had the maximal magnitude in each PCA factor. Within a radius of 20 mm from that point, we then selected data points up to the 10th highest in magnitude. If a PCA factor shape involved more than two different anatomical subdivisions, we created additional ROIs starting from the local maximum within each anatomical subdivision and applied the same procedure again. For each ROI, time courses of the CSD values for the 3 conditions were calculated separately as averages across all subjects. The peak latencies were determined from the most prominent positive deflection in the averaged time courses of the EmoFH condition. If there were more than two prominent peaks, we took the earliest. The onset latency of the electric activation in each ROI was determined by scanning the magnitude of electrical activity starting from 0 ms. The latency at which the activity became stronger than five standard deviations (SD) of the pre-stimulus baseline (uncorrected p<0.00001) and continued to increase for more than 20 ms was defined as the onset latency of the electric activation. For all ROIs, condition effects were tested in five consecutive time windows (TWs) of 100 ms each, covering the interval from 50 to 550 ms. The CSD values of the solutions were averaged for each ROI, condition, and TW, and were used as dependent variables for repeated-measures analyses of variance (ANOVAs). First, a 3-way ANOVA (ROI * condition * TW in 16 subjects) was conducted in order to check the 3-way interaction. Subsequently, 2-way ANOVAs (condition * TW in 16 subjects) were conducted for each ROI. For these analyses, sphericity was checked using Mauchly's test, and degrees of freedom were adjusted with the Huynh-Feldt correction where appropriate. Paired t-tests were applied for the post hoc analyses using the Bonferroni correction. These statistical analyses were performed using SPSS ver. 21 (IBM, New York, USA).
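The ROI definition and the onset-latency criterion can be made concrete with the following sketch. It is an illustrative reading of the Methods under assumptions: the threshold is taken relative to the baseline mean, and "continued to increase for more than 20 ms" is implemented as a monotonic rise over that interval; additional ROIs for factors spanning several anatomical subdivisions would be seeded from local maxima in the same way.

```python
import numpy as np

FS = 508.6  # sampling rate (Hz)

def define_roi(factor_map, node_xyz, radius_mm=20.0, n_points=10):
    """ROI = the 10 strongest nodes within 20 mm of the factor's absolute maximum."""
    peak = int(np.argmax(np.abs(factor_map)))
    dist = np.linalg.norm(node_xyz - node_xyz[peak], axis=1)
    candidates = np.flatnonzero(dist <= radius_mm)
    strongest = candidates[np.argsort(np.abs(factor_map[candidates]))[::-1]]
    return strongest[:n_points]

def onset_latency(roi_tc, times_ms, n_baseline, n_sd=5.0, rise_ms=20.0, fs=FS):
    """First latency at which the ROI activity exceeds baseline mean + 5 SD and
    keeps increasing for more than 20 ms (one literal reading of the criterion)."""
    baseline = roi_tc[:n_baseline]
    threshold = baseline.mean() + n_sd * baseline.std()
    rise = int(round(rise_ms / 1000.0 * fs))
    for i in range(n_baseline, len(roi_tc) - rise):
        window = roi_tc[i:i + rise + 1]
        if window[0] > threshold and np.all(np.diff(window) > 0):
            return times_ms[i]
    return None
```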

In order to check for the influence of individual differences in signal strength (because a few subjects with large signals may dominate the results), the PCA-based ROI analysis was performed using both normalized and original data sets. For normalization, individual CSD data sets were divided by their medians, which were calculated across all conditions, time points, and dipole positions for each subject. The outputs of both data sets were similar, but the S/N ratio was slightly better in the median-normalized data set. Therefore, the results using median-normalized data are presented in this article. Individual activation was also analyzed in each ROI. The mean activity in a 20-ms time window around the peak latency was regarded as significant when it exceeded the averaged baseline by more than three SDs (uncorrected p<0.001).
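Both the median normalization and the individual significance criterion are simple operations; a minimal sketch (function names are ours, and the baseline-relative reading of the 3-SD criterion is an assumption):

```python
import numpy as np

def median_normalize(csd_subject):
    """Divide one subject's CSD data by its median taken across all conditions,
    time points, and dipole positions (the normalization used for the reported results)."""
    return csd_subject / np.median(csd_subject)

def individually_significant(roi_tc, peak_idx, n_baseline, fs=508.6, n_sd=3.0):
    """Individual activation criterion: the mean activity in a 20-ms window around the
    peak exceeds the baseline mean by more than 3 SD (one reading of the Methods)."""
    half = int(round(0.010 * fs))
    window = roi_tc[peak_idx - half:peak_idx + half + 1]
    baseline = roi_tc[:n_baseline]
    return window.mean() > baseline.mean() + n_sd * baseline.std()
```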

Spatial accuracy

According to a previous simulation study [36], the spatial accuracy of source estimation of our method using the L2-minimum-norm calculation is quite reliable if a source is located close to the reconstruction surface. Most of the sources in the outer cortical mantle belong to this category. This is because MEG is most sensitive to shallow electrical sources [74]–[76]. However, due to the limitations of the 2-dimensional reconstruction, there are exceptions. These are: 1) Activity which is located in the same position but at a different depth, as, for example, the insula and frontal operculum or the FFA and cerebellum, is reconstructed into the shallow position only, i.e., the frontal operculum or the cerebellum. 2) Source activity at larger distances from the MEG sensors, i.e., in the more medial parts of the basal brain, is reconstructed with significantly less spatial accuracy (error >20 mm). Therefore, an ROI in the medial temporal region might also reflect activity from the ventral and medial temporal structure including the amygdala, hippocampus, parahippocampus, and the anterior parts of the fusiform gyri.

Results

Behavioral data

During the MEG recordings, all subjects were able to perform both tasks accurately. The proportions of correct responses in the emotion recognition task were 93±4% in the EmoFH condition and 94±5% in the EmoFN condition. In the categorization task, they were 97±4% in the CatFH condition and 96±4% in the CatHG condition. There were no statistically significant condition differences between EmoFH and EmoFN, or between EmoFH and CatFH. In the separate, purely behavioral study, the mean reaction times for the EmoFH, EmoFN, CatFH, and CatHG conditions were 558±55 ms, 564±52 ms, 410±42 ms, and 413±33 ms, respectively. There were no condition differences in reaction time within each task (EmoFH vs. EmoFN or CatFH vs. CatHG). However, the mean reaction time to EmoFH was significantly longer than that to CatFH (p<0.0001).
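For readers who wish to reproduce this kind of within-subject comparison, a minimal sketch is given below. The per-subject values are synthetic stand-ins generated from the group means and SDs reported above, and the paired t-test is only one plausible choice; the paper does not state which test was used for the reaction times.

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic per-subject mean reaction times (ms) standing in for the real data of
# the 10 participants; only the group means/SDs reported above are known here.
rng = np.random.default_rng(0)
rt_emofh = rng.normal(558, 55, size=10)
rt_catfh = rng.normal(410, 42, size=10)

# A paired t-test across subjects is one plausible test of the EmoFH vs. CatFH
# reaction time difference (assumption; the original test is not specified).
t_stat, p_value = ttest_rel(rt_emofh, rt_catfh)
print(f"EmoFH vs. CatFH: t = {t_stat:.2f}, p = {p_value:.2g}")
```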

MEG data

The grand-averaged CSD maps showed spatio-temporal dynamics of brain surface electric activation while recognizing facial emotion (Fig. 2). The earliest electrical activity started at the occipital pole, and then immediately spread ventrally through the bilateral occipitotemporal regions, which were maximally activated at around 160 ms. The electric activation also showed rapid dorsal spreading. However, apparent interhemispheric differences were found, especially at latencies later than 200 ms, in that the right parietofrontal regions were more strongly activated than the left ones.

Figure 2. Spatially normalized CSD maps of the EmoFH condition averaged across 16 subjects.

Each column shows the brain surface electric activity at a given time point. Each row shows the view from the back, bottom, left, and right.

https://doi.org/10.1371/journal.pone.0088628.g002

We extracted 9 PCA factors (Fig. 3, PCA, A-I) from the CSD data of the three conditions (EmoFH, EmoFN and CatFH), which accounted for 58% of the total variance. Using the spatial information of these PCA factors, we created 14 ROIs according to anatomical subdivisions (Fig. 3, ROI; Table 1). These regions were widely distributed from posterior cortices (visual) to anterior cortices (inferior prefrontal) involving both ventral and dorsal regions. The averaged time courses of all conditions at each ROI are illustrated in Fig. 3. Each of them showed a characteristic activation pattern, although the S/N ratio of the ROIs in the ventral regions (e.g., C1, C2, D1, and D2) appeared to be lower. Compared to the pre-stimulus baseline, all ROIs showed highly significant activation (at least more than 10 times SD larger deflections at their peak) in all three conditions. Individual data also demonstrated that, in each of the 14 ROIs, at least 12 of the 16 subjects (14.4±1.6 on average) showed significant activation, and all 16 subjects showed significant activation in a subset of 6 ROIs (A1, A2, B, E, F, and H) (Table 1, Ind-act). The onset latency of the activation was shortest in the primary visual region (56 ms), and all ROIs showed initial significant deflections before 100 ms (Table 1, onset). On the other hand, peak latencies ranged widely from 132 ms (primary visual region) to around 400 ms (inferior prefrontal regions). The ventral occipitotemporal regions and the right posterior dorsal regions generally showed earlier peak latencies of around 160 to 190 ms (Table 1, peak).

Figure 3. Results of the PCA-based ROI analysis.

The PCA column shows the spatial distribution of nine factors extracted by PCA. Views of the brains are indicated in parentheses. The shape of each ROI created by these PCA factors is shown in the ROI column. C*T indicates ROIs that showed significant interaction (condition * TW), and Cn indicates ROIs that showed significant main effects of condition. Time courses of CSD at each ROI averaged across the 16 subjects are displayed on the right-hand side of the corresponding ROIs. Time courses of the 3 conditions are plotted in red (EmoFH), blue (EmoFN), and green (CatFH). The Y-axis shows median-normalized CSD values. Time windows during which statistically significant condition differences were detected by post hoc analyses are indicated by red (EmoFH > EmoFN), yellow (EmoFH > CatFH), and green (CatFH > EmoFH) bars.

https://doi.org/10.1371/journal.pone.0088628.g003

The 3-way ANOVA (14 ROIs * 3 conditions * 5 TWs) showed a significant 3-way interaction (p<0.05). Subsequent 2-way ANOVAs demonstrated that 4 ROIs (A2, B, D1, and D2) showed significant interactions (condition * TW), and 2 ROIs (C1 and C2) showed significant main effects of condition (Table 1, ANOVA). Post hoc analyses detected significant condition differences in 10 TWs of these 6 ROIs (Fig. 3, color bars and asterisks). Multiple comparisons were corrected with the Bonferroni method: p-values were multiplied by the number of condition contrasts (3) for the ROIs that showed significant interactions, and additionally by the number of TWs (5) for the ROIs that showed only significant main effects of condition. The right central and inferior frontal ROIs (A2 and B) demonstrated stronger activation in EmoFH (recognizing happy faces in the emotion task) than in CatFH (categorizing faces vs. hands) in the TWs from 350 to 550 ms. On the other hand, in the left inferior temporal and medial temporal ROIs (C1 and C2), electrical activity to EmoFH was generally greater than that to EmoFN (recognizing neutral faces in the emotion task), especially in the TWs from 350 to 550 ms. In contrast, the right inferior temporal and medial temporal ROIs (D1 and D2) were most strongly activated in CatFH in the TW from 150 to 250 ms.
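The Bonferroni adjustment described above is simply a multiplication of the post hoc p-values by the appropriate number of comparisons; a minimal sketch (the helper name bonferroni_correct is ours):

```python
import numpy as np

def bonferroni_correct(p_values, n_contrasts=3, n_time_windows=1):
    """Bonferroni adjustment of post hoc p-values: multiply by the number of condition
    contrasts (3), and additionally by the number of TWs (5) when only a main effect
    of condition was found; adjusted values are capped at 1."""
    return np.minimum(np.asarray(p_values, dtype=float) * n_contrasts * n_time_windows, 1.0)

# ROI with a significant condition * TW interaction: factor of 3
# corrected = bonferroni_correct(raw_p, n_contrasts=3)
# ROI with only a significant main effect of condition: factor of 3 * 5
# corrected = bonferroni_correct(raw_p, n_contrasts=3, n_time_windows=5)
```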

Discussion

Distributed brain regions for face perception and their time courses

The present study demonstrated that a network of distributed brain regions is activated during face processing, irrespective of the task (face categorization or emotion discrimination) or condition (within these tasks: happy vs. neutral; hand vs. face). The PCA-based ROI analysis suggests that at least 9 independent factors involving 14 different anatomical subdivisions participate in face perception and facilitate recognition of the facial emotion (Fig. 3 and Table 1). This is compatible with previous studies [58], [77] and also supportive of Haxby's model [13], [14]. The regions activated are widely distributed over the brain and include the primary visual, ventral occipitotemporal, STS, parietal, frontal, and inferior prefrontal cortices, and involve most of the expected regions. Interestingly, the spreading of the electric activation over these areas appears to occur very fast. Within 50 ms after the initial activation in the primary visual cortices, all regions started to show significant activation (Table 1, onset). This is compatible with a previous intracranial EEG study reporting that coherent neural activity between the FFA and other brain regions spreads very quickly [78]. On the other hand, the individual ROIs exhibited their activation peaks at quite different latencies, ranging from 130 to 400 ms (Table 1, peak, Fig. 3). This suggests that each brain region plays its own role at a particular stage of information processing. The present analysis detected activation in the left STS, which is thought to process the core changeable aspects of faces [13], [14]. However, we did not observe different activation patterns between happy and neutral faces in the STS region (Fig. 3, G1). The absence of condition effects for emotional discrimination suggests that information processing for the changeable aspects of faces, at this stage, cannot be directly associated with decoding specific emotional information. Another explanation could be response habituation. Kujala et al. (2009) [59] demonstrated that STS neurons produce reduced responses to the repetition of painful faces. The repetition of happy faces may also have led to a habituation of the STS neurons.

Effects of attention on facial expression

Regions that showed significant task effects between the EmoFH and CatFH conditions are considered to be involved in attentive processing of the facial emotion, because the stimulus sets of EmoFH and CatFH were identical (both happy faces) but the levels of attention to facial emotions were different. Since the task effects were mainly observed in the right hemisphere, they cannot be attributed to the contamination of the motor activity related to button responses because all subjects responded using their right hand.

The task effects were found in the right parietofrontal network, including central (Fig. 3, A2) and inferior frontal (Fig. 3, B) regions. Since it is difficult to clearly separate the activity from pre- and postcentral cortices using MEG, the ROI A2 is considered to reflect the electrical activity from the somatosensory and motor/premotor cortices.

These findings are compatible with a number of previous studies. The right somatosensory cortex is considered to play an important role in recognizing facial expressions, because lesions affecting this area cause impaired ability to assess other people's emotional states [28], [29]. Hemodynamic studies have demonstrated the involvement of the right inferior frontal cortex in emotional processing [30], [31]. It appears plausible to explain these results using the “simulation theory”, which states that we can infer another person's intention/emotion by generating the same internal representation based on the mirror neuron network [79]–[82]. Intensive investigations have revealed that, in humans, the inferior frontal, inferior parietal, and STS regions play an essential role in the mirror neuron network [37], [83], [84]. Interestingly, the spatial distribution of the PCA Factor A (Fig. 3) appears to involve the regions associated with the mirror neuron system, suggesting that they work harmoniously during the processing of facial emotion. Leslie et al. (2004) [30] suggested that the mirror neuron system in the right hemisphere plays a key role in emotional processing. The involvement of the right mirror neuron circuit has even been demonstrated in the recognition of hand signs [36]. Therefore, it is suggested that the right mirror neuron network is important for the processing of social signals.

Recently, Monroe et al. (2013) [85] reported that explicit processing of facial emotion, compared with implicit processing, elicited stronger electromagnetic activity in the left FFA at latencies of around 170 ms (M170). However, we could not detect such a condition effect at latencies earlier than 200 ms in the left ventral occipitotemporal ROIs (Fig. 3, ROI C1, C2, and F). We consider this to be due to differences in the emotion contrast. The authors found that fearful faces evoked a greater M170 compared with happy or neutral faces during explicit processing of emotion. In fact, they did not find differences between the responses to happy and neutral faces.

Regions involved in processing happy expressions

Specific brain regions responsible for the recognition of happy expressions have not previously been clearly identified, and functional neuroimaging studies have yielded conflicting results. Several studies have reported activation of the amygdala in response to happy faces [23], [86]; in contrast, other studies failed to detect any amygdala activation [25], [87].

Here, the condition contrast between EmoFH (happy) and EmoFN (neutral) was expected to help identify the brain regions and processes related to the recognition of happy expressions. Both the left inferior and the medial temporal ROIs demonstrated significantly stronger activation in EmoFH than in EmoFN in the TWs from 350 to 550 ms (Fig. 3, C1 and C2). These results might reflect amygdala activation, because they are compatible with previous MEG studies using a magnetic field tomography technique [55]–[58], [77], which consistently detected significant electromagnetic activation in the amygdala. The results are also in line with Breiter's (1996) [23] study, which demonstrated left amygdala activation by comparing hemodynamic responses to happy faces with those to neutral faces. However, whether MEG is capable of detecting amygdala activity remains an open question. MEG is most sensitive to tangentially oriented electrical activity that is a summation of the postsynaptic potentials in the apical dendrites of thousands of pyramidal neurons arranged like palisades [74], [75], whereas such a neuronal structure is not seen in the amygdala. In addition, since the spatial accuracy of the source estimation in the medial part of the basal brain is limited, the medial temporal ROI may, alternatively, reflect activity from medial temporal structures, including the amygdala, hippocampus, parahippocampus, and the anterior part of the fusiform gyri (see Methods). Therefore, in this paper, we refrain from specifying the actual sources corresponding to the activity in the left medial and lateral ROIs, but do consider these ROIs to reflect activity in the left ventral part of the brain.

Interhemispheric differences

The present study clearly demonstrates interhemispheric differences in the brain mechanisms involved in the recognition of facial expression. Interestingly, the ventral and dorsal brain regions showed opposite patterns of hemispheric predominance for different aspects of face processing. The dorsal brain regions, especially the parietofrontal network, showed marked right hemispheric predominance when the subjects paid attention to the facial expression (Fig. 3, A2 and B). This is compatible with previous studies supporting the right hemisphere hypothesis [6], [28]–[31], [59], [60]. In contrast, the left ventral regions were strongly activated by the happy expressions during recognition of the emotional value (Fig. 3, C1 and C2), whereas the right counterparts were activated during categorization (Fig. 3, D1 and D2) of happy faces versus hands. In order to analyze the hemispheric differences in these left and right ventral regions, we conducted additional 3-way ANOVAs (ROI * condition * TW) for C1 vs. D1 as well as C2 vs. D2. For both ANOVAs, we found significant main effects of ROI (p<0.001 and p<0.05, respectively), indicating that the total CSD power in the right ROIs (D1 and D2) is generally larger than that of their left counterparts (C1 and C2).

A limitation of our study is that we did not include stimuli displaying negative emotions. Although our study does not allow us to make statements regarding negative emotions, we consider our findings supportive of the valence hypothesis [61]–[64] in that the left hemisphere plays a role in the recognition of happy expressions. On the other hand, our results also demonstrated a general predominance of the right hemisphere, especially for the attentive processing of facial emotion. Thus, we consider that the two theories are not in conflict but rather reconcilable.

Conclusion

The present data indicate that reading of facial expressions activates a parietofrontal network in the right hemisphere. In contrast, recognition of the emotional value of a facial expression, at least for happy expressions, mainly engages the inferior and medial temporal regions of the left hemisphere.

Acknowledgments

We are grateful to Y. Wolff for technical assistance.

Author Contributions

Conceived and designed the experiments: AN BM TK AF. Performed the experiments: AN BM. Analyzed the data: AN BM. Contributed reagents/materials/analysis tools: AN BM TK. Wrote the paper: AN BM TK AF.

References

  1. Ekman P, Friesen WV (1971) Constants across cultures in the face and emotion. J Pers Soc Psychol 17: 124–129.
  2. Ekman P (1993) Facial expression and emotion. Am Psychol 48: 384–392.
  3. Posamentier MT, Abdi H (2003) Processing faces and facial expressions. Neuropsychol Rev 13: 113–143.
  4. Hasselmo ME, Rolls ET, Baylis GC (1989) The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey. Behav Brain Res 32: 203–218.
  5. Rolls ET (1984) Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces. Hum Neurobiol 3: 209–222.
  6. Etcoff NL (1984) Selective attention to facial identity and facial emotion. Neuropsychologia 22: 281–295.
  7. Tranel D, Damasio AR, Damasio H (1988) Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity. Neurology 38: 690–696.
  8. Bowers D, Bauer RM, Coslett HB, Heilman KM (1985) Processing of faces by patients with unilateral hemisphere lesions. 1. Dissociation between judgments of facial affect and facial identity. Brain Cogn 4: 258–272.
  9. DeKosky ST, Heilman KM, Bowers D, Valenstein E (1980) Recognition and discrimination of emotional faces and pictures. Brain Lang 9: 206–214.
  10. Ellis AW, Young AW, Flude BM (1990) Repetition priming and face processing: Priming occurs within the system that responds to the identity of a face. Q J Exp Psychol A 42: 495–512.
  11. Young AW, McWeeny KH, Hay DC, Ellis AW (1986) Matching familiar and unfamiliar faces on identity and expression. Psychol Res 48: 63–68.
  12. Bruce V, Young A (1986) Understanding face recognition. Br J Psychol 77: 305–327.
  13. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4: 223–233.
  14. Haxby JV, Hoffman EA, Gobbini MI (2002) Human neural systems for face recognition and social communication. Biol Psychiatry 51: 59–67.
  15. Clark VP, Keil K, Maisog JM, Courtney S, Ungerleider LG, et al. (1996) Functional magnetic resonance imaging of human visual cortex during face matching: a comparison with positron emission tomography. Neuroimage 4: 1–15.
  16. Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, et al. (1994) The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. J Neurosci 14: 6336–6353.
  17. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 17: 4302–4311.
  18. McCarthy G, Puce A, Gore JC, Allison T (1997) Face-specific processing in the human fusiform gyrus. J Cogn Neurosci 9: 605–610.
  19. Nakamura K, Kawashima R, Sato N, Nakamura A, Sugiura M, et al. (2000) Functional delineation of the human occipito-temporal areas related to face and scene processing. A PET study. Brain 123 (Pt 9): 1903–1912.
  20. Sergent J, Ohta S, MacDonald B (1992) Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain 115: 15–36.
  21. Adolphs R, Tranel D, Damasio H, Damasio A (1994) Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature 372: 669–672.
  22. Calder AJ, Young AW, Rowland D, Perrett DI, Hodges JR, et al. (1996) Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cogn Neuropsychol 13: 699–745.
  23. Breiter HC, Etcoff NL, Whalen PJ, Kennedy WA, Rauch SL, et al. (1996) Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17: 875–887.
  24. Morris JS, Friston KJ, Buchel C, Frith CD, Young AW, et al. (1998) A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain 121 (Pt 1): 47–57.
  25. Morris JS, Frith CD, Perrett DI, Rowland D, Young AW, et al. (1996) A differential neural response in the human amygdala to fearful and happy facial expressions. Nature 383: 812–815.
  26. Phillips ML, Young AW, Senior C, Brammer M, Andrew C, et al. (1997) A specific neural substrate for perceiving facial expressions of disgust. Nature 389: 495–498.
  27. Sprengelmeyer R, Rausch M, Eysel UT, Przuntek H (1998) Neural structures associated with recognition of facial expressions of basic emotions. Proc Biol Sci 265: 1927–1931.
  28. Adolphs R, Damasio H, Tranel D, Cooper G, Damasio AR (2000) A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J Neurosci 20: 2683–2690.
  29. Adolphs R, Damasio H, Tranel D, Damasio AR (1996) Cortical systems for the recognition of emotion in facial expressions. J Neurosci 16: 7678–7687.
  30. Leslie KR, Johnson-Frey SH, Grafton ST (2004) Functional imaging of face and hand imitation: towards a motor theory of empathy. Neuroimage 21: 601–607.
  31. Nakamura K, Kawashima R, Ito K, Sugiura M, Kato T, et al. (1999) Activation of the right inferior frontal cortex during assessment of facial emotion. J Neurophysiol 82: 1610–1614.
  32. Hornak J, Rolls ET, Wade D (1996) Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia 34: 247–261.
  33. Vuilleumier P, Armony JL, Driver J, Dolan RJ (2001) Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron 30: 829–841.
  34. Calvert GA, Bullmore ET, Brammer MJ, Campbell R, Williams SC, et al. (1997) Activation of auditory cortex during silent lipreading. Science 276: 593–596.
  35. Hoffman EA, Haxby JV (2000) Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat Neurosci 3: 80–84.
  36. Nakamura A, Maess B, Knosche TR, Gunter TC, Bach P, et al. (2004) Cooperation of different neuronal systems during hand sign recognition. Neuroimage 23: 25–34.
  37. Nishitani N, Hari R (2002) Viewing lip forms: cortical dynamics. Neuron 36: 1211–1220.
  38. Perrett DI, Smith PA, Potter DD, Mistlin AJ, Head AS, et al. (1985) Visual cells in the temporal cortex sensitive to face view and gaze direction. Proc R Soc Lond B Biol Sci 223: 293–317.
  39. Puce A, Allison T, Bentin S, Gore JC, McCarthy G (1998) Temporal cortex activation in humans viewing eye and mouth movements. J Neurosci 18: 2188–2199.
  40. Wicker B, Michel F, Henaff MA, Decety J (1998) Brain regions involved in the perception of gaze: a PET study. Neuroimage 8: 221–227.
  41. Allison T, Puce A, McCarthy G (2000) Social perception from visual cues: role of the STS region. Trends Cogn Sci 4: 267–278.
  42. Fusar-Poli P, Placentino A, Carletti F, Landi P, Allen P, et al. (2009) Functional atlas of emotional faces processing: a voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. J Psychiatry Neurosci 34: 418–432.
  43. Allison T, Ginter H, McCarthy G, Nobre AC, Puce A, et al. (1994) Face recognition in human extrastriate cortex. J Neurophysiol 71: 821–825.
  44. Botzel K, Schulze S, Stodieck SR (1995) Scalp topography and analysis of intracranial sources of face-evoked potentials. Exp Brain Res 104: 135–143.
  45. Nakamura A, Yamada T, Abe Y, Nakamura K, Sato N, et al. (2001) Age-related changes in brain neuromagnetic responses to face perception in humans. Neurosci Lett 312: 13–16.
  46. Sams M, Hietanen JK, Hari R, Ilmoniemi RJ, Lounasmaa OV (1997) Face-specific responses from the human inferior occipito-temporal cortex. Neuroscience 77: 49–55.
  47. Sato N, Nakamura K, Nakamura A, Sugiura M, Ito K, et al. (1999) Different time course between scene processing and face processing: a MEG study. Neuroreport 10: 3633–3637.
  48. Watanabe S, Kakigi R, Koyama S, Kirino E (1999) Human face perception traced by magneto- and electro-encephalography. Brain Res Cogn Brain Res 8: 125–142.
  49. Eimer M, Holmes A (2002) An ERP study on the time course of emotional face processing. Neuroreport 13: 427–431.
  50. Herrmann MJ, Aranda D, Ellgring H, Mueller TJ, Strik WK, et al. (2002) Face-specific event-related potential in humans is independent from facial expression. Int J Psychophysiol 45: 241–244.
  51. Holmes A, Vuilleumier P, Eimer M (2003) The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res Cogn Brain Res 16: 174–184.
  52. Krolak-Salmon P, Fischer C, Vighetto A, Mauguiere F (2001) Processing of facial emotional expression: spatio-temporal data as assessed by scalp event-related potentials. Eur J Neurosci 13: 987–994.
  53. Munte TF, Brack M, Grootheer O, Wieringa BM, Matzke M, et al. (1998) Brain potentials reveal the timing of face identity and expression judgments. Neurosci Res 30: 25–34.
  54. Streit M, Wolwer W, Brinkmeyer J, Ihl R, Gaebel W (2000) Electrophysiological correlates of emotional and structural face processing in humans. Neurosci Lett 278: 13–16.
  55. Ioannides AA, Liu LC, Kwapien J, Drozdz S, Streit M (2000) Coupling of regional activations in a human brain during an object and face affect recognition task. Hum Brain Mapp 11: 77–92.
  56. Ioannides AA, Poghosyan V, Dammers J, Streit M (2004) Real-time neural activity and connectivity in healthy individuals and schizophrenia patients. Neuroimage 23: 473–482.
  57. Liu L, Ioannides AA, Streit M (1999) Single trial analysis of neurophysiological correlates of the recognition of complex objects and facial expressions of emotion. Brain Topogr 11: 291–303.
  58. Streit M, Dammers J, Simsek-Kraues S, Brinkmeyer J, Wolwer W, et al. (2003) Time course of regional brain activations during facial emotion recognition in humans. Neurosci Lett 342: 101–104.
  59. Kujala MV, Tanskanen T, Parkkonen L, Hari R (2009) Facial expressions of pain modulate observer's long-latency responses in superior temporal sulcus. Hum Brain Mapp 30: 3910–3923.
  60. Borod JC, Cicero BA, Obler LK, Welkowitz J, Erhan HM, et al. (1998) Right hemisphere emotional perception: evidence across multiple channels. Neuropsychology 12: 446–458.
  61. Ahern GL, Schwartz GE (1979) Differential lateralization for positive versus negative emotion. Neuropsychologia 17: 693–698.
  62. Wedding D, Stalans L (1985) Hemispheric differences in the perception of positive and negative faces. Int J Neurosci 27: 277–281.
  63. Canli T, Desmond JE, Zhao Z, Glover G, Gabrieli JD (1998) Hemispheric asymmetry for emotional stimuli detected with fMRI. Neuroreport 9: 3233–3239.
  64. Graham R, Cabeza R (2001) Event-related potentials of recognizing happy and neutral faces. Neuroreport 12: 245–248.
  65. Martinez AM, Benavente R (1998) The AR Face Database. CVC Technical Report No. 24.
  66. Fuchs M, Wagner M, Kohler T, Wischmann HA (1999) Linear and nonlinear current density reconstructions. J Clin Neurophysiol 16: 267–295.
  67. Hamalainen MS, Ilmoniemi RJ (1994) Interpreting magnetic fields of the brain: minimum norm estimates. Med Biol Eng Comput 32: 35–42.
  68. Knosche T, Praamstra P, Stegeman D, Peters M (1996) Linear estimation discriminates midline sources and a motor cortex contribution to the readiness potential. Electroencephalogr Clin Neurophysiol 99: 183–190.
  69. Knösche TR (1997) Solutions of the Neuroelectromagnetic Inverse Problem: An Evaluation Study.
  70. Talairach J, Tournoux P (1988) Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Thieme, Stuttgart.
  71. Dien J (1998) Addressing misallocation of variance in principal components analysis of event-related potentials. Brain Topogr 11: 43–55.
  72. Maess B, Friederici AD, Damian M, Meyer AS, Levelt WJ (2002) Semantic category interference in overt picture naming: sharpening current density localization by PCA. J Cogn Neurosci 14: 455–462.
  73. Kaiser HF (1958) The varimax criterion for analytic rotation in factor analysis. Psychometrika 23: 187–200.
  74. Ahlfors SP, Han J, Belliveau JW, Hamalainen MS (2010) Sensitivity of MEG and EEG to source orientation. Brain Topogr 23: 227–232.
  75. Hamalainen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65: 413–497.
  76. Hillebrand A, Barnes GR (2002) A quantitative assessment of the sensitivity of whole-head MEG to activity in the adult human cortex. Neuroimage 16: 638–650.
  77. Streit M, Ioannides AA, Liu L, Wolwer W, Dammers J, et al. (1999) Neurophysiological correlates of the recognition of facial expressions of emotion as revealed by magnetoencephalography. Brain Res Cogn Brain Res 7: 481–491.
  78. Klopp J, Marinkovic K, Chauvel P, Nenov V, Halgren E (2000) Early widespread cortical distribution of coherent fusiform face selective activity. Hum Brain Mapp 11: 286–293.
  79. Adolphs R (1999) Social cognition and the human brain. Trends Cogn Sci 3: 469–479.
  80. Decety J, Grezes J (1999) Neural mechanisms subserving the perception of human actions. Trends Cogn Sci 3: 172–178.
  81. Rizzolatti G, Fadiga L, Gallese V, Fogassi L (1996) Premotor cortex and the recognition of motor actions. Brain Res Cogn Brain Res 3: 131–141.
  82. Rizzolatti G, Fogassi L, Gallese V (2001) Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2: 661–670.
  83. Decety J, Grezes J, Costes N, Perani D, Jeannerod M, et al. (1997) Brain activity during observation of actions. Influence of action content and subject's strategy. Brain 120 (Pt 10): 1763–1777.
  84. Iacoboni M, Koski LM, Brass M, Bekkering H, Woods RP, et al. (2001) Reafferent copies of imitated actions in the right superior temporal cortex. Proc Natl Acad Sci U S A 98: 13995–13999.
  85. Monroe JF, Griffin M, Pinkham A, Loughead J, Gur RC, et al. (2013) The fusiform response to faces: explicit versus implicit processing of emotion. Hum Brain Mapp 34: 1–11.
  86. Yang TT, Menon V, Eliez S, Blasey C, White CD, et al. (2002) Amygdalar activation associated with positive and negative facial expressions. Neuroreport 13: 1737–1741.
  87. Dolan RJ, Fletcher P, Morris J, Kapur N, Deakin JF, et al. (1996) Neural activation during covert processing of positive emotional facial expressions. Neuroimage 4: 194–200.