The authors have declared that no competing interests exist.
Conceived and designed the experiments: TK AP SA. Performed the experiments: AP. Analyzed the data: AP KM. Contributed reagents/materials/analysis tools: AP MSR. Wrote the paper: AP. Revision of the manuscript: SA MSR KM TK.
Imitation of facial expressions engages the putative human mirror neuron system as well as the insula and the amygdala as parts of the limbic system. The specific function of the latter two regions during emotional actions is still under debate. The current study investigated brain responses during imitation of positive in comparison to non-emotional facial expressions. Differences in activation of the amygdala and insula were additionally examined during observation and execution of facial expressions. Participants imitated, executed and observed happy and non-emotional facial expressions, as well as neutral faces. During imitation, the happy compared to the non-emotional condition elicited stronger right-hemispheric activation in the right anterior insula and the right amygdala, in addition to the pre-supplementary motor area, the middle temporal gyrus and the inferior frontal gyrus. Region-of-interest analyses revealed that (i) the right insula was more strongly recruited by imitation and execution than by observation of facial expressions, (ii) the insula was more strongly activated by happy than by non-emotional facial expressions during observation and imitation, and (iii) the activation difference in the right amygdala between happy and non-emotional facial expressions was larger during imitation and execution than during mere observation. We suggest that the insula and the amygdala contribute specifically to the happy emotional connotation of the facial expressions, depending on the task. The pattern of insula activity might reflect increased bodily awareness during active execution compared to passive observation, and during visual processing of happy compared to non-emotional facial expressions. The amygdala activation specific to the happy facial expression during the motor tasks, but not in the observation condition, might reflect increased autonomic activity or feedback from the facial muscles to the amygdala.
Imitation is an ability that encompasses a wide range of different phenomena. Infants imitate almost from birth on
In previous studies, to examine the neural basis of imitation, participants were explicitly asked to reproduce facial expressions (e.g.
Beyond the putative human mirror neuron system and motor-, as well as sensorimotor cortices, the pre-supplementary motor area (pre-SMA) was reported to be involved in imitation of emotional facial expressions
In sum, both regions seem to be involved in imitation, perception and execution of emotional facial expressions, but results differ across studies, possibly due to differences in task (observation with different instructions, execution, (delayed) imitation), stimulus material (pictures versus video clips of different lengths), control condition (fixation cross, neutral face without movement, non-emotional facial expression) or analysis (e.g. conjunction of two t-maps versus separate reporting of two t-tests, different thresholds).
We aimed to examine brain regions with increased BOLD response during imitation of positive facial affect. To control for effects of motion, we compared the happy facial expression with a non-emotional facial expression. We hypothesized that the amygdala and insula would be involved in affective imitation. In subsequent analyses, we intended to compare the BOLD responses of the target regions during perception and execution of facial expressions as well. We focused on the insula and the amygdala, because these structures were assumed to be central for the “extended” MNS
The study was approved by the local Ethics Committee (Medical Faculty of the RWTH Aachen University; code: EK 099/08) based on the declaration of Helsinki and all participants gave written informed consent prior to participation. They were paid for their participation.
Thirty-two healthy, right-handed
We used video clips depicting facial expressions as dynamic stimulus material. The clips were recorded in an in-house media centre with a commercial video camera (Sony DVX 2000, spatial resolution 720×576 pixels). The video clips depicted 24 actors executing happy facial expressions (smile), non-emotional facial expressions (lip protrusion), or neutral faces (relaxed face without motion). Each video clip began with the actor having a neutral face. After one second, the actor began to produce a facial expression. The video clip ended again with a neutral face (see
In the first and second run (A) participants observed and executed facial expressions (when an actor or scrambled faces were displayed, respectively); in the third and fourth run (B) participants were to imitate facial expressions if an actor was presented and, again, to execute facial expressions in case of scrambled video clips. Within one block, four video clips of five seconds each were shown. Abbreviations: A) N_OBS: non-emotional observation, E_OBS: emotional observation, C_EXP: control execution, B) E_EXP: emotional execution, C_IMI: control imitation, E_IMI: emotional imitation.
Before the fMRI experiment participants were told that they would see either video clips depicting actors showing a facial expression (smile, lip protrusion, neutral face), or scrambled faces. For run 1 and 2, they were instructed to (i) attentively
Accordingly, the imitation task was presented separately from the observation task in the last two runs of the experiment and participants did not know about the imitation condition until run 3. This order was chosen to prevent participants from unintentionally imitating already during the observation task.
The fMRI experiment was designed as a two factorial design with the factors task (
The videos were presented in blocks of four 5 s video clips. Each run consisted of 18 blocks of video clips (3 blocks per condition). The task order within the single runs was pseudo-randomized across participants. A low-level baseline (fixation cross) was presented between the blocks for 6.4 seconds (one block plus low-level baseline corresponded to 12 scans). After every sixth block an additional baseline (30.8 seconds, corresponding to 14 scans) was included to allow the hemodynamic response to return to baseline. Altogether, fMRI scanning lasted approximately 45 min.
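The scan counts above can be checked against the TR of 2200 ms reported under data acquisition. The following is a minimal sketch of that arithmetic; all values are taken from the text.

```python
# Sanity check of the block timing against the TR (all values from the text).
TR = 2.2                  # repetition time in seconds (2200 ms)
clip_duration = 5.0       # each video clip lasts 5 s
clips_per_block = 4
short_baseline = 6.4      # low-level baseline between blocks, in seconds
long_baseline = 30.8      # additional baseline after every sixth block

# One block of four clips plus the short baseline should span 12 scans.
block_plus_baseline = clips_per_block * clip_duration + short_baseline
print(block_plus_baseline / TR)   # 26.4 s / 2.2 s per scan = 12 scans

# The longer baseline should span 14 scans.
print(long_baseline / TR)         # 30.8 s / 2.2 s per scan = 14 scans
```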
The faces of the participants were video-taped during the entire fMRI experiment to ensure that participants followed the task instructions. The video tapes were judged online and after the experiment by a certified Facial Action Coding System rater (FACS;
After fMRI scanning, participants completed a post-scanning rating of their subjective feeling of happiness during all conditions. The rating was given on a 7-point Likert scale (1 = ‘not at all’ to 7 = ‘very strong’). A paired t-test was calculated to test whether participants felt happier in the emotional than in the non-emotional imitation condition.
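The paired t-test on the happiness ratings can be sketched as follows using only the standard library. The rating values below are invented for illustration and are not the study's data.

```python
# Minimal sketch of a paired t-test on Likert happiness ratings.
# The rating vectors are hypothetical, not the study's actual data.
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Return (t, df) for a paired t-test of two matched samples."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

happy_ratings   = [6, 5, 7, 6, 5, 6]   # hypothetical per-participant ratings
nonemo_ratings  = [3, 4, 3, 2, 4, 3]   # hypothetical per-participant ratings
t, df = paired_t(happy_ratings, nonemo_ratings)
print(t, df)   # positive t: happy imitation rated as happier
```

With real data one would typically call `scipy.stats.ttest_rel` instead; the hand-rolled version just makes the computation explicit.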
Functional T2* weighted images were obtained with a Siemens 3-Tesla MR-scanner using Echo planar imaging (EPI) (TR = 2200 ms, TE = 30 ms, flip angle 90°, FoV 224 mm, base resolution 64, voxel size 3.3×3.3 mm2, 36 slices with slice thickness 3.5 mm, and distance factor 10%).
High-resolution T1-weighted anatomical 3-D Magnetization Prepared Rapid Gradient Echo (MP-RAGE) images (TR 1900 ms, TE 2.52 ms, TI 900 ms, flip angle 9°, FoV 250 mm, 256×256 matrix, 176 slices per slab) were acquired at the end of the experimental runs.
Data processing and statistic analyses (see also
Data were subsequently analyzed by a two-level approach. Using a general linear model (GLM), each experimental condition was modeled on the single-subject level with a separate regressor convolved with a canonical hemodynamic response function and its first temporal derivative. The following conditions were included:
The low-level baseline between the experimental conditions (fixation cross) was implicitly modeled as baseline. Realignment parameters were included as six additional regressors in the GLM as head-motion nuisance covariates. The execution task was present in all four runs and therefore occurred twice as often as the other tasks. To avoid effects of habituation and repetition, only the execution blocks of the first and second run were included in the second-level analysis, resulting in an equal number of blocks per experimental condition.
Parameter estimates for each voxel were calculated using maximum likelihood estimation and corrected for non-sphericity. First-level contrasts were fed into a flexible factorial second level group analysis with the factors condition and subjects. The factor condition was modeled as fixed effect and encompassed all 9 levels of the single-subject analysis. The subjects-factor was modeled as random effect.
The first six conditions of the experiment are described in detail in Kircher et al.
First, to show the reliability of our experimental design, we examined activations during imitation of happy and non-emotional facial expressions separately. Accordingly, two contrasts were computed: imitation of happy facial expressions was contrasted with the high-level baseline (H_IMI>N_IMI), and likewise imitation of non-emotional facial expressions was contrasted with the high-level baseline (NE_IMI>N_IMI). We expected to replicate the activation patterns of previous studies examining the imitation of facial expressions.
Next, we tested for activations specific for the imitation of happy facial expressions, namely, activation during imitation of the happy facial expressions contrasted with the activation during imitation of the non-emotional facial expressions (H_IMI>NE_IMI). All whole brain analyses at group level are reported as significant at a threshold of p<.05, FWE corrected (k>15). Brain structures were labeled using the Anatomy Toolbox v 1.6
To identify differences in emotional experience during the imitation tasks, subjective ratings of happiness were compared using a paired t-test. Participants reported a stronger feeling of happiness during imitation of emotional facial expressions (
Imitation of both facial expressions revealed a distributed network mainly including frontal and parietal cortices (see
The analysis was FWE-corrected at a threshold of p<0.05 (k>15). Both facial expressions involved a widespread bilateral network including (pre-)motor areas, the insula, temporal areas, the brain stem, and the cerebellum.
Contrast | MNI x | MNI y | MNI z | k | t-value | p-value | Side | Region
Imitation: Happy>Neutral | 54 | −8 | 36 | 23044 | 12.48 | <0.001 | R | Postcentral gyrus
| −46 | −12 | 36 | 11240 | 12.55 | <0.001 | L | Postcentral gyrus
| 3 | 2 | 57 | 5582 | 13.58 | <0.001 | B | pre-SMA
| −24 | −62 | −22 | 3821 | 10.94 | <0.001 | B | Cerebellum (lobule IV, VIIa)
| −3 | −30 | −9 | 150 | 6.06 | <0.001 | L | Brain stem
| 48 | −44 | 54 | 143 | 5.92 | 0.001 | R | IPL (PFm)
| 50 | −4 | −16 | 90 | 5.51 | 0.003 | R | MTG
| −8 | −12 | −21 | 27 | 5.21 | 0.013 | L | Brain stem
| 44 | 26 | 3 | 24 | 5.03 | 0.015 | R | IFG pars triangularis (BA 45)
Imitation: Non-Emotional>Neutral | 57 | −6 | 36 | 9787 | 16.95 | <0.001 | R | Postcentral gyrus (BA 3)
| −54 | −9 | 38 | 9008 | 16.30 | <0.001 | L | Postcentral gyrus (BA 3)
| 2 | −2 | 57 | 3460 | 10.66 | <0.001 | R | SMA
| −50 | −68 | 0 | 2983 | 9.31 | <0.001 | L | MTG
| −22 | −62 | −22 | 2098 | 11.78 | <0.001 | L | Cerebellum VI
| 22 | −63 | −22 | 959 | 9.72 | <0.001 | R | Cerebellum VI
| 46 | −58 | 0 | 937 | 7.05 | <0.001 | R | Posterior MTG
| −14 | −12 | −3 | 474 | 9.29 | <0.001 | L | Thalamus
| 52 | −38 | 0 | 104 | 5.92 | 0.001 | R | MTG
| 9 | 12 | 34 | 35 | 5.35 | 0.007 | R | MCC
Imitation of happy in contrast to non-emotional facial expressions (H_IMI>NE_IMI) was associated with right hemispheric activation in the pre-SMA, the right insula extending to the IFG op (BA 44) and the pars triangularis (BA 45), and in the right amygdala extending to the parahippocampal gyrus. Furthermore, increased activation was found in the right middle temporal gyrus (MTG) extending to the inferior temporal gyrus and to the visual area V5, in the right thalamus, in the right pallidum and in the right cuneus (see
Significant right-hemispheric activation differences were found, amongst others, in the pre-SMA, the insula, the amygdala, and the MTG (A). Average parameter estimates were extracted from the main peak and are depicted with 90% confidence intervals (CI). The emotional conditions (E) are colored red, the non-emotional (N) rose, and the control (neutral face without movement; C) grey. Significant post-hoc t-tests (p<0.0055 with Bonferroni correction) are marked with an asterisk (B).
MNI x | MNI y | MNI z | k | t-value | p-value | Side | Region
50 | −30 | −15 | 1206 | 7.54 | <0.001 | R | MTG |
3 | 10 | 57 | 625 | 7.26 | <0.001 | R | pre-SMA |
39 | −45 | 6 | 371 | 6.27 | <0.001 | R | MTG |
46 | 8 | 2 | 366 | 6.42 | <0.001 | R | Insula |
24 | −26 | 3 | 301 | 6.48 | <0.001 | R | Thalamus |
29 | −2 | −30 | 149 | 6.06 | <0.001 | R | Amygdala |
15 | −2 | −3 | 104 | 6.26 | <0.001 | R | Pallidum |
54 | −51 | 8 | 75 | 5.30 | 0.009 | R | MTG |
18 | −75 | 20 | 65 | 5.30 | 0.009 | R | Cuneus |
The table shows MNI coordinates of the main peaks of the significant clusters, the number of significant voxels (k), FWE-corrected p-values<0.05, k>15, the hemisphere (L = left, R = right, B = bilateral), and the name of the region in which the main peak was localized. Abbreviations: MTG: middle temporal gyrus, pre-SMA: pre-supplementary motor area.
To compare the activation of the right anterior insula across the six conditions of interest, contrast estimates were extracted at the peak voxel of the whole-brain analysis H_IMI>NE_IMI (MNI [46 8 2]) from the normalized and 8 mm smoothed images and entered into a repeated-measures ANOVA with the factors task (observation, execution, imitation) and facial expression (happy, non-emotional). This analysis revealed a significant main effect for the factor facial expression,
Condition | Insula M | Insula SD | Amygdala M | Amygdala SD
Happy_Observation | 0.17 | 0.73 | 0.45 | 0.3 |
Happy_Execution | 1.45 | 0.96 | 0.52 | 0.64 |
Happy_Imitation | 1.18 | 0.53 | 0.57 | 0.53 |
Non-Emotional_Observation | −0.37 | 0.68 | 0.32 | 0.39 |
Non-Emotional_Execution | 0.82 | 0.80 | −0.01 | 0.51 |
Non-Emotional_Imitation | 0.39 | 0.72 | 0.09 | 0.42 |
Post Hoc Tests Insula | t-value | df | p-value
Happy: Imitation>Observation | −5.85 | 26 | <0.001 |
Non-Emotional: Imitation>Observation | −3.449 | 26 | 0.002 |
Happy: Execution>Observation | −5.05 | 26 | <0.001 |
Non-Emotional: Execution>Observation | −5.487 | 26 | <0.001 |
Non-Emotional: Execution>Imitation | 3.208 | 26 | 0.004 |
Observation: Happy>Non-Emotional | 4.135 | 26 | <0.001 |
Imitation: Happy>Non-Emotional | 5.625 | 26 | <0.001 |
Post Hoc Test Amygdala | t-value | df | p-value
Imitation: Happy>Non-Emotional | 5.167 | 26 | <0.001 |
Execution: Happy>Non-Emotional | 4.821 | 26 | <0.001 |
Non-Emotional: Observation>Execution | 3.100 | 26 | 0.005 |
Top: The table shows means (M) and standard deviations (SD) of contrast estimates extracted from the main peak activation of the insula (MNI [46 8 2]) and the amygdala (MNI [28 −2 −30]) in the comparison Emotional_Imitation>Non-Emotional_Imitation. Bottom: The table shows t-values, degrees of freedom (df), and p-values of post-hoc t-tests for both regions of interest (p<0.0055 with Bonferroni correction).
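The post-hoc threshold of p<0.0055 is consistent with a Bonferroni correction over nine comparisons per region (0.05/9 ≈ 0.0056); the count of nine is our assumption, as the text does not state it. A minimal sketch of this correction applied to a few of the reported p-values:

```python
# Bonferroni-corrected post-hoc threshold, assuming nine comparisons per
# region of interest (the number nine is an assumption, not stated in the text).
alpha = 0.05
n_comparisons = 9                       # assumed
threshold = alpha / n_comparisons       # ~0.00556, reported rounded as 0.0055

# A few p-values from the post-hoc tables, checked against the threshold.
posthoc_p = {
    "Imitation: Happy>Non-Emotional (amygdala)": 0.001,
    "Non-Emotional: Observation>Execution (amygdala)": 0.005,
    "Non-Emotional: Execution>Imitation (insula)": 0.004,
}
significant = {name: p < threshold for name, p in posthoc_p.items()}
print(threshold, significant)
```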
For the amygdala, contrast estimates were extracted from the same contrast (H_IMI>NE_IMI) at the peak voxel of the whole-brain analysis (MNI [28 −2 −30]) from the normalized and 8 mm smoothed images and entered into a repeated-measures ANOVA with the factors task (observation, execution, imitation) and facial expression (emotional, non-emotional). The repeated-measures ANOVA revealed a significant main effect for the factor facial expression,
No other comparison survived the Bonferroni correction (but there were two trends: Insula activation tended to be stronger during execution of the happy, compared with execution of the non-emotional facial expressions,
In summary, analyses of the fMRI data revealed the following results (see
We examined neural correlates specific for happy in comparison to non-emotional facial expressions during imitation. In line with previous research, happy and non-emotional facial expressions activated a similar bilateral imitation network encompassing (pre-)motor, somatosensory, and superior and middle temporal cortices. Imitation of happy facial expressions contrasted with imitation of non-emotional facial expressions revealed right hemispheric activity in the pre-SMA, insula extending to premotor cortices, the amygdala and the medial temporal cortex.
Involvement of the insula in emotional tasks has been shown in several studies (for reviews see
We found significantly stronger insula activation during both imitation and observation of happy in comparison to the non-emotional facial expressions (with a trend concerning the execution of facial expressions). The awareness of bodily states might be stronger, when an emotionally salient stimulus is presented (here the actor depicting a happy facial expression). Summing up our results, insula activation was significantly stronger during motor tasks and when an emotionally salient stimulus was presented. Both effects are in line with Craig
In a previous magnetoencephalography study, Chen and colleagues
Interestingly, activation of the insula specific for happy affect extended to the IFG op (BA 44) and pars triangularis (BA45). Several studies have found activation in IFG op during imitation of hand-movements (e.g.
The amygdala, a central part of the emotion-circuitry (for a review see
While the difference between the emotional and non-emotional facial expressions was small during observation, we found an affect-specific increase of right amygdala activation during imitation and execution. Executing emotional facial expressions has been claimed to increase autonomic arousal and emotional experience
In another line of evidence, a recent study by Hennenlotter and colleagues
Finally, it should also be mentioned that affect-specific activation of amygdala during the emotional motor conditions might also be due to faster habituation of the amygdala during observation of facial expressions. While amygdala response has been shown to habituate during observation of emotional facial expressions (e.g.
We found bilateral activations during imitation of positive and non-emotional facial expressions. Although possible lateralization during imitation has been debated controversially
When imitation of happy facial expressions was compared with imitation of non-emotional facial expressions, we found significant activation peaks in the right hemisphere only. Concerning the insula, previous studies on imitation, execution or observation of emotional facial expressions report controversial results. Carr et al.
Previous imaging studies examining emotion processing reported increased amygdala activation more often in the left than in the right hemisphere (for a meta analysis see
Apart from these results, recent research suggests an influence of acquisition parameters of EPI sequences on laterality of amygdala activation. Mathiak and colleagues
A limitation of this study is the use of a blocked design as we cannot exclude bias due to differential habituation of the amygdala response in different tasks. Furthermore, we compared a happy facial expression (a smile) to a non-emotional facial expression (lip protrusion). We tried to choose similar movements to compare happy with non-emotional facial expressions (in both expressions the lips are moved). Nevertheless, the amount of motion or differential usage of facial muscles to produce the two facial expressions may be possible confounds that are not excluded by our study design. Likewise, we cannot generalize the results to other emotions. Although both structures have been shown to be involved during observation (e.g.
Our findings support the notion that insula and amygdala activation is increased in addition to activation of the putative human mirror neuron system during the imitation of positive facial expressions. We found increased activation of the right insula during imitation and observation of positive facial expressions, which may be related to enhanced emotional awareness during the presentation of emotionally salient stimuli. In contrast, the increase of right amygdala activation for happy in comparison to non-emotional facial expressions was larger during imitation and execution than during observation. This might be due to an enhanced autonomic reaction, to feedback from the facial muscles to the amygdala, or to differences in habituation of the amygdala response. In addition, we found that the insula, but not the amygdala, showed enhanced activity during execution and imitation compared with observation of happy facial expressions.
We thank all participants, Sebastian Kath for help with preparing the stimulus material, Simon Eickhoff and Thilo Kellermann for statistical advice, Gina Joue for editing the manuscript, and Cordula Kemper for assistance during scanning.