
Blocking facial mimicry affects recognition of facial and body expressions

  • Sara Borgomaneri ,

    Roles Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing

    alessio.avenanti@unibo.it (AA); sara.borgomaneri@unibo.it (SB)

    Affiliations Centro studi e ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum – Università di Bologna, Campus di Cesena, Cesena, Italy, IRCCS Fondazione Santa Lucia, Rome, Italy

  • Corinna Bolloni,

    Roles Formal analysis, Investigation, Methodology, Resources, Software, Writing – review & editing

    Affiliation Centro studi e ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum – Università di Bologna, Campus di Cesena, Cesena, Italy

  • Paola Sessa,

    Roles Supervision, Writing – original draft, Writing – review & editing

    Affiliations Dipartimento di Psicologia dello Sviluppo e della Socializzazione, Università degli studi di Padova, Padova, Italy, Padova Neuroscience Center (PNC), Padova, Italy

  • Alessio Avenanti

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

    alessio.avenanti@unibo.it (AA); sara.borgomaneri@unibo.it (SB)

    Affiliations Centro studi e ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum – Università di Bologna, Campus di Cesena, Cesena, Italy, Centro de Investigación en Neuropsicología y Neurociencias Cognitivas, Universidad Católica del Maule, Talca, Chile


Abstract

Facial mimicry is commonly defined as the tendency to imitate—at a sub-threshold level—facial expressions of other individuals. Numerous studies support a role of facial mimicry in recognizing others’ emotions. However, the underlying functional mechanism is unclear. A prominent hypothesis considers facial mimicry as based on an action-perception loop, leading to the prediction that facial mimicry should be observed only when processing others’ facial expressions. Nevertheless, previous studies have also detected facial mimicry during observation of emotional bodily expressions. An emergent alternative hypothesis is that facial mimicry overtly reflects the simulation of an “emotion”, rather than the reproduction of a specific observed motor pattern. In the present study, we tested whether blocking mimicry (“Bite”) on the lower face disrupted recognition of happy expressions conveyed by either facial or body expressions. In Experiment 1, we tested participants’ ability to identify happy, fearful and neutral expressions in the Bite condition and in two control conditions. In Experiment 2, to ensure that such a manipulation selectively affects emotion recognition, we tested participants’ ability to recognize emotional expressions, as well as the actors’ gender, under the Bite condition and a control condition. Finally, we investigated the relationship between dispositional empathy and emotion recognition under the condition of blocked mimicry. Our findings demonstrated that blocking mimicry on the lower face hindered recognition of happy facial and body expressions, while the recognition of neutral and fearful expressions was not affected by the mimicry manipulation. The mimicry manipulation did not affect the gender discrimination task. Furthermore, the impairment of happy expression recognition correlated with empathic traits. These results support the role of facial mimicry in emotion recognition and suggest that facial mimicry reflects a global sensorimotor simulation of others’ emotions rather than a muscle-specific reproduction of an observed motor expression.

Introduction

Humans often unconsciously and unintentionally imitate or mimic others’ postures, prosody and facial expressions (e.g., [1]). Facial mimicry is the tendency to subtly imitate others’ facial expressions, and has been a subject of great interest among scholars during the last twenty years [2–6]. Facial mimicry reactions are, in most cases, undetectable to the human eye, and sensitive techniques are required to quantify facial muscle contractions. When electromyography (EMG) is used, these reactions are generally observed as congruent muscle responses to the observed facial expressions within one second after exposure [7–9].

The main aim of the present study was to investigate the causal involvement of facial mimicry in the recognition of facial and body emotional expressions. As detailed in the following paragraphs, such evidence would more clearly define the mechanisms underlying facial mimicry and its role in emotion recognition.

Facial mimicry has been considered a “social glue” [10], as it can generate a feeling of similarity which in turn favors prosocial behavior [11] and positive relationships [4,12]. Studies have shown a link between mimicry and the recruitment of the reward circuit, thus emphasizing the “reward value of the act of mimicking” [13]. It is, therefore, not surprising that a relationship between facial mimicry and levels of individual empathy or susceptibility to emotional contagion has been demonstrated [1,14–18], although the robustness of this relationship is still not clear (e.g., [19]). As a substantial line of research has argued, all these effects of facial mimicry could relate to its facilitating role in recognizing others’ expressions of emotions [20,21].

Despite the rich literature on the existence of the phenomenon and its possible functions and relevance in social contexts, the underlying psychological and neural mechanisms are less clear. One of the most influential hypotheses—also termed the “matched motor hypothesis” (e.g., [4])—takes into account an action-perception loop [3,22,23], and holds that perception and action are tightly coupled, such that re-enacting a perceived action (including a facial expression) supports its perceptual recognition [24–26]. Within this theoretical framework, facial mimicry has been conceived as linked to an internal simulation process that the observer would spontaneously put in place during the observation of others’ facial expressions and that, ultimately, would support their recognition [27–31]. In line with this proposal, neuroimaging studies have shown that (pre)motor and somatosensory regions representing the face are active during the perception of emotional faces [32–35], and neural activity in these regions correlates with the degree of facial mimicry [36–38]. Moreover, damage to, or transcranial magnetic stimulation (TMS) interference with, the face representations in motor and somatosensory areas disrupts facial mimicry and impairs the recognition and interpretation of emotional expressions [39–44] (see also [45]). Furthermore, sensorimotor simulation would in turn affect theory of mind and emotion-related regions including the amygdala and the insula [31], especially for dynamic emotional stimuli [46,47].

Although the general notion of simulation seems to be accepted by a large group of scholars, there is no agreement on how simulation processes are neurally and computationally implemented, and to what extent facial mimicry is a fundamental “computational step” for emotion recognition [28,31]. Importantly, however, mounting evidence suggests that mimicry plays a causal role in the recognition of others’ emotional expressions [48,49]. In particular, several studies have demonstrated that altering spontaneous facial mimicry affects emotion recognition [20,48–52], modulates electrophysiological markers of face representations in visual working memory [53] and influences affective brain networks recruited during emotion perception [54,55]. Relevant to the present work is the evidence that blocking mimicry on either the lower or the upper half of the face induces specific effects on emotion recognition. For example, Oberman and colleagues [48] asked participants to bite a pen to block mimicry on the lower half of the observer’s face. These authors found that the bite manipulation specifically impaired the ability to recognize happy faces and, to some extent, disgusted faces, whereas it did not affect recognition of fearful or sad expressions. Ponari and colleagues [49] replicated the findings for happiness and disgust, and further demonstrated that blocking mimicry of the upper face resulted in poorer recognition of anger. No effects were reported for surprise and sadness, whereas blocking mimicry of both the upper and the lower face reduced recognition of fear. Two further studies showed that blocking mimicry on the lower face reduced the ability to distinguish authentic from false smiling expressions [56,57] (see also [51]). Taken together, these studies show a close correspondence between the facial muscles being blocked (e.g., the orofacial muscles involved in smiling) and the facial expressions whose recognition is impaired (e.g., happiness), although results are not entirely consistent for all emotions (e.g., fear).

Another recent line of evidence suggests that the relationship between perceived and “enacted” sensorimotor patterns is not of the one-to-one type. It is known that the tendency to mimic others’ expressions is not limited to the observation of facial expressions. Indeed, studies indicate that seeing emotional bodily expressions also induces sensorimotor activity in the observer’s motor and somatosensory areas [58–65], and TMS interference with sensorimotor areas affects recognition of emotional body postures [59,66–68]. Interestingly, when seeing an emotional body, even when no face is observable, people tend to show motor responses in facial muscles, and these responses appear emotionally congruent with the observed body expression [69–72]; conversely, viewing fearful and angry facial expressions elicited muscular responses not only at the level of the face, but also at the level of the arm muscles involved in defensive postures [73]. Yet, the nature of these muscular responses is unclear. On the one hand, they may reflect epiphenomenal activity, i.e., a simple by-product of previously established associations between making and perceiving emotional expressions with no functional relevance to emotion recognition. On the other hand, they could causally contribute to emotion recognition as they do during observation of facial emotional expressions.

Given these premises, our first aim was to test whether blocking facial mimicry could interfere with the recognition of facial emotional expressions and, furthermore, whether this interference could extend to the recognition of bodily expressions. Such evidence would support a simulation model in which facial sensorimotor signals impact visual recognition of emotional expressions because of their modulatory effect on a covert simulation of a well-structured motor program (i.e., one that includes more than a facial component) linked with a specific emotion.

In two behavioral experiments, we tested whether blocking mimicry on the lower face (i.e., during a “Bite” condition in which participants followed the classical biting-a-pen procedure) disrupted recognition of happy expressions conveyed by either facial or body expressions [48,49]. Specifically, in Experiment 1 we tested participants’ ability to identify happy, fearful and neutral expressions in the Bite condition and in two control conditions requiring the participant to hold a pen with the lips (“Lip”) or to keep the face relaxed (“Rest”). In Experiment 2, we tested participants’ ability to recognize the happy and fearful expressions of female/male models. Moreover, to ensure that such an effect is specific to emotion recognition, we tested whether any impairment is found when participants are asked to discriminate the gender of the models in the Bite and Lip conditions. Importantly, in both experiments, participants performed the tasks with both facial and body stimuli. Based on prior work [48,49], we expected that biting a pen would consistently reduce recognition accuracy for happy facial expressions. If the sensorimotor simulation underpinning facial mimicry consists of a low-level, muscle-specific imitation that provides a body-part specific signal aiding visual processing, we expect that the Bite condition should disrupt recognition of happy faces but not happy bodies. On the other hand, if facial mimicry is a “piece” of a more abstract and global sensorimotor simulation of observed expressions, then we expect that the “Bite” condition should reduce accuracy for happy expressions conveyed by both faces and bodies.

A final aim of the study was to investigate the relationship between dispositional empathy and the effect of blocking mimicry on emotion recognition. Indeed, facial mimicry is considered a relatively automatic process and yet it has multiple motivational and contextual moderators, including individual characteristics like empathic tendencies [1,4,31]. In particular, inter-individual differences in emotional empathy (rather than cognitive empathy) were found to predict the magnitude of spontaneous facial mimicry when observing facial expressions [14,16,18,74]. Therefore, in this study, we asked participants to fill out the Interpersonal Reactivity Index (IRI) questionnaire [75] which separately assesses emotional and cognitive aspects of empathy dispositions. Based on studies of facial mimicry (e.g., [1]), we expected that emotional empathy would predict the effect of blocking mimicry on emotion recognition.

Materials and methods

Participants

Seventy-nine healthy participants took part in the study. Twenty-four participants (10 men, mean age ± SD: 25.8y ± 2.4) were assigned to Experiment 1, and another 24 participants (11 men, 23.3y ± 3) were assigned to Experiment 2. Moreover, 20 participants (9 men, 26.2y ± 3.0) and 11 participants (4 men, 24.7y ± 1.2) took part in pilot studies 1 and 2, respectively. All participants were naïve to the purpose of the experiment and gave their written informed consent before being tested. The experimental procedures were approved by the University of Bologna Bioethics Committee and were carried out in accordance with the 1964 Declaration of Helsinki.

Stimuli

In Experiment 1, stimuli consisted of photographs (1000 x 1500 pixels) depicting either the face or the body of male actors showing happy, neutral or fearful expressions (Fig 1). We selected body stimuli from a validated database where facial information was removed from the body images by blurring [58,76]. Moreover, we adapted facial stimuli from the Nimstim database [77]. From an initial pool of 372 stimuli, we selected a total of 138 pictures (23 pictures for each combination of face/body medium and happy/neutral/fearful expression) based on the results of a first pilot study whose aim was to identify two sets of facial and body expressions matched for emotional intensity. We selected happy and fearful stimuli with relatively high, but not extreme, ratings, to ensure visual recognition was not trivial (see S1 Text and S1 Table), as sensorimotor simulation is thought to contribute to visual perception particularly when stimuli are relatively subtle [29,31,78,79].

Fig 1. Examples of face and body stimuli in Experiment 1.

Stimuli included happy, neutral and fearful expressions of the face or the body.

https://doi.org/10.1371/journal.pone.0229364.g001

In Experiment 2, face and body stimuli consisted of photographs of male and female actors from the same databases used in Experiment 1 [58,76,77,80]. Each face was cropped in an elliptical shape to exclude hair, ears and necks, thus eliminating their potential influence on gender or emotion recognition [81–84]. Similarly, the chest of each body was occluded using a black strip. These changes were introduced to prevent a single diagnostic feature (e.g., long hair or a female chest) from making gender recognition trivially easy.

From an initial pool of 278 stimuli, we selected a total of 80 pictures (10 pictures for each combination of face/body medium, happy/fearful expression and female/male gender) based on the results of a second pilot study. The aim of the pilot study was to choose a pool of stimuli with models whose emotional expression and gender could be recognized with high (>80%) and comparable accuracy (see S1 Text and S2 Table). Then each of the 80 pictures was mirror-reversed, resulting in a total of 160 stimuli (80 faces and 80 bodies; Fig 2).

Fig 2. Examples of face and body stimuli in Experiment 2.

Stimuli included male and female actors showing happy and fearful expressions of the face or the body.

https://doi.org/10.1371/journal.pone.0229364.g002

Procedure and experimental design

All experiments were programmed using MATLAB software to control picture presentation and to record participants’ responses. In Experiment 1, participants took part in two sessions whose order was randomized across participants. In one session they were presented with facial stimuli, whereas in the other session they were presented with body stimuli. In both sessions, participants performed an emotion recognition task during which they were asked to identify the emotion conveyed by the model’s facial/bodily expression. The expression could be happy, neutral or fearful. For each face/body medium, there were three blocks of 138 stimuli each (3 emotional expressions x 23 stimuli x 2 repetitions). In each block, participants performed the emotion discrimination task under three different facial manipulation conditions, in which they were requested to: (1) sit and relax their face (“Rest”); (2) place a pen in their mouth horizontally and hold on to it using their teeth while not allowing their lips to touch the pen (“Bite”); or (3) place a pen in their mouth horizontally and hold onto it using their lips while not allowing their teeth to touch the pen (“Lip”). Thus, the experiment consisted of 828 trials with 46 trials in each cell, constituting a 2 (Medium: face, body) x 3 (Facial manipulation: Rest, Bite, Lip) x 3 (Expression: happy, neutral, fearful) factorial design. The order of the blocks and stimuli within each block were randomized.
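
To make the block structure concrete, here is a minimal sketch of how the randomized trial list for this 2 x 3 x 3 design could be generated. The experiment itself was programmed in MATLAB; this illustration is in Python, and all names (build_session, N_STIMULI, etc.) are ours, not the original code’s:

    import random

    MANIPULATIONS = ["Rest", "Bite", "Lip"]          # one facial manipulation per block
    EXPRESSIONS = ["happy", "neutral", "fearful"]
    N_STIMULI = 23                                   # pictures per expression and medium
    N_REPETITIONS = 2

    def build_session(medium, rng):
        """One session (face or body) = 3 blocks of 138 trials each."""
        blocks = []
        manipulations = MANIPULATIONS[:]
        rng.shuffle(manipulations)                   # randomize block order
        for manipulation in manipulations:
            # 3 expressions x 23 stimuli x 2 repetitions = 138 trials
            trials = [(medium, manipulation, expression, stimulus, repetition)
                      for expression in EXPRESSIONS
                      for stimulus in range(N_STIMULI)
                      for repetition in range(N_REPETITIONS)]
            rng.shuffle(trials)                      # randomize stimulus order within block
            blocks.append(trials)
        return blocks

    rng = random.Random()
    sessions = [build_session(medium, rng) for medium in ("face", "body")]
    assert sum(len(block) for session in sessions for block in session) == 828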

In Experiment 2, participants underwent two consecutive sessions, one with facial stimuli and the other with body stimuli. In each session, participants performed two different tasks, an emotion recognition task and a gender recognition task. In the emotion recognition task, participants were presented with a facial/body expression of happiness or fear and were asked to identify the emotion. In the gender discrimination task, participants were presented with the same set of stimuli used for the emotion recognition task and were asked to identify the gender of the model. For each face/body session and task, there were four experimental blocks of 160 stimuli each. Each block was divided into two halves, during which participants underwent the same “Bite” and “Lip” conditions used in Experiment 1 while they were presented with happy/fearful expressions of female and male models in a randomized order (2 emotional expressions x 2 genders x 20 stimuli x 2 facial manipulations). The experiment consisted of 640 trials, following a 2 (Medium: face, body) x 2 (Task: emotion recognition, gender recognition) x 2 (Facial manipulation: Bite, Lip) x 2 (Expression: happy, fearful) x 2 (Gender: female, male) factorial design. For data analysis, this design was simplified by collapsing data across the factor that was not of theoretical interest—Gender—and by considering the temporally contiguous conditions within each block (i.e., each combination of the factors Facial manipulation and Expression) as a single factor with 4 levels. In this way, we obtained 40 trials per cell for a 2 (Medium: face, body) x 2 (Task: emotion recognition, gender recognition) x 4 (Condition: Bite-happy, Bite-fearful, Lip-happy, Lip-fearful) factorial design, as sketched below.
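
The collapsing step can be sketched as follows (again in Python rather than the original MATLAB; the trial-dictionary layout and the function collapse_to_condition are illustrative assumptions):

    from collections import defaultdict

    def collapse_to_condition(trials):
        """trials: dicts with keys 'medium', 'task', 'manipulation',
        'expression', 'gender' and a boolean 'correct'."""
        cells = defaultdict(list)
        for t in trials:
            # Gender is averaged over; manipulation and expression fuse
            # into one 4-level Condition factor, e.g. "Bite-happy".
            condition = f"{t['manipulation']}-{t['expression']}"
            cells[(t["medium"], t["task"], condition)].append(t["correct"])
        # 2 genders x 20 stimuli = 40 trials contribute to each cell's accuracy
        return {cell: 100.0 * sum(hits) / len(hits) for cell, hits in cells.items()}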

In both experiments, the trial sequence was as follows: a gray screen (1-s duration) indicated the beginning of the trial, followed by the test picture presented at the center of the screen for 500 ms (Fig 3). The picture was followed by a random-dot mask (obtained by scrambling the corresponding sample stimulus by means of custom-made image segmentation software) lasting until the participant’s response. Participants were instructed to provide their answer by button press and to respond as quickly and accurately as possible. After the response, the screen turned black for a 2-s inter-trial interval.
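
The timing structure of a single trial can be summarized in the following sketch. The helpers show and wait_for_response are hypothetical stand-ins for the display and response-collection routines, and measuring RT from picture onset is our assumption, as the reference point is not stated above:

    import time

    def run_trial(show, wait_for_response, picture, mask):
        """show() and wait_for_response() are hypothetical display/response helpers."""
        show("gray screen")
        time.sleep(1.0)                    # 1-s gray screen signals trial start
        onset = time.monotonic()
        show(picture)
        time.sleep(0.5)                    # test picture for 500 ms
        show(mask)                         # random-dot mask stays on until response
        key = wait_for_response()          # speeded button press
        rt = time.monotonic() - onset      # RT from picture onset (assumption)
        show("black screen")
        time.sleep(2.0)                    # 2-s inter-trial interval
        return key, rt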

Dispositional empathy

At the end of the experiment, participants were asked to complete the Interpersonal Reactivity Index (IRI) [75], a 28-item self-report questionnaire assessing dispositional empathy. The IRI consists of four scales. Two scales assess emotional empathy: the Empathic Concern (EC) scale, which assesses the tendency to feel sympathy and compassion for others in need, and the Personal Distress (PD) scale, which assesses the extent to which an individual feels distress in emotionally distressing interpersonal contexts. EC and PD correspond to the notions of other-oriented sympathy responses and self-oriented emotional distress, respectively. The other two subscales assess cognitive empathy: the Perspective Taking (PT) scale, which assesses the tendency to spontaneously imagine and assume the cognitive perspective of another person, and the Fantasy scale (FS), which assesses the tendency to project oneself into the place of fictional characters in books and movies.

Data analysis

Accuracy (% of correct responses) and response times (RTs) were processed off-line. RTs were extracted for each trial associated with a correct response (90% and 93% of trials in the two experiments, respectively), and RTs longer than 3 s were removed from the analysis (<1% in both experiments) before computing the median RT value. To reduce skewness, median RTs were then log-transformed for data analysis. In Experiment 1, accuracy and RTs were analyzed with a 3-way repeated measures ANOVA with Medium (2 levels: Face and Body), Facial manipulation (3 levels: Rest, Bite and Lip) and Expression (3 levels: Happy, Neutral and Fear) as within-subjects factors. In Experiment 2, although the original experimental design included 5 factors, to reach a sufficient number of trials per cell and simplify the analysis, we submitted accuracy and RTs to a 3-way repeated measures ANOVA with Medium (2 levels: Face and Body), Task (2 levels: Emotion and Gender discrimination) and Condition (4 levels: Bite-happy, Bite-fearful, Lip-happy, Lip-fearful) as within-subjects factors.
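
A minimal sketch of this RT preprocessing for one design cell (Python with NumPy; the array names are assumptions):

    import numpy as np

    def log_median_rt(rts, correct, max_rt=3.0):
        """rts: response times in seconds for one design cell;
        correct: boolean array marking correct responses."""
        rts = np.asarray(rts, dtype=float)
        correct = np.asarray(correct, dtype=bool)
        valid = rts[correct & (rts <= max_rt)]   # correct trials with RT <= 3 s
        return np.log(np.median(valid))          # log of the median, to reduce skewness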

A further mixed-factors ANOVA with the between-subjects factor Experiment (2 levels: Experiment 1 and Experiment 2) and the within-subjects factors Medium (2 levels: Face and Body), Facial manipulation (2 levels: Bite and Lip) and Expression (2 levels: Happy and Fear) was carried out to compare emotion recognition accuracy between the two experiments. In all the ANOVAs, post hoc comparisons were performed using Newman–Keuls tests to correct for multiple comparisons. Partial eta squared (ηp²) was computed as a measure of effect size for the main effects and interactions, whereas repeated measures Cohen’s d was computed for post hoc comparisons [85,86].
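
For illustration, the two effect-size measures could be computed as follows. Note that several conventions exist for a repeated-measures Cohen’s d; the version below divides the mean paired difference by the standard deviation of the differences, and may differ in detail from the formula adopted in [85,86]:

    import numpy as np

    def partial_eta_squared(ss_effect, ss_error):
        """Effect size for ANOVA main effects and interactions."""
        return ss_effect / (ss_effect + ss_error)

    def cohens_d_rm(x, y):
        """x, y: per-participant means in two conditions (paired samples)."""
        diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return diff.mean() / diff.std(ddof=1)   # one common convention (assumption)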

To test whether inter-individual differences in empathy influenced the effect observed in the ANOVAs (i.e., the drop in emotion recognition when biting a pen), further analyses were performed. An index reflecting the drop in accuracy in the Bite condition was computed as the mean emotion recognition accuracy for happy expressions in the Bite condition minus the mean emotion recognition accuracy in the control conditions (Lip-happy, Bite-fear and Lip-fear). This index was entered as a dependent variable into a general regression model with each subscale of the IRI as a continuous predictor and the factor Experiment as a categorical predictor. The model tested the influence of each predictor and of each 2-way interaction between the factor Experiment and the IRI subscales, to assess whether similar relationships held across the two experiments. Indices of effect size used for regression and correlation analyses included ηp² and Cohen’s f² [85,86].
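
A sketch of the index computation and regression model (Python with statsmodels; the column names bite_index, EC, PD, PT, FS and experiment are assumptions about the data layout):

    import statsmodels.formula.api as smf

    def bite_interference_index(acc):
        """acc: per-participant accuracies (%) keyed by condition;
        more negative values indicate greater interference from biting."""
        controls = (acc["Lip_happy"] + acc["Bite_fear"] + acc["Lip_fear"]) / 3.0
        return acc["Bite_happy"] - controls

    def fit_empathy_model(df):
        """df: pandas DataFrame, one row per participant, with columns
        bite_index, EC, PD, PT, FS and experiment (1 or 2). The formula
        tests each IRI subscale and its interaction with Experiment."""
        return smf.ols("bite_index ~ (EC + PD + PT + FS) * C(experiment)",
                       data=df).fit()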

Results

Experiment 1

The Medium x Facial Manipulation x Expression ANOVA carried out on accuracy data showed a significant main effect of the factor Medium (F(1,23) = 16.27; p < 0.001; ηp² = 0.41), indicating that participants discriminated emotional expressions with greater accuracy when emotions were conveyed by facial expressions (mean accuracy ± SD: 93 ± 5%) rather than by body expressions (87 ± 9%). The ANOVA also showed a significant main effect of the factor Expression (F(2,46) = 5.16; p = 0.009; ηp² = 0.18) that was qualified by the critical Facial Manipulation x Expression interaction (F(4,92) = 2.61; p = 0.04; ηp² = 0.10; Fig 4). We observed no other significant main effects or interactions in the ANOVA (all F ≤ 1.58, all p ≥ 0.22), including the higher order interaction with the factor Medium (F(4,92) = 0.22, p = 0.93), suggesting that the influence exerted by facial manipulation on emotion recognition was comparable for facial and bodily stimuli.

Fig 4. Mean accuracy in Experiment 1.

When biting a pen, participants’ ability to recognize happy expressions was decreased relative to all the other conditions. Asterisks indicate significant comparisons. Error bars indicate s.e.m.

https://doi.org/10.1371/journal.pone.0229364.g004

Post hoc analysis of the Facial Manipulation x Expression interaction indicated that, in all the facial manipulation conditions, recognition of happy expressions (range: 85.7–88%) was less accurate than recognition of neutral and fearful expressions (range: 91.3–92.9%; all p < 0.001; all Cohen’s d ≥ 0.57), which in turn did not differ from one another (all p ≥ 0.36). Importantly, recognition of happy expressions was worse in the Bite condition (85.7 ± 10%) than in the Rest (88.0 ± 10%; p = 0.03; Cohen’s d = 0.55) and Lip conditions (87.4 ± 11%; p = 0.051; Cohen’s d = 0.30), which in turn did not differ from one another (p = 0.52). In contrast, recognition of neutral and fearful expressions was comparable across facial manipulations (all p ≥ 0.36).

In sum, biting a pen reduced recognition of happy, but not neutral or fearful, expressions. However, we further interrogated our data to rule out a possible alternative explanation. In our forced-choice task, participants in the Bite condition missed happy expressions more often than in the two control conditions. In principle, this change in performance could be due to discomfort experienced by participants asked to keep the pen between their teeth for prolonged periods. Such a negative state could have encouraged “negative” responses to happy expressions in the Bite condition relative to the control conditions. To rule out this possibility, we performed a further Facial manipulation x Error type ANOVA on the percentage of erroneous responses to happy expressions, to check whether the Bite condition changed the proportion of erroneous “neutral” or “fear” responses relative to the two control conditions. Overall, participants provided the erroneous response “neutral” more often than the erroneous response “fear” (10 ± 9% vs. 3 ± 2%; F(1,23) = 14.27, p < 0.001; ηp² = 0.38). However, this tendency was comparable across facial manipulations, as no interaction with that factor was detected (F(2,46) = 0.03, p = 0.97).

The analysis of RTs showed that participants responded differently to facial and body emotional expressions (S2 Text). Critically, however, there was no influence of facial manipulation, thus indicating that biting a pen affected response accuracy but not speed.

Experiment 2

The Medium x Task x Condition ANOVA carried out on accuracy data showed a significant main effect of the factor Condition (F(3,69) = 4.87; p = 0.004; ηp² = 0.17), which was qualified by a Task x Condition interaction (F(3,69) = 4.07; p = 0.01; ηp² = 0.15; Fig 5). The interaction showed that the reduction in accuracy for happy expressions while biting a pen was specific to the emotion recognition task. Post hoc analysis showed that, during this task, participants’ ability to identify happy expressions while biting a pen (89 ± 10%) was lower than when they had to identify happy expressions while holding the pen with their lips (92 ± 6%, p = 0.01, Cohen’s d = 0.43), or when they had to identify fearful expressions while biting a pen (95 ± 4%, p < 0.001, Cohen’s d = 0.64) or holding it with their lips (95 ± 5%, p < 0.001, Cohen’s d = 0.58). During the emotion recognition task, we observed no significant differences between the Bite-fearful, Lip-happy and Lip-fearful conditions (all p ≥ 0.29). Also, we observed no significant differences between conditions in the gender recognition task (accuracy range: 92–94%; all p ≥ 0.46). Lastly, performance in the critical condition, i.e., Bite-happy of the emotion recognition task, was lower than performance in any of the conditions of the gender recognition task (all p ≤ 0.02, all Cohen’s d ≥ 0.48). As in Experiment 1, the disruptive effect of biting a pen on the recognition of happy expressions was similar across facial and body expressions, as no higher-order Medium x Task x Condition interaction was found (F(3,69) = 0.17; p = 0.92).

Fig 5. Mean accuracy in Experiment 2.

When biting a pen, participants’ ability to accurately identify happy expressions was decreased relative to all the other conditions. Asterisks indicate significant comparisons. Error bars indicate s.e.m.

https://doi.org/10.1371/journal.pone.0229364.g005

The ANOVA also showed a Medium x Task interaction (F(1,23) = 32.36; p < 0.001; ηp² = 0.58), indicating that participants were better at discriminating emotions from bodies than from faces (94 ± 5% vs. 92 ± 6%; p = 0.01; Cohen’s d = 0.67) and better at discriminating gender from faces than from bodies (95 ± 5% vs. 91 ± 6%; p < 0.001; Cohen’s d = 0.78). The difference between these results and what we observed in Experiment 1 (i.e., greater accuracy at discriminating emotions from faces than from bodies) may be ascribed to physical differences between the stimuli: faces in Experiment 2 were cropped in an elliptical shape to exclude hair, ears and necks, thus eliminating their potential influence on gender or emotion recognition, and the chest of each body was occluded using a black strip. Alternatively, the difference could be due to the removal of neutral expressions in Experiment 2, or to the fact that participants also performed a gender task.

The Medium x Task interaction effect was similar across the factor Condition, as the 3-way interaction was not significant. No other significant main effects or interactions were observed in the ANOVA (all F ≤ 2.14, all p ≥ 0.16).

The analysis of RTs showed that when participants had to discriminate the emotion, they performed better (faster RTs) with bodies than with faces, whereas when they had to discriminate gender, they performed better with faces than with bodies (S2 Text), a trend in line with the accuracy results. Critically, there was no effect of Condition, indicating that biting a pen affected recognition accuracy but not response speed.

Emotion recognition performance across Experiments 1 and 2

The Experiment x Medium x Facial manipulation x Expression ANOVA showed a main effect of Facial manipulation (F(1,46) = 4.76, p = 0.03; ηp² = 0.09), a significant main effect of Expression (F(1,46) = 22.04, p < 0.001; ηp² = 0.32) and, importantly, a significant Facial Manipulation x Expression interaction (F(1,46) = 4.35, p = 0.04; ηp² = 0.09), which confirmed the reduction in happy expression recognition accuracy when biting a pen (88 ± 6%) relative to the other conditions (range: 90–93%; all p ≤ 0.002). This effect was comparable across the two experiments and face/body mediums, as we observed no higher-order interactions involving those factors (all F ≤ 0.29, all p ≥ 0.59).

Relation between changes in accuracy and dispositional empathy

Based on the results of the ANOVAs, we extracted an index of the drop in accuracy reflecting the costly effect of biting a pen on emotion recognition (i.e., accuracy in the Bite-happy condition minus mean accuracy across the Lip-happy, Bite-fear and Lip-fear conditions). Negative values of this behavioral index indicate greater interference. We tested whether inter-individual differences in dispositional empathy predicted this index across the two experiments. The regression model with the IRI subscales, Experiment and their interactions as predictors was not significant (R² = 0.11; F(9,38) = 0.50, p = 0.87), and remained non-significant after the removal of two outliers with standardized residuals > 2.5σ (R² = 0.19; F(9,36) = 0.92, p = 0.52). The EC subscale was the only significant predictor of the behavioral index (β = 0.50, F(1,39) = 5.49, p = 0.02, ηp² = 0.13); there was no relation between the bite-related drop in accuracy and the other IRI subscales, the factor Experiment or any interactions between IRI subscales and Experiment (all F ≤ 2.77, all p ≥ 0.11). To further check whether the influence of EC on the behavioral index was comparable in the two experiments, a further regression model with the factors EC, Experiment and their interaction was performed. The model was marginally significant (R² = 0.16; F(3,42) = 2.76, p = 0.054, f² = 0.20) and the factor EC remained a significant predictor of the behavioral index (β = 0.40, F(1,42) = 7.49, p = 0.009, ηp² = 0.15). Again, we observed no influence of the predictor Experiment or the Experiment x EC interaction (all F ≤ 2.61, all p ≥ 0.11; Fig 6). These findings indicate that dispositional emotional empathy similarly influenced performance in the emotion recognition task across the two experiments. Specifically, the positive relationship between the behavioral index and the IRI’s EC subscale indicates that participants with lower levels of emotional empathy were strongly affected by biting a pen, whereas participants with higher levels of emotional empathy showed little or no interference.

Fig 6. Relation between changes in accuracy and individual differences in empathy disposition.

Scatter plot of the correlation between the behavioral index reflecting the costly influence of biting a pen on recognition of happy faces (accuracy in the Bite-happy condition minus mean accuracy across Lip-happy, Bite-fear and Lip-fear conditions) and individual scores on the IRI’s EC subscale (r = 0.34, p = 0.02). Grey and black dots represent participants in Experiment 1 and 2, respectively.

https://doi.org/10.1371/journal.pone.0229364.g006

Simple correlations computed across the two experiments confirmed that behavioral interference was associated with individual differences in EC scores (r = 0.34; p = 0.02, f² = 0.13), but not FS (r = 0.14; p = 0.37), PT (r = 0.13; p = 0.39) or PD scores (r = –0.09; p = 0.57).

Discussion

Based on previous experimental evidence for the role of facial mimicry in the recognition of others’ emotions, and in light of the (sensorimotor) simulation models previously proposed, the main purpose of this work was to investigate whether facial mimicry plays a critical role not only in recognizing facial expressions of emotion, but also in recognizing bodily expressions of emotion. Our finding that interfering with facial mimicry affected the recognition not only of emotions expressed through others’ faces but also of emotions expressed through others’ bodies supports a simulation model in which facial feedback has a critical role in modulating sensorimotor simulation of a rich expressive motor program (i.e., not only the facial component) associated with the observed emotion. This evidence suggests that the simulation is not limited to the available visual information (i.e., the isolated part of the expression that is visible to the observer), but extends to a whole, rich sensorimotor program associated with the perceived emotion. Remarkably, this evidence demonstrates the functional relevance of these sensorimotor processes to visual emotion recognition.

Our design also allowed us to expand on prior work by comparing recognition of happy expressions with recognition of expressions that have not been consistently tested (e.g., neutral expressions) or have yielded less consistent results (e.g., fearful expressions), and by testing the moderating role of dispositional empathy.

In order to test our hypotheses, we implemented two experiments that involved manipulating facial mimicry during visual recognition of emotions displayed by facial and body expressions (i.e., happiness, fear and a neutral state). In Experiment 2, we included a control task in which participants had to recognize the gender (rather than the emotion) of the actor in order to provide clear-cut evidence that interfering with facial mimicry has a selective impact on emotion recognition. The results of both experiments unambiguously demonstrated that manipulation of facial mimicry (in particular, the "Bite" condition specifically implemented to interfere with the activation of facial muscles involved in smiling) affected the recognition of happy expressions, while the recognition of neutral (Experiment 1) and fearful (Experiment 1 and Experiment 2) expressions was not disturbed by the mimicry manipulation. These findings provide evidence in line with studies that demonstrated a correspondence between blocking specific facial muscles and the resulting impairments in recognizing selected facial expressions involving the same muscles [48,49,56,57].

However, as we discuss in the following paragraph, this selectivity does not imply that the mechanism underlying facial mimicry is based on a one-to-one match between the observed visual display and the (en)acted facial expression. Indeed, and crucially for the purpose of the present investigation, the “Bite” condition impaired the recognition of happy expressions conveyed by both faces and bodies, thus suggesting that altering facial mimicry interfered with a global sensorimotor simulation of the emotion of happiness. This result provides even clearer evidence when considering that the manipulation of facial mimicry did not affect the recognition of the actors’ gender, revealing a selective role of facial mimicry in the recognition of others’ emotions.

Overall, our findings speak against simulation models in which facial mimicry is considered irrelevant to emotion recognition. Moreover, they do not support a simulation process based on a one-to-one visuo-motor matching mechanism. Rather, our results, in the first instance, seem to support the thesis that what is simulated is a rich (sensori)motor program associated with an emotion. From this perspective, interfering with a component of the (sensori)motor program (i.e., a facial expression of happiness) could affect the whole program—and, as a consequence, it could also interfere with visual recognition of the associated facial and body expressions (of happiness).

Our findings are in keeping with the correlational evidence that emotionally congruent facial EMG responses also occur when seeing emotional expressions conveyed solely by the body [69–72] or the eye regions [87] in the absence of facial visual cues, and with the study of Sel and colleagues [88] showing similar widespread recruitment of facial and body neural representations within the somatosensory cortex during discrimination of emotional faces, but not during gender discrimination. Importantly, our findings expand these previous results by demonstrating the functional relevance of these sensorimotor processes to visual recognition of emotional expressions.

Our findings could be explained in light of a recent sensorimotor simulation model proposed by Wood and colleagues [31]. This model considers facial mimicry a spill-over of the sensorimotor simulation that is nonetheless able to influence the simulation itself through feedback to the (pre)motor and somatosensory regions. The model proposes that the observer uses his/her own sensorimotor system to simulate the observed emotional expression, which would then be followed by multiple, cascading activations of a series of other systems, including the limbic system and regions mostly involved in reasoning about others’ mental and affective states. Crucially, the model also establishes that feedback from facial mimicry may shape the entire activated emotional state by means of connections between regions devoted to sensorimotor simulation and regions associated with the processing of emotions and the ability to reason about others’ mental and affective states.

Notably, Wood and colleagues [31] proposed the existence of a direct path from the visual system to the emotional system. The recruitment of the sensorimotor system, from this perspective, could also follow (rather than precede) the activation of the emotional system. This is in keeping with scholars who proposed that facial mimicry could reflect the activation of an affective state in the observer rather than a one-to-one visuo-motor matching response [69–72,87]. Studies have also supported the proposal that observers “mimic” their interpretation of the observed expression rather than the specific motor patterns observed [71,89,90]. Taken together, these lines of evidence suggest an alternative pathway to facial mimicry which follows the activation of the emotional system and, likely, of other systems responsible for reasoning about others’ mental and affective states. Therefore, facial mimicry can be the result of top-down processes in addition to the result of a bottom-up mechanism linked with the perception-action loop.

It should be noted, however, that sensorimotor simulation could occur at different levels of complexity, as documented by the action perception literature. While seeing an (emotionally neutral) action facilitates the corresponding motor circuits for making the same observed action, thus supporting one-to-one matching mechanisms [91–93], studies have shown that such facilitation occurs even when the observed action is occluded from view and can only be inferred from contextual cues [94,95]. Moreover, when an individual observes a goal-directed action, the motor system can encode the distal goal of the action rather than the specific movements being observed [96–98]. These findings have been interpreted either in terms of learned associations between multiple action cues that are encoded within the sensorimotor system [e.g., 95,99] or in terms of top-down influences from other brain systems (e.g., semantic or theory of mind systems) [100,101].

From this perspective, seeing a happy body expression could induce a simulation of the whole sensorimotor program associated with happiness (including happy facial expressions), either based on learned associations between happy bodily and facial cues or via top-down influences from other systems involved in processing social and emotional signals (e.g., emotion/theory of mind systems). Our findings, although intriguing, do not allow us to disentangle these two possible mechanisms for facial mimicry—one directly mediated by the visual input via sensorimotor simulation and one mediated by the emotion/theory of mind systems via sensorimotor simulation—and investigations aimed at clarifying the time course and computational steps involved in the entire process are required. Our view, which also seems to be embraced in the recent model by Wood and colleagues [31], is that facial mimicry can result from an interplay between simulative bottom-up processes and top-down processes linked, for example, to the observer’s interpretation of the other person’s expression.

We also observed a relationship between individual levels of empathy and performance impairment in the emotion discrimination task, such that participants with higher scores on the IRI EC subscale showed less impairment due to the mimicry manipulation, whereas those with lower scores showed greater impairment. On the other hand, we found no relationship between behavioral interference and the cognitive subscales of the IRI. Cognitive and emotional empathy correspond to the abilities to cognitively understand and affectively share what others think and feel. In particular, the IRI’s EC scale assesses the tendency to experience other-oriented feelings such as sympathy and compassion, which do not involve affect sharing and “feeling as” the other, but rather a “feeling for” and thus concern and attention to others and their needs [75]. Neuroscientific studies have linked this empathic disposition with brain regions associated with affective responses, reward processing, action and cognition, such as the anterior insula, anterior/subgenual cingulate and inferior frontal cortex [102–106]. Interestingly, previous studies have consistently documented that people scoring high on EC or similar emotional empathy scales (rather than on cognitive empathy scales) display greater emotionally congruent facial mimicry when exposed to emotional faces [14,16,18,74], possibly because of their greater motivation to attend to others’ emotional signals [75]. However, the correlational nature of these prior studies did not allow them to determine whether participants with high emotional empathy also require facial mimicry to a greater extent in order to accurately recognize others’ emotional expressions.

In striking contrast with this possibility, our study suggests that participants scoring high on the IRI’s EC subscale rely on facial mimicry to a lesser extent in order to achieve accurate emotion recognition, as these participants were less affected by mimicry blocking. Although highly empathic individuals have been reported to display greater spontaneous facial mimicry [14,16,18,74], the processing underlying facial mimicry appears to be less functionally relevant for achieving accurate emotion recognition in these individuals. This does not imply that mimicry blocking is simply ineffective in these participants. Indeed, a previous study [53] reported that people scoring high on a global scale of empathy (involving not only emotional but also cognitive empathy traits) displayed greater effects of mimicry blocking on brain responses to emotional faces (i.e., on an electro-cortical component of event-related potentials thought to reflect the quality of visual working memory representations). Rather, in light of this prior work, our findings suggest that highly empathic participants tend to compensate for interference with facial mimicry by using alternative pathways that do not rely on sensorimotor simulation, such as the route from visual to emotional and theory of mind systems.

In keeping with this proposal, a recent TMS study investigated the causal role of sensorimotor and mentalizing brain networks in understanding others’ emotional feelings when watching smiling expressions, and the moderating role of inter-individual differences in empathy [43]. In line with our behavioral results, that study found that more empathic participants were less disrupted by rTMS over the face representation in sensorimotor brain regions. On the other hand, they showed impaired performance only when the TPJ (a key area in the theory of mind system) was targeted. All in all, these studies indicate that more empathic individuals tend to spontaneously show behavioral markers of sensorimotor simulation, which may reflect greater attention to others’ emotional signals, but at the same time (and probably because of such greater attention) they are able to compensate for interference with sensorimotor simulation by using other cognitive/neural resources.

A limitation of our study is the sample size, which is smaller than in some previous studies that tested the effect of mimicry blocking on emotion recognition [49,56,57]. However, to implement the mimicry manipulation, we used a repeated measures design, which provides greater sensitivity for detecting differences between conditions. Moreover, potential concerns about data replicability (e.g., [107]) are at least partially attenuated by the consideration that the effect of mimicry blocking on recognition of happy expressions—at least facial expressions—is consistent with previous reports [48,49,56,57], and here it was replicated across the two experiments. Future studies with larger sample sizes may seek to confirm the present findings.

Supporting information

S1 Text. Pilot study 1 and 2.

Description of the procedures used to select visual stimuli for Experiment 1 and 2.

https://doi.org/10.1371/journal.pone.0229364.s001

(DOCX)

S2 Text. Analysis of RTs in Experiment 1 and 2.

Description of the statistical analyses performed on RTs data in the main experiments.

https://doi.org/10.1371/journal.pone.0229364.s002

(DOCX)

S1 Table. Mean ratings of stimuli selected in pilot study 1.

Mean ± S.D. ratings of happiness and fear reported on a 9-point Likert scale ranging from 1 (no emotion) to 9 (maximal intensity of the emotion).

https://doi.org/10.1371/journal.pone.0229364.s003

(TIF)

S2 Table. Mean recognition accuracy of stimuli selected in pilot study 2.

Mean ± S.D. of emotion and gender recognition accuracy (% of correct response).

https://doi.org/10.1371/journal.pone.0229364.s004

(TIF)

Acknowledgments

This work was supported by grants from Cogito Foundation, Switzerland [R-117/13 and 14e139-R], Fondazione del Monte di Bologna e Ravenna, Italy [339bis/2017], the Bial Foundation [347/18] and Ministero dell’Istruzione, dell’Università e della Ricerca [2017N7WCLP] awarded to A.A., and by grants from Ministero della Salute, Italy [GR-2018-12365733] awarded to S.B. The authors thank Chiara Facchetti for her help in data collection and Brianna Beck for proofreading the manuscript.

References

  1. 1. Seibt B, Mühlberger A, Likowski KU, Weyers P. Facial mimicry in its social setting. Front Psychol. 2015;6:1122. pmid:26321970
  2. 2. Dimberg ULF, Thunberg M. Rapid facial reactions to emotional facial expressions. Scand J Psychol. 1998;39(1):39–45. pmid:9619131
  3. 3. Dimberg U, Thunberg M, Elmehed K. Unconscious facial reactions to emotional facial expressions. Psychol Sci. 2000;11(1):86–9. pmid:11228851
  4. 4. Hess U, Fischer A. Emotional mimicry as social regulation. Personal Soc Psychol Rev. 2013;17(2):142–57.
  5. 5. Moody EJ, McIntosh DN, Mann LJ, Weisser KR. More than mere mimicry? The influence of emotion on rapid facial reactions to faces. Emotion. 2007;7(2):447–57. pmid:17516821
  6. 6. Rymarczyk K, Zurawski Ł, Jankowiak-Siuda K, Szatkowska I. Do dynamic compared to static facial expressions of happiness and anger reveal enhanced facial mimicry? PLoS One. 2016;11(7):e0158534. pmid:27390867
  7. 7. Achaibou A, Pourtois G, Schwartz S, Vuilleumier P. Simultaneous recording of EEG and facial muscle reactions during spontaneous emotional mimicry. Neuropsychologia. 2008;46(4):1104–13. pmid:18068737
  8. 8. Dimberg ULF, Petterson M. Facial reactions to happy and angry facial expressions: Evidence for right hemisphere dominance. Psychophysiology. 2000;37(5):693–6. pmid:11037045
  9. 9. Korb S, Grandjean D, Scherer KR. Timing and voluntary suppression of facial mimicry to smiling faces in a Go/NoGo task—an EMG study. Biol Psychol. 2010;85(2):347–9. pmid:20673787
  10. 10. Lakin JL, Jefferis VE, Cheng CM, Chartrand TL. The Chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. J Nonverbal Behav. 2003;27(3):145–62.
  11. 11. Kulesza W, Dolinski D, Huisman A, Majewski R. The Echo Effect: The power of verbal mimicry to influence prosocial behavior. J Lang Soc Psychol. 2014;33(2):183–201.
  12. 12. Bailenson JN, Yee N. Digital Chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. 2005;16(10):814–9.
  13. 13. Hsu C, Sims T, Chakrabarti B. How mimicry influences the neural correlates of reward: An fMRI study. Neuropsychologia. 2018;116:61–7. pmid:28823750
  14. 14. Balconi M, Canavesio Y. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing. Cogn Emot. 2016;30(2):210–24. pmid:25531027
  15. 15. Bos PA, Jap-tjong N, Spencer H, Hofman D. Social context modulates facial imitation of children ‘ s emotional expressions. PLoS One. 2016;11(12):1–11.
  16. 16. Dimberg U, Andréasson P, Thunberg M. Emotional empathy and facial reactions to facial expressions. J Psychophysiol. 2011;25(1):26–31.
  17. 17. Prochazkova E, Kret ME. Connecting minds and sharing emotions through mimicry: A neurocognitive model of emotional contagion. Neurosci Biobehav Rev. 2017;80:99–114. pmid:28506927
18. Sonnby-Borgström M. Automatic mimicry reactions as related to differences in emotional empathy. Scand J Psychol. 2002;43(5):433–43. pmid:12500783
19. Franzen A, Mader S, Winter F. Contagious yawning, empathy, and their relation to prosocial behavior. J Exp Psychol Gen. 2018;147(12):1950–8. pmid:29771564
20. Neal DT, Chartrand TL. Embodied emotion perception: Amplifying and dampening facial feedback modulates emotion perception accuracy. Soc Psychol Personal Sci. 2011;2(6):673–8.
21. Niedenthal PM, Brauer M. When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn Emot. 2001;15(6):853–64.
22. Chartrand TL, Bargh JA. The chameleon effect: the perception-behavior link and social interaction. J Pers Soc Psychol. 1999;76(6):893–910. pmid:10402679
23. Preston SD, de Waal FBM. Empathy: Its ultimate and proximate bases. Behav Brain Sci. 2002;25(1):1–20; discussion 20–71. pmid:12625087
24. Proffitt DR. Embodied perception and the economy of action. Perspect Psychol Sci. 2006;1(2):110–22. pmid:26151466
25. Miellet S, Hoogenboom N, Kessler K. Visual and embodied perception of others: The neural correlates of the Body Gestalt effect. J Vis. 2012;12(9):824.
26. Vernon D. Cognitive vision: The case for embodied perception. Image Vis Comput. 2008;26(1):127–40.
27. Carr EW, Winkielman P. When mirroring is both simple and “smart”: how mimicry can be embodied, adaptive, and non-representational. Front Hum Neurosci. 2014;8:1–7.
28. Goldman AI, Sripada CS. Simulationist models of face-based emotion recognition. Cognition. 2005;94(3):193–213. pmid:15617671
29. Niedenthal PM, Mermillod M, Maringer M, Hess U. The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behav Brain Sci. 2010;33(6):417–33. pmid:21211115
30. Vitale J, Williams M, Johnston B, Boccignone G. Affective facial expression processing via simulation: A probabilistic model. Biol Inspired Cogn Archit. 2014;10:30–41.
31. Wood A, Rychlowska M, Korb S, Niedenthal P. Fashioning the face: Sensorimotor simulation contributes to facial expression recognition. Trends Cogn Sci. 2016;20(3):227–40. pmid:26876363
32. Carr L, Iacoboni M, Dubeau M-C, Mazziotta JC, Lenzi GL. Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proc Natl Acad Sci. 2003;100(9):5497–502. pmid:12682281
33. Leslie KR, Johnson-Frey SH, Grafton ST. Functional imaging of face and hand imitation: towards a motor theory of empathy. Neuroimage. 2004;21(2):601–7. pmid:14980562
34. van der Gaag C, Minderaa RB, Keysers C. Facial expressions: What the mirror neuron system can and cannot tell us. Soc Neurosci. 2007;2(3–4):179–222. pmid:18633816
35. Vicario CM, Rafal RD, Martino D, Avenanti A. Core, social and moral disgust are bounded: A review on behavioral and neural bases of repugnance in clinical disorders. Neurosci Biobehav Rev. 2017;80:185–200. pmid:28506923
36. Likowski KU, Mühlberger A, Gerdes ABM, Wieser MJ, Pauli P, Weyers P. Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Front Hum Neurosci. 2012;6:1–10.
37. Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K, Szatkowska I. Neural correlates of facial mimicry: Simultaneous measurements of EMG and BOLD responses during perception of dynamic compared to static facial expressions. Front Psychol. 2018;9:52. pmid:29467691
38. Schilbach L, Eickhoff SB, Mojzisch A, Vogeley K. What’s in a smile? Neural correlates of facial embodiment during social interaction. Soc Neurosci. 2008;3(1):37–50. pmid:18633845
39. Adolphs R, Damasio H, Tranel D, Cooper G, Damasio AR. A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J Neurosci. 2000;20(7):2683–90. pmid:10729349
40. Korb S, Malsert J, Rochas V, Rihs TA, Rieger SW, Schwab S, et al. Gender differences in the neural network of facial mimicry of smiles – An rTMS study. Cortex. 2015;70:101–14. pmid:26211433
41. Pitcher D, Garrido L, Walsh V, Duchaine BC. Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. J Neurosci. 2008;28(36):8929–33. pmid:18768686
42. Penton T, Dixon L, Evans LJ, Banissy MJ. Emotion perception improvement following high frequency transcranial random noise stimulation of the inferior frontal cortex. Sci Rep. 2017;7(1):11278. pmid:28900180
43. Paracampo R, Pirruccio M, Costa M, Borgomaneri S, Avenanti A. Visual, sensorimotor and cognitive routes to understanding others’ enjoyment: An individual differences rTMS approach to empathic accuracy. Neuropsychologia. 2018;116:86–98. pmid:29410266
44. Paracampo R, Tidoni E, Borgomaneri S, di Pellegrino G, Avenanti A. Sensorimotor network crucial for inferring amusement from smiles. Cereb Cortex. 2016;27(11):5116–29.
45. De Stefani E, Nicolini Y, Belluardo M, Ferrari PF. Congenital facial palsy and emotion processing: The case of Moebius syndrome. Genes Brain Behav. 2019;18(1):e12548.
46. Kilts CD, Egan G, Gideon DA, Ely TD, Hoffman JM. Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage. 2003;18(1):156–68. pmid:12507452
47. Trautmann SA, Fehr T, Herrmann M. Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res. 2009;1284:100–15. pmid:19501062
48. Oberman LM, Winkielman P, Ramachandran VS. Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Soc Neurosci. 2007;2(3–4):167–78. pmid:18633815
49. Ponari M, Conson M, D’Amico NP, Grossi D, Trojano L. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects. Emotion. 2012;12(6):1398–403. pmid:22642357
50. Hyniewska S, Sato W. Facial feedback affects valence judgments of dynamic and static emotional expressions. Front Psychol. 2015;6:1–8.
51. Ipser A, Cook R. Blocking facial mimicry reduces perceptual sensitivity for facial expressions. J Vis. 2015;15(12):1376.
52. Stel M, van Knippenberg A. The role of facial mimicry in the recognition of affect. Psychol Sci. 2008;19(10):984–5. pmid:19000207
53. Sessa P, Schiano Lomoriello A, Luria R. Neural measures of the causal role of observers’ facial mimicry on visual working memory for facial expressions. Soc Cogn Affect Neurosci. 2018;13(12):1281–91. pmid:30365020
54. Lee T, Josephs O, Dolan RJ, Critchley HD. Imitating expressions: emotion-specific neural substrates in facial mimicry. Soc Cogn Affect Neurosci. 2006;1(2):122–35. pmid:17356686
55. Hennenlotter A, Dresel C, Castrop F, Ceballos-Baumann AO, Wohlschläger AM, Haslinger B. The link between facial feedback and neural activity within central circuitries of emotion – new insights from botulinum toxin-induced denervation of frown muscles. Cereb Cortex. 2009;19(3):537–42. pmid:18562330
56. Maringer M, Krumhuber EG, Fischer AH, Niedenthal PM. Beyond smile dynamics: Mimicry and beliefs in judgments of smiles. Emotion. 2011;11(1):181–7. pmid:21401238
57. Rychlowska M, Cañadas E, Wood A, Krumhuber EG, Fischer A, Niedenthal PM, et al. Blocking mimicry makes true and false smiles look the same. PLoS One. 2014;9(3):e90876. pmid:24670316
58. Borgomaneri S, Gazzola V, Avenanti A. Motor mapping of implied actions during perception of emotional body language. Brain Stimul. 2012;5(2):70–6. pmid:22503473
59. Borgomaneri S, Gazzola V, Avenanti A. Transcranial magnetic stimulation reveals two functionally distinct stages of motor cortex involvement during perception of emotional body language. Brain Struct Funct. 2015;220(5):2765–81. pmid:25023734
60. Borgomaneri S, Vitale F, Gazzola V, Avenanti A. Seeing fearful body language rapidly freezes the observer’s motor cortex. Cortex. 2015;65:232–45. pmid:25835523
61. Borgomaneri S, Vitale F, Avenanti A. Early changes in corticospinal excitability when seeing fearful body expressions. Sci Rep. 2015;5:14122. pmid:26388400
62. de Gelder B, Snyder J, Greve D, Gerard G, Hadjikhani N. Fear fosters flight: A mechanism for fear contagion when perceiving emotion expressed by a whole body. Proc Natl Acad Sci. 2004;101(47):16701–6. pmid:15546983
63. Tamietto M, de Gelder B. Neural bases of the non-conscious perception of emotional signals. Nat Rev Neurosci. 2010;11(10):697–709. pmid:20811475
64. Kret ME, Denollet J, Grèzes J, de Gelder B. The role of negative affectivity and social inhibition in perceiving social threat: an fMRI study. Neuropsychologia. 2011;49(5):1187–93. pmid:21315749
65. de Gelder B, Van den Stock J, Meeren HKM, Sinke CBA, Kret ME, Tamietto M, et al. Standing up for the body. Recent progress in uncovering the networks involved in the perception of bodies and bodily expressions. Neurosci Biobehav Rev. 2010;34(4):513–27. pmid:19857515
66. Engelen T, de Graaf TA, Sack AT, de Gelder B. A causal role for inferior parietal lobule in emotion body perception. Cortex. 2015;73:195–202. pmid:26460868
67. Mazzoni N, Jacobs C, Venuti P, Silvanto J, Cattaneo L. State-dependent TMS reveals representation of affective body movements in the anterior intraparietal cortex. J Neurosci. 2017;37(30):7231–9. pmid:28642285
68. Ferrari C, Ciricugno A, Urgesi C, Cattaneo Z. Cerebellar contribution to emotional body language perception: a TMS study. Soc Cogn Affect Neurosci. Forthcoming 2019; pii:nsz074. pmid:31588511
69. Kret ME, Roelofs K, Stekelenburg JJ, de Gelder B. Emotional signals from faces, bodies and scenes influence observers’ face expressions, fixations and pupil-size. Front Hum Neurosci. 2013;7:1–9.
70. Kret ME, Stekelenburg JJ, Roelofs K, de Gelder B. Perception of face and body expressions using electromyography, pupillometry and gaze measures. Front Psychol. 2013;4:28. pmid:23403886
71. Magnée MJCM, Stekelenburg JJ, Kemner C, de Gelder B. Similar facial electromyographic responses to faces, voices, and body expressions. Neuroreport. 2007;18(4):369–72. pmid:17435605
72. Tamietto M, Castelli L, Vighetti S, Perozzo P, Geminiani G, Weiskrantz L, et al. Unseen facial and bodily expressions trigger fast emotional reactions. Proc Natl Acad Sci. 2009;106(42):17661–6. pmid:19805044
73. Moody EJ, Reed CL, Van Bommel T, App B, McIntosh DN. Emotional mimicry beyond the face? Rapid face and body responses to facial expressions. Soc Psychol Personal Sci. 2017;9(7):844–52.
74. Harrison NA, Morgan R, Critchley HD. From facial mimicry to emotional empathy: A role for norepinephrine? Soc Neurosci. 2010;5(4):393–400. pmid:20486012
75. Davis MH. Empathy: A Social Psychological Approach. Boulder, CO: Westview Press; 1996.
76. Borgomaneri S, Vitale F, Avenanti A. Behavioral inhibition system sensitivity enhances motor cortex suppression when watching fearful body expressions. Brain Struct Funct. 2017;222(7):3267–82. pmid:28357586
77. Tottenham N, Tanaka JW, Leon AC, McCarry T, Nurse M, Hare TA, et al. The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res. 2009;168(3):242–9. pmid:19564050
78. Avenanti A, Annella L, Candidi M, Urgesi C, Aglioti SM. Compensatory plasticity in the action observation network: virtual lesions of STS enhance anticipatory simulation of seen actions. Cereb Cortex. 2013;23(3):570–80. pmid:22426335
79. Avenanti A, Paracampo R, Annella L, Tidoni E, Aglioti SM. Boosting and decreasing action prediction abilities through excitatory and inhibitory tDCS of inferior frontal cortex. Cereb Cortex. 2018;28(4):1282–96. pmid:28334143
80. Vicario CM, Rafal RD, Borgomaneri S, Paracampo R, Kritikos A, Avenanti A. Pictures of disgusting foods and disgusted facial expressions suppress the tongue motor cortex. Soc Cogn Affect Neurosci. 2017;12(2).
81. Sweeny TD, Grabowecky M, Suzuki S, Paller KA. Long-lasting effects of subliminal affective priming from facial expressions. Conscious Cogn. 2009;18(4):929–38. pmid:19695907
82. Martin D, Greer J. Getting to know you: from view-dependent to view-invariant repetition priming for unfamiliar faces. Q J Exp Psychol. 2011;64(2):217–23.
83. Lee SY, Kang JI, Lee E, Namkoong K, An SK. Differential priming effect for subliminal fear and disgust facial expressions. Atten Percept Psychophys. 2011;73(2):473–81. pmid:21264732
84. Goshen-Gottstein Y, Ganel T. Repetition priming for familiar and unfamiliar faces in a sex-judgment task: evidence for a common route for the processing of sex and identity. J Exp Psychol Learn Mem Cogn. 2000;26(5):1198–214. pmid:11009253
85. Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York: Academic Press; 1977.
86. Wolf FM. Meta-Analysis: Quantitative Methods for Research Synthesis. Beverly Hills, CA: Sage; 1986.
87. Varcin KJ, Grainger SA, Richmond JL, Bailey PE, Henry JD. A role for affectivity in rapid facial mimicry: An electromyographic study. Soc Neurosci. 2019;14(5):608–17. pmid:30669959
88. Sel A, Forster B, Calvo-Merino B. The emotional homunculus: ERP evidence for independent somatosensory responses during facial emotional processing. J Neurosci. 2014;34(9):3263–7. pmid:24573285
89. Hess U, Fischer A. Emotional mimicry: Why and when we mimic emotions. Soc Personal Psychol Compass. 2014;8(2):45–57.
90. Fino E, Menegatti M, Avenanti A, Rubini M. Enjoying vs. smiling: Facial muscular activation in response to emotional language. Biol Psychol. 2016;118:126–35. pmid:27164178
91. Fadiga L, Craighero L, Olivier E. Human motor cortex excitability during the perception of others’ action. Curr Opin Neurobiol. 2005;15(2):213–8. pmid:15831405
92. Avenanti A, Candidi M, Urgesi C. Vicarious motor activation during action perception: beyond correlational evidence. Front Hum Neurosci. 2013;7:185. pmid:23675338
93. Mukamel R, Ekstrom AD, Kaplan J, Iacoboni M, Fried I. Single-neuron responses in humans during execution and observation of actions. Curr Biol. 2010;20(8):750–6. pmid:20381353
94. Umiltà MA, Kohler E, Gallese V, Fogassi L, Fadiga L, Keysers C, et al. I know what you are doing: a neurophysiological study. Neuron. 2001;31(1):155–65. pmid:11498058
95. Valchev N, Zijdewind I, Keysers C, Gazzola V, Avenanti A, Maurits NM. Weight dependent modulation of motor resonance induced by weight estimation during observation of partially occluded lifting actions. Neuropsychologia. 2015;66:237–45. pmid:25462196
96. Cattaneo L, Caruana F, Jezzini A, Rizzolatti G. Representation of goal and movements without overt motor behavior in the human motor cortex: a transcranial magnetic stimulation study. J Neurosci. 2009;29(36):11134–8. pmid:19741119
97. Jacquet PO, Avenanti A. Perturbing the Action Observation Network during perception and categorization of actions’ goals and grips: State-dependency and virtual lesion TMS effects. Cereb Cortex. 2015;25(3):598–608. pmid:24084126
98. Cattaneo L, Maule F, Barchiesi G, Rizzolatti G. The motor system resonates to the distal goal of observed actions: testing the inverse pliers paradigm in an ecological setting. Exp Brain Res. 2013;231(1):37–49. pmid:23949381
99. Keysers C, Gazzola V. Hebbian learning and predictive mirror neurons for actions, sensations and emotions. Philos Trans R Soc Lond B Biol Sci. 2014;369(1644):20130175. pmid:24778372
100. Kilner JM. More than one pathway to action understanding. Trends Cogn Sci. 2011;15(8):352–7. pmid:21775191
101. Amoruso L, Finisguerra A, Urgesi C. Contextualizing action observation in the predictive brain: Causal contributions of prefrontal and middle temporal areas. Neuroimage. 2018;177:68–78. pmid:29753844
102. Lamm C, Rütgen M, Wagner IC. Imaging empathy and prosocial emotions. Neurosci Lett. 2019;693:49–53. pmid:28668381
103. Banissy MJ, Kanai R, Walsh V, Rees G. Inter-individual differences in empathy are reflected in human brain structure. Neuroimage. 2012;62(3):2034–9. pmid:22683384
104. Avenanti A, Minio-Paluello I, Bufalari I, Aglioti SM. The pain of a model in the personality of an onlooker: influence of state-reactivity and personality traits on embodied empathy for pain. Neuroimage. 2009;44(1):275–83. pmid:18761092
105. Eres R, Decety J, Louis WR, Molenberghs P. Individual differences in local gray matter density are associated with differences in affective and cognitive empathy. Neuroimage. 2015;117:305–10. pmid:26008886
106. Shamay-Tsoory SG, Aharon-Peretz J, Perry D. Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain. 2009;132(Pt 3):617–27. pmid:18971202
107. Acosta A, Adams RB Jr, Albohn DN, Allard ES, Beek T, Benning SD, Blouin-Hudon EM, et al. Registered Replication Report: Strack, Martin, & Stepper (1988). Perspect Psychol Sci. 2016;11(6):917–28. pmid:27784749