Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

Facial mimicry is the spontaneous response to others' facial expressions, mirroring or matching the expression of the interaction partner. Recent evidence suggests that mimicry may not be a purely automatic reaction but may depend on many factors, including social context, the type of task in which the participant is engaged, and stimulus properties (dynamic vs. static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgement of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic displays of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscle showed mimicry activity in response to angry faces. Moreover, women exhibited greater zygomaticus major activity in response to dynamic than to static happiness stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some aspects of emotional processing.


Introduction
The perception and interpretation of emotional facial expressions are crucial for appropriate behaviour in social contexts. It is well documented that people react to such expressions with specific, congruent facial muscle activity, a phenomenon called facial mimicry, which can be reliably measured by electromyography (EMG; e.g. [1,2]). The presentation of angry faces evokes increased activity in the corrugator supercilii (CS), the facial muscle responsible for frowning, while the presentation of happy faces is associated with increased activity in the zygomaticus major (ZM), the facial muscle involved in smiling, and decreased CS activity [3,4].
According to the matched motor hypothesis, facial mimicry is an automatic matched motor response, based on a perception-behaviour link [5,6], and is part of a more widely defined motor mimicry that is not limited to the face. In other words, perception of others' emotional facial expressions automatically evokes the same behaviour in the perceiver, and the facial expression is spontaneously copied. This idea is consistent with results of some neuroimaging studies, which have shown that both perception and execution of the same action engage overlapping areas called the mirror neuron system (MNS) [7][8][9][10]. For example, Carr, Iacoboni, Dubeau, Mazziotta, and Lenzi [11], using a paradigm in which subjects had to observe and imitate static displays of several basic emotions, found activation of both the inferior frontal gyrus and the posterior parietal cortex, supporting the involvement of the MNS in understanding facial expressions.
Recent evidence suggests that mimicry may not be only an automatic reaction, but may depend on many factors [12]. Some studies have found that the type of task in which the participant is engaged [13], as well as the social context [14], might influence facial mimicry [12,15,16]. Thus, it seems that mimicry of emotional facial expressions is not merely a simple motor reaction, but also the result of a more general process of interpreting the expressed emotion. Consistent with this idea, neuroimaging data suggest that observation of others' emotional facial expressions activates not only motor pathways [11], but also brain structures (e.g. the amygdala and insula) regarded as part of the extended MNS [17,18] and thought to be responsible for processing emotional information. Moreover, emotional brain structures were more active when subjects perceived dynamic rather than static emotional stimuli [19][20][21]. It is also possible that heightened activity of brain regions related to mirror neurons underlies the relationship between facial mimicry and the emotional experience involved in processing dynamic facial expressions.
Dynamic facial expressions are more natural and powerful than static ones [22]. It has been shown that during dynamic presentation of facial expressions, emotional recognition of subtle expressions is improved [23], emotional arousal is enhanced [24], and valence ratings [25] or judgements of emotional intensity [26] can be predicted. In addition, a few EMG studies have shown that passively observed dynamic emotional facial expressions are associated with larger EMG responses than static ones [27][28][29]; however, the data are still inconsistent. Using avatars (computer-synthesized faces), Weyers and colleagues [28] showed stronger facial reactions to dynamic than to static happy facial expressions, i.e. increased activity of the ZM and decreased activity of the CS. Contrary to the authors' assumption, angry expressions elicited no significant CS activation. In a study by Rymarczyk et al. [29] that employed morphs (stimuli prepared with computer-morphing techniques from a video database of emotional facial expressions), results were similar: happy dynamic expressions produced faster and stronger mimicry than static ones. CS reactions were small, revealing faster CS activation only for dynamic angry faces. In contrast, Sato et al. [27] showed that dynamic angry facial expressions evoked stronger CS responses than static ones, but found no difference between dynamic and static happy expressions in this muscle. Moreover, they reported a higher ZM response for dynamic than for static happy facial expressions. Taken together, most researchers agree that facial mimicry is facilitated when expressions are dynamic rather than static [27][28][29][30]. However, because of methodological differences, e.g. the different kinds of stimuli used across studies, this suggestion needs further study.
The present study investigated the impact of dynamic facial expression on facial mimicry and judgement of emotional intensity. We prepared videos of actors' emotional facial expressions. Actors were our first choice due to their proficiency as "clear carriers of emotional signals". We chose anger and happiness, since these emotions are commonly studied in the EMG literature. The apex image from each dynamic facial expression was used for the static conditions. We measured facial EMG responses from three muscles, the ZM, CS, and orbicularis oculi (OO), while participants passively viewed emotional facial expressions. Based on electromyography (EMG) and neuroimaging (fMRI) data, we assumed that dynamic facial expressions trigger the simulation of a state in motor and affective systems that represents the meaning of the expression to the subject. Thus, we expected stronger facial mimicry [27] and higher ratings of emotional intensity [31] for dynamic displays than for static images. We expected greater CS activity during anger mimicry due to the threatening, negative nature of this stimulus, although previous data have been inconsistent [28,29]. For positive emotions, we expected lower CS activity and higher responses in the ZM and OO.
It seems that OO activity indexes the emotional intensity of both negative and positive facial emotional expressions [32]. According to Ekman [33], OO activity can be considered an additional marker of genuine happiness accompanying activity in the ZM. This specific type of expression, known as the Duchenne smile, appears when a person expresses true and genuine happiness and is thought to be associated with true positive emotions [34]. Hence, we expected co-occurrence of ZM and OO activity when happiness stimuli were presented (either dynamic or static), and assumed that significantly greater ZM and OO activity in response to dynamic than to static happiness would indicate stronger motor and emotional reactions.
Additionally, given gender differences in emotional processing, we tested whether facial mimicry in this setting was moderated by subject gender. In line with the conventional wisdom that women are more "emotional" than men [35], we expected greater EMG responses in women than in men, especially for dynamic happiness displays. To our knowledge, this study is the first to address differences in facial mimicry in response to dynamic and static presentations of emotional facial expressions, and the effect of gender, using measures of OO activity in addition to the CS and ZM in a passive viewing paradigm.

Participants
Thirty-six healthy volunteers (18 females; mean age = 23.5 ± 5.3 years, range 19-34) participated in the study. None of the subjects had a history of neurological disorders or head trauma, and all had normal or corrected-to-normal vision. The study was conducted in accordance with guidelines for ethical research and approved by the Ethics Committee at the University of Social Sciences and Humanities. Each participant signed an informed consent form after hearing an explanation of the experimental procedures and was paid 50 PLN (~12 EUR) for participation.

Experimental design
The study used a two-factor within-participants design, with emotion (happiness, anger) and stimulus modality (static, dynamic) as factors.
Stimuli. Stimulus clips were selected in the following steps. First, each of 20 professional actors was filmed while expressing emotions naturally and freely (at their own pace). Actors were informed of the recording procedure and supervised by a trained psychologist, who encouraged them to self-induce the expressed emotions. After the recording sessions, two independent psychologists checked the recordings for the visibility of key features of the facial expressions of happiness and anger, and only unambiguous recordings were later rated online. Randomised subsets of the dynamic clips were then rated online by 900 people on a five-point scale of the emotional intensity of the visible expression for each of six emotions (anger, happiness, sadness, fear, disgust, and surprise), ranging from 1 (low intensity) to 5 (high intensity). Stimulus clips of two actresses and two actors, evaluated as expressing a discrete emotion, were used in the experimental procedure. A clip was considered to present a single emotion if the mean rating for its target emotional category was the highest and the 95% confidence interval for that emotion did not overlap with the confidence intervals of the other emotional categories. The emotional characteristics of the stimuli are provided in Table 1.
Each stimulus clip presented a human face shown from the front, starting with a view of the model's neutral, relaxed face (no emotional expression visible). Dynamic stimuli lasted two seconds and ended with the peak expression of a single emotion as the last frame; the peak was reached at approximately one second and remained visible for another second. In contrast, static stimuli consisted of a single frame of the peak expression and lasted two seconds. Stimuli were 880 pixels in height and 720 pixels in width. The stimuli used in the experimental procedure were a subset of a larger sample of facial expression recordings created for the purposes of a different project and as such have not been published.
Procedure. The experimental procedure was conducted individually in a sound-attenuated and electromagnetically shielded room. To conceal the purpose of the facial electromyography recording, participants were told that the experiment concerned sweat gland activity while watching the faces of actors selected for commercials by an external marketing company.
In the consent form, we informed subjects that the experiment would last approximately 40 minutes and be divided into four parts: preparation, observation of stimuli, evaluation of stimuli, and completion of a demographic questionnaire. After participants signed the consent form, the EMG electrodes were attached. Participants were verbally encouraged to feel at ease and behave naturally. We also asked participants to complete a dummy questionnaire before the experimental session.
During the experimental session, participants passively viewed stimuli on a grey background in the centre of a screen. Consistent with Dimberg [1], stimuli were presented in a block design of 8 stimuli, randomised within and between blocks. Each stimulus was preceded by a white fixation cross, 80 pixels in diameter, appearing two seconds before stimulus onset. Interstimulus intervals with a blank grey screen lasted 8.75-11.25 s. Within each block, randomised stimuli of two opposite-sex pairs of each emotional expression (happiness, anger) were presented. No stimuli of the same actor were shown consecutively, and within each block each stimulus was repeated once. In summary, each stimulus was shown 4 times within each trial type, for a total of 16 presentations within each condition. Participants then watched all stimuli again to evaluate their emotional intensity. After each stimulus presentation, the rating was made with a computer mouse on a slider that increased in vertical size from left (low intensity) to right (high intensity). Finally, the electrodes were removed, after which participants completed the demographic questionnaire and were informed of the real aim of the study.
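The randomisation constraint described above (no two consecutive stimuli from the same actor) can be sketched with simple rejection sampling. This is an illustration only: the actor labels and block composition below are hypothetical placeholders, not the actual stimulus set or presentation software.

```python
import random

def shuffle_block(stimuli, max_tries=1000):
    """Shuffle one block so that no two consecutive stimuli share an actor.

    `stimuli` is a list of (actor, emotion) pairs; rejection sampling
    reshuffles until the no-repeat constraint holds.
    """
    for _ in range(max_tries):
        order = random.sample(stimuli, len(stimuli))
        if all(a[0] != b[0] for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("no valid ordering found")

# Hypothetical block: two opposite-sex pairs x two emotions (8 stimuli).
block = [(actor, emotion)
         for actor in ("F1", "F2", "M1", "M2")
         for emotion in ("happiness", "anger")]
order = shuffle_block(block)
```

Rejection sampling is a pragmatic choice here: with 8 stimuli from 4 actors, a valid ordering is found within a handful of attempts.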

Apparatus
Experimental events were controlled using Presentation® software (version 14.6, www.neurobs.com) running on an IBM computer with a Microsoft Windows operating system. The rating of stimuli was performed with a program written in Presentation®. All procedures were displayed on a 19-inch LCD monitor (NEC MultiSync LCD 1990 FX; 1280 x 1024 pixel resolution; 32-bit colour; 75 Hz refresh rate) at a viewing distance of approximately 65 cm.

EMG recordings
Facial electromyographic activity was measured using Ag/AgCl miniature electrodes with a diameter of 4 mm. Electrodes were filled with electrode paste (Brain Products GmbH, Munich, Germany) and positioned over three muscles (the CS, ZM, and OO) on the left side of the face [36]. A reference electrode, 10 mm in diameter, was attached to the forehead. Before placement of the electrodes, the skin was cleaned with alcohol and a thin coating of electrode paste was applied. This procedure was repeated until electrode impedance was reduced to 5 kΩ or less. EMG recordings were made using a BrainAmp amplifier (Brain Products) and BrainVision Recorder (version 1.2). The signal was hardware low-pass filtered at 560 Hz, digitized with a 24-bit A/D converter at a sampling rate of 2 kHz, and stored on a computer running MS Windows XP.

Data analysis
Pre-processing. Data were analysed with BrainVision Analyzer (version 2.1.0.327). Raw EMG data were off-line re-referenced to bipolar measures and filtered with 30 Hz high-pass, 500 Hz low-pass, and 50 Hz notch filters. Signals were rectified, integrated with a 50 ms moving-average filter, resampled to 1 kHz, and tested for artefacts. The stimulus and baseline periods each lasted 2 s; the baseline period started 2 s before stimulus onset (during visibility of the fixation cross). To exclude trials with excessive facial muscle activity, a trial was classified as an artefact and excluded from further analysis if the averaged signal of a single muscle exceeded 8 μV during the baseline (5% of trials were excluded). Each remaining trial was blind-coded and visually checked for artefacts; no additional trials were excluded. Next, trials were baseline-corrected: the EMG response was measured as the difference in averaged signal activity between the stimulus period and the baseline period. The signal was averaged for each condition and participant and imported into the Statistical Package for the Social Sciences (SPSS) 21 for statistical analysis. Data were also checked for outlier trials in which the EMG response exceeded the mean ± 3 standard deviations; no additional trials were excluded.
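A minimal sketch of the trial-level scoring described above (rectification, 50 ms smoothing, baseline artefact check, baseline correction), assuming a 1 kHz signal and 2 s epochs that have already been filtered as in the text. The function and variable names are our own illustration, not the BrainVision pipeline.

```python
import numpy as np

FS = 1000                    # sampling rate after resampling (Hz)
WIN = int(0.05 * FS)         # 50 ms moving-average window (samples)
ARTIFACT_UV = 8.0            # baseline artefact threshold (µV)

def smooth_rectify(emg):
    """Full-wave rectify, then smooth with a 50 ms moving average."""
    kernel = np.ones(WIN) / WIN
    return np.convolve(np.abs(emg), kernel, mode="same")

def emg_response(trial, baseline):
    """Baseline-corrected response for one trial, or None if artefactual.

    `trial` and `baseline` are 2 s epochs (µV) of already-filtered EMG;
    a trial is rejected when mean baseline activity exceeds 8 µV.
    """
    trial_s = smooth_rectify(trial)
    base_s = smooth_rectify(baseline)
    if base_s.mean() > ARTIFACT_UV:
        return None               # excessive baseline muscle activity
    return trial_s.mean() - base_s.mean()
```

Responses returned this way would then be averaged per condition and participant before the group-level statistics.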
Facial EMG. To test differences in EMG responses, mixed-design repeated-measures ANOVAs with two within-subjects factors (emotion: happiness, anger; stimulus modality: static, dynamic) and one between-subjects factor (sex: female, male) were used. Separate ANOVAs were calculated for the responses of each muscle, and differences are reported with a Bonferroni correction for multiple comparisons.
To confirm that EMG activity changed from baseline and that facial mimicry occurred, the EMG data for each significant effect were tested against zero (baseline) using one-sample, two-tailed t-tests.
Additionally, Pearson correlations between muscles were calculated within each condition and significant effect in order to confirm the co-occurrence of muscle responses.
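The baseline test and the between-muscle correlation can be sketched as follows. The response values below are fabricated for illustration, not data from the study.

```python
import numpy as np
from scipy import stats

# Fabricated per-subject baseline-corrected responses (µV) for one
# condition; each value stands in for one subject's mean EMG change.
zm = np.array([0.8, 1.2, 0.5, 0.9, 1.1, 0.7, 1.0, 0.6])
oo = np.array([0.4, 0.9, 0.3, 0.7, 0.8, 0.5, 0.6, 0.4])

# Did the ZM response differ from baseline (zero)? One-sample, two-tailed.
t_stat, p_val = stats.ttest_1samp(zm, popmean=0.0)

# Do ZM and OO responses co-occur across subjects? Pearson correlation.
r_val, p_corr = stats.pearsonr(zm, oo)
```

A significant positive t-statistic would indicate activity above baseline (mimicry), and a significant positive correlation between ZM and OO would support the Duchenne-smile-like co-activation discussed later.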
Psychological ratings. Differences between the ratings of emotional intensity for the stimulus categories, with regard to subjects' sex, were tested with mixed-design repeated-measures ANOVAs with two within-subjects factors (emotion: happiness, anger; stimulus modality: static, dynamic) and one between-subjects factor (sex: female, male). Differences were reported with a Bonferroni correction for multiple comparisons (Fig 1).
Between-muscles correlations. The EMG data showed correlations between the activity of the CS and ZM (see Table 2 for correlation coefficients in all experimental conditions). Dynamic anger expressions were rated as the most intense, differing from all other conditions (Fig 3).

Discussion
The present study examined facial mimicry and judgement of emotional intensity in dynamic emotional facial expressions. Additionally, we tested whether the strength of facial mimicry could be affected by subjects' gender. We found that all muscle responses to happy stimuli measured by EMG differed from analogous responses to angry stimuli. However, the responses of the ZM and OO were different depending on whether the stimulus was dynamic or static. Furthermore, the ZM response depended on whether the observer was female or male. Subjects reacted spontaneously to happy facial expressions with increased ZM and OO activity and with decreased CS activity. These results together with positive correlations of ZM and OO activity when happy stimuli were presented indicated a Duchenne smile-like pattern of muscle facial mimicry [37,38]. In all three muscles, the change in the facial muscle reactions was greater in response to dynamic than static happy displays. Similar results were obtained in the ratings of emotional intensity for happiness and anger. Moreover, we found that women exhibited greater ZM muscle activity for dynamic happiness than for static stimuli. Similar to previous studies, neither static [28] nor dynamic [29] angry facial expressions evoked any significant response in the EMG activity of the CS. Our results concerning happiness agree with those of previous EMG studies, in which passive observation of happy facial expressions elicited an expected pattern of ZM and CS muscle activity interpretable as facial mimicry [39,40]. We observed that the response of both muscles was more pronounced when dynamic stimuli were presented, similar to other studies that applied avatars (computer synthesized faces) [28], morphs (stimuli selected from a video database of facial expressions of emotion prepared by computer-morphing techniques) [29], or human expressions [27]. 
Most researchers [12,39] argue that such facial muscle reactions during passive observation of an emotional display are automatic. However, based on the majority of studies, it is difficult to conclude whether such reactions involve only motor components or both motor and emotional components. Thus, in our study, OO activity was measured in addition to the ZM and CS. As mentioned earlier, OO activity accompanying ZM activity is typical of the experience of positive emotions. The finding that happy dynamic displays were mimicked with increased ZM and OO responses is in line with previous studies [14,41]; however, many of these involved social contexts and thus differ from our methodology, which applied a passive viewing paradigm. For example, Hess and colleagues [42] found increased OO EMG activity when their subjects had to judge dynamic happiness expressions similar to those encountered in everyday life. Conversely, van der Schalk et al. [43], using static happiness pictures, did not find increased OO activity and interpreted an OO response as indicative of an intense experience of happiness, which the participants of their study "were not likely to feel". To sum up, it seems that more natural situations, i.e. social interactions, as well as the perception of dynamic happy displays, result in more evident facial mimicry in the ZM and OO muscles, suggesting the experience of true happiness. In line with the neuroimaging data described in the introduction, we assume that the stronger OO activity in response to dynamic vs. static happy stimuli may mean that subjects recruited both motor and emotion-related brain structures.
In our study, we found no facial mimicry of anger in the CS in response to either static or dynamic presentations. This is somewhat puzzling, because most studies have shown increased CS activity in response to angry facial expressions and have interpreted this as automatic facial activity [39,40]. However, some authors who applied passive viewing paradigms also failed to observe any significant difference in mean CS activity in response to either static or dynamic angry facial expressions [28,29]. The lack of mimicry in some studies has been attributed to the regulation of emotional expression by social and cultural norms [41], or to negative stimuli losing their valence in the artificial situation of the laboratory setting [2]. On the other hand, "anger mimicry need not actually be an anger expression at all" [12]. Increased CS activity could also reflect global negative affect [2] or mental effort [44]. Thus, it is possible that in our laboratory setting with a passive viewing paradigm, no mental effort was engaged or the anger lost its valence. Furthermore, some interesting explanations concerning the mimicry of anger come from studies using not only passive paradigms but also more interactive social contexts [12]. These studies also showed that mimicry of an angry expression is not an automatic reaction. Some authors found that anger was mimicked only when it was clearly directed at a common foe [14,43], whereas anger directed at the observer was not mimicked [14]. Recently, Carr et al. [45] showed that "high power individuals" did not show pure anger mimicry in response to angry expressions of other high power individuals. It seems, then, that whether anger mimicry occurs depends on many factors.
In our study, we found that subjects tended to mimic happy emotional expressions. This seems logical, because smiles create and support good social relations, improve well-being, and have a low social cost [14]. EMG activity of muscles related to happiness (i.e. smiling, measured in the ZM) during happy video clips has been shown to predict increased prosocial behaviour [46]. Moreover, smiles serve as socially rewarding stimuli [47], evoking a positive response and a tendency to return the reward [48]. These results correspond with the findings of neuroimaging studies revealing that the processing of happy expressions activates orbitofrontal regions related to reward processing [20,49].
We also found that women reacted with increased ZM activity to dynamic compared with static happy facial expressions. However, we did not observe differences between men and women in any other muscle responses. Women are commonly thought to be more emotional than men. For example, several studies have reported a female advantage in the decoding of non-verbal emotional cues [50] or higher scores than males on self-report empathy scales [51,52]. Moreover, neuroimaging research on empathy has shown that females recruit areas containing mirror neurons to a higher degree than males do during empathic face-to-face interactions [53]. However, it is not clear whether women also express their emotions more than men do. Some earlier EMG studies have found women to be more expressive than men [54][55][56], but there is no consistent pattern of sex differences in facial mimicry. For example, women reacted more strongly with the ZM in response to happy faces [57], but other research does not support this finding [58]. Our study suggests that a subtle pattern of muscle activity may be attributed to female susceptibility to emotional expressions in facial mimicry, i.e. the dynamic character of happy displays may be an important factor in eliciting a facial mimicry reaction in the smiling muscle (ZM). This result is partially in line with socially defined display rules, since women tend to smile more than men do [59,60]. Taken together, because women are more often involved in positive interactions (e.g. nursing) than men, happiness seems to be an emotion worth mimicking in real-life situations. Further studies are needed to evaluate the role of dynamic emotional expression in facial mimicry in both sexes, as well as with regard to stimulus sex.
Our next goal was to assess the role of dynamic stimuli in the judgement of the emotional intensity of facial expressions. We found that dynamic expressions were rated as more intense than static ones; moreover, angry expressions were rated as more intense than happy ones. These data are consistent with our assumptions regarding the greater strength of facial mimicry in response to dynamic expressions. Similar effects of dynamic expressions have been demonstrated in studies rating experienced and recognized emotional arousal [27,61]. Our results, together with others [26,62], suggest that the dynamic properties of stimuli convey unique information and enrich emotional expression, so that dynamic stimuli carry more complex cues important for communication in social interactions. Such an explanation is consistent with neuroimaging data revealing that the perception of dynamic facial displays engages brain regions sensitive to motion [63] as well as to stimuli signalling intentions [64], i.e. the superior temporal sulcus. The observed difference between the perceived intensities of anger and happiness may be interpreted in the context of evolutionary psychology. Angry facial expressions signal negative intentions [65], and understanding them contributes significantly to better adaptation [66]. In other words, it is better to overestimate the intensity of anger than to underestimate this potential danger signal.
One may ask why happy stimuli were mimicked while angry stimuli were not, even though the latter were rated as more intense than the former. Our data rule out greater salience as the reason why happy stimuli are more readily mimicked, since intensity ratings for anger (especially dynamic anger) were the highest. It seems that such intensity ratings engage cognitive rather than emotional processes [67]. More research is needed to clarify this issue.
In summary, our findings partially confirmed the impact of dynamic facial expressions on facial mimicry, i.e. there was greater mimicry of dynamic than static happiness and no mimicry of angry expressions. The discussion of the mechanisms underlying facial mimicry is ongoing [12,27,68]. Sato and colleagues [27] proposed that the MNS might play an important role in facial mimicry by matching motor outputs of facial motions with visual inputs of facial motions. Other interpretations arise from the results of neuroimaging studies [19,20,63], which showed that passive observation of dynamic emotional stimuli activates motor and affective brain areas. Some insights into the nature of automatic facial mimicry were provided by a study simultaneously measuring BOLD and facial EMG in an MRI scanner [17]. A prominent part of the classic MNS (i.e. the inferior frontal gyrus, IFG), as well as areas responsible for emotional processing (i.e. the insula), were activated when subjects passively viewed static happy, sad, and angry avatar expressions. The authors proposed that during an initial emotional experience, all the sensory, affective, and motor neural systems are activated together [17].
To conclude, our data suggest that, in the case of happiness, the strength of facial mimicry can be modulated by the modality of stimulus presentation (dynamic, static), as well as by the subject's sex. Future research is needed to further explore the role of facial mimicry with respect to contextual variables and individual differences in the emotional processing of other emotions, e.g. disgust or fear.
Supporting Information S1 Fig. Mean (± SE) EMG activity changes for the zygomaticus major during presentation conditions, moderated by sex. Asterisks indicate significant differences from baseline EMG responses: *p < 0.05. Asterisks with lines beneath indicate significant differences between conditions (simple effects) in EMG responses: **p < 0.05. (TIF)