Abstract
Facial mimicry, the tendency to imitate the facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry using established behavioral procedures and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that this supporting effect may be reduced when the task becomes more cognitively challenging.
Citation: Holmer E, Rönnberg J, Asutay E, Tirado C, Ekberg M (2024) Facial mimicry interference reduces working memory accuracy for facial emotion expressions. PLoS ONE 19(6): e0306113. https://doi.org/10.1371/journal.pone.0306113
Editor: Steven R. Livingstone, University of Ontario Institute of Technology, CANADA
Received: March 8, 2023; Accepted: June 11, 2024; Published: June 26, 2024
Copyright: © 2024 Holmer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data file (.csv) and analysis script (in R) for this study are available on the Open Science Framework (see: https://osf.io/yhq47/?view_only=9d22e7feb6e043829c1735bc191fbc44).
Funding: This work was partly supported by Linnaeus Centre HEAD excellence center grant (349-2007-8654) from the Swedish Research Council (https://www.vr.se) and by a program grant (2012-1693) from FORTE (https://forte.se/), both awarded to JR. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Working memory is the cognitive system we engage when performing an activity and dealing with both internal and external input streams, from which we need to select the relevant bits and suppress the irrelevant ones to solve the task at hand [1]. In everyday communication, this system is critical not only for making meaning out of verbal input but also for integrating verbal utterances with information in non-verbal expressions [2], such as emotion expressions [3]. In optimal conditions, working memory underlies the effective use of multimodal and multilevel abstractions [4,5]. The aim of the present study was to investigate whether working memory for facial emotion expressions is affected by behavioral interference with presumed facial mimicry, under low and high working memory load.
According to resource models of working memory, the amount of information to be stored and the precision with which this information is internally represented determine successful processing [4–6]. In line with this perspective, increased demand on working memory storage consistently leads to poorer memory performance [1,7], and performance is poorer when stimuli are abstract [8,9] or of low quality [10,11]. Negative effects on working memory performance have also been reported for facial emotion expressions [12–16]. Further, when working memory resources are occupied, the otherwise prioritized processing of emotions might be suppressed [17,18], and the recognition of emotional content might be impaired [19–21]. Thus, working memory resources seem to be critical for the processing of emotional input.
From another perspective, numerous studies show that the recognition of emotional states when working memory demands are low is influenced by humans’ tendency to imitate others’ facial expressions (i.e., facial mimicry) [22–32]. Thus, when the cognitive system allows for it, facial mimicry might be a prioritized route for emotion recognition. It might be the case that facial mimicry contributes to the precision of the working memory representation of facial emotion expressions. However, it remains unclear whether facial mimicry supports the processing of facial emotion expressions regardless of the level of working memory load.
Simulation, i.e., the reenactment of an observed facial emotion expression in the observer [33], may contribute to the observer’s recognition of an emotion expression [34–36] and may thus influence the precision of representations of emotion expressions. It has been known for some time that watching faces expressing different emotions elicits facial mimicry responses [37]. This finding makes it plausible that facial feedback contributes to the accurate representation of emotion expressions. The strongest evidence for this contribution comes from experimental work showing that facial mimicry interference impairs [25,26,28,29] or slows down [27,31] the perception of emotion expressions from faces. Others have reported enhanced perception [32] and short-term memory [24] of happy emotion expressions when smiling is provoked in participants. In addition, concurrent movement of facial muscles, which suppresses facial mimicry, seems to impair the precision of emotion categorization [30]. Further evidence for the role of facial mimicry in the recognition of emotion expressions comes from a study of alexithymia, an impaired ability to represent, recognize, and verbally label emotional states; this study suggests that individuals with alexithymia show weaker facial mimicry responses than healthy controls when watching emotion expressions [38]. On a related note, a recent meta-analysis showed that facial feedback (i.e., using facial movement to provoke a certain emotional response in oneself) has a small but detectable effect on emotional experience [39]. However, studies of medical conditions that lead to facial paralysis (e.g., Moebius syndrome), and thus the inability to produce facial mimicry responses, have shown preserved emotion recognition ability [40], suggesting that facial mimicry is not critical for successfully representing emotions. Alternative perspectives regard facial mimicry as a communicative signal that is selectively used as a means of communication, depending on the context of the interaction [41–43]. Further, studies suggest that cognitive load might suppress facial mimicry responses and that facial mimicry is not critical for the processing of emotion expressions in demanding settings [44,45], such as when working memory load is high.
To the best of our knowledge, existing studies investigating the effect of facial mimicry manipulations on working memory processing have done so in short-term memory tasks with a low level of working memory load. For example, Kuehne et al. [24] observed that short-term memory for happy emotion expressions improved when smiling was provoked by asking participants to engage in a pen-between-teeth procedure (i.e., participants had to bite on a pen without touching it with their lips). In another study, Sessa, Lomoriello, and Luria [46] investigated the effect of facial mimicry interference on short-term memory storage of facial emotion expressions and measured the neural signature of this effect using electroencephalography. Although facial mimicry interference did not influence accuracy in their change-detection short-term memory task, participants with medium to high levels of affective empathy showed a decreased amplitude of an event-related potential component usually seen as a marker of visual working memory. These earlier findings suggest that facial mimicry might influence not only the perception of emotion expressions but perhaps also their short-term and working memory storage. However, previous studies did not manipulate both working memory load and facial mimicry interference, and the relationship between these mechanisms is thus not known.
According to Wood, Rychlowska, et al. [35], interfering with facial mimicry produces somatosensory feedback that is incongruent with simulation of the emotion expression, which in turn leads to poorer precision of the representation; distorted feedback reduces the speed and accuracy of perception. Wood, Lupyan, et al. [23] proposed that incongruent facial mimicry may interfere with the precision of visual working memory for emotion expressions, and Wood, Rychlowska, et al. further proposed that somatosensory simulation may extend visual working memory capacity for facial expressions. These arguments suggest that simulation and facial feedback might impact the precision of working memory for facial emotion expressions and that a negative effect of facial mimicry interference on the precision of representations might become more pronounced when working memory load is high. There are few working memory models focusing on how emotional input is stored in working memory. One model assumes that a dedicated cognitive structure (i.e., the hedonic detector) is used for evaluating the valence and intensity of an input that is represented and stored in working memory [47–49]. This structure has a neutral valence point from which positive or negative deviations are identified, and this information is used to form a representation of the specific emotion. Thus, if we assume that facial mimicry is related to the emotional experience of an observer [39], it follows that the precision of the representation will be reduced by facial mimicry interference. This assumption, together with the resource model perspective on working memory [5,6], inspired our expectation that task load and mimicry interference would produce interacting negative effects in the context of working memory for facial emotion expressions.
The present study
We sought to examine whether interfering with facial mimicry negatively affects the precision of memory for facial emotion expressions when working memory load is low and high. To investigate this, we used an n-back working memory task [50], which is commonly used to investigate working memory in experimental settings [1,7,51]. The paradigm involves memory storage and updating as well as executive control; thus, it is designed to tax the working memory system. In everyday communication, multiple sources of information are tracked simultaneously to build a coherent representation of the meaning of the interaction [5]. This complexity is well simulated in a resource-demanding task such as the n-back task. In a typical design, the participant is required to keep track of a sequence of items (e.g., digits, words, pictures) and to make a response if the current item matches the item n steps back in the sequence [1,7,50]. By manipulating n, often from one to three, working memory resources are increasingly taxed.
For the purpose of the present study, we used one-item (low, 1-back) and two-item (high, 2-back) working memory loads in a facial emotion expression n-back task. The targets were happy, neutral, and angry expressions, corresponding to the positive-negative range on an assumed internal valence scale [47]. The n-back paradigm has been used to investigate working memory for facial emotion expressions in some previous studies [12–14,52,53], and results indicate that working memory load has a detrimental effect on performance. Thus, representations of facial emotion expressions are, like those of other stimuli, vulnerable to working memory load. Because of this, we limited working memory load to two items; piloting had indicated that a three-item working memory load might be too difficult [16]. Most typically, n-back tasks are episodic, and each target item is an exact match to a previous item. However, in some versions of the task, responses are made based on one feature of the stimulus while other features must be ignored. In the present study, we wanted to investigate the precision of memory representations of emotion expressions and not the exact episodic memory trace. Therefore, we prepared an n-back task in which the participant had to monitor the invariance of facial expressions across different faces [54,55]. We predicted that performance accuracy would decrease with increasing working memory load, and we expected this effect to be strongest when facial mimicry interference was high.
Methods
Participants
Native Swedish-speaking individuals with normal or corrected vision, between 18 and 35 years old, were recruited for this study. All participants met the following inclusion criteria: absence of sensory and physical disabilities (except corrected visual deficits), absence of neuropsychiatric and developmental disabilities, and absence of psychological disorders. The sample size was determined based on resource limitations and heuristics [56]. More specifically, previous studies with similar manipulations and designs showed statistically significant effects with samples in the range of 20 to 50 participants [22,24,28,29,57], and a sample of this size was also deemed feasible in our study given the available funding and the planned balancing of conditions. Thirty-six participants were recruited, so our intended sample size was reached. Participants were recruited through advertisements on the University campus and by approaching groups of students to inform them about the study. Participants were between 20 and 28 years old (M = 23.7, SD = 1.99), and most were female (72%). The Matrices subtest from the WAIS-IV [58] indicated above-average non-verbal ability (M = 13.1, SD = 3.08), and the Letter-Number Sequencing subtest from the WAIS-IV indicated average, or slightly above-average, verbal working memory (M = 11.4, SD = 2.25). The study adheres to the ethical principles of the Declaration of Helsinki [59], was approved by the Regional Ethical Review Board in Linköping (dnr 2017/141-31), and the participants gave their written informed consent for participation. The individual displayed in Fig 1 in this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these case details. Data were collected in the spring of 2019 and pseudonymized before analysis.
Graphic example of the high interference (left) and low interference (right) facial mimicry manipulations. The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details.
Materials
DMDX version 5.3.1.13 [60] was used for the n-back task and for the physical matching practice task [61], which we implemented to familiarize participants with the response procedure in the n-back task. The stimuli, taken from the Radboud Faces Database [62], consisted of 27 pictures depicting angry, happy, and neutral facial emotion expressions from nine different persons. Pictures were presented at a resolution of 800 by 600 pixels in the center of the screen on a white background. We used photographs of individuals between 18 and 35 years old to match the age span of the potential participants and consequently to minimize the own-age bias in face processing known to affect attention-distraction [63] and memory [64]. The persons depicted in the pictures were all female, since evidence suggests that women show slightly more emotional expressivity than men, especially for positive emotions and internalized negative emotions [65]. For the facial mimicry manipulations applied while performing the n-back task, we used plastic foam rods approximately 9 mm in diameter and two different sizes of a strong, non-elastic sports support tape, 4 by 2.5 cm in width. The tape is coated with a hypoallergenic, latex-free zinc oxide adhesive.
Design
Facial mimicry manipulations.
In the high interference condition (see Fig 1), participants were instructed to pull back their lips over their teeth and bite down on a foam plastic rod with a constant, moderate level of pressure [57]. We implemented a taping procedure modelled on previous studies [31,66], along with the use of the plastic foam rod. The purpose of the manipulations was to interfere with the activity of the zygomaticus major (by biting the foam rod) and corrugator supercilii (by the taping) muscles, which are activated when expressing happiness through smiling and anger through frowning, respectively [37,67]. Based on previous research [31,57,66], we hypothesized that these procedures would interfere with facial mimicry involving muscles in the upper and lower parts of the face, leading to facial feedback that was incongruent with sensorimotor simulation.
In the low interference condition (see Fig 1), participants were instructed to merely let the plastic foam rod rest between their lips without applying any pressure [57]. A taping procedure, based on the control conditions in the studies by Wood et al. [31] and Carpenter and Niedenthal [66], was implemented along with the foam plastic rod manipulation. We reasoned that this procedure would not interfere with frowning and would only slightly interfere with smiling. To ensure adherence to the manipulations, participants were shown the pictures in Fig 1 and were given some time to adjust so that they could follow the instructions appropriately. Participants were informed that the purpose of the manipulations was to control for facial muscle tension [25].
N-back experiment.
Participants performed six blocks of the n-back task. Under each of three conditions of facial mimicry interference (no interference, low interference, and high interference), they performed either a block of 1-back (low working memory load) followed by a block of 2-back (high working memory load), or the reverse (see Fig 2 for an illustration of the structure of the experiment). The order of facial mimicry manipulation and working memory load was counterbalanced across participants. Within a block, nine of the 27 pictures from the Radboud Faces Database [62] were used, representing three individuals depicting the three emotion expressions. The specific pictures used were balanced across participants and conditions. Each block started with the presentation of a cue, i.e., “1-back” or “2-back”, for 4 seconds, after which a fixation cross was shown for 1 second before the first trial. Blocks consisted of 54 trials (i.e., pictures of facial emotion expressions), 18 trials of each emotion category (happy, neutral, and angry). In each trial, a picture of a facial expression was displayed for 1.25 seconds, followed by a fixation cross presented for 0.75 seconds before the next trial; responses were accepted until the onset of the next trial, so the response window on each trial was 2 seconds. The participant had to indicate whether the facial emotion expression matched the facial emotion expression n steps back (1-back or 2-back) in the sequence by pressing designated buttons for “yes” and “no”. Pictures that matched on emotion expression n steps back never matched on the identity of the person in the picture. The three facial emotion expressions occurred as n-back targets an equal number of times (six each) in every sequence, and the order of expressions was balanced. Between the two-block sequences belonging to each facial mimicry manipulation, participants were given the opportunity to pause briefly (no longer than a few minutes) before continuing to the next block. Between the two blocks within a facial mimicry manipulation, there was a five-second pause. Before every second block, the material used for the mimicry manipulation was changed.
An overview of how the facial mimicry interference (no, low, and high interference) and working memory load (low, 1-back, and high, 2-back) conditions were distributed across the six experimental blocks, each including 54 trials, for one participant. The order of conditions was balanced across participants.
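To make the response rule concrete, the following R sketch (our own illustration, not the original DMDX implementation) shows how target trials can be identified in this emotion-based n-back variant: a trial counts as a match when its emotion expression equals that of the trial n steps back, whereas, in the actual stimulus lists, the depicted identity never matched on such trials.

```r
# Illustrative sketch (not the original DMDX implementation): flag n-back
# emotion targets in a sequence of emotion labels. In the actual stimulus
# lists, a picture matching on emotion n steps back never showed the same
# identity, so only the expression could be used for the decision.
is_emotion_nback_target <- function(emotion, n) {
  k <- length(emotion)
  target <- rep(FALSE, k)
  for (i in seq_len(k)) {
    if (i > n) {
      target[i] <- emotion[i] == emotion[i - n]
    }
  }
  target
}

# Toy 2-back example with the three expression categories used in the study.
emotion <- c("happy", "angry", "happy", "angry", "neutral", "angry")
is_emotion_nback_target(emotion, n = 2)
#> [1] FALSE FALSE  TRUE  TRUE FALSE  TRUE
```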
Valence and arousal ratings.
After finishing the n-back task, any remaining tape was removed, and participants viewed the 27 pictures included in the task, one after another in a fixed random order, and rated the valence and arousal of the expressions. The concepts of valence and arousal were demonstrated using pictures of self-assessment manikins [68], and ratings were made on a 9-point Likert scale from 1 (negative/weak) to 9 (positive/strong). The main purpose of the valence and arousal ratings was to validate the perceived emotional content of the pictures. The ratings also provided estimates of the participants’ internal representations of the stimulus set, which we used as control variables in the analysis.
Statistical analysis
To investigate our main prediction that a negative effect of facial mimicry interference on emotion expression precision increases with increasing working memory load, we fitted a generalized linear mixed-effects model with the within-group factors working memory load (two levels: low, 1-back, and high, 2-back) and facial mimicry interference (three levels: no, low, and high), as well as their interaction. The dependent variable was whether the response on a given trial was correct. Control factors included block order, to account for potential habituation and mnemonic effects, as well as individual mean valence and mean arousal estimates for the three types of emotion expressions in the stimulus material (angry, happy, and neutral), to model individual differences along these dimensions. The random effect structure included correlated intercepts and slopes for load at the level of the individual. Analysis was performed in R [69] using the glmer function from the lme4 package [70] for model estimation, and the emmeans package [71] for testing simple effects of interactions. To deal with the binary outcome variable, a logit link function was applied. The model was estimated using maximum likelihood estimation with the bobyqa optimizer and 1000 iterations, and the Satterthwaite approximation of degrees of freedom was applied to test fixed effects. Working memory load was dummy-coded using one variable, with low load as the reference at 0. Facial mimicry interference was also dummy-coded, using two variables, one for low interference and another for high interference, with no interference as the reference at 0 in both. We used the sjPlot package [72] to run regression diagnostics for our main model, including assessments of variance, homoscedasticity, normality, random effects, and outliers, which indicated that the model met its assumptions.

To test the validity of the emotion expressions in the pictures, mean valence and mean arousal ratings for angry, neutral, and happy expressions were compared in two separate analyses. The random effect structure in these analyses included random intercepts at the level of the individual; a more complex random effect structure created convergence issues. The assumption of normality was violated for the rating data, which clustered at several points of the scale. Hence, a Gaussian mixture model (GMM) approach was used for these analyses, implemented with the normalmixEM function from the mixtools package [73] to generate the best fit. This approach allowed the model to capture complex patterns in the data and to perform accurately despite the violation of the normality assumption. The effects of angry and happy emotion expressions were modelled using two dummy variables, with neutral emotion expressions as the reference at 0. The model was estimated using maximum likelihood estimation with the Nelder-Mead optimizer and 1000 iterations.

No data were missing, and the significance level was set to α = .05 in all statistical analyses. The data file (.csv) and analysis script (in R) for this study are available on the Open Science Framework (see: https://osf.io/yhq47/?view_only=9d22e7feb6e043829c1735bc191fbc44).
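As a rough illustration of the model specification described above, the sketch below shows how such a model could be fitted with lme4 and followed up with emmeans. The data frame and column names (dat, accuracy, load, interference, block_order, valence, arousal, id) are hypothetical placeholders, and optimizer settings are only approximated; the exact specification used in the study is available in the analysis script on the OSF.

```r
library(lme4)
library(emmeans)

# Sketch of the accuracy model described above. Assumed trial-level data
# frame `dat` with (hypothetical) columns: accuracy (0/1), load (factor with
# "1-back" as reference), interference (factor with "none" as reference,
# then "low" and "high"), block_order (numeric), valence and arousal
# (participant-level mean ratings for the expression category of the trial),
# and id (participant identifier).
m <- glmer(
  accuracy ~ load * interference + block_order + valence + arousal +
    (1 + load | id),                      # correlated intercepts and load slopes per participant
  data    = dat,
  family  = binomial(link = "logit"),     # logit link for the binary outcome
  control = glmerControl(optimizer = "bobyqa")
)

summary(m)

# Simple effects of interference within each level of load (cf. emmeans follow-up)
emm <- emmeans(m, ~ interference | load)
pairs(emm)
```

With R's default treatment contrasts, the factor coding above reproduces the dummy coding described in the text (reference levels at 0).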
Procedure
Testing was conducted in a quiet room at the University with only the test administrator and the participant present. To familiarize the participant with the response procedure of the n-back experiment (responding yes or no after making a decision based on the features of a stimulus), a simple physical matching task [61,74] was performed first. In the physical matching task, pairs of letters that either matched or did not match were presented, and the task was to indicate with a button press when a pair matched. The physical matching task was followed by practice trials on the emotion expression n-back task, consisting of two shorter sequences with 1-back and 2-back working memory load but without any facial mimicry manipulation. Participants practiced until they achieved a total error rate of 30% or less in a sequence. We assumed that potential baseline differences between participants would be negated by these practice trials administered before the experiment.
Results
Accuracy
For accuracy, there was a main effect of working memory load (β = -1.30, df = 11652, p < .001): the higher the load, the lower the performance accuracy (see Table 1). Further, there was a negative main effect of high interference (β = -0.61, df = 11652, p = .009), but no main effect of low interference, suggesting that only performance accuracy in the high interference condition was poorer than in the no interference condition. We found positive main effects of valence (β = 0.078, df = 11652, p < .001) and arousal (β = 0.26, df = 11652, p = .003), meaning that performance accuracy improved when mean valence and mean arousal were higher. Finally, the main effect of block order was also positive and statistically significant (β = 0.076, df = 11652, p < .001); that is, participants improved as they progressed through the task. However, the critical test of our study was the interaction between working memory load and high interference. Surprisingly, the statistically significant interaction was in the opposite direction of our prediction (β = 0.30, df = 11652, p = .024, displayed in Fig 3). This result suggests that the effect of mimicry interference was suppressed, rather than enhanced, by increased working memory load. As expected, the interaction between working memory load and low interference was not statistically significant (see Table 1). Following up the simple effects of the statistically significant interaction between working memory load and high interference revealed that, when working memory load was high (2-back), the effect of high interference was not significant (β = 0.00, df = 11652, p = 1.00). The pattern and direction of the main and interaction effects suggest that there was a negative effect of the high interference condition when working memory load was low, but not when the load was high (see Fig 3).
The estimated probability of a correct response (y-axis) for the high (turquoise line) and no (red line) interference conditions at low and high levels of working memory load (x-axis), based on the generalized linear mixed-effects model. The error bars indicate the 95% confidence intervals of the estimates.
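Because the model used a logit link, the reported coefficients are on the log-odds scale; exponentiating them gives approximate odds ratios, which can aid interpretation. The following R lines are simple back-of-the-envelope conversions of the estimates reported above, evaluated at the reference levels of the other dummy-coded predictors.

```r
# Converting the logit-scale estimates reported above to odds ratios.
exp(-1.30)  # ~0.27: at 2-back the odds of a correct response were roughly a
            #        quarter of those at 1-back (no interference reference)
exp(-0.61)  # ~0.54: high interference roughly halved the odds of a correct
            #        response relative to no interference at 1-back
exp(0.30)   # ~1.35: the load-by-high-interference interaction, reducing the
            #        interference effect at high load
```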
Since no formal power analysis was performed prior to the study, a sensitivity analysis based on post hoc power simulations was conducted. Simulations followed the guidelines of Kumle et al. [75], and we only considered the main manipulations of the experiment (working memory load and facial mimicry interference), which means that the simulation results can only be compared to the effects of those factors in the generalized linear mixed-effects model reported above. All effects except the interaction between high interference and working memory load (which was the critical test in the design) were set to a fixed value. The overall pattern of the simulation results suggested that beta weights larger than approximately .30 could be detected with at least 80% power, whereas beta weights in a lower range (.10–.20) were associated with less than 50% power (for more details, see S1 Appendix). Thus, we cannot fully rule out the possibility that poor sensitivity of our design explains why we did not observe an effect of high interference when working memory load was high.
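For readers who want to run a similar sensitivity check, the sketch below uses the simr package, one common tool for simulation-based power analysis of (generalized) linear mixed models in the spirit of the Kumle et al. [75] tutorial; the authors’ own simulation code is in the OSF materials and may differ in tooling and detail. The coefficient name is a hypothetical placeholder that depends on the factor coding.

```r
library(simr)

# Post hoc power simulation sketch for the load x high-interference interaction.
# `m` is a fitted glmer model as in the earlier sketch; the coefficient name
# below is a placeholder that depends on how the factors are coded.
m_sim <- m
fixef(m_sim)["load2-back:interferencehigh"] <- 0.30  # assumed effect size (logit scale)

power <- powerSim(
  m_sim,
  test = fixed("load2-back:interferencehigh", method = "z"),
  nsim = 200  # increase for more stable power estimates
)
print(power)
```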
Stimuli valence and arousal ratings
For arousal (Fig 4A), there were significant main effects of the happy (β = 0.17, df = 11640, t = 50.02, p < .001) and angry (β = 0.17, df = 11640, t = 49.78, p < .001) conditions. The same pattern was found for valence (Fig 4B), with significant effects of the happy (β = 2.30, df = 11660, t = 228.1, p < .001) and angry (β = -1.37, df = 11660, t = -167.0, p < .001) conditions. Overall, pictures of angry expressions were rated as more negative than pictures of neutral and happy expressions, and as more arousing than neutral expressions. Happy expressions were rated as more positive than pictures of neutral and angry expressions, and as more arousing than neutral expressions. These results provide an approximate validation of the perceived emotional content of the pictures used in the experiment.
A) The average arousal rating (y-axis) displayed as bar plots for angry (red), neutral (yellow), and happy (green) emotion expressions (x-axis). The error bars indicate 1.5 times the interquartile range, and the black dots are the individual means per participant. B) The average valence rating (y-axis) displayed as bar plots for angry (red), neutral (yellow), and happy (green) emotion expressions (x-axis). The error bars indicate 1.5 times the interquartile range, and the black dots are the individual means per participant.
Discussion
In the present work, we investigated the effect of facial mimicry interference on working memory for facial emotion expressions. Specifically, we tested whether a negative effect of behavioral facial mimicry interference on working memory precision increased with increasing working memory load. Working memory load had a strong negative effect on precision, but contrary to what was predicted, an effect of facial mimicry interference was only observed when working memory load was low. Thus, we found partial support for the notion that incongruent sensorimotor feedback impairs working memory for facial emotion expressions [35].
From a resource model perspective on working memory [5,6], load and the precision of representations determine processing accuracy. We observed that working memory load had the expected detrimental effect on accuracy [1,7], corroborating earlier findings in the broader literature on n-back tasks [1,76,77] and, more specifically, in the context of working memory for emotion expressions [12,52]. Further, precision was poorer when facial mimicry interference was high compared to when it was low or absent, but specifically when working memory load was low. Thus, as tentatively proposed in Baddeley’s model of working memory for emotions [47–49], simulation of emotional content, as reflected by, e.g., facial mimicry, might be a resource that improves the representational precision of emotion expressions in working memory, but perhaps only when load is low. Kuehne et al. [24] reported that provoked smiling enhanced short-term memory storage of happy emotion expressions, supporting the notion that simulation and sensorimotor feedback, as reflected in facial mimicry, influence not only perception but also brief memories of emotional input [35]. Our finding that facial mimicry interference does not seem to influence precision when working memory load is high might seem to contradict the finding of Kuehne et al. However, their experiment did not include an active manipulation of working memory load and was likely to induce only a weak working memory load. Thus, the results reported by Kuehne et al. might correspond to our finding at the low working memory load (1-back), and here we extend their work by showing that when working memory load is high (2-back), facial mimicry might play a limited role in successful performance. It should be noted that the facial mimicry manipulation applied by Kuehne et al. was intended to enhance a facial mimicry response, whereas we wanted to interfere with facial mimicry; this might also contribute to the seemingly different results across studies.
To explain the unexpected finding that working memory load seems to suppress the negative effect of facial mimicry interference on precision, we cautiously propose that a general principle of the neurocognitive system is that when working memory demands increase, the system responds by filtering out potentially distracting information. The ability to suppress external stimuli, such as background noise, is working memory dependent, especially in high-load or highly distracting conditions. This is well articulated in the task-engagement/distraction trade-off (TEDTOFF) model [78] and has been shown across different populations. Because the TEDTOFF model was primarily conceived and developed for audio-verbal stimuli, we only tentatively suggest that suppression and focal engagement of working memory resources in high-load conditions also apply to facial mimicry. In the present case, however, we assume that the high interference condition produced incongruent information about the stimuli and consequently reduced the precision of the representation of the emotion expressions (i.e., internal interference). Thus, when processing becomes more difficult, the focal task might shield out potential distractors, and this could apply both when interference is external and when it is internal.
One previous study suggested that sensorimotor and visual information contribute differentially to the perception of facial emotion expressions [79]. This notion finds support in earlier models of face perception emphasizing the tracking of invariant visual features across individuals for the successful identification of emotion expressions [80]. Situations in which a less costly, visual route for facial emotion processing could be utilized are likely characterized by higher cognitive demands, such as larger group settings where it would be impossible to use sensorimotor simulation to process the emotion expressions of everyone present [79]. As working memory demands increase, lexico-semantic representations and the mechanisms subserving the online use of those representations become critical for the successful understanding of communicative signals [5], and other types of processing are down-prioritized [78,81]. The present study did not test whether access to semantic labels for facial input determines processing efficiency in demanding settings, and this possibility should be investigated in future studies. On a related note, neural responses during the blocking of facial mimicry indicate greater reliance on semantic retrieval [57], which links facial mimicry interference to active semantic processing. Further, one study reported that verbal but not visual working memory load reduced emotion recognition [21], suggesting that denied access to semantic labels reduces the precision of representations of emotion expressions. However, in a situation where the observer only needs to process one face at a time, sensorimotor simulation might be useful both for representing the emotional state of the interlocutor [82] and for signaling an empathetic response or social relatedness [41–43].
Studies suggest that cognitive load might alter facial mimicry activity [44,45]. For example, Blocker and McIntosh [45] investigated facial mimicry responses, recorded using electromyography, to smiling and frowning faces that represented either individuals the observer liked or individuals who were neutral to the observer. Mimicry responses were recorded at both low and high working memory load, and when load was high, evidence of facial mimicry for smiling faces was only observed in response to facial expressions of individuals the observer liked. Thus, working memory load suppressed mimicry responses when the motivation for social relatedness was low, but not as much when motivation was high. It might be the case that facial mimicry is suppressed by cognitive load of any kind, but that contextual factors can in turn override this inhibiting effect [42,43,83]. Blocker and McIntosh’s finding that increased working memory load suppresses facial mimicry responses suggests that facial mimicry might not play a crucial role in accurately representing facial emotion expressions when working memory load is high. However, in the present study, we did not observe an effect of facial mimicry interference on precision when working memory load was high. It should be acknowledged that this surprising result could be due to characteristics of the present sample or features of the task design, possibly limiting the precision of the statistical model even though the total number of observations per condition was high (36 participants and 54 trials per load-by-mimicry-interference condition). Thus, the effect of facial mimicry when working memory load is high should be investigated in larger studies using different experimental settings before any firm conclusions can be drawn.
In one model of working memory for emotions [47–49], inputs are assumed to be evaluated on a positive-negative internal valence scale, inducing a feeling-state simulation. Although this is related to facial mimicry, as sensorimotor simulation could activate feeling states [35], it is not likely to be determined by facial mimicry alone [39]. Thus, we cannot conclude that participants did not use simulation at all to solve the task, since simulation of feeling states might be driven by mechanisms not targeted by the type of manipulations applied in the present study [84]. Unfortunately, we did not assess participants’ feeling states during the experiment, nor which strategies they used to solve the task. However, based on the valence and arousal ratings that participants completed, the stimuli seemed to evoke the intended emotional dimensions along the positive-negative continuum. This provides some evidence that the different categories of emotion expressions invoked different feelings in the participants. That the valence and arousal ratings were associated with task performance further suggests that the perceived emotional content of the stimuli might explain some of the variability in the precision of working memory representations of emotion expressions. The evidence is somewhat mixed and heterogeneous [85], but some previous studies suggest that the emotional valence of the stimuli might have an impact on working memory performance [86–88] and, specifically, on working memory for facial emotion expressions [12,13]. In addition, it has been reported that positive emotions are stored with higher precision in working memory than negative emotions [89], a finding that we replicated here. Future studies investigating the effects of facial mimicry and working memory load should consider potential interactions with the perceived valence and arousal dimensions of the stimuli.
One limitation of this and most previous studies investigating the role of facial mimicry in the identification and processing of facial emotion expressions is that little is known about how the behavioral mimicry manipulations actually interfere with mimicry activity. When an effect on behavioral precision is observed, it is typically assumed that there was also an effect on mimicry activity, without this being directly observed. Facial mimicry is measured with electromyography (EMG) [37], a method that might not be compatible with most mimicry interference manipulations, since it relies on electrodes placed on the face and is sensitive to disturbances. The present study did not use EMG measurements to validate whether the facial mimicry manipulations induced the intended effects; instead, we assumed that the manipulations were valid based on the changes in behavioral performance reported in previous studies [31,66]. In addition to EMG, it might be possible to test the validity of facial mimicry manipulations by coding filmed emotion expressions while the manipulations are active, using the Facial Action Coding System [90]. Whether and how facial mimicry manipulations interfere with mimicry activity should be considered carefully in future studies and validated using either EMG or video coding. A final and potentially major limitation was the lack of non-face control stimuli. With such a control, we could have tested whether the interference effect that we observed in the low-load condition is, as would be expected, specific to facial input. At the same time, it was only the high interference and not the low interference condition that impaired precision, which suggests that the observed pattern of results was not driven by a general interference effect. However, had the design included a control condition in which the task was to store non-face stimuli, we would be more confident in our conclusion.
Conclusions
Facial mimicry might influence the precision of representations of facial emotion expressions when the load on working memory is low. Thus, sensorimotor feedback may represent a useful source of information for the processing of emotion expressions when conditions allow for it. Everyday life involves effortful processing of emotion, and the ability to recognize and make use of emotion expressions might fall short when the precision of such expressions in working memory is impaired.
References
- 1. Redick TS, Lindsey DRB. Complex span and n-back measures of working memory: A meta-analysis. Psychon Bull Rev. 2013;20: 1102–1113. pmid:23733330
- 2. Hagoort P, Levinson SC. Neuropragmatics. In: Gazzaniga MS, Mangun GR, editors. The cognitive neurosciences. 5th ed. Cambridge: MIT Press; 2014. pp. 667–674.
- 3. Jack RE, Schyns PG. The human face as a dynamic tool for social communication. Curr Biol. 2015;25: R621–R634. pmid:26196493
- 4. Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol. 2022;13. pmid:36118435
- 5. Rönnberg J, Holmer E, Rudner M. The Ease of Language Understanding Model. In: Schwieter JW, Wen Z (Edward), editors. The Cambridge Handbook of Working Memory and Language. Cambridge: Cambridge University Press; 2022. pp. 197–218.
- 6. Ma WJ, Husain M, Bays PM. Changing concepts of working memory. Nat Neurosci. 2014;17: 347–356. pmid:24569831
- 7. Szmalec A, Verbruggen F, Vandierendonck A, Kemps E. Control of interference during working memory updating. J Exp Psychol Hum Percept Perform. 2011;37: 137–151. pmid:20731517
- 8. Alvarez GA, Cavanagh P. The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychol Sci. 2004;15: 106–111. pmid:14738517
- 9. Eng HY, Chen D, Jiang YV. Visual working memory for simple and complex visual stimuli. Psychon Bull Rev. 2005;12: 1127–1133. pmid:16615339
- 10. Andin J, Holmer E, Schönström K, Rudner M. Working memory for signs with poor visual resolution: fMRI evidence of reorganization of auditory cortex in deaf signers. Cereb Cortex. 2021;31: 3165–3176. pmid:33625498
- 11. Rudner M, Toscano E, Holmer E. Load and distinctness interact in working memory for lexical manual gestures. Front Psychol. 2015;6: 1147. pmid:26321979
- 12. Wante L, Mueller SC, Cromheeke S, Braet C. The impact of happy and angry faces on working memory in depressed adolescents. J Exp Child Psychol. 2018;169: 59–72. pmid:29342446
- 13. Levens SM, Gotlib IH. Updating positive and negative stimuli in working memory in depression. J Exp Psychol Gen. 2010;139: 654–664. pmid:21038984
- 14. Levens SM, Gotlib IH. The effects of optimism and pessimism on updating emotional information in working memory. Cogn Emot. 2012;26: 341–350. pmid:22233460
- 15. Román FJ, García-Rubio MJ, Privado J, Kessel D, López-Martín S, Martínez K, et al. Adaptive working memory training reveals a negligible effect of emotional stimuli over cognitive processing. Pers Individ Dif. 2015;74: 165–170.
- 16. Holmer E, Rudner M, Rönnberg J. Working memory for emotions, persons, and pictures: The FACEBACK task. 4th International Conference on Cognitive Hearing Science for Communication. Linköping, June 18–21; 2017.
- 17. Sassi F, Campoy G, Castillo A, Inuggi A, Fuentes LJ. Task difficulty and response complexity modulate affective priming by emotional facial expressions. Q J Exp Psychol. 2014;67: 861–871. pmid:24063691
- 18. Moriya J, Koster EHW, De Raedt R. The Influence of Working Memory on Visual Search for Emotional Facial Expressions. J Exp Psychol Hum Percept Perform. 2014;40: 1874–1890. pmid:24999613
- 19. Neumann R, Völker J, Hajba Z, Seiler S. Lesions and reduced working memory impair emotion recognition in self and others. Cogn Emot. 2021;35: 1527–1542. pmid:34623214
- 20. Phillips LH, Channon S, Tunstall M, Hedenstrom A, Lyons K. The role of working memory in decoding emotions. Emotion. 2008;8: 184–191. pmid:18410192
- 21. Reed P, Steed I. Interference with facial emotion recognition by verbal but not visual loads. Res Dev Disabil. 2015;47: 441–450. pmid:26519662
- 22. Borgomaneri S, Bolloni C, Sessa P, Avenanti A. Blocking facial mimicry affects recognition of facial and body expressions. PLoS One. 2020;15: 1–21. pmid:32078668
- 23. Wood A, Lupyan G, Sherrin S, Niedenthal P. Altering sensorimotor feedback disrupts visual discrimination of facial expressions. Psychon Bull Rev. 2016;23: 1150–1156. pmid:26542827
- 24. Kuehne M, Zaehle T, Lobmaier JS. Effects of posed smiling on memory for happy and sad facial expressions. Sci Rep. 2021;11: 10477. pmid:34006957
- 25. Rychlowska M, Cañadas E, Wood A, Krumhuber EG, Fischer A, Niedenthal PM. Blocking mimicry makes true and false smiles look the same. PLoS One. 2014;9: 1–8. pmid:24670316
- 26. Oberman LM, Winkielman P, Ramachandran VS. Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Soc Neurosci. 2007;2: 167–178. pmid:18633815
- 27. Stel M, van Knippenberg A. The role of facial mimicry in the recognition of affect. Psychol Sci. 2008;19: 984–985. pmid:19000207
- 28. Ponari M, Conson M, D’Amico NP, Grossi D, Trojano L. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects. Emotion. 2012;12: 1398–1403. pmid:22642357
- 29. Neal DT, Chartrand TL. Embodied emotion perception: Amplifying and dampening facial feedback modulates emotion perception accuracy. Soc Psychol Personal Sci. 2011;2: 673–678.
- 30. Ipser A, Cook R. Inducing a concurrent motor load reduces categorization precision for facial expressions. J Exp Psychol Hum Percept Perform. 2016;42: 706–718. pmid:26618622
- 31. Wood A, Martin JD, Alibali MW, Niedenthal PM. A sad thumbs up: incongruent gestures and disrupted sensorimotor activity both slow processing of facial expressions. Cogn Emot. 2018. pmid:30428767
- 32. Marmolejo-Ramos F, Murata A, Sasaki K, Yamada Y, Ikeda A, Hinojosa JA, et al. Your Face and Moves Seem Happier When I Smile. Exp Psychol. 2020;67: 14–22. pmid:32394814
- 33. Barsalou LW. Grounded cognition. Annu Rev Psychol. 2008;59: 617–645. pmid:17705682
- 34. Goldman AI, Sripada CS. Simulationist models of face-based emotion recognition. Cognition. 2005;94: 193–213. pmid:15617671
- 35. Wood A, Rychlowska M, Korb S, Niedenthal P. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition. Trends Cogn Sci. 2016;20: 227–240. pmid:26876363
- 36. Gallese V, Caruana F. Embodied Simulation: Beyond the Expression/Experience Dualism of Emotions. Trends Cogn Sci. 2016;20: 397–398. pmid:27101879
- 37. Dimberg U, Thunberg M, Elmehed K. Unconscious facial reactions to emotional facial expressions. Psychol Sci. 2000;11: 86–89. pmid:11228851
- 38. Franz M, Nordmann MA, Rehagel C, Schäfer R, Müller T, Lundqvist D. It Is in Your Face—Alexithymia Impairs Facial Mimicry. Emotion. 2021;21: 1537–1549. pmid:34793185
- 39. Coles NA, Larsen JT, Lench HC. A meta-analysis of the facial feedback literature: Effects of facial feedback on emotional experience are small and variable. Psychol Bull. 2019;145: 610–651. pmid:30973236
- 40. Bogart KR, Matsumoto D. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome. Soc Neurosci. 2010;5: 241–251. pmid:19882440
- 41. Hess U, Fischer A. Emotional Mimicry as Social Regulation. Personal Soc Psychol Rev. 2013;17: 142–157. pmid:23348982
- 42. Seibt B, Mühlberger A, Likowski KU, Weyers P. Facial mimicry in its social setting. Front Psychol. 2015;6. pmid:26321970
- 43. Hess U, Fischer A. Emotional mimicry as social regulator: theoretical considerations. Cogn Emot. 2022;36: 785–793. pmid:35920780
- 44. Hess U, Philippot P, Blairy S. Facial Reactions to Emotional Facial Expressions: Affect or Cognition? Cogn Emot. 1998;12: 509–531.
- 45. Blocker HS, McIntosh DN. Automaticity of the interpersonal attitude effect on facial mimicry: It takes effort to smile at neutral others but not those we like. Motiv Emot. 2016;40: 914–922.
- 46. Sessa P, Schiano Lomoriello A, Luria R. Neural measures of the causal role of observers’ facial mimicry on visual working memory for facial expressions. Soc Cogn Affect Neurosci. 2018; 1281–1291. pmid:30365020
- 47. Baddeley AD. Working memory, thought, and action. Oxford: Oxford University Press; 2007.
- 48. Baddeley AD, Banse R, Huang YM, Page M. Working memory and emotion: Detecting the hedonic detector. J Cogn Psychol. 2012;24: 6–16.
- 49. Baddeley AD. Working memory and emotion: Ruminations on a theory of depression. Rev Gen Psychol. 2013;17: 20–27.
- 50. Kirchner WK. Age differences in short-term retention of rapidly changing information. J Exp Psychol. 1958;55: 352–358. pmid:13539317
- 51. Schmiedek F, Lövdén M, Lindenberger U. A task is a task is a task: Putting complex span, n-back, and other working memory indicators in psychometric context. Front Psychol. 2014;5: 1475. pmid:25566149
- 52. Cromheeke S, Mueller SC. The power of a smile: Stronger working memory effects for happy faces in adolescents compared to adults. Cogn Emot. 2016;30: 288–301. pmid:25650124
- 53. Kessel D, García-Rubio MJ, González EK, Tapia M, López-Martín S, Román FJ, et al. Working memory of emotional stimuli: Electrophysiological characterization. Biol Psychol. 2016;119: 190–199. pmid:27402441
- 54. Neta M, Whalen PJ. Individual differences in neural activity during a facial expression vs. identity working memory task. Neuroimage. 2011;56: 1685–1692. pmid:21349341
- 55. Xin F, Lei X. Competition between frontoparietal control and default networks supports social working memory and empathy. Soc Cogn Affect Neurosci. 2015; 1144–1152. pmid:25556209
- 56. Lakens D. Sample Size Justification. Collabra Psychol. 2022;8: 1–28.
- 57. Davis JD, Winkielman P, Coulson S. Sensorimotor simulation and emotion processing: Impairing facial action increases semantic retrieval demands. Cogn Affect Behav Neurosci. 2017;17: 652–664. pmid:28255798
- 58. Wechsler D. Wechsler Adult Intelligence Scale-Fourth Edition. San Antonio: PsychCorp; 2008.
- 59. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310: 2191–2194. pmid:24141714
- 60. Forster KI, Forster JC. DMDX: A windows display program with millisecond accuracy. Behav Res Methods, Instruments, Comput. 2003;35: 116–124. pmid:12723786
- 61. Holmer E, Heimann M, Rudner M. Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children. Res Dev Disabil. 2016;48: 145–159. pmid:26561215
- 62. Langner O, Dotsch R, Bijlstra G, Wigboldus DHJ, Hawk ST, van Knippenberg A. Presentation and validation of the radboud faces database. Cogn Emot. 2010;24: 1377–1388.
- 63. Ebner NC, Johnson MK. Age-group differences in interference from young and older emotional faces. Cogn Emot. 2010;24: 1095–1116. pmid:21286236
- 64. Rhodes MG, Anastasi JS. The own-age bias in face recognition: A meta-analytic and theoretical review. Psychol Bull. 2012;138: 146–174. pmid:22061689
- 65. Chaplin TM. Gender and emotion expression: A developmental contextual perspective. Emot Rev. 2015;7: 14–21. pmid:26089983
- 66. Carpenter SM, Niedenthal PM. Disrupting facial action increases risk taking. Emotion. 2020;20: 1084–1092. pmid:31192668
- 67. Neumann R, Schulz SM, Lozo L, Alpers GW. Automatic facial responses to near-threshold presented facial displays of emotion: Imitation or evaluation? Biol Psychol. 2014;96: 144–149. pmid:24370542
- 68. Bradley MM, Lang PJ. Measuring emotion: The self-assessment manikin and the semantic differential. J Behav Ther Exp Psychiatry. 1994;25: 49–59. pmid:7962581
- 69. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2022. https://www.r-project.org/.
- 70. Bates D, Mächler M, Bolker BM, Walker SC. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67.
- 71. Lenth R. emmeans: Estimated Marginal Means, aka Least-Squares Means. 2023. https://cran.r-project.org/package=emmeans.
- 72. Lüdecke D. sjPlot: Data Visualization for Statistics in Social Science. 2021. https://cran.r-project.org/web/packages/sjPlot/index.html.
- 73. Benaglia T, Hunter DR, Young DS. mixtools: An R Package for Analyzing Mixture Models. J Stat Softw. 2009;32.
- 74. Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, et al. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol. 2016;55: 1–20. pmid:27589015
- 75. Kumle L, Võ MLH, Draschkow D. Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R. Behav Res Methods. 2021;53: 2528–2543. pmid:33954914
- 76. Rudner M. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application. Front Psychol. 2018;9: 679. pmid:29867655
- 77. Wang H, He W, Wu J, Zhang J, Jin Z, Li L. A coordinate-based meta-analysis of the n-back working memory paradigm using activation likelihood estimation. Brain Cogn. 2019;132: 1–12. pmid:30708115
- 78. Sörqvist P, Rönnberg J. Individual differences in distractibility: An update and a model. PsyCh J. 2014;3: 42–57. pmid:25632345
- 79. de la Rosa S, Fademrecht L, Bülthoff HH, Giese MA, Curio C. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition. Psychol Sci. 2018;29: 1257–1269. pmid:29874156
- 80. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci. 2000;4: 223–233. pmid:10827445
- 81. Sörqvist P, Dahlström Ö, Karlsson T, Rönnberg J. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction. Front Hum Neurosci. 2016;10. pmid:27242485
- 82. Wood A, Lupyan G, Niedenthal P. Why Do We Need Emotion Words in the First Place? Commentary on Lakoff (2015). Emot Rev. 2016;8: 274–275.
- 83. Hess U, Fischer A. Emotional mimicry: Why and when we mimic emotions. Soc Personal Psychol Compass. 2014;8: 45–57.
- 84. Barrett LF. The theory of constructed emotion: an active inference account of interoception and categorization. Soc Cogn Affect Neurosci. 2017;12: 1–23. pmid:27798257
- 85. Schweizer S, Satpute AB, Atzil S, Field AP, Hitchcock C, Black M, et al. The impact of affective information on working memory: A pair of meta-analytic reviews of behavioral and neuroimaging evidence. Psychol Bull. 2019;145: 566–609. pmid:31021136
- 86. Yang H, Yang S, Isen AM. Positive affect improves working memory: Implications for controlled cognitive processing. Cogn Emot. 2013;27: 474–482. pmid:22917664
- 87. Storbeck J, Maswood R. Happiness increases verbal and spatial working memory capacity where sadness does not: Emotion, working memory and executive control. Cogn Emot. 2016;30: 925–938. pmid:25947579
- 88. Carpenter SM, Peters E, Västfjäll D, Isen AM. Positive feelings facilitate working memory and complex decision making among older adults. Cogn Emot. 2013;27: 184–192. pmid:22764739
- 89. Waugh CE, Running KE, Reynolds OC, Gotlib IH. People are Better at Maintaining Positive Than Negative Emotional States. Emotion. 2018. pmid:29565611
- 90. Ekman P, Friesen WV. Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press; 1978.