Abstract
Background
Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion: previous research has shown better VSTM for angry versus happy or neutral face identities.
Methodology/Principal Findings
Using functional magnetic resonance imaging, here we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present or not in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We find enhanced VSTM for angry face identities and describe the right hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load.
Citation: Jackson MC, Wolf C, Johnston SJ, Raymond JE, Linden DEJ (2008) Neural Correlates of Enhanced Visual Short-Term Memory for Angry Faces: An fMRI Study. PLoS ONE 3(10): e3536. https://doi.org/10.1371/journal.pone.0003536
Editor: David Whitney, University of California Davis, United States of America
Received: August 7, 2008; Accepted: October 1, 2008; Published: October 29, 2008
Copyright: © 2008 Jackson et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: MCJ and DEJL were supported by a Wellcome Trust grant to DEJL (grant number 077185/Z/05/Z). Aspects of this research were also supported by BBSRC (UK) grant BBS/B/16178 to JER and the Wales Institute of Cognitive Neuroscience (WICN).
Competing interests: The authors have declared that no competing interests exist.
Introduction
Visual short-term memory (VSTM) is an active system that temporarily stores and updates information over a period of a few seconds. It is particularly useful for maintaining a constant and coherent percept of the world in the face of eye, head, and object motion. In contrast, long-term memory (LTM) is a system dedicated to storing information over hours, days, and even decades; it is essential for learning and developing knowledge and skills.
Although it is well established that LTM is enhanced for images with an emotional, particularly negative, content [1]–[3], an effect thought to be driven by neural communication between LTM and limbic systems [4], the question of whether information to be retained in VSTM is influenced by its emotional content, and which brain mechanisms might be involved, has received little attention, and results have been mixed. One study found no effect of valence on STM for fearful versus neutral faces, nor for taboo versus neutral words [5]. Two studies using emotive images from the International Affective Picture System (IAPS) found an influence of valence on STM. In one, participants judged the relative emotional intensity (“higher” or “lower”) of two successively presented images that were matched for valence (positive or negative) and were separated by a 3-second retention interval [6]. Young participants were more likely to make accurate relativity judgments for negative compared to positive images (accuracy was based on whether judgments matched previously established ratings obtained from an independent group of young participants). The authors interpreted this to reflect enhanced STM for negative images, and reported the opposite effect in older participants. However, their task was not a direct test of the effect of valence on STM for visual content per se. In a functional magnetic resonance imaging (fMRI) study, participants were required to state whether a positive, negative, or neutral image seen 11.5 seconds earlier was present or not in an array of nine valence-matched images [7]. Increased dorsolateral prefrontal cortex (DLPFC) activity and decreased orbitofrontal cortex (OFC) activity were found for positive versus negative images, but these data are difficult to interpret because task accuracy during scanning did not differ between positive (65%) and negative (65%) images. The above studies, while interesting in measuring responses to emotional stimuli, provide little clear insight into whether visual information with an emotional content can influence VSTM and, if so, what brain mechanisms might be involved. [Note that because our aim is to measure the neural correlates of information retained in VSTM with an emotional versus neutral content, we do not review here studies of the effect of emotional distraction or induced mood state on VSTM for neutral stimuli.]
Previous behavioural research of ours [8] has shown that VSTM for face identities is significantly enhanced when faces display an angry compared to a happy or neutral expression. We replicated this result several times and were able to eliminate a number of possible accounts of the effect. We showed that the anger benefit for faces in VSTM was not due to low-level feature recognition: inverting the faces abolished the effect, and a perceptual discrimination task in which participants stated whether two faces matched in identity showed no difference in accuracy or reaction time between angry, happy, and neutral face conditions. We also showed that heightened physiological arousal is unlikely to underpin the effect: the presence of calming or energizing background music during the task neither differentially influenced VSTM nor interacted with emotional expression conditions, suggesting that enhanced VSTM for angry faces is valence-driven. Perceptual encoding limitations were excluded as an account because the angry benefit remained present when the original study time of 2000 ms was doubled. Finally, the effect was replicated using a different set of faces from another database that were also equated for expression intensity, providing evidence that enhanced VSTM for angry faces is neither specific to the faces used nor due to the potential for angry faces to be more intense in expression than happy or neutral faces.
In the current study, we again used angry, happy, and neutral faces to investigate the neural correlates of VSTM for information with an emotional versus neutral content. Faces are well suited for this purpose because not only are they ecologically valid, they also allow the presentation of differently valenced emotional information in the same individual exemplars. This reduces variability of low-level featural information among different emotion conditions, a factor that may have confounded results of previous studies using IAPS pictures [9]. Another person's emotional facial expression can convey critical information about his/her internal mood state and, in turn, affect one's own behavioral decisions, e.g., whether to approach or avoid, or what manner of speech to adopt. Successful and appropriate face-to-face interactions depend not only on recognition of emotional expression, but often also require accurate face identification. Critically, our ability to select an appropriate social response in a timely and effective manner depends on our ability to identify who is expressing what emotion, and this information must be retained in memory for a period sufficient to develop an action plan. Thus, storage of face identity information in VSTM forms a crucial bridge between immediate encoding of emotionally charged information and execution of appropriate behavior.
Here, during fMRI participants were required to memorize between one and four angry, happy, or neutral faces for 2,000 ms (the number of faces to be remembered is termed face load), and one second later they were asked to report whether a single face probe matched in identity to one of the previous to-be-remembered faces or not (Fig. 1). All faces (at both encoding and retrieval) in any one trial displayed the same emotion, thus emotional expression of the to-be-remembered faces was task-irrelevant. Our aim was to specifically examine the neural correlates of the angry benefit for faces in VSTM and determine how emotion and memory systems in the brain might interact to produce this effect. By manipulating face load, we were also able to examine any interactions between load and expression conditions. We predicted that the angry face benefit in VSTM is likely to recruit an interplay of brain regions involved in emotion processing, such as the amygdala, basal ganglia, and insula [9]–[11], short-term memory, such as the prefrontal cortex [12], and face processing, such as the fusiform gyrus [13] and superior temporal sulcus [14].
Between one and four faces (all expressing either angry, happy, or neutral emotion) were shown for encoding for 2000 ms, followed by a 1000 ms blank retention/maintenance phase, and a 2000 ms retrieval phase in which participants stated whether a single probe face had been present or not in the previous display. All faces in any one trial (i.e., at encoding and retrieval) displayed the same emotion. A jittered inter-trial interval (ITI) of between 4000 ms and 6500 ms separated each trial.
Methods
Participants
Thirty-five right-handed healthy volunteers (mean age 29 years; 15 females) from the student and community panels in Bangor participated in return for £20. Subjects reported no history of neurological or psychiatric disorder, had normal or corrected-to-normal vision, and provided informed written consent prior to participation. The study was approved by the School's ethics committee in Bangor.
Stimuli
Greyscale face images of six adult males each expressing three emotions (angry, happy, and neutral) were used [15]. Each image subtended approximately 1.43°×1.36°. Scrambled greyscale face images, selected at random from a set of eight different scrambled images, were used to fill memory display locations on trials in which fewer than four faces were presented.
Experimental Procedure
Participants were oriented to the centre of the computer screen by a small fixation cross presented for 1,000 ms and instructed to maintain fixation throughout each session in order to minimize eye movement artefacts in the functional data. To signal the start of a trial, the fixation cross increased in size for 1,000 ms, after which it returned to its original size for another 1,000 ms. On each trial, between one and four faces, each expressing the same emotion (angry, happy, or neutral), were presented for 2,000 ms in a 2×2 memory matrix with fixation at the centre. The centre of each image within the matrix was positioned at a visual angle of approximately 1.27° from fixation to ensure that the face display was foveal, and thus minimize eye movements. Previous research has established that 2,000 ms is sufficient time to encode four faces [8], [16]. On trials in which fewer than four faces were presented, all other matrix locations were occupied by a scrambled face. Face locations were randomised within the matrix. After a 1,000 ms blank retention interval during which only the fixation cross was present, a single face probe (expressing the same emotion as the preceding matrix) was displayed in the centre of the screen for 2,000 ms. Participants were required to state, within this 2,000 ms probe presentation, whether the probe person had been present or not in the immediately preceding display (50% probe present). The task involved an identity decision; thus emotional expression was task-irrelevant. Participants used their right hand to respond “yes” or “no” with a simple button press. Feedback was not provided. A jittered fixation inter-trial interval (ITI) of between 4,000 and 6,500 ms separated each trial (Figure 1).
Sixteen experimental trials were presented for each load (1, 2, 3, 4) in each emotion condition (angry, happy, neutral) in a pseudo-random order, resulting in 192 trials in total (event-related design). To minimize subject fatigue, the experiment was divided into four scanning blocks of 48 trials each, within a single scanning session. Each block lasted approximately 11 minutes. Before the main experiment began, participants were given a short practice session outside the scanner.
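For concreteness, the factorial structure of the design can be sketched in a few lines of Python. This is an illustrative reconstruction, not the E-Prime script used in the experiment; in particular, the uniform sampling of the jittered ITI is an assumption.

```python
import random

# Hypothetical reconstruction of the design: 3 emotions x 4 face loads
# x 16 repetitions = 192 trials, split into 4 blocks of 48 trials.
EMOTIONS = ("angry", "happy", "neutral")
LOADS = (1, 2, 3, 4)
REPS = 16

trials = [
    {"emotion": emotion, "load": load,
     "encoding_ms": 2000, "retention_ms": 1000, "probe_ms": 2000,
     "iti_ms": random.uniform(4000, 6500)}  # jitter model is an assumption
    for emotion in EMOTIONS for load in LOADS for _ in range(REPS)
]
random.shuffle(trials)                       # pseudo-random trial order

blocks = [trials[i:i + 48] for i in range(0, len(trials), 48)]
assert len(trials) == 192 and len(blocks) == 4
```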
Data Acquisition
Behavioural data were acquired with a 14-inch Dell Latitude D610 laptop (32-bit true colour; resolution 1280×1024 pixels). The tasks were generated with E-Prime software [17]. fMRI data were acquired with a Philips 1.5T MRI scanner with a SENSE parallel head coil. We used a gradient-echo echoplanar sequence sensitive to the blood oxygen level dependent (BOLD) signal (TR = 2,000 ms; TE = 40 ms; matrix size = 96×96; FOV = 256×256 mm2; voxel size = 3×3×3 mm3; 90° flip angle; 20 axial slices). Two dummy volumes were acquired before each scan block to reduce possible T1 saturation effects. During the VSTM faces task, the fMRI sequence was synchronized with the fixation cross at the start of each trial (see Fig. 1). Anatomical data were acquired with a high-resolution T1-weighted three-dimensional (3D) volume (1×1×1 mm3) and used for coregistration of the functional data.
Data Analysis
Behavioural Data Analysis.
False alarm rates in all emotional expression conditions varied significantly as a function of face load, so we converted hit and false alarm rates into d' scores to provide a more sensitive, signal-detection-based measure of performance. d' is the z-normalised hit rate (probability of ‘yes’ responses when the probe was present) minus the z-normalised false alarm rate (probability of ‘yes’ responses when the probe was absent): d' = z(hit rate) − z(false alarm rate).
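A minimal Python sketch of this computation is given below; the clamping of extreme rates is our assumption, since the correction applied to hit or false alarm rates of exactly 0 or 1 is not stated.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, fa_rate, clamp=0.01):
    """d' = z(hit rate) - z(false alarm rate).

    Rates of exactly 0 or 1 make the z-transform infinite, so rates
    are clamped to [clamp, 1 - clamp]; the exact correction used in
    the study is not stated, so this choice is an assumption.
    """
    h = np.clip(hit_rate, clamp, 1 - clamp)
    f = np.clip(fa_rate, clamp, 1 - clamp)
    return norm.ppf(h) - norm.ppf(f)

# With a 0.01 clamp, perfect performance gives
# dprime(1.0, 0.0) == 2 * norm.ppf(0.99) ~= 4.65, consistent with the
# ceiling of about 4.66 quoted in the Figure 2 caption.
```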
FMRI Data Analysis.
Functional data were preprocessed and analysed using BrainVoyager 1.79 software. We applied slice scan time correction using sinc interpolation and an ascending slice scanning order, 3D motion correction using trilinear interpolation, spatial smoothing (8 mm Gaussian kernel), and a temporal high-pass filter (3 cycles per time course). 3D anatomical scans were transformed into Talairach space [18], and the transformation parameters were applied to the coregistered functional data.
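To make the high-pass step concrete, the sketch below regresses low-frequency sine/cosine terms out of a single voxel time course. It approximates, but does not reproduce, BrainVoyager's GLM-based filter; the Fourier basis is an assumption.

```python
import numpy as np

def highpass_filter(ts, n_cycles=3):
    """Remove drifts of up to n_cycles cycles per time course by
    regressing out low-frequency sine/cosine terms and a constant
    (an approximation of a GLM-based high-pass filter).

    ts: 1D array holding one voxel's volume time course.
    """
    n = len(ts)
    t = np.arange(n)
    regressors = [np.ones(n)]
    for k in range(1, n_cycles + 1):
        regressors.append(np.sin(2 * np.pi * k * t / n))
        regressors.append(np.cos(2 * np.pi * k * t / n))
    X = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta + ts.mean()   # restore the mean signal level
```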
All but one subject completed all four VSTM task runs (one subject completed only three runs due to technical scanning problems), and runs that were unsuitable were excluded from the analysis (two runs in each of two subjects showed head movements greater than 5 mm). In total, 135 z-normalised volume time courses were entered into a whole-brain, random effects analysis of covariance (ANCOVA). Motion-correction parameters were included as covariates in the model in order to optimize the elimination of task-correlated motion artifacts and maximize sensitivity to true activations [19], and to reduce inter- and intra-subject variability [20]. Functional data from all phases of the VSTM task (excluding the ITI) were entered into the analysis model: no distinctions were made between encoding, maintenance, and retrieval phases. In all analyses, regions of activation were determined using a False Discovery Rate (FDR) significance threshold of <.05. To examine emotional expression effects, we computed a repeated-measures ANCOVA (three within-factor levels: angry, happy, neutral) to assess the main effect of emotion, and we also computed specific emotion contrasts (angry - neutral, angry - happy, happy - neutral). In each identified emotion cluster, we conducted random effects GLM region of interest (ROI) analyses to extract beta values that were subsequently entered into statistical comparisons between emotional expression conditions and correlated with VSTM task performance values. VSTM load effects were examined by contrasting loads 4, 3, and 2 with load 1. A repeated-measures ANCOVA with emotion and load as within factors assessed whether an emotion by load interaction was present at the whole-brain level.
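The FDR threshold named above follows the standard Benjamini-Hochberg procedure; a generic sketch of that procedure (not BrainVoyager's internals) is:

```python
import numpy as np

def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg: return the largest p-value threshold at
    which the expected false discovery rate among suprathreshold
    voxels is at most q (0.0 if nothing survives)."""
    p = np.sort(np.asarray(p_values).ravel())
    m = p.size
    below = p <= (np.arange(1, m + 1) / m) * q   # p_(k) <= (k/m) * q
    return p[below].max() if below.any() else 0.0

# voxel_p = ...                                  # one p-value per voxel
# sig_mask = voxel_p <= fdr_threshold(voxel_p)   # FDR < .05 map
```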
Correlation with behavioural data.
To examine whether there were any correlations between the magnitude of the angry face effect and brain activity levels, we used the mean behavioural d' score across all face loads for each emotional expression condition to calculate difference scores for angry minus happy and angry minus neutral face contrasts (based on the angry face advantage observed in the behavioural results). These performance difference scores were correlated with related beta difference scores extracted from emotion-sensitive brain areas. To examine whether there were any correlations between STM capacity and brain activity levels, we correlated Cowan's K capacity estimates at each load [K = load × (hits − false alarms)] [21], averaged across emotion conditions, with related beta values extracted from load-sensitive brain areas. K and beta values were concatenated across all loads for this statistical comparison. Squared Pearson correlation coefficients (r2) are reported in all cases.
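A short sketch of the capacity estimate and the correlation step follows; the array names are hypothetical placeholders for the per-subject data.

```python
import numpy as np
from scipy.stats import pearsonr

def cowans_k(load, hit_rate, fa_rate):
    """Cowan's K capacity estimate:
    K = load * (hit rate - false alarm rate) [21]."""
    return load * (hit_rate - fa_rate)

# Hypothetical per-subject vectors, concatenated across all four loads:
# k_all    = ...   # K estimates, averaged across emotion conditions
# beta_all = ...   # beta values from one load-sensitive ROI
# r, p = pearsonr(k_all, beta_all)   # r**2 is the reported statistic
```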
Results
Behavioural Results
We conducted an emotion (angry, happy, neutral) by load (1, 2, 3, 4) repeated-measures ANOVA on the behavioural data, expressed as d' values. Consistent with our previous findings [8], VSTM performance was significantly modulated by emotional expression, F(2, 68) = 3.17, p = .048, and angry faces were significantly better remembered than happy faces (p<.05) (Fig. 2). It is clear from Figure 2 that the effect of emotional expression is most pronounced at face loads 2 and 3, likely because we can only store about two face identities in VSTM at any one time [16]. When only face loads 2 and 3 are analysed, the main effect of emotion strengthens (F(2, 16) = 4.01, p = .02) and the difference between angry and neutral faces also reaches significance (p<.05). A significant main effect of face load was observed, F(3, 102) = 120.38, p<.001, but its interaction with emotional expression was not significant, F(6, 204)<1.0.
A maximum d' value of 4.66 indicates 100% performance, while a d' value of zero indicates performance at chance (50%). Participants performed significantly better on the VSTM task when the identities of angry faces were to be remembered, compared to happy or neutral faces. VSTM performance declined as face load increased for all emotional expression conditions. Bars represent ±1 standard error.
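The behavioural ANOVA above can be reproduced from a long-format table of d' scores; a sketch using statsmodels, with random placeholder data standing in for the real per-subject values, is:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Placeholder long-format data: one d' value per subject x emotion x
# load cell (35 subjects; substitute the real scores for the random ones).
rng = np.random.default_rng(0)
rows = [{"subject": s, "emotion": e, "load": l,
         "dprime": rng.normal(2.0, 0.5)}
        for s in range(35)
        for e in ("angry", "happy", "neutral")
        for l in (1, 2, 3, 4)]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="dprime", subject="subject",
              within=["emotion", "load"]).fit()
print(res)   # F and p for emotion, load, and emotion x load
```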
Functional Imaging Results
Emotion Effects.
Using whole-brain analysis of variance (ANOVA), and an FDR significance threshold of p<.05, we found a significant main effect of emotion in three areas of the right hemisphere: superior temporal sulcus (STS), prefrontal cortex (PFC) along the anterior inferior frontal sulcus (IFS), and globus pallidus internus (GPi) (Fig. 3a). Talairach coordinates are provided in Table 1. There was no main effect of emotion in the left hemisphere. ROI analyses revealed that the main effect of emotion in the STS, PFC, and GPi was driven by significantly enhanced blood oxygen level dependent (BOLD) responses to angry faces (in all regions: angry vs. happy, p<.001; angry vs. neutral, p<.001) (Fig. 3b). There were no significant differences between happy and neutral face activations in any of these regions (p>.54 in all cases).
(A) Three coronal brain slices show modulation of brain activity by emotional expression of faces in the VSTM task in the superior temporal sulcus (STS), prefrontal cortex (PFC) along the inferior frontal sulcus (IFS), and globus pallidus internus (GPi), all in the right hemisphere. (B) Beta values for each emotion and face load condition are plotted for the STS, PFC, and GPi. Activity is greater for angry vs. happy and neutral face expression conditions in all three brain regions. Bars represent ±1 standard error.
The angry minus neutral functional contrast showed the same pattern of activation as the main effect of emotion (higher activity for angry than neutral faces in rSTS, rPFC, and rGPi), but in addition this contrast revealed significantly greater angry vs. neutral activity in bilateral fusiform gyrus (p<.001 in both cases) (Fig. 4). In the right fusiform, analysis of extracted beta values also revealed significantly greater activation for angry vs. happy faces, p = .02. There were no load effects in these regions. At whole-brain level, the angry minus happy functional contrast similarly revealed rSTS activity (higher for angry) but did not show any additional regions of activation. No regions showed greater activation for happy vs. neutral faces. Talairach coordinates for the specific emotion contrasts are provided in Table 2.
Activity is greater for angry compared to neutral faces. Bars represent ±1 standard error.
Correlation Between Behavioral and Functional data for Emotion Effects.
To test whether higher activation for angry faces reflected a generalized increase in response to angry faces or associated arousal levels, or whether it might represent the very brain mechanism that brings about the angry face benefit in VSTM, we investigated the relationship between brain activity and behavioral data. We correlated the behavioral difference scores (in d') for the angry minus happy and angry minus neutral contrasts with the corresponding beta value differences in each emotion-sensitive region. In GPi, behavioural difference scores correlated significantly with related beta difference scores in the angry-happy contrast, r2 = .44, p = .01 (Fig. 5a), and marginally in the angry-neutral contrast, r2 = .32, p = .06 (Fig. 5b). Superior VSTM for angry faces was thus correlated with enhanced activity in the GPi, suggesting a key role for this region in the angry face benefit. There were no significant correlations between behavioural scores and beta values in STS, PFC, or fusiform regions. Because the behavioural angry vs. neutral benefit was driven by the differences at loads 2 and 3, we re-ran these correlations using just loads 2 and 3. We replicated the marginal angry-neutral correlation between behavioural and brain data in the GPi (r2 = .33, p = .06), and additionally found a marginally significant angry-neutral correlation in the right FFA (r2 = .29, p = .09), suggesting a possible role for this face-processing region in the angry vs. neutral benefit. Correlations in all other emotion-sensitive regions yielded p-values greater than .10. We also correlated these behavioural data with related activity in load-sensitive areas and found no significant results.
Load Effects.
We examined load effects by contrasting loads 4, 3, and 2 with load 1, on the view that higher activity at loads greater than 1 indicates a greater draw on the resources used to encode and retain multiple face identities in VSTM. Several areas in bilateral dorsolateral, ventrolateral, and medial prefrontal cortex (DLPFC, VLPFC, MPFC), the frontal eye fields (FEF), intraparietal sulcus (IPS), fusiform gyrus, and occipital cortex showed significantly higher activation when multiple faces were to be remembered compared to one face, in both the right and left hemispheres (Fig. 6a). These results conform to previous studies of face load in STM [22]. Interestingly, we replicated the dissociation of load effects between parietal and prefrontal areas described previously [23], with activity in parietal areas peaking at load 3 and prefrontal activity rising monotonically through load 4 (Fig. 6b). This dissociation was supported by a significant load by region interaction between beta values in right parietal cortex and right PFC, F(3, 102) = 16.05, p<.001. Talairach coordinates for the load contrasts are provided in Table 3.
(A) Face loads 4 (blue), 3 (green), and 2 (red) were contrasted with face load 1. Several regions of the PFC, the frontal eye fields (FEF), intraparietal sulcus (IPS), fusiform gyrus, and occipital cortex, in both left and right hemispheres, showed greater activity when multiple faces were to be remembered compared to just one face. Brain regions modulated by emotion in the right hemisphere (pink = emotion main effect; white = angry minus neutral contrast; brown = angry minus happy contrast) are overlain to illustrate the anatomical distinction between emotional expression and face load effects. Some anatomical landmarks are provided to aid navigation: superior frontal sulcus (SFS); inferior frontal sulcus (IFS); Sylvian fissure (SF); intraparietal sulcus (IPS); occipito-temporal sulcus (OTS). (B) Beta values from each load condition (averaged across emotions) illustrate the contrast between a monotonic increase of activity with load in right PFC (x = 41, y = 29, z = 26) and activation peaking at load 3 in right parietal cortex (x = 18, y = −69, z = 43). Bars represent ±1 standard error.
The spatial dissociation of emotion and face load effects on brain activation is particularly striking. Although both emotion and load effects were observed in parts of the right PFC, these areas did not anatomically overlap (Fig. 6a). Similarly, the load effect in bilateral fusiform gyrus was anatomically distinct from the fusiform activity modulated by the angry minus neutral contrast (the emotion region lies anterior to the load region). Furthermore, a whole-brain statistical analysis did not reveal any areas that showed an interaction between emotion and load.
Correlation Between Behavioral and Functional data for Load Effects.
We also examined correlations between STM capacity estimates, as indexed by Cowan's K, and brain activation levels in load-sensitive areas. K capacity estimates (collapsed across emotion conditions) were: load 1 = 0.93 (SE = .02); load 2 = 1.48 (SE = .07); load 3 = 1.70 (SE = .11); load 4 = 1.71 (SE = .13). Significant or marginally significant positive correlations were found in all regions except left VLPFC and right fusiform (Table 4): as the number of faces stored in STM (K) increased, activity also increased. We correlated these K data with load activity in emotion-sensitive areas and found no significant results, confirming the spatial dissociation between emotion and load effects.
Discussion
Our behavioral results show that VSTM is significantly enhanced for face identities when faces display an angry compared to a happy or neutral expression, replicating previous findings [8]. It has been suggested that effects of emotion on memory require time to emerge, allowing effective consolidation of such memories [4]. Yet here, as in our previous study, we show that the effects of emotion on memory can be more immediate: emotional expression can influence visual short-term memory for faces.
In the present study, a network of emotion-sensitive areas comprised the STS, PFC, and GPi, all in the right hemisphere, in keeping with the view that the right hemisphere is more involved in the processing and generation of emotions and affect than the left [24], [25]. The specific areas all fit into current models of emotion processing. The STS has been identified as a key area for the extraction of emotional information from faces [14], [26], [27] and more generally for the evaluation of others' intentions [28]; it has also been specifically implicated in processing various forms of anger [29]. Regions of the PFC have been implicated in the experience [30] and observation [31] of negative mood, and higher activity in response to negative than positive images has been found specifically in regions of the right ventrolateral PFC [32]. Integration of emotional state and STM processes in regions of bilateral PFC has also been reported [33]. The GPi, a subcortical structure, is a major part of the basal ganglia, which, beyond their function in the extrapyramidal motor circuit, are involved in a variety of cognitive functions including emotion processing [34].
What is striking about the present findings is that the right STS, PFC, and GPi were specifically recruited in the service of VSTM for angry faces. The GPi appears to be the main region responsible for enhanced VSTM for angry faces, and this finding concurs with a recent study that showed a positive correlation between increased globus pallidus activity and increased STM capacity for simple objects [35]. That study also outlined the role of the globus pallidus as an attentional filter that allows only relevant information access to VSTM. It is possible in our study that enhanced GPi activity to angry faces in VSTM reflects heightened attention to angry faces, driven by the saliency of potential threat. Threat (anger and fear) expressions have frequently been reported as especially good at capturing attention [36]–[38], even when task-irrelevant [39]. However, those studies involve the capture of attention by a single angry face in a display of differently valenced faces, whereas in our study all faces in any one VSTM trial displayed the same emotion, removing any such competition for attention between different expressions. Furthermore, there is also evidence of rapid attentional orienting to happy faces [40] and, more generally, to stimuli with high emotional relevance [41]. Perhaps attention was heightened in general during angry face trials, in order to facilitate encoding and maintenance of person identity information in VSTM in the context of potential threat.
The prominent role of the GPi, which was the key area where neural activity was significantly correlated with behavioural performance, is in keeping with recent findings on the role of dopamine in recognition of angry expression. Selective impairment of angry face perception has been linked to: lack of dopamine in Parkinson's disease, which affects the information processing capacity of the GP [42], [43]; treatment with antidopaminergic drugs [44]; and deep brain stimulation of the subthalamic nucleus [45], which is directly connected with the limbic part of the GP [10]. The present study shows that the GPi, one of the main relay stations of the basal ganglia, is not only responsive to emotional stimuli but aids their processing in a way that allows the effective handling of evolutionarily salient information.
A specific angry vs. neutral contrast also revealed a role for the fusiform gyrus, a face-selective area [46], in the angry benefit, wherein BOLD activity was higher for angry than neutral faces bilaterally and for angry than happy faces in the right hemisphere. Modulation of activity in the fusiform region by facial expression has been reported previously during passive viewing, identity matching, and emotion recognition tasks. For example, there is evidence that fearful [47]–[49], happy [47], [50], and angry faces [13] elicit greater fusiform activity than neutral faces. However, our study is the first to report modulation of the fusiform gyrus by facial expression during a VSTM task.
Traditionally, the amygdala has been implicated in the processing of emotional stimuli and in the long-term retention of emotional events or images, wherein its activity is often suggested to reflect heightened physiological arousal, thought to mediate emotional learning via direct and indirect neural pathways subserving short- and long-term memory [4]. In our study, however, we did not find a significant influence of the amygdala on the enhancement of VSTM for angry faces. There are two possible explanations. First, the amygdala does not respond selectively to negative emotion: studies have shown activation in response to images of happy and neutral faces [51]. Thus, the emotion contrasts computed here would not reveal modulation of the amygdala if all three emotions recruited this region to the same degree. Second, the angry face effect in VSTM is likely driven by image valence (i.e., negativity) rather than physiological arousal (i.e., excitability). In our previous behavioural study [8] we showed that music-induced arousal states neither modulated VSTM performance in general nor interacted with expression conditions. We also found that arousal ratings, as measured by the Self-Assessment Manikin (SAM) rating scale [52], did not differ between angry and happy faces. Our behavioural data thus make a general arousal account of enhanced VSTM for angry versus happy faces less likely.
With regard to load-modulated brain regions, higher activity in the fusiform gyrus can be explained by the larger number of faces in the memory encoding display, and may also reflect the involvement of this area in VSTM processes [22]. The activation increase in parietal and prefrontal areas reflects their role in supporting the attentional, encoding, and storage requirements of higher memory loads [23], [53]. Importantly, the bilateral fusiform regions that displayed load effects (e.g., load 4 – load 1; LH: x = −36, y = −66, z = −19; RH: x = 36, y = −66, z = −19) were anatomically distinct from the more anterior fusiform regions that displayed an angry face benefit (LH: x = −32, y = −42, z = −12; RH: x = 43, y = −43, z = −16). Face processing regions in the occipito-temporal cortex have previously been segregated into two distinct regions, the fusiform face area (FFA) and the occipital face area (OFA), the former located anterior to the latter [54]. Our Talairach coordinates for the emotion- and load-affected fusiform regions correspond closely with reported right hemisphere FFA and OFA coordinates respectively (FFA: x = 39, y = −44, z = −18; OFA: x = 39, y = −64, z = −20). None of the other load-related areas showed an additional modulation of their activity by emotional expression. This suggests that the enhancement of VSTM capacity by the angry expression operates mainly through the recruitment of emotion and face processing networks rather than through recruitment of additional neurons in the classical fronto-parietal STM network. The positive correlation between capacity estimates (K) and brain activation levels in most load-sensitive regions in the occipital, temporal, parietal, and frontal cortices, reflecting increased activity as the number of stored faces increased, suggests that activity in both low-level perceptual and higher-level cognitive areas is modulated by the amount of facial information stored in STM.
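As a quick check on the claimed separation, the Euclidean distance between the quoted right-hemisphere peaks can be computed directly from the Talairach coordinates:

```python
import numpy as np

# Right-hemisphere peaks quoted above (Talairach coordinates, in mm):
emotion_fusiform = np.array([43, -43, -16])   # angry > neutral (near FFA)
load_fusiform    = np.array([36, -66, -19])   # load 4 > load 1 (near OFA)

# Euclidean separation of the two activation peaks:
print(np.linalg.norm(emotion_fusiform - load_fusiform))   # ~24 mm
```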
We propose a new neural mechanism that supports the angry face benefit in VSTM by facilitating processing and extending memory capabilities. Studies have reported several areas of the fronto-parietal STM network that pose a bottleneck for memory storage at high loads because they cannot respond by further increasing their levels of activity [23], [53], [55]. Our study suggests that VSTM for faces is not only supported by the recruitment of areas that are modulated by load, but also by areas that respond categorically and automatically to the presence of a certain type of stimulus content, in this case, emotion. In the present study, enhanced VSTM capacity for angry faces would thus have been supported by communication between emotion-sensitive areas (STS, IFS, GPi, and FFA) and face identification and VSTM areas (PFC, IPS, OFA).
Our findings also provide further perspective on the debate over whether face identification and emotional expression decoding are independent processes. While some studies have indicated dissociable neural representations for identity processing in the fusiform gyrus and facial expression processing in the anterior STS [26], [27], others suggest that the neural circuits underpinning identity and expression processes overlap [56], [57]. We show that, in VSTM at least, the impact of (angry) emotional expression on face identification tends to be achieved not by multi-functionality of one region but by communication between different process-specific regions responsive to facial expression or memory load. The dissociation between anger and load effects in anterior (FFA) and posterior (OFA) regions of the fusiform gyrus respectively is a novel finding, and perhaps suggests a more complex, fine-grained functional organisation of this region in supporting both expression and identification processes.
Finally, our discovery of the pivotal role of the GPi at the interface between emotion and cognition may have profound implications for clinical neuropsychiatry. Deficits of social cognition, such as extraction of meaning from facial expressions, may be core elements of the psychopathology of schizophrenia and mood disorders. Whether these are linked to changes in the basal ganglia will have to be explored in future research. The basal ganglia are also the main target of deep brain stimulation for movement disorders, and increasingly for behavioural disorders, and a better understanding of their non-motor functions would be of great clinical importance.
Author Contributions
Conceived and designed the experiments: MCJ SJJ JR DEJL. Performed the experiments: MCJ CW SJJ. Analyzed the data: MCJ CW DEJL. Wrote the paper: MCJ JR DEJL.
References
- 1. Dolan RJ (2002) Emotion, cognition, and behaviour. Science 298(5596): 1191–1194.
- 2. Kensinger EA (2007) Negative emotion enhances memory accuracy: Behavioural and neuroimaging evidence. Curr Dir Psychol Sci 16(4): 213–218.
- 3. LaBar KS (2007) Beyond fear: Emotional memory mechanisms in the human brain. Curr Dir Psychol Sci 16(4): 173–177.
- 4. LaBar KS, Cabeza R (2006) Cognitive neuroscience of emotional memory. Nat Rev Neurosci 7: 54–64.
- 5. Kensinger EA, Corkin S (2003) Effect of negative emotional content on working memory and long-term memory. Emotion 3(4): 378–393.
- 6. Mikels JA, Larkin GR, Reuter-Lorenz PA, Carstensen LL (2005) Divergent trajectories in the aging mind: Changes in working memory for affective versus visual information with age. Psychol Aging 20(4): 542–553.
- 7. Perlstein WM, Elbert T, Stenger VA (2002) Dissociations in human prefrontal cortex of affective influences on working memory-related activity. P Natl Acad Sci USA 99(3): 1736–1741.
- 8. Jackson MC, Wu C-Y, Linden DEJ, Raymond JE (In Press) Enhanced visual short-term memory for angry faces. J Exp Psychol Human.
- 9. Davidson RJ, Irwin W (1999) The functional neuroanatomy of emotion and affective style. Trends Cogn Sci 3(1): 11–21.
- 10. Karachi C, Yelnik J, Tande D, Tremblay L, Hirsch EC, et al. (2005) The pallidosubthalamic projection: An anatomical substrate for nonmotor functions of the subthalamic nucleus in primates. Movement Disord 20(2): 172–180.
- 11. Stein MB, Simmons AN, Feinstein JS, Paulus MP (2007) Increased amygdala and insula activation during emotion processing in anxiety-prone subjects. Am J Psychiat 164(2): 318–327.
- 12. Linden DEJ (2007) The working memory networks of the human brain. Neuroscientist 13(3): 257–267.
- 13. Kesler/West ML, Andersen AH, Smith CD, Avison MJ, Davis CE, et al. (2001) Neural substrates of facial emotion processing using fMRI. Cognitive Brain Res 11: 213–226.
- 14. Narumoto J, Okada T, Sadato N, Fukui K, Yonekura Y (2001) Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain Res 12(2): 225–231.
- 15. Ekman P, Friesen WV (1976) Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
- 16. Jackson MC, Raymond JE (2008) Familiarity enhances visual working memory for faces. J Exp Psychol Human 34(3): 556–568.
- 17. Schneider W, Eschman A, Zuccolotto A (2002) E-Prime User's Guide. Pittsburgh: Psychology Software Tools Inc.
- 18. Talairach J, Tournoux P (1988) Co-planar stereotaxic atlas of the human brain. New York: Thieme.
- 19. Johnstone T, Walsh KSO, Greischar LL, Alexander AL, Fox AS, et al. (2006) Motion correction and the use of motion covariates in multiple-subject fMRI analysis. Hum Brain Mapp 27: 779–788.
- 20. Lund TE, Nørgaard MD, Rostrup E, Rowe JB, Paulson OB (2005) Motion or activity: Their role in intra- and inter-subject variation in fMRI. Neuroimage 26: 960–964.
- 21. Cowan N (2001) The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav Brain Sci 24: 87–185.
- 22. Druzgal TJ, D'Esposito M (2003) Dissecting contributions of prefrontal cortex and fusiform face area to face working memory. J Cognitive Neurosci 15(6): 771–784.
- 23. Linden DEJ, Bittner RA, Muckli L, Waltz JA, Kriegeskorte N, et al. (2003) Cortical capacity constraints for visual working memory: Dissociation of fMRI load effects in a fronto-parietal network. Neuroimage 20(3): 1518–1530.
- 24. Borod JC, Cicero BA, Obler LK, Welkowitz J, Erhan HM, et al. (1998) Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology 12: 446–458.
- 25. Davidson RJ (1984) Affect, cognition, and hemispheric specialization. In: Izard CE, Kagan J, Zajonc R, editors. Emotion, Cognition and Behavior. New York: Cambridge Univ. Press. pp. 320–365.
- 26. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4(6): 223–233.
- 27. Winston JS, Henson RNA, Fine-Goulden MR, Dolan RJ (2004) fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. J Neurophysiol 92(3): 1830–1839.
- 28. Allison T, Puce A, McCarthy G (2000) Social perception from visual cues: Role of the STS region. Trends Cogn Sci 4(7): 267–278.
- 29. Grosbras MH, Paus T (2006) Brain networks involved in viewing angry hands or faces. Cereb Cortex 16: 1087–1096.
- 30. Habel U, Klein M, Kellermann T, Shah NJ, Schneider F (2005) Same or different? Neural correlates of happy and sad mood in healthy males. Neuroimage 26(1): 206–214.
- 31. Stip E, Fahim C, Liddle P, Mancini-Marie A, Mensour B, et al. (2005) Neural correlates of sad feelings in schizophrenia with and without blunted affect. Can J Psychiat 50(14): 909–917.
- 32. Dolcos F, LaBar KS, Cabeza R (2004) Dissociable effects of arousal and valence on prefrontal activity indexing emotional evaluation and subsequent memory: An event-related fMRI study. Neuroimage 23: 64–74.
- 33. Gray JR, Braver TS, Raichle ME (2002) Integration of emotion and cognition in the lateral prefrontal cortex. P Natl Acad Sci USA 99(6): 4115–4120.
- 34. Utter AA, Basso MA (2008) The basal ganglia: An overview of circuits and function. Neurosci Biobehav R 32(3): 333–342.
- 35. McNab F, Klingberg T (2007) Prefrontal cortex and basal ganglia control access to working memory. Nat Neurosci 11(1): 103–107.
- 36. Fox E, Lester V, Russo R, Bowles RJ, Pichler A, et al. (2000) Facial expressions of emotion: Are angry faces detected more efficiently? Cognition Emotion 14(1): 61–92.
- 37. Hansen CH, Hansen RD (1988) Finding the face in the crowd: An anger superiority effect. J Pers Soc Psychol 54(6): 917–924.
- 38. Mogg K, Bradley BP (1999) Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cognition Emotion 13(6): 713–740.
- 39. Eastwood JD, Smilek D, Merikle PM (2003) Negative facial expression captures attention and disrupts performance. Percept Psychophys 65(3): 352–358.
- 40. Juth P, Lundqvist D, Karlsson A, Ohman A (2005) Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion 5(4): 379–395.
- 41. Brosch T, Sander D, Pourtois G, Scherer KR (2008) Beyond fear: Rapid spatial orienting toward positive emotional stimuli. Psychol Sci 19(4): 362–370.
- 42. Heimer G, Rivlin M, Israel Z, Bergman H (2006) Synchronizing activity of basal ganglia and pathophysiology of Parkinson's disease. J Neural Transm-Supp 70: 17–20.
- 43. Lawrence AD, Goerendt IK, Brooks DJ (2007) Impaired recognition of facial expressions of anger in Parkinson's disease patients acutely withdrawn from dopamine replacement therapy. Neuropsychologia 45(1): 65–74.
- 44. Lawrence AD, Calder AJ, McGowan SW, Grasby PM (2002) Selective disruption of the recognition of facial expressions of anger. Neuroreport 13(6): 881–884.
- 45. Schroeder U, Kuehler A, Hennenlotter A, Haslinger B, Tronnier VM, et al. (2004) Facial expression recognition and subthalamic nucleus stimulation. J Neurol Neurosur Ps 75(4): 648–650.
- 46. Kanwisher N, Yovel G (2006) The fusiform face area: A cortical region specialized for the perception of faces. Philos T Roy Soc B 361(1476): 2109–2128.
- 47. Breiter HC, Etcoff NL, Whalen PJ, Kennedy WA, Rauch SL, et al. (1996) Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17(5): 875–887.
- 48. Pessoa L, McKenna M, Gutierrez E, Ungerleider LG (2002) Neural processing of emotional faces requires attention. P Natl Acad Sci USA 99(17): 11458–11463.
- 49. Vuilleumier P, Armony JL, Driver J, Dolan RJ (2001) Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron 30(3): 829–841.
- 50. Lewis S, Thoma RJ, Lanoue MD, Miller GA, Heller W, et al. (2003) Visual processing of facial affect. Neuroreport 14(14): 1841–1845.
- 51. Wright P, Liu YJ (2006) Neutral faces activate the amygdala during identity matching. Neuroimage 29(2): 628–636.
- 52. Bradley MM, Lang PJ (1994) Measuring emotion: The self-assessment manikin and the semantic differential. J Behav Ther Exp Psy 25(1): 49–59.
- 53. Mayer JS, Bittner RA, Nikolić D, Bledowski C, Goebel R, et al. (2007) Common neural substrates for visual working memory and attention. Neuroimage 36(2): 441–453.
- 54. Peelen MV, Downing PE (2005) Selectivity for the human body in the fusiform gyrus. J Neurophysiol 93: 603–608.
- 55. Todd JJ, Marois R (2004) Capacity limit of visual short-term memory in human posterior parietal cortex. Nature 428: 751–754.
- 56. Ganel T, Valyear KF, Goshen-Gottstein Y, Goodale MA (2005) The involvement of the “fusiform face area” in processing facial expression. Neuropsychologia 43(11): 1645–1654.
- 57. Vuilleumier P, Pourtois G (2007) Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia 45(1): 174–194.