
On the Blink: The Importance of Target-Distractor Similarity in Eliciting an Attentional Blink with Faces

  • Kathrin Müsch ,

    k.muesch@uke.uni-hamburg.de

    Affiliation Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Andreas K. Engel,

    Affiliation Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Till R. Schneider

    Affiliation Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany


Abstract

Temporal allocation of attention is often investigated with a paradigm in which two relevant target items are presented in a rapid sequence of irrelevant distractors. The term Attentional Blink (AB) denotes a transient impairment of awareness for the second of these two target items when they are presented close in time. Experimental studies have reported that the AB is reduced when the second target is emotionally significant, suggesting a modulation of attention allocation. The aim of the present study was to systematically investigate the influence of target-distractor similarity on AB magnitude for faces with emotional expressions under conditions of limited attention in a series of six rapid serial visual presentation experiments. The task on the first target was either to discriminate the gender of a neutral face (Experiments 1, 3–6) or to categorize an indoor/outdoor visual scene (Experiment 2). The task on the second target required either the detection of emotional expressions (Experiments 1–5) or the detection of a face (Experiment 6). The AB was minimal or absent when targets could be easily discriminated from each other. Three successive experiments revealed that insufficient masking and low target-distractor similarity could account for the observed immunity of faces against the AB in the first two experiments. An AB was present but not increased when the facial expression was irrelevant to the task, suggesting that target-distractor similarity plays a more important role in eliciting an AB than the attentional set demanded by the specific task. In line with previous work, emotional faces were less affected by the AB.

Introduction

When we allocate attention to a flux of incoming stimuli, awareness for these stimuli is not constant over time but instead fluctuates from moment to moment. To study how visual awareness changes over time during a stream of quickly succeeding information, rapid serial visual presentation (RSVP) paradigms are widely used. In these paradigms, one or more targets have to be reported in a stream of rapidly succeeding stimuli. If two task-relevant targets appear in close temporal proximity within a stream of irrelevant distractors, a period of limited awareness for the second target, called the AB, is often observed. The AB reflects a deficit in reporting the second target (T2) when it follows the first task-relevant target (T1) with a temporal delay of 100–400 ms [1], [2], [3]. Single task control conditions in this type of experiment suggest an attentional rather than a perceptual cause of the AB [2]. In single task conditions, physical stimulation remains the same (presentation of T1 and T2) but attentional demands are decreased, as only the second stimulus is task-relevant; under these conditions the AB is usually absent [2].

Traditional models have attributed the AB to attentional capacity limitations at a late processing stage [4], [5], [6], [7]. In particular, these models suggested that the perceptual representation for the second target T2, formed during an early processing stage, cannot be transferred into working memory, and thus will not be reported, until the system has successfully transferred the first target T1 into working memory at a late processing stage. However, limited capacity models cannot account for some recent findings of the AB [8]. More recent accounts suggest that the AB results from active control of attentional resources [9], [10], [11]. These models are able to explain why salient stimuli can outlive the AB: the encoding of salient stimuli needs fewer resources due to increased bottom-up strength, and thus less allocation of attentional resources is necessary [9], [11]. Saliency can either be driven by perceptual features, such as discernability of targets from distractors, or by contents (e.g., emotional vs. neutral stimuli).

Several studies have employed neutral face stimuli to probe the AB, with mixed results. Most studies found an AB for faces (Table S1), whereas others found no AB with famous faces [12], low T1 load [13], upright faces [14], or when both T1 and T2 were faces [15], [16]. Landau and Bentin [13] suggested that the saliency of faces among nonface distractors was an important factor in determining the susceptibility of face targets to be blinked. However, they did not specifically investigate this claim. Taken together, these results suggest that face processing requires attentional resources and that the perceptual saliency of faces among distractors is critical for eliciting an AB.

The AB magnitude can be modulated by manipulating the allocation of attention towards T1 or T2 [8]. For example, AB magnitude was reduced by task-irrelevant mental performance in an additional memory task or by focusing less on the AB task [17]. The AB was also extinguished when highly familiar or famous faces were used [12]. In addition, emotional target stimuli seem to modulate blink magnitude as well. Several studies have demonstrated an influence of emotional information on the extent of the blink magnitude by using a variety of emotional stimuli including words [18], [19], [20], photographs of objects or scenes [21], [22], [23], [24] and emotional faces [25], [26], [27], [28], [29]. Interestingly, the AB is differentially modulated depending on whether T1 or T2 is emotionally salient. The AB is increased following an emotional T1, possibly due to a longer attentional dwell time on T1, leaving less capacity for the processing of T2 [29], [30]. In contrast, the AB is attenuated when emotional compared to unemotional stimuli are presented as T2, which suggests stronger attentional capture by emotional stimuli [20], [28]. Importantly, several studies found a robust AB for neutral compared to realistic [26], [29] or schematic emotional faces [31], [32].

In contrast to studies reporting an emotional modulation of the AB in healthy individuals [20], [24], [28], several studies reported an emotional modulation of the AB only in individuals with high anxiety scores [27], [33], with dysphoria [34], or with posttraumatic stress symptoms [35], yet failed to find an effect in healthy participants. Such an absence of the AB is unlikely to be caused by the type of stimulus material, because similar stimuli were used as in experiments that found an AB in healthy individuals (e.g., words in [33], [34], [35] and faces in [27]). Amir and colleagues [35] suggested that this absence might be related to the depth of target processing (e.g., semantic processing or explicit emotion processing). Accordingly, in a series of experiments, semantic processing [30] or emotion processing [29] was shown to be a necessary condition for an increased AB following emotional stimuli as T1. For emotional T2 it has not yet been investigated systematically whether explicit emotion processing is required to decrease AB magnitude.

The aim of the present study was to systematically investigate the influence of target-distractor similarity. In total, six experiments were conducted in different groups of participants in order to investigate how emotional valence modulates the temporal allocation of attention. As only a shallow AB was elicited in Experiment 1, we selectively manipulated the similarity of T1 and T2 (Experiment 2), the similarity of targets and distractors (Experiments 3, 4, and 5), and the task relevance of the emotional expression (Experiment 6). Experimentally manipulating the similarity between targets and distractors revealed a strong effect of the type of distractors and accounted for the shallow or missing AB in the previous experiments. The final experiment demonstrated that the type of task (whether the emotional expression was explicitly or implicitly task-relevant) did not have an impact over and above the effect of target-distractor similarity in Experiments 3 and 4.

Experiment 1

Materials and Methods

Ethics Statement.

The participants of this and the subsequent experiments provided written, informed consent. All procedures were approved by the ethics committee of the Hamburg Medical Association.

Participants.

Fifteen participants (10 female, M ± SD = 24.0±2.3 years) were recruited from the University Medical Center Hamburg-Eppendorf and were paid for participation. All participants had normal or corrected to normal vision and normal color vision [36] and reported no history of psychiatric or neurological illness. One male participant had to be excluded due to performance at chance level.

Stimuli.

Emotional and neutral faces were embedded among distractors in an RSVP stream (Figure 1). Faces of 12 males and 12 females with neutral, fearful, and happy expressions from the Karolinska Directed Emotional Faces (KDEF; [37]) served as targets. These faces were selected for highest gender discernability as determined in a pilot rating. Distractors were phase-scrambled versions of 54 neutral faces. All stimuli were converted to gray-scale, matched for luminance, and masked by an oval shape to remove hair, neck, and background information. T1 faces were presented in a red tint (each pixel value of the red color channel multiplied by 2.25) in order to distinguish them from the other stimuli in the stream.
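The red-tint manipulation described above amounts to a simple channel operation on a gray-scale image. The following is an illustrative sketch, not the authors' stimulus code; the function name and dummy image are our own:

```python
import numpy as np

def red_tint(gray, factor=2.25):
    """Convert a gray-scale image (2-D uint8 array) into an RGB image with
    a boosted red channel, mirroring the T1 manipulation described above."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float64)
    rgb[..., 0] *= factor                        # boost the red channel only
    return np.clip(rgb, 0, 255).astype(np.uint8) # keep values in display range

gray_face = np.full((4, 4), 100, dtype=np.uint8)  # stand-in for a face image
tinted = red_tint(gray_face)
```

With a uniform gray value of 100, the red channel becomes 225 while green and blue remain at 100, producing the red tint; values that would exceed 255 are clipped.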

Figure 1. Illustration of a single trial and overview of experiments.

(A) After 500 ms fixation period, 25 stimuli including the two targets with a variable lag were rapidly presented (lag 3 in this example). The first and the second target were task-relevant. T1 was presented between position 9 and 15 in a stream of distractors followed by T2 at lags 1, 2, 3, 4, 5, 6, 8. (B) The experiments differed with regard to stimuli used as T1, T2, and distractors, and dual task demands. Abbreviations: Fix, fixation; T1, first target; T2, second target; D, distractors; RSVP, rapid serial visual presentation; Exp., experiment; 2AFC, two-alternative forced-choice.

https://doi.org/10.1371/journal.pone.0041257.g001

Design and Procedure.

Each trial consisted of a stream of 25 visual stimuli including scrambled distractors and target faces, starting with a 500 ms fixation period. Each stimulus was displayed for 70 ms at the center of the monitor, resulting in a stimulation frequency of 14.3 Hz (Figure 1). The first face (T1) always had a neutral expression whereas the expression of the second face (T2) was systematically varied (fearful, happy, and neutral expressions in 31.7% of trials each; in the remaining 5% a scrambled distractor was presented instead of T2). The temporal interval between T1 and T2 varied between lag 1 (70 ms, no intervening item between T1 and T2), lag 2 (140 ms, one intervening item, and so forth), lag 3 (210 ms), lag 4 (280 ms), lag 5 (350 ms), lag 6 (420 ms), and lag 8 (560 ms) in order to cover the whole AB interval. The gender of the two targets was counterbalanced, and the two targets never had the same identity in a given trial. T1 appeared equally often at positions 9 to 15 of each stream.
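The timing parameters above follow directly from the 70 ms stimulus duration; this short check (with variable names of our choosing) makes the relations explicit:

```python
STIM_MS = 70                       # duration of each stimulus in the stream
LAGS = [1, 2, 3, 4, 5, 6, 8]       # T1-T2 lags used in the experiments

rate_hz = round(1000 / STIM_MS, 1)             # stimulation frequency
soa_ms = {lag: lag * STIM_MS for lag in LAGS}  # T1-T2 onset asynchronies
```

A lag of n items corresponds to an SOA of n × 70 ms, so the seven lags span 70–560 ms and thereby cover the typical 100–400 ms AB interval.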

After each trial, participants were first requested to report the gender of T1 (“male”, “female”) and then whether they had seen a second face (T2; “face”, “no face”) by button press on the keyboard with the left or right index finger, respectively. In case of a “face” response, participants were asked to indicate whether the face was emotional or neutral (“emotional face”, “neutral face”). This two-step procedure allowed us to discriminate between different levels of processing: face detection versus emotion detection of T2. The response button mapping was counterbalanced across participants. Seven blocks with 60 trials each were presented in random order. In total, 19 trials per condition were presented (7 lags × 3 emotions = 399 trials). In 5% of the trials T2 was not present and was replaced by a scrambled distractor. To familiarize participants with the experimental procedure, 10 practice trials were presented before each experiment. No speeded responses were required and participants received no feedback during the experiment. Stimuli were presented on a 22" CRT monitor at a refresh rate of 100 Hz and a viewing angle of approximately 5.4° using the Psychophysics Toolbox (3rd version; [38], [39]) and Matlab 7 (The MathWorks Inc, Natick, MA, USA).
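The trial counts stated above are mutually consistent, as a quick arithmetic check shows (variable names are ours):

```python
LAGS, EMOTIONS, PER_CONDITION = 7, 3, 19   # design cells and trials per cell
BLOCKS, TRIALS_PER_BLOCK = 7, 60           # block structure

t2_present = LAGS * EMOTIONS * PER_CONDITION   # 399 T2-present trials
total = BLOCKS * TRIALS_PER_BLOCK              # 420 trials overall
t2_absent = total - t2_present                 # trials with a scrambled distractor
share_absent = t2_absent / total               # the stated 5% T2-absent rate
```

The 21 T2-absent catch trials are exactly 5% of the 420 trials distributed over the seven blocks.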

Data Analysis.

Mean accuracy was calculated for T1 and T2, respectively. T2 report was analyzed contingent on correct T1 report. For the T2 task the percentage of correct responses was calculated as the proportion of detected T2 relative to the total number of trials presenting a face as T2, separately for fearful, happy, and neutral faces. The detection of T2 was considered more relevant to the AB than the emotion detection because the number of misses per lag directly reflects the impairment of visual awareness. In addition, false alarms were defined as the proportion of “face” responses relative to the number of T2-absent trials contingent on correct T1 report. Low false alarm rates indicate that participants were able to perform the task correctly. The percentages of correct responses on T1 and T2 report were subjected to a repeated measures analysis of variance (ANOVA) with lag (1, 2, 3, 4, 5, 6, 8) and emotion (fearful, happy, neutral) as within-subject factors. In addition, T1 error rates were compared for trials in which both targets had the same versus the opposite gender to check for a possible confusion between T1 and T2 in the gender discrimination task at each lag. Estimates were Greenhouse-Geisser-corrected whenever appropriate. Original degrees of freedom are reported. Five planned orthogonal contrasts were conducted as follow-up analyses: (1) the linear effect of lag; (2) neutral vs. emotional faces; (3) fearful vs. happy faces; (4) the interaction between lag and the neutral vs. emotional contrast; and (5) the interaction between lag and the fearful vs. happy contrast. Effect sizes are reported as eta-squared, representing the proportion of accounted variance (η2 < 0.1: small effect; 0.1 < η2 < 0.25: medium effect; η2 > 0.25: large effect).
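The two scoring rules described above (T2 accuracy contingent on correct T1 report, and the eta-squared cut-offs) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the helper names and trial representation are assumptions:

```python
def t2_accuracy(trials):
    """Percentage of detected T2, computed only over trials with a correct
    T1 report, as in the contingent analysis above. Each trial is a dict
    with boolean 't1_correct' and 't2_detected' entries (T2-present only)."""
    valid = [t for t in trials if t["t1_correct"]]
    if not valid:
        return float("nan")
    return 100 * sum(t["t2_detected"] for t in valid) / len(valid)

def effect_size_label(eta_sq):
    """Classify an eta-squared value using the cut-offs stated in the text."""
    if eta_sq > 0.25:
        return "large"
    if eta_sq > 0.1:
        return "medium"
    return "small"
```

Conditioning on correct T1 report ensures that T2 misses reflect the attentional impairment rather than trials in which the participant lost track of the stream altogether.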

Results

In Experiment 1, the comparison of T2 performance in a 7 (lag) x 3 (emotion) repeated measures ANOVA resulted in main effects of lag and emotion and a lag by emotion interaction (Table 1, Figure 2). The contrast analysis on the interaction effect revealed that the effect of lag was more pronounced for neutral faces compared to emotional faces, while the effect of lag was only a trend for the difference of fearful and happy faces (Table 2). The percentage of false alarms was quite low (M ± SD = 10.5±13.4). These results suggest a temporal impairment of visual awareness modified by emotional expression.

Figure 2. Mean accuracy for T1 and T2 in Experiment 1.

Performance is depicted separately for the different facial expressions of T2. T2 detection is conditional on T1 performance. Error bars represent standard errors of the means. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0041257.g002

Table 1. Results of the repeated measures analysis of variance for each experiment.

https://doi.org/10.1371/journal.pone.0041257.t001

T1 performance was compared in a 7 (lag) x 3 (emotion) repeated measures ANOVA. The correct report of T1 was dependent on the lag (Table 1, Figure 2), which was reflected by a linear increase across lags (Table 2).

T1 error rates for trials in which T1- and T2-faces had opposite sex were higher compared to trials in which T1- and T2-faces had the same sex only at lag 1 (opposite sex M ± SD = 46.7±12.3, same sex M ± SD = 14.5±11.0; t13 = 6.34, p<0.001) but not at any other lag (all ts<1.33, all ps>0.205; except for lag 3, t13 = 2.20, p = 0.046, not significant following Bonferroni correction).
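The Bonferroni decision for the lag-3 comparison reported above works out as follows (a worked check with our own variable names, not the authors' code):

```python
ALPHA = 0.05
N_COMPARISONS = 7                    # one opposite- vs. same-sex test per lag
alpha_bonf = ALPHA / N_COMPARISONS   # corrected per-test threshold

p_lag3 = 0.046                       # t(13) = 2.20 at lag 3, from the text
survives_correction = p_lag3 < alpha_bonf
```

With seven lag-wise tests the corrected threshold is roughly 0.0071, so p = 0.046 at lag 3 does not survive the correction, consistent with the statement above.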

Discussion

The decreased performance on T2 could be interpreted as a genuine AB, which additionally was modulated by emotional expression. However, the profile of the AB was very shallow.

Performance on T1 was also reduced in the first two lags. Participants may have confused T1 and T2 at shorter lags, especially when there was no distractor in between. An additional analysis on T1 errors revealed preliminary evidence for this assumption: error rates for opposite-sex compared to same-sex trials were only higher at lag 1 but not at any other lag. Thus, it seems likely that participants confused T2 and T1 in the gender classification task. Earlier studies using letters also found increased order inversion effects for T1 at the first lag [4], [40], [41]. According to the 2-stage competition model [5] there is a trade-off between T1 and T2 performance when the lag between the targets is less than 100 ms. Hence, it seems inherent in the AB that correct report of T1 is compromised by correct report of T2 at the first lag [41]. However, the present results merely reflect a globally diminished performance for T1 and T2 instead of a trade-off between targets.

Experiment 2

To rule out the possibility that participants confused T2 with T1 stimuli in the gender classification task, neutral T1-faces were replaced by indoor and outdoor scenes in Experiment 2.

Materials and Methods

Participants.

Thirteen students (8 female, M ± SD = 24.1±1.5 years), none of whom participated in Experiment 1, were recruited from the same pool and were paid for participation. All participants had normal or corrected to normal vision and reported no history of psychiatric or neurological illness.

Stimuli.

Stimuli were identical to those of Experiment 1 except that gray-scale indoor and outdoor scenes instead of neutral faces were presented as T1. T1 scenes were not tinted because they could easily be discriminated from T2 faces (compare [26]). Visual scenes (equal in mean luminance) were selected according to highest discrimination performance and matched for visual complexity according to a pilot rating.

Design and Procedure.

Unlike in the previous experiment, the task on T2 consisted of only one question. An additional response option for “no face” was included, thus resulting in three response possibilities (“emotional face”, “neutral face”, “no face”) for each trial.

Data Analysis.

Data analysis was identical to Experiment 1 except for the following changes. False alarms in Experiments 2, 3, and 4 were defined as the proportion of “emotional face” or “neutral face” responses to the number of T2-absent trials contingent on correct T1 report.

Results

T1 performance and T2 performance were separately subjected to a 7 (lag) x 3 (emotion) within-subjects ANOVA. There were no significant effects on T1 performance or on T2 performance (Table 1, Table 2, Figure 3). As in the previous experiment, the percentage of false alarms was low (M ± SD = 2.2±4.2).

Figure 3. Mean accuracy for T1 and T2 in Experiment 2.

Performance is depicted separately for the different facial expressions of T2. T2 detection is conditional on T1 performance. Error bars represent standard errors of the means. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0041257.g003

Discussion

Surprisingly, there were no effects of lag or emotion in Experiment 2, suggesting that the transient performance decrease in Experiment 1 resulted from a confusion of the target faces [4], [5], [11], [40], [41]. The absence of an AB stands in direct contrast to the study by De Martino and colleagues [26], who also used scenes as T1, faces as T2, and scrambled distractors and reported an AB. In their experiment, performance for fearful T2 faces was higher than for neutral T2 faces at lag 5 (350 ms), which was, however, the only lag tested. The distractors in the experiment by De Martino and colleagues [26] differed from the ones in the present Experiments 1 and 2. The role of distractors in eliciting an AB for faces was addressed in the following three experiments.

Experiment 3, Experiment 4, & Experiment 5

In contrast to previous studies using upright neutral faces [27], 180° rotated neutral faces [25], or randomly rearranged parts of face or scene images [26], [28] as distractors, the distractors in the present Experiments 1 and 2 were phase-scrambled versions of the face stimuli and contained no meaningful high-level information. A previous study using letters reported that the AB could be eliminated when targets were embedded in highly discriminable distractors [4]. To investigate whether the shallow AB profile in Experiment 1 might have resulted from insufficient masking and from dissimilarity between targets and distractors, the similarity of the distractors with the target faces was varied in the following three experiments. They are reported together because every participant took part in two of the experiments.

Materials and Methods

Participants.

Twenty-eight participants (15 female, M ± SD = 26.5±4.0 years), none of whom participated in the previous Experiments 1 and 2, were recruited from the same pool and were paid for participation. All participants had normal or corrected to normal vision and reported no history of psychiatric or neurological illness.

Stimuli.

Target stimuli were identical to those of Experiment 1. Phase-scrambled distractors were replaced by three different types of distractors of the same 54 neutral faces resulting in three experiments. In Experiment 3, faces were divided into 20 randomly rearranged parts of 75×70 pixels and masked by an oval shape to remove hair, neck and background information. These distractors will be referred to as mosaic-scrambled faces. In Experiment 4, distractors consisted of 180° rotated faces with neutral expression. In Experiment 5, distractors were upright faces with neutral expression.

Design and Procedure.

Design and procedure were identical to those of Experiment 1 except for the following specifications: each participant took part in two experiments. The order of the experiments was counterbalanced across subjects, resulting in final samples of 21 participants in Experiment 3 (12 female, M ± SD = 26.6±4.2 years), 15 participants in Experiment 4 (8 female, M ± SD = 26.8±2.9 years), and 20 participants in Experiment 5 (10 female, M ± SD = 26.0±4.5 years). In Experiments 3 and 4 the task on T2 was identical to that of Experiment 2, providing three response options in each trial (“emotional face”, “neutral face”, “no face”). In Experiment 5, T2 was always present, resulting in a total of 399 trials (7 lags × 3 emotions, 19 trials per condition). The T2 task remained an emotion detection task. However, since distractors were upright neutral faces, the option “no face” was inappropriate for Experiment 5 and only two of the previous response options were provided (“emotional face”, “neutral face”). Hence, participants replied “neutral face” when they did not see an emotional face in a given trial.

Data Analysis.

Data analysis for the three experiments was identical to that of Experiment 1 except for Experiment 5 using neutral face distractors, in which the percentage of correct responses for the T2 task was calculated as the proportion of correct emotion detection. Only fearful and happy T2 were analyzed, as neutral T2 could not be differentiated from distractors and faces were always present as distractors. In Experiment 5, false alarms were calculated as the proportion of “emotional face” responses to the number of trials depicting neutral T2 faces contingent on correct T1 report. In addition, these false alarm rates were compared to hit rates for “emotional face” responses in order to clarify whether the absent AB was due to a floor effect.

Results

For Experiment 3 using mosaic-scrambled face distractors, T1 performance and T2 performance were separately compared in a 7 (lag) x 3 (emotion) within-subjects ANOVA. The ANOVA on T2 performance resulted in main effects of lag and emotion (Table 1, Figure 4A). The contrast analysis showed that the difference between neutral and emotional faces was larger than that between the two emotional faces (Table 2). Although the difference between emotional and neutral faces seemed to be greater at early relative to late lags, the interaction effect did not reach significance. Correct report of T1 depended on lag (Table 1, Figure 4A), which was reflected by a linear increase across lags (Table 2). These results indicate that an AB was found for faces, which was not modulated by emotional expression. However, performance for emotional faces was better than for neutral faces across all lags.

Figure 4. Mean accuracy for T1 and T2 in Experiment 3, 4, and 5.

Target stimuli were identical to Experiment 1. Distractors were either (A) mosaic-scrambled faces, (B) inverted faces with neutral expression, or (C) upright faces with neutral expression. Performance is depicted separately for the different facial expressions of T2. T2 performance is conditional on T1 performance. Error bars represent standard errors of the means. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0041257.g004

For Experiment 4 presenting inverted face distractors, T1 performance and T2 performance were separately subjected to a 7 (lag) x 3 (emotion) within-subjects ANOVA. Comparison of T2 performance resulted in a main effect of lag and emotion (Table 1, Figure 4B). Follow-up analysis suggested a linear increase across lags and easier detection of emotional compared to neutral faces (Table 2). Notably, although the interaction effect did not reach statistical significance, the planned interaction contrast for the comparison of neutral to emotional faces was significant (Table 2), reflecting that the AB for neutral faces was more pronounced relative to emotional faces. Correct report of T1 depended on lag (Table 1, Figure 4B) reflected by a linear increase across lags (Table 2). These results suggest a transient impairment of visual awareness and an advantage for the detection of emotional faces.

For Experiment 5 using neutral face distractors, T1 performance and T2 performance were separately subjected to a 7 (lag) x 2 (emotion) within-subjects ANOVA. Only effects of emotion were found which were opposite for T1 and T2 performance (Table 1, Table 2, Figure 4C): T1 was reported correctly more often when it was followed by a fearful instead of a happy face, while T2 performance was higher for happy faces compared to fearful faces. These results indicate that performance differed according to the emotional expression, but no AB was found in Experiment 5.

The percentage of false alarms for T2 was M ± SD = 10.5±13.3 in Experiment 3 and M ± SD = 12.2±14.4 in Experiment 4. For Experiment 5, the percentage of false alarms, reflected by the proportion of “emotional face” responses relative to the number of T2 trials containing neutral faces, was M ± SD = 18.9±12.3. This rate was significantly lower than the average rate of correct responses for emotional T2 (M ± SD = 60.3±17.0; t19 = 9.54, p<0.001).

Discussion

As expected, increasing the similarity of distractors and targets in terms of facial features decreased the overall T2 performance. Importantly, the use of more similar distractors resulted in an AB when distractors were mosaic-scrambled and inverted faces, hence containing more feature information than the abstract phase-scrambled distractors used before. Therefore we conclude that dissimilarity between targets and distractors can account for the missing AB in Experiments 1 and 2.

There was no AB when distractors were upright faces. The absence of an AB with upright face distractors stands in direct contrast to the experiment by Fox and colleagues using upright neutral faces as distractors [27]. Their longer stimulus duration (110 ms) could account for the higher performance in Fox et al. [27]. However, T1 performance in Experiment 5 was lower than that of Experiments 3 and 4 and of [27]. In addition, performance for fearful faces in the T2 task was almost at chance level. Moreover, the T1 stimuli and the T1 task were different (Table S1): a flower T1 had to be discriminated from a mushroom T1 [27], facilitating the differentiation of T1 from T2 stimuli as well as from distractors. These results suggest that the task of Experiment 5 was more demanding than those of the previous experiments and of [27]. However, participants were able to reliably detect emotional faces in the stream of neutral distractors, as reflected by significantly more hits than false alarms for “emotional face” responses. Thus, the results of Experiment 5 corroborate the finding that faces with emotional expressions are spared the AB.

In line with results from Experiment 1, T2 performance depended on the emotional content of T2 suggesting a facilitated processing of fearful and happy faces over neutral faces. A superiority effect for happy faces was found except for the inverted face experiment, in which fearful faces tended to be better recognized than neutral faces. These results are in line with the assumption of enhanced bottom-up attention for emotional stimuli [42], [43].

As in Experiment 1, a decreased T1 performance at lag 1 in the mosaic-scrambled and the inverted face experiments reflected the competition of T1 with T2 for attentional resources at lag 1 [4], [40], [41]. T1 performance in the experiment with upright face distractors was greatly reduced across all lags. In this case, upright T1 faces differed from the distractors only in color (red tint) and therefore may have been more difficult to extract from the RSVP stream. Thus, it is likely that participants reported the gender of neighboring faces instead of that of T1.

Experiment 6

This final experiment investigated whether the specific attentional set, i.e., the allocation of attentional resources that is adjusted by the observer (top-down control), had an additional impact on the AB over and above the effect of target-distractor similarity. In contrast to all previous experiments, in which emotion recognition was explicitly demanded by the T2 task, in Experiment 6 the emotional expression of faces was irrelevant to the T2 task. For emotionally expressive T2, the influence of the type of task has not yet been directly investigated. Milders and colleagues [28] successfully elicited an AB with a very similar design but an implicit emotion recognition task.

Materials and Methods

Participants.

Seventeen participants (10 female, M ± SD = 28.4±4.1 years), none of whom participated in the previous experiments, were recruited from the same pool and were paid for participation. All participants had normal or corrected to normal vision and reported no history of psychiatric or neurological illness. One female subject had to be excluded due to performance at chance level.

Stimuli.

Stimuli were identical to those in Experiment 3.

Design and Procedure.

Design and procedure were identical to that of Experiment 1 except for the task on T2. Participants were solely requested to report whether they had seen an upright second face (“yes”, “no”).

Data Analysis.

Data analysis was identical to that of Experiment 1.
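As in the earlier experiments, T2 accuracy in this paradigm is computed conditionally on a correct T1 report (T2|T1), the standard convention in attentional-blink analyses. A minimal sketch of that step, assuming trial-level correctness records; the helper name and example data are illustrative, not the authors' code:

```python
import numpy as np

def conditional_t2_accuracy(t1_correct, t2_correct):
    """T2|T1 accuracy: mean T2 correctness over trials with correct T1.

    Both arguments are per-trial 0/1 (or boolean) sequences of equal length.
    Returns NaN if T1 was never correct (no trials to condition on).
    """
    t1 = np.asarray(t1_correct, dtype=bool)
    t2 = np.asarray(t2_correct, dtype=bool)
    if t1.sum() == 0:
        return float("nan")
    return t2[t1].mean()

# Hypothetical example: 6 trials, T1 correct on 4 of them,
# T2 correct on 3 of those 4 -> T2|T1 = 0.75
t1 = [1, 1, 0, 1, 1, 0]
t2 = [1, 0, 1, 1, 1, 0]
print(conditional_t2_accuracy(t1, t2))  # 0.75
```

In practice this would be applied separately per lag and per T2 emotion before entering the cell means into the ANOVA.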

Results

The comparison of T2 performance in a 7 (lag) × 3 (emotion) within-subjects ANOVA resulted in main effects of lag and emotion, and an interaction between lag and emotion (Table 1, Figure 5). A follow-up contrast analysis on the interaction revealed a trend for the linear effect of lag to be more pronounced for neutral compared to emotional faces (Table 2). The percentage of false alarms was M ± SD = 7.6 ± 9.7%. These results indicate that an AB was found for faces, which was modulated by emotional expression.

The 7 (lag) × 3 (emotion) within-subjects ANOVA on T1 performance revealed a significant effect of lag (Table 1, Figure 5), reflected by a linear increase in accuracy across lags (Table 2).
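The analyses above are fully within-subjects ANOVAs, in which each effect is tested against its interaction with subjects. A minimal numpy sketch of such a two-way repeated-measures ANOVA, using synthetic accuracies with an AB-like recovery across lags (the function name and data are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def rm_anova_2way(Y):
    """Two-way fully repeated-measures ANOVA.

    Y has shape (subjects, levels_A, levels_B), e.g. (16, 7, 3) for
    16 subjects x 7 lags x 3 emotions. Returns {effect: (F, df1, df2)}.
    """
    s, a, b = Y.shape
    grand = Y.mean()
    m_s = Y.mean(axis=(1, 2))   # subject means
    m_a = Y.mean(axis=(0, 2))   # factor-A (lag) means
    m_b = Y.mean(axis=(0, 1))   # factor-B (emotion) means
    m_sa = Y.mean(axis=2)       # subject x A cell means
    m_sb = Y.mean(axis=1)       # subject x B cell means
    m_ab = Y.mean(axis=0)       # A x B cell means

    # Effect sums of squares
    ss_a = s * b * ((m_a - grand) ** 2).sum()
    ss_b = s * a * ((m_b - grand) ** 2).sum()
    ss_ab = s * ((m_ab - m_a[:, None] - m_b[None, :] + grand) ** 2).sum()

    # Error terms: each effect's interaction with subjects
    err_a = b * ((m_sa - m_s[:, None] - m_a[None, :] + grand) ** 2).sum()
    err_b = a * ((m_sb - m_s[:, None] - m_b[None, :] + grand) ** 2).sum()
    err_ab = ((Y - m_sa[:, :, None] - m_sb[:, None, :] - m_ab[None, :, :]
               + m_s[:, None, None] + m_a[None, :, None] + m_b[None, None, :]
               - grand) ** 2).sum()

    out = {}
    for name, ss, err, df1, df2 in [
        ('A', ss_a, err_a, a - 1, (a - 1) * (s - 1)),
        ('B', ss_b, err_b, b - 1, (b - 1) * (s - 1)),
        ('AxB', ss_ab, err_ab, (a - 1) * (b - 1), (a - 1) * (b - 1) * (s - 1)),
    ]:
        out[name] = ((ss / df1) / (err / df2), df1, df2)
    return out

# Synthetic data: accuracy rises across the 7 lags (AB recovery) plus noise
rng = np.random.default_rng(1)
lag_effect = np.linspace(0.55, 0.9, 7)
Y = lag_effect[None, :, None] + rng.normal(0, 0.03, size=(16, 7, 3))
res = rm_anova_2way(Y)
print({k: (round(f, 1), d1, d2) for k, (f, d1, d2) in res.items()})
```

With 7 lags and 3 emotions, the degrees of freedom are 6 and 90 for lag, 2 and 30 for emotion, and 12 and 180 for the interaction; p-values would follow from the F distribution with these df.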

Figure 5. Mean accuracy for T1 and T2 in Experiment 6.

Performance is depicted separately for the different facial expressions of T2. T2 detection is conditional on T1 performance. Error bars represent standard errors of the means. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0041257.g005

Discussion

Unlike in the previous experiments, detection of the emotional facial expression of T2 was not relevant for solving the task in Experiment 6. In line with results from Milders and colleagues [28], an AB was observed. The trend in the interaction contrast suggested that the AB was attenuated for happy and fearful faces. Somewhat surprisingly, T2 performance for neutral faces did not recover to baseline level. There may be two reasons for this. First, AB patterns in the individual participants were highly heterogeneous (Figure S1). Second, T2 performance for neutral faces seemed to drop particularly from lag 6 to lag 8 (Figure S1; cf. subjects 6, 8, or 11), which most likely reflects an expectation effect: because lag 7 was omitted, participants might no longer have expected a second face. Emotional, but not neutral, T2 faces at lag 8 were still detected due to their increased saliency. The results of Experiment 6 suggest that the attentional set, or the demands of top-down control in the specific task, does not have an incremental effect on eliciting the AB beyond the effect of target-distractor similarity observed in the previous experiments.

Discussion

The major goal of the present study was to systematically investigate the impact of target-distractor similarity under conditions of high attentional demands in an RSVP stream in which two targets were embedded. Contrary to our expectation, only a shallow AB was found in Experiment 1. Moreover, this effect could not be replicated when we replaced neutral T1 faces with indoor and outdoor scenes in Experiment 2. To investigate whether the absence of an AB resulted from target-distractor dissimilarity and insufficient masking, Experiments 3, 4, and 5 selectively manipulated the distractors' similarity to the target faces. An AB was revealed in Experiment 3 using mosaic-scrambled distractors and in Experiment 4 using inverted face distractors. Thus, similarity between targets and distractors seems to account for the strength of the AB in the present experiments. No AB was found, however, in Experiment 5, when targets were emotional faces and distractors were neutral faces. This result supports the notion that faces with emotional expressions are less likely to be blinked. Moreover, in Experiments 4 and 6, emotional faces were found to be less susceptible to the AB, further confirming the attentional advantage for emotional faces.

In the first two experiments, the abstract nature of the phase-scrambled distractors and their featural dissimilarity to the targets may have diminished appropriate masking of the target faces. Phase-scrambled distractors may not be sufficiently meaningful or may not contain enough high-level pattern information to function as effective masks. However, previous studies revealed that masks neither have to be meaningful [44] nor have to contain pattern information [45] to be effective. Landau and colleagues [13] suggested that the saliency of faces among nonface distractors is an important factor in determining the susceptibility of face targets to be blinked. Previous studies showing an AB effect on emotional T2 faces used a stream of neutral faces [27], 180°-rotated neutral faces [25], or mosaic-scrambled distractors [26], [28] consisting of randomly rearranged parts of faces or scenes. Therefore, the masking effect on T1 by the subsequent distractors may have been stronger in previous studies using faces as targets [25], [26], [27], [28], resulting in larger attentional impairments of T2 processing compared with our Experiments 1 and 2. This assumption is consistent with a series of AB experiments investigating the role of T1 and the item following it in the RSVP stream [46]. The authors reported a correlation between T1 performance and AB magnitude using letters as targets and concluded that masking influenced the AB deficit indirectly by increasing the processing load of T1. Furthermore, Jannati and colleagues (Experiment 2 in [47]) successfully elicited an AB for letters by increasing target-distractor similarity, using pseudoletters instead of digits as distractors, relative to a report employing the same experimental design [44]. Our Experiments 3 and 4 also support target-distractor dissimilarity as a cause of the missing AB in our first two experiments.
The experiments using mosaic-scrambled and inverted face distractors successfully elicited an AB. Using upright neutral faces as distractors resulted in a drop in T1 and T2 performance except for happy faces. However, we did not observe an AB under conditions of minimal target saliency with upright neutral face distractors that were maximally similar to the emotional target faces, supporting the finding that emotional faces tend to outlive the AB. A similar finding of reduced performance without a significant AB has also been reported by Awh and colleagues (Experiment 5 in [16]) when faces were masked by other faces. Taken together, the results from Experiments 3 and 4, specifically, corroborate the role of insufficient masking as a cause of the missing and shallow AB in our first two experiments.

Furthermore, the results from Experiment 6 suggest that the nature of the (emotion recognition) task does not play a crucial role in shaping the AB over and above target-distractor similarity. Similar to the results of Experiment 3, which used an explicit emotion detection task and mosaic-scrambled distractors, an AB was also found in Experiment 6, when participants had to perform a face detection task on T2 in which the emotional expression of T2 was task-irrelevant. This result is in line with several other studies reporting an AB with an implicit face detection task [12], [13], [28]. Previous work demonstrated that increasing the task load and changing the instruction had an impact on AB magnitude [17], [48], [49], [50], suggesting that the attentional set or top-down control of the specific task plays a role in eliciting the AB. In the present experiments, however, it did not seem to make a difference whether the emotional expression was relevant to the task or not.

Face stimuli in an RSVP stream may be more salient than letters or words and therefore require adequate masks to transiently impair awareness. Faces convey relevant information for social interactions, and several lines of research suggest that face processing differs from the processing of other stimuli. Even newborns show increased attention to face compared to nonface stimuli (e.g. [51]). Furthermore, face recognition, in contrast to word or object recognition, seems to be holistic and configural [52]. It has therefore been hypothesized that faces are processed automatically by a pre-attentive mechanism, as they pop out of visual search arrays with different distractors [53]. In addition, faces may be processed with few attentional resources, which is supported by studies showing that faces can be processed in the near-absence of attention [54] or outside of awareness [55], [56]. A recent study found that faces receive mandatory processing during a change detection task [57]. This attentional advantage for faces was still present when additional semantic information was given about where to expect the change. These results suggest that even neutral T2 faces receive enhanced attention due to their saliency when presented during the AB interval. Support for this notion comes from several AB studies that failed to find an AB for neutral faces masked either with nonface stimuli [13], [14], [15], [16] or with other neutral faces [16]. The amygdala has been suggested to be a neuroanatomical key region for the processing of emotionally and socially relevant stimuli [58] and is assumed to contribute to the modulating effect of emotional words on the AB [20]. However, even neutral faces are highly salient and elicit increased amygdala activity; attentional resources may therefore have been sufficient to process both target face stimuli irrespective of the emotional expression of T2 in Experiments 1 and 2.
Although the majority of studies employing faces as T2 actually found an AB for faces, it is evident that the experimental paradigms reported in the literature are very heterogeneous. Currently it does not seem possible to isolate a single factor or a combination of factors that is able to predict the occurrence or absence of an AB in experiments using face stimuli as targets (Table S1).

In conclusion, our experiments demonstrate that the AB for faces is minimal or absent when targets can be easily discriminated from distractors. When distractors are more similar to the target faces, an AB for faces can be reliably obtained. In addition, our results support the notion that the AB is modulated by emotional expression in that neutral faces are more likely to be blinked than emotional faces.

Supporting Information

Figure S1.

Mean accuracy for T2 of each participant in Experiment 6. Performance is depicted separately for the different facial expressions of T2. T2 detection is conditional on T1 performance. Error bars represent standard errors of the means. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony.

https://doi.org/10.1371/journal.pone.0041257.s001

(TIF)

Table S1.

Studies investigating the AB for T2 face stimuli. This table summarizes the experimental designs and results of all studies performing an RSVP and presenting faces as T2. Literature search was based on PubMed search terms “attentional blink” and one of the following: “face”, “fear”, “emotion”, or “anxiety”. Abbreviations: T1, first target; T2, second target; SOA, stimulus onset asynchrony; ISI, interstimulus interval; RSVP, rapid serial visual presentation; AB, Attentional Blink; FE, fearful; HA, happy; NE, neutral; 2AFC, 2 alternatives forced choice; SA, sad.

https://doi.org/10.1371/journal.pone.0041257.s002

(PDF)

Acknowledgments

We would like to thank Karin Deazle for support in data acquisition and Nicole David and Constanze Hipp for helpful discussions and comments on this manuscript.

Author Contributions

Conceived and designed the experiments: KM TRS AKE. Performed the experiments: KM. Analyzed the data: KM. Wrote the paper: KM TRS AKE.

References

1. Broadbent DE, Broadbent MHP (1987) From detection to identification – response to multiple targets in rapid serial visual presentation. Perception & Psychophysics 42: 105–113.
2. Raymond JE, Shapiro KL, Arnell KM (1992) Temporary suppression of visual processing in an RSVP task – an attentional blink? Journal of Experimental Psychology: Human Perception and Performance 18: 849–860.
3. Weichselgartner E, Sperling G (1987) Dynamics of automatic and controlled visual attention. Science 238: 778–780.
4. Chun MM, Potter MC (1995) A 2-stage model for multiple-target detection in rapid serial visual presentation. Journal of Experimental Psychology: Human Perception and Performance 21: 109–127.
5. Potter MC, Staub A, O'Connor DH (2002) The time course of competition for attention: attention is initially labile. Journal of Experimental Psychology: Human Perception and Performance 28: 1149–1162.
6. Shapiro KL, Arnell KM, Raymond JE (1997) The attentional blink. Trends in Cognitive Sciences 1: 291–296.
7. Shapiro KL, Raymond JE, Arnell KM (1994) Attention to visual-pattern information produces the attentional blink in rapid serial visual presentation. Journal of Experimental Psychology: Human Perception and Performance 20: 357–371.
8. Martens S, Wyble B (2010) The attentional blink: past, present, and future of a blind spot in perceptual awareness. Neuroscience and Biobehavioral Reviews 34: 947–957.
9. Bowman H, Wyble B (2007) The simultaneous type, serial token model of temporal attention and working memory. Psychological Review 114: 38–70.
10. Vul E, Nieuwenstein M, Kanwisher N (2008) Temporal selection is suppressed, delayed, and diffused during the attentional blink. Psychological Science 19: 55–61.
11. Wyble B, Bowman H, Nieuwenstein M (2009) The attentional blink provides episodic distinctiveness: sparing at a cost. Journal of Experimental Psychology: Human Perception and Performance 35: 787–807.
12. Jackson MC, Raymond JE (2006) The role of attention and familiarity in face identification. Perception & Psychophysics 68: 543–557.
13. Landau AN, Bentin S (2008) Attentional and perceptual factors affecting the attentional blink for faces and objects. Journal of Experimental Psychology: Human Perception and Performance 34: 818–830.
14. Darque A, Del Zotto M, Khateb A, Pegna AJ (2011) Attentional modulation of early ERP components in response to faces: evidence from the attentional blink paradigm. Brain Topography.
15. Serences J, Scolari M, Awh E (2009) Online response-selection and the attentional blink: multiple-processing channels. Visual Cognition 17: 531–554.
16. Awh E, Serences J, Laurey P, Dhaliwal H, van der Jagt T, et al. (2004) Evidence against a central bottleneck during the attentional blink: multiple channels for configural and featural processing. Cognitive Psychology 48: 95–126.
17. Olivers CN, Nieuwenhuis S (2006) The beneficial effects of additional task load, positive affect, and instruction on the attentional blink. Journal of Experimental Psychology: Human Perception and Performance 32: 364–379.
18. Anderson AK (2005) Affective influences on the attentional dynamics supporting awareness. Journal of Experimental Psychology: General 134: 258–281.
19. Keil A, Ihssen N (2004) Identification facilitation for emotionally arousing verbs during the attentional blink. Emotion 4: 23–35.
20. Anderson AK, Phelps EA (2001) Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature 411: 305–309.
21. Most SB, Chun MM, Johnson MR, Kiehl KA (2006) Attentional modulation of the amygdala varies with personality. Neuroimage 31: 934–944.
22. Most SB, Chun MM, Widders DM, Zald DH (2005) Attentional rubbernecking: cognitive control and personality in emotion-induced blindness. Psychonomic Bulletin & Review 12: 654–661.
23. Reinecke A, Rinck M, Becker ES (2008) How preferential is the preferential encoding of threatening stimuli? Working memory biases in specific anxiety and the Attentional Blink. Journal of Anxiety Disorders 22: 655–670.
24. Trippe RH, Hewig J, Heydel C, Hecht H, Miltner WH (2007) Attentional Blink to emotional and threatening pictures in spider phobics: electrophysiology and behavior. Brain Research 1148: 149–160.
25. de Jong PJ, Martens S (2007) Detection of emotional expressions in rapidly changing facial displays in high- and low-socially anxious women. Behavior Research and Therapy 45: 1285–1294.
26. De Martino B, Kalisch R, Rees G, Dolan RJ (2009) Enhanced processing of threat stimuli under limited attentional resources. Cerebral Cortex 19: 127–133.
27. Fox E, Russo R, Georgiou GA (2005) Anxiety modulates the degree of attentive resources required to process emotional faces. Cognitive Affective & Behavioral Neuroscience 5: 396–404.
28. Milders M, Sahraie A, Logan S, Donnellon N (2006) Awareness of faces is modulated by their emotional meaning. Emotion 6: 10–17.
29. Stein T, Zwickel J, Ritter J, Kitzmantel M, Schneider WX (2009) The effect of fearful faces on the attentional blink is task dependent. Psychonomic Bulletin & Review 16: 104–109.
30. Huang YM, Baddeley A, Young AW (2008) Attentional capture by emotional stimuli is modulated by semantic processing. Journal of Experimental Psychology: Human Perception and Performance 34: 328–339.
31. Maratos FA, Mogg K, Bradley BP (2008) Identification of angry faces in the attentional blink. Cognition & Emotion 22: 1340–1352.
32. Miyazawa S, Iwasaki S (2010) Do happy faces capture attention? The happiness superiority effect in attentional blink. Emotion 10: 712–716.
33. Arend I, Botella J (2002) Emotional stimuli reduce the attentional blink in sub-clinical anxious subjects. Psicothema 14: 209–214.
34. Koster EH, De Raedt R, Verschuere B, Tibboel H, De Jong PJ (2009) Negative information enhances the attentional blink in dysphoria. Depression and Anxiety 26: E16–E22.
35. Amir N, Taylor CT, Bomyea JA, Badour CL (2009) Temporal allocation of attention toward threat in individuals with posttraumatic stress symptoms. Journal of Anxiety Disorders 23: 1080–1085.
36. Ishihara S (1986) Ishihara's Tests for Colour Blindness – 24 Plates Edition. Tokyo: Kanehara & Co., Ltd.
37. Lundqvist D, Flykt A, Öhman A (1998) The Karolinska Directed Emotional Faces – KDEF. CD ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet.
38. Brainard DH (1997) The psychophysics toolbox. Spatial Vision 10: 433–436.
39. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision 10: 437–442.
40. Chun MM (1997) Types and tokens in visual processing: a double dissociation between the attentional blink and repetition blindness. Journal of Experimental Psychology: Human Perception and Performance 23: 738–755.
41. Chun MM (1997) Temporal binding errors are redistributed by the attentional blink. Perception & Psychophysics 59: 1191–1199.
42. Vuilleumier P (2005) How brains beware: neural mechanisms of emotional attention. Trends in Cognitive Sciences 9: 585–594.
43. Yiend J (2010) The effects of emotion on attention: a review of attentional processing of emotional information. Cognition & Emotion 24: 3–47.
44. Giesbrecht B, Di Lollo V (1998) Beyond the attentional blink: visual masking by object substitution. Journal of Experimental Psychology: Human Perception and Performance 24: 1454–1466.
45. Grandison TD, Ghirardelli TG, Egeth HE (1997) Beyond similarity: masking of the target is sufficient to cause the attentional blink. Perception & Psychophysics 59: 266–274.
46. Seiffert AE, Di Lollo V (1997) Low-level masking in the attentional blink. Journal of Experimental Psychology: Human Perception and Performance 23: 1061–1073.
47. Jannati A, Spalek TM, Di Lollo V (2011) Neither backward masking of T2 nor task switching is necessary for the attentional blink. Psychonomic Bulletin & Review 18: 70–75.
48. Taatgen NA, Juvina I, Schipper M, Borst JP, Martens S (2009) Too much control can hurt: a threaded cognition model of the attentional blink. Cognitive Psychology 59: 1–29.
49. Nieuwenstein MR, Potter MC (2006) Temporal limits of selection and memory encoding: a comparison of whole versus partial report in rapid serial visual presentation. Psychological Science 17: 471–475.
50. Ferlazzo F, Lucido S, Di Nocera F, Fagioli S, Sdoia S (2007) Switching between goals mediates the attentional blink effect. Experimental Psychology 54: 89–98.
51. Morton J, Johnson MH (1991) CONSPEC and CONLERN: a two-process theory of infant face recognition. Psychological Review 98: 164–181.
52. Farah MJ, Wilson KD, Drain M, Tanaka JN (1998) What is “special” about face perception? Psychological Review 105: 482–498.
53. Hershler O, Hochstein S (2005) At first sight: a high-level pop out effect for faces. Vision Research 45: 1707–1724.
54. Reddy L, Reddy L, Koch C (2006) Face identification in the near-absence of focal attention. Vision Research 46: 2336–2343.
55. Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, et al. (1998) Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience 18: 411–418.
56. Morris JS, Ohman A, Dolan RJ (1998) Conscious and unconscious emotional learning in the human amygdala. Nature 393: 467–470.
57. Weaver MD, Lauwereyns J (2011) Attentional capture and hold: the oculomotor correlates of the change detection advantage for faces. Psychological Research 75: 10–23.
58. Adolphs R (2010) What does the amygdala contribute to social cognition? Annals of the New York Academy of Sciences 1191: 42–61.