Recent evidence suggests that while reflectance information (including color) may be more diagnostic for familiar face recognition, shape may be more diagnostic for unfamiliar face identity processing. Moreover, event-related potential (ERP) findings suggest an earlier onset for neural processing of facial shape compared to reflectance. In the current study, we aimed to explore specifically the roles of facial shape and color in a familiarity decision task using pre-experimentally familiar (famous) and unfamiliar faces that were caricatured either in shape-only, color-only, or both (full; shape + color) by 15%, 30%, or 45%. We recorded accuracies, mean reaction times, and face-sensitive ERPs. Performance data revealed that shape caricaturing facilitated identity processing for unfamiliar faces only. In the ERP data, such effects of shape caricaturing emerged earlier than those of color caricaturing. Unsurprisingly, ERP effects were accentuated for larger levels of caricaturing. Overall, our findings corroborate the importance of shape for identity processing of unfamiliar faces and demonstrate an earlier onset of neural processing for facial shape compared to color.
Citation: Itz ML, Schweinberger SR, Kaufmann JM (2016) Effects of Caricaturing in Shape or Color on Familiarity Decisions for Familiar and Unfamiliar Faces. PLoS ONE 11(2): e0149796. https://doi.org/10.1371/journal.pone.0149796
Editor: Marina A. Pavlova, University of Tuebingen Medical School, GERMANY
Received: December 15, 2015; Accepted: February 4, 2016; Published: February 22, 2016
Copyright: © 2016 Itz et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported by Deutsche Forschungsgemeinschaft (DFG; http://www.dfg.de/) Grant KA 2997/2-1 to JMK, and Grant KA 2997/3-2 to JMK and SRS. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Accurate face recognition is not only relevant for social interactions on a personal level; it is also important in several occupational fields in which identifying persons by their faces is crucial, for instance for cashiers or passport controllers (e.g. [1, 2]). While familiar face identification occurs almost effortlessly for most of us, unfamiliar face identity processing is highly error prone. It has been shown repeatedly, in behavioral as well as neural findings, that we process unfamiliar and familiar faces in qualitatively different ways. Thus, a fundamental question is whether there are facial characteristics that facilitate recognition of familiar compared to unfamiliar faces. A related and important issue for applied research (e.g. [1, 2]) is whether face identification performance can be improved in persons with poor face recognition skills, for instance by enhancing particular facial characteristics in the image.
Facial characteristics can be apportioned to two domains: shape and reflectance. In the 2D image plane, we define shape as referring to the geometrical relationship between facial landmarks. This includes the form of individual features (e.g. eyes and nose); their second-order configuration, i.e. relative metric distances between different features; as well as the overall form of a face. By contrast, reflectance refers to properties stemming from the way the skin surface and the tissue underneath reflect light. This includes luminance, hue, and saturation of pixels, i.e. color-based properties. With morphing software, researchers are able to isolate and manipulate shape (by warping) and reflectance (by fading) selectively. For the purpose of the current study, we use the term shape to refer to the geometrical properties of a grid that is spatially defined by the positioning of certain landmarks, whereas we use the term color to refer to the RGB values of pixels within that grid. Whereas shape, and second-order spatial configurations in particular, have long been believed to be crucial for face identification, this view has been challenged more recently.
One intriguing possibility for examining the roles of shape and color properties is caricaturing, a method that enhances facial distinctiveness by exaggerating idiosyncratic characteristics of an individual face, either in terms of shape, color, or both, by morphing an individual face away from an average face [10–12]. Early studies using line drawings found higher best-likeness ratings for familiar spatially caricatured compared to veridical faces [13, 14], suggesting that mental representations of familiar faces correspond to shape caricatures, in line with the “superportrait hypothesis”. However, later studies using photorealistic stimuli found higher best-likeness ratings for slight spatial anti-caricatures, i.e. faces that had been morphed towards the average [16, 17], or for veridicals, compared to shape caricatures. Thus, mental representations of familiar faces most likely do not correspond to spatially caricatured versions.
Results from speeded recognition tasks also exhibited an inconsistent pattern of spatial caricature effects: Using line-drawn faces, Rhodes et al. found faster reaction times for spatial caricatures compared to veridicals. By contrast, using photographs of both celebrity and personally familiar faces, Kaufmann and Schweinberger found no differences in reaction times between spatial caricatures and veridicals. In an additional study, Lee and Perrett found higher accuracies for photographic shape caricatures compared to veridicals only at a very short stimulus presentation time, i.e. 33 ms. Lee and Perrett [17, 19] argued that exaggeration of idiosyncratic facial shape benefits familiar face recognition only when information is somehow compromised, either because color information is unavailable, as is the case for line drawings, or because processing is hampered by time constraints such as short presentation times.
Research on color caricaturing is comparatively sparse. Lee and Perrett found effects similar to those described for spatial caricatures above. Specifically, accuracy advantages for color caricatures compared to veridicals were likewise limited to short presentation times (67 and 100 ms), albeit slightly longer than those yielding shape caricature advantages (33 ms). Interestingly, however, Lee and Perrett found higher best-likeness ratings for photographs of famous faces caricatured in color compared to their veridical counterparts.
Considering these findings, color information may be more diagnostic than shape information for face recognition, at least for familiar faces. For unfamiliar faces, by contrast, evidence suggests a disproportionate role for shape information. In the study by Kaufmann and Schweinberger, spatial caricaturing modulated ERPs for unfamiliar but not familiar faces. In particular, this was the case for the N250, which is associated with the processing of facial identity, and for the N170, a component associated with structural encoding (e.g. [21, 22]). These findings led the authors to hypothesize that caricaturing in shape may facilitate encoding and/or learning of initially unfamiliar faces. Follow-up face learning studies support this hypothesis, finding clear learning advantages for spatial caricatures in accuracy and/or reaction time performance, as well as modulation of the face-sensitive ERP components N170, P200, N250, and LPC [11, 23, 24]. The N170 component shows a high degree of sensitivity to faces compared to other stimulus classes, is typically not affected by familiarity [26–28], and is often associated with face detection and structural encoding [21, 22]. Note also that the N170 is affected by facial shape [29, 30] and has been shown to be larger for shape caricatures [31, 32]. The subsequent P200 has been found to be smaller for shape caricatures [24, 31] and larger for anti-caricatures, and may thus be a marker of perceived shape typicality. By contrast, the N250, and the N250r in face priming studies, have been related to the transient activation of facial representations for recognition. Finally, a centro-parietal late positive component (LPC) reflects post-perceptual processing of persons rather than faces, and is typically larger both for familiar versus unfamiliar faces and for familiar versus unfamiliar names [34, 35]. Note that both N250 and LPC are also larger for caricatured compared to veridical faces.
Thus, while N250 and LPC are correlates of familiarity on the one hand, emerging evidence suggests that these components are also sensitive to the encoding or extraction of distinctive facial information. Moreover, spatial caricaturing effects on these ERP components were generally larger for larger levels of caricaturing (35% vs. 70%). In a recent face learning study, the strongest modulation by shape caricaturing was seen for the P200, whereas the most prominent effect of reflectance caricaturing was found for the later N250. These findings are broadly in line with the notion of earlier neural processing of facial shape than reflectance.
In summary, color properties may be more diagnostic for recognition of familiar faces, whereas shape may be more important for initial encoding of unfamiliar faces and for earlier stages of identity-based face processing. Our first aim here was to investigate behavioral effects (in reaction times and accuracies) and underlying neural correlates of caricaturing in shape only or color only in a familiarity decision task using pre-experimentally familiar (famous) and unfamiliar faces. We also included a condition with full (shape + color) caricaturing, to test for possible supra-additive effects of shape and color. Our second aim was to investigate the sensitivity of behavioral and ERP effects within caricature types (shape-only, color-only, or full) to the extent to which faces were caricatured (15% vs. 30% vs. 45%). Considering the previous findings mentioned above, we made the following predictions: For unfamiliar faces, we expected prominent performance benefits for shape-caricatured faces, in terms of faster reaction times and higher correct rejection rates. For familiar faces, by contrast, we expected little or no performance difference between veridicals and caricatures. In the ERP data, we expected effects of shape caricaturing to emerge earlier than effects of color caricaturing, and we were interested in whether these effects would be modulated by caricature level.
2. Material and Methods
2.1. Participants
Data were collected from 31 participants (8 males; 2 left-handed) aged 19–31 years (M = 22.8, SD = 3.4), who reported normal or corrected-to-normal vision. Participants received either course credit or financial compensation for their participation in the study. Data from two additional participants were excluded due to insufficient EEG data quality. All participants provided written informed consent. This study, including the consent procedure, was carried out in accordance with the Declaration of Helsinki and was approved by the Ethical Commission of the Faculty of Social and Behavioural Sciences at the Friedrich Schiller University of Jena (approval number FSV 09/01). Moreover, the individual depicted in Fig 1 gave written informed consent (as outlined in PLOS consent form) to publish this figure.
2.2. Stimuli
Experimental stimuli comprised full-color, frontal photographs of 96 famous and 96 unfamiliar faces. Famous faces were found on the internet and unfamiliar faces were taken from the Glasgow Unfamiliar Face Database and the Facial Recognition Technology (FERET) database [39, 40]. Prior to caricaturing, famous and unfamiliar face sets were matched with respect to mean luminance (mean RGB value of images), t(190) = 1.02, p = .311 (Mfamiliar = 125.57, SDfamiliar = 23.39; Munfamiliar = 128.79, SDunfamiliar = 20.46), and contrast (mean standard deviation of RGB values within images), t(190) = 0.24, p = .812 (Mfamiliar = 52.36, SDfamiliar = 13.20; Munfamiliar = 51.92, SDunfamiliar = 11.97). Using Adobe Photoshop™ (CS4, Version 11.0), we cropped faces such that only the face without the neck was visible. Using Psychomorph (http://users.aber.ac.uk/bpt/jpsychomorph/, Version 6), faces were then caricatured in either shape only, color only, or both (full; shape + color) by either 15%, 30%, or 45%. We used templates provided by Psychomorph and placed the 179 reference points of the template on standardized positions on each face. Caricatures were then generated such that differences in shape only, color only, or both (full; shape + color) between each individual face and a gender-matched average were exaggerated by 15%, 30%, or 45%. Note that for caricaturing of shape, color was held constant and thus unchanged; for caricaturing of color, shape was held constant and thus unchanged; and for full caricaturing, both dimensions were changed. Images were displayed using E-Prime™ (Version 2.0) on a black background (RGB: 0) in the center of a 16" monitor (screen resolution of 1280 x 1024 pixels). Using a chin rest, a viewing distance of 90 cm was held constant. Face stimulus size was approximately 10 cm by 7 cm, for an approximate visual angle of 6° by 4.5°.
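The caricaturing principle described above can be sketched as follows. This is an illustrative reconstruction, not Psychomorph's actual implementation; the function name and data layout are our assumptions.

```python
# Illustrative sketch (not Psychomorph's code): caricaturing pushes each value
# of an individual face away from a matched average by a given percentage.
# For shape caricatures the values are landmark coordinates; for color
# caricatures they are RGB pixel values within the landmark-defined grid.

def caricature(individual, average, level):
    """Exaggerate the difference from the average by `level`
    (0.15 for a 15% caricature; 0.0 returns the veridical face)."""
    return [avg + (1.0 + level) * (ind - avg)
            for ind, avg in zip(individual, average)]

# A landmark coordinate lying 10 units from the average, caricatured by 45%:
print(caricature([110.0], [100.0], 0.45))  # -> [114.5]
```

A 0% "caricature" leaves every value unchanged, which is why the veridical faces below can be treated as the 0% level of each condition.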
Please see Fig 1 for examples of stimuli.
2.3. Design & Procedure
The experiment consisted of 768 trials presented in randomized order with self-paced breaks after 96 trials (8 breaks in total). For each set of famous and unfamiliar faces (96 faces each) there were 32 faces in each face type condition (shape only, color only, and full [shape + color]). Each face type condition included four caricature levels (0%, 15%, 30%, and 45%). Note that 0% caricatures were actually veridical versions of faces. Assignment of faces to each face type condition was counterbalanced across participants.
The experimental trials consisted of a white fixation cross on a black screen for 500 ms, followed by a face on a black background (presented until keypress response or for a maximum of 1500 ms), then a blank black screen for 1200 ms. Participants were instructed to indicate via keypress as accurately and quickly as possible whether each presented face was familiar or unfamiliar to them. If responses were given too slowly or not given at all within the 1500 ms time-window, “Too slow!” (“Zu langsam!” in German) appeared on the 1200 ms blank screen that followed stimulus presentation. Hand assignment (left vs. right) for familiar vs. unfamiliar answers was counterbalanced across participants. At the beginning of the experiment there were 48 practice trials with feedback ensuring that participants had understood the task. Participants were encouraged to ask any remaining questions regarding the task after the practice trials. Practice trials were not included in the data analyses.
After the experiment, a short rating procedure followed, in order to ensure participants’ familiarity with the previously seen 96 famous identities. Here faces were presented in their veridical versions coupled with the respective name and semantic information, for instance “[Name of celebrity]; Actor and film producer (Name of Film).” Participants indicated on a 6-point Likert scale (1 = very unfamiliar; 6 = very familiar) their familiarity with each of the famous identities. For each participant, only those famous identities for which at least a “3” was given were included in the analyses below (see Section 2.5 for the average number of trials per condition).
Total duration of the experiment, including EEG preparation and washing of hair afterwards, was about one and a half to two hours.
2.4. Behavioral Data
Mean accuracies and mean reaction times for correct responses were recorded and analyzed. Trials for which participants responded within the first 200 ms post-stimulus onset were excluded from the analyses.
2.5. Electrophysiological Recordings and Analyses
The experiment took place in an electrically shielded room. Electroencephalographic (EEG) data were recorded with sintered Ag/AgCl electrodes attached to an EasyCap™ electrode cap (Herrsching-Breitbrunn, Germany), arranged according to the extended 10/20 system at scalp positions Fz, Cz, Pz, Iz, Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, FT9, FT10, P9, P10, PO9, PO10, F9, F10, F9′, F10′, TP9, and TP10. Cz served as the reference electrode, and AFz, a forehead electrode, served as ground. Horizontal electrooculogram (EOG) signals were recorded from electrodes (F9′ and F10′) at the outer canthi of both eyes; vertical EOG signals were recorded from electrodes placed above and below the left eye. Data were amplified using SynAmps amplifiers (NeuroScan Labs, Sterling, VA) and recorded with AC coupling (0.05–100 Hz, -6 dB attenuation, 12 dB/octave) at a sampling rate of 500 Hz. Impedances were kept below 10 kΩ.
Ocular artifacts were corrected offline automatically in BESA™ (Brain Electromagnetic Source Analysis, Version 5.1). Epochs from -200 ms pre-stimulus onset to 1100 ms post-stimulus onset were generated, with the interval between -200 and 0 ms serving as baseline. Trials contaminated with non-ocular artifacts (amplitude threshold of 120 μV, with a gradient criterion of 75 μV) were excluded from further analyses. Only trials with correct responses (familiar vs. unfamiliar) were analyzed. Averaged ERPs were then low-pass filtered at 20 Hz (zero phase shift; 12 dB/octave) and recalculated to average reference. Vertical and horizontal EOG channels were excluded.
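The two rejection criteria can be sketched as follows. This is a hedged reconstruction of how such thresholds are commonly applied; BESA's exact definitions may differ (e.g. amplitude may be tested per value rather than peak-to-peak).

```python
# Hedged sketch of trial rejection: a trial is discarded when any channel's
# peak-to-peak amplitude exceeds 120 uV or any sample-to-sample step (gradient)
# exceeds 75 uV. Function and variable names are illustrative.

def reject_trial(epoch, amp_thresh=120.0, grad_thresh=75.0):
    """epoch: list of channels, each a list of voltage samples in uV.
    Returns True if the trial should be excluded."""
    for channel in epoch:
        if max(channel) - min(channel) > amp_thresh:  # amplitude criterion
            return True
        if any(abs(b - a) > grad_thresh
               for a, b in zip(channel, channel[1:])):  # gradient criterion
            return True
    return False

print(reject_trial([[0.0, 130.0, 0.0]]))  # -> True (range 130 uV > 120 uV)
print(reject_trial([[0.0, 40.0, 60.0]]))  # -> False
```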
ERPs were calculated relative to the 200 ms prestimulus baseline using mean amplitudes for the occipital P100 (95–135 ms), the occipitotemporal N170 (150–190 ms), P200 (210–250 ms), N250 (250–350 ms), and for a central late positive component, LPC (500–800 ms). Time intervals for P100, N170, and P200 were chosen based on distinct peaks identified in the grand mean averages across all conditions (115, 171, and 229 ms, respectively). Time-windows for N250 and LPC were chosen based on visual inspection of the means. P100 was quantified at O1/O2; N170 and P200 were quantified at PO9/PO10, P9/P10, and P7/P8; N250 was quantified at O1/O2, PO9/PO10, P9/P10, and P7/P8; and LPC was quantified at C3, Cz, and C4. In the order of caricature level (0% vs. 15% vs. 30% vs. 45%), the average numbers of trials used in the analyses were the following: for SC 25.0, 24.5, 24.3, and 24.3 (familiar faces); and 30.0, 29.0, 30.1, and 29.8 (unfamiliar faces); for CC 25.1, 25.0, 25.3, and 25.4 (familiar faces) and 29.5, 29.6, 29.5, and 29.5 (unfamiliar faces); and for FC 24.4, 24.7, 24.6, and 24.6 (familiar faces) and 29.7, 29.9, 30.1, and 30.1 (unfamiliar faces).
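Extracting a mean-amplitude measure relative to the prestimulus baseline, as described above, can be sketched as follows; variable names and the sampling grid are illustrative, not taken from the study's pipeline.

```python
# Hedged sketch: baseline-correct a single-channel epoch against the -200..0 ms
# interval, then average the voltage within a component time-window,
# e.g. (150, 190) for the N170.

def mean_amplitude(samples, times, window, baseline=(-200, 0)):
    """samples: single-channel voltages (uV); times: sample times (ms).
    Returns the mean amplitude in `window` relative to the baseline."""
    base = [v for v, t in zip(samples, times) if baseline[0] <= t < baseline[1]]
    offset = sum(base) / len(base)
    win = [v - offset for v, t in zip(samples, times)
           if window[0] <= t <= window[1]]
    return sum(win) / len(win)

# Toy epoch sampled every 100 ms (far coarser than the actual 500 Hz data):
print(mean_amplitude([1.0, 1.0, 2.0, 4.0, 6.0],
                     [-200, -100, 0, 100, 200], (100, 200)))  # -> 4.0
```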
We used analyses of variance (ANOVA, i.e. parametric testing) to analyze our results despite violations of normality in some cases. While non-normality in parametric testing can lead to Type I errors (i.e. false positive results), ANOVA has been shown to be robust against violations of normality. A larger concern for Type I errors in within-subjects ANOVA is heterogeneity of covariances. Thus, where necessary, epsilon corrections for heterogeneity of covariances were performed throughout according to Huynh and Feldt. Our analysis approach is well in line with current practice and recommendations in the field of EEG research.
Note that for pairwise comparisons (simple contrasts) of face type (i.e. SC vs. CC, SC vs. FC, and CC vs. FC), the significance level was Bonferroni-corrected to α = .017. Note also that polynomial trend analyses were used to assess effects of caricature level.
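The corrected threshold follows directly from dividing the nominal α by the number of contrasts:

```python
# Bonferroni correction for the three pairwise face type contrasts.
alpha = 0.05
contrasts = 3  # SC vs. CC, SC vs. FC, CC vs. FC
print(round(alpha / contrasts, 3))  # -> 0.017
```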
3. Results
3.1. Behavioral Data
For accuracies and mean reaction times, we performed 2 x 3 x 4 repeated-measures ANOVAs with the factors familiarity (familiar vs. unfamiliar), face type (shape-only caricatures [SC] vs. color-only caricatures [CC] vs. full [shape + color] caricatures [FC]), and caricature level (0% vs. 15% vs. 30% vs. 45%). For mean reaction times, only correct responses longer than 200 ms post-stimulus onset were analyzed. For signal detection parameters (sensitivity d′ and criterion C), we performed ANOVAs analogous to those for accuracies and RTs, but without the factor of familiarity, which is inherent in these measures.
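A sketch of the standard signal detection formulas presumably underlying d′ and C here, treating "familiar" responses to familiar faces as hits and "familiar" responses to unfamiliar faces as false alarms (this mapping is our assumption):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate, fa_rate):
    """Return (d', C) from hit and false-alarm rates. Rates of exactly
    0 or 1 would need a correction (e.g. log-linear) before use."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = dprime_and_c(0.9, 0.2)  # d ~ 2.12; c ~ -0.22 (slightly liberal)
```

A more positive C indicates a more conservative bias, i.e. a reluctance to respond "familiar".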
3.1.1. Accuracies.
Accuracies were highest for 30% and 45% unfamiliar shape caricatures. The ANOVA yielded a main effect of familiarity, F(1,30) = 25.55, p < .001, ηp2 = .460, which interacted with face type and caricature level, F(6,180) = 3.08, p = .014, ηp2 = .093, εHF = .759. Separate ANOVAs for familiar and unfamiliar faces were thus conducted. For familiar faces there was just a trend for the interaction of face type by caricature level, F(6,180) = 2.07, p = .063, ηp2 = .065, εHF = .941, whereas for unfamiliar faces this interaction was significant, F(6,180) = 2.34, p = .047, ηp2 = .072, εHF = .796. Separate analyses for each unfamiliar face type revealed a main effect of caricature level for shape caricatures only, F(3,90) = 3.04, p = .039, ηp2 = .092, εHF = .898, due to a cubic trend, F(1,30) = 6.78, p = .014, ηp2 = .184 (Table 1).
3.1.2. Mean reaction times.
Mean reaction times were fastest for unfamiliar shape caricatures. The ANOVA revealed main effects of familiarity, F(1,30) = 4.28, p = .047, ηp2 = .125, and face type, F(2,60) = 4.07, p = .022. ηp2 = .119, and an interaction between those two factors, F(2,60) = 5.84, p = .005, ηp2 = .163. Separate analyses for both familiar and unfamiliar faces yielded a main effect of face type for unfamiliar faces only, F(2,60) = 9.97, p < .001, ηp2 = .249. Simple contrasts revealed faster reaction times for unfamiliar SC compared to both CC, F(1,30) = 21.74, p < .001, ηp2 = .420, and FC, F(1,30) = 9.73, p = .004, ηp2 = .245, with no difference between CC and FC, F(1,30) = 1.18, p = .287, ηp2 = .038 (Table 1).
Note that in the overall ANOVA there was also a main effect of caricature level, F(3,90) = 2.88, p = .041, ηp2 = .087, due to a quadratic trend, F(1,30) = 4.60, p = .040, ηp2 = .133.
3.1.3. Signal detection measurements (d’ and C).
Participants responded somewhat more conservatively to full (shape + color) caricatures.
Analyses for signal detection measurements yielded no significant main effects or trends for d′, but revealed a main effect of face type for criterion C, F(2,60) = 3.98, p = .030, ηp2 = .117, εHF = .860. Simple contrasts yielded higher criterion C for FC compared to CC, F(1,30) = 14.27, p = .001, ηp2 = .322, with no difference between SC and FC, F(1,30) = 0.55, p = .464, ηp2 = .018, and only a numeric difference between SC and CC, F(1,30) = 3.04, p = .091, ηp2 = .092 (see Table 1). Moreover, there was a trend for the interaction of face type by caricature level for criterion C, F(6,180) = 2.25, p = .066, ηp2 = .070, εHF = .681.
3.2. Electrophysiological Results
For the ERP data, we performed analyses analogous to those for the behavioral data. For P100, N170, P200, and N250, the additional factors of electrode site and hemisphere were included. For LPC, the additional factor of laterality was included. For readability and stringency, we report only those results pertaining to the experimental factors of familiarity, face type, and caricature level. Thus, main effects and interactions involving solely site and/or hemisphere are not reported.
For P100, a main effect of face type was found, F(2,60) = 5.15, p = .009, ηp2 = .146. Simple contrasts revealed slightly larger amplitudes for SC (mean = 4.74 μV) than FC (mean = 4.47 μV), F(1,30) = 9.96, p = .004, ηp2 = .249, and numerically larger amplitudes for SC than CC (mean = 4.55 μV), F(1,30) = 4.45, p = .043, ηp2 = .129 (see Fig 2). There was no difference between CC and FC, F(1,30) = 0.84, p = .368, ηp2 = .027. There were also trends for main effects of familiarity, F(1,30) = 3.41, p = .075, ηp2 = .102, and caricature level, F(3,90) = 2.21, p = .092, ηp2 = .069, and for the interaction between those two factors, F(3,90) = 2.48, p = .077, ηp2 = .076, εHF = .845.
Gray-shaded areas denote time-windows of interest for N170 (150–190 ms), P200 (210–250 ms), and N250 (250–350 ms).
Gray-shaded areas denote time-windows of interest for N170 (150–190 ms), P200 (210–250 ms), and N250 (250–350 ms). Note decreasing P200 with increasing caricature level for shape-only and full (shape + color) caricatures.
The analysis for N170 yielded a main effect of familiarity, F(1,30) = 18.28, p < .001, ηp2 = .379, due to larger amplitudes for familiar compared to unfamiliar faces. The main effect of caricature level, F(3,90) = 5.33, p = .004, ηp2 = .151, εHF = .817, interacted with site, F(6,180) = 3.63, p = .007, ηp2 = .108, εHF = .682. Also, we found an interaction between site and face type, F(4,120) = 3.24, p = .024, ηp2 = .098, εHF = .772.
Separate ANOVAs for each site were performed to disentangle the aforementioned interactions. No main effects of face type were found, Fs(2,60) < 1.44, ps > .099, ηp2s < .075. Main effects of caricature level were found for sites P9/P10 and PO9/PO10, Fs(3,90) > 5.83, ps < .002, ηp2s > .162 (εHF = .836 for PO9/PO10), due to linear trends, Fs(1,30) > 15.37, ps < .001, ηp2s > .354 (see Figs 3 & 4). No main effect of caricature level was found for P7/P8, F(3,90) = 0.72, p = .530, ηp2 = .023, εHF = .906.
P200 was smaller for familiar compared to unfamiliar faces overall. In terms of face type, P200 was smallest for shape caricatures, although this was restricted to electrode P8 (see Fig 2). Moreover, effects of caricature level were strongest for unfamiliar shape caricatures (see Fig 4).
The ANOVA for P200 yielded main effects of familiarity, F(1,30) = 29.96, p < .001, ηp2 = .500, and caricature level, F(3,90) = 9.92, p < .001, ηp2 = .249. Familiarity was further qualified by interactions, site x familiarity, F(2,60) = 27.33, p < .001, ηp2 = .477, εHF = .797, and hemisphere x familiarity, F(1,30) = 4.84, p = .036, ηp2 = .139. Caricature level was further qualified by the interaction, site x caricature level, F(6,180) = 3.03, p = .023, ηp2 = .092, εHF = .620. These aforementioned effects were then qualified further by the four-way interaction of site x hemisphere x familiarity x caricature level, F(6,180) = 2.37, p = .037, ηp2 = .073, εHF = .091: We thus conducted analyses for each separate electrode to disentangle this four-way interaction. Main effects of familiarity were found for all sites except for P7, Fs(1,30) > 5.21, ps < .029, ηp2s > .147, due to smaller P200 for familiar compared to unfamiliar faces. Moreover, at all sites except for P8 there were main effects of caricature level, Fs(3,90) > 4.35, ps < .010, ηp2s > .126 (εHF = .825 for P7), due to linear trends, Fs(1,30) > 10.26, ps < .004, ηp2s > .254. Finally, at P9 there was a trend for the interaction of familiarity by caricature level, F(3,90) = 2.49, p = .065, ηp2 = .077.
With respect to face type, the three-way interaction of site x hemisphere x face type, F(4,120) = 3.21, p = .015, ηp2 = .097, was significant. Analyses for each separate electrode yielded a main effect of face type at P8 only, F(2,60) = 5.55, p = .006, ηp2 = .156, due to smallest P200 for shape caricatures (see Fig 2): Simple contrasts yielded smaller P200 for SC compared to CC, F(1,30) = 12.29, p = .001, ηp2 = .291, with no differences for FC versus CC, F(1,30) = 3.30, p = .079, ηp2 = .099, and SC versus FC, F(1,30) = 2.31, p = .139, ηp2 = .072.
Moreover, the four-way interaction, hemisphere x familiarity x face type x caricature level, F(6,180) = 2.53, p = .023, ηp2 = .078, was significant: Separate ANOVAs for each familiarity level over each hemisphere were thus performed. For familiar faces over the left hemisphere (LH), there was a trend for the main effect of caricature level, F(3,90) = 2.47, p = .067, ηp2 = .076, and a significant interaction between face type and caricature level, F(6,180) = 2.60, p = .019, ηp2 = .080. Separate analyses were thus conducted for each familiar face type over the LH. For familiar faces over the LH, we found the following: A main effect of caricature level for SC, F(3,90) = 3.32, p = .023, ηp2 = .100, due to a quadratic trend, F(1,30) = 5.64, p = .024, ηp2 = .158 (Fig 3); a trend for the main effect of caricature level for CC, F(3,90) = 2.42, p = .072, ηp2 = .075; and no effect of caricature level for FC, F(3,90) = 1.96, p = .127, ηp2 = .061. For familiar faces over the right hemisphere (RH), there was a main effect of caricature level, F(3,90) = 2.75, p = .047, ηp2 = .084, due to a linear trend, F(1,30) = 8.98, p = .005, ηp2 = .230, and no effects of face type. Over the LH for unfamiliar faces there was a main effect of caricature level, F(3,90) = 8.78, p < .001, ηp2 = .226, εHF = .856, which interacted with face type, F(6,180) = 3.25, p = .005, ηp2 = .098: Main effects of caricature level were found for unfamiliar SC, F(3,90) = 12.74, p < .001, ηp2 = .298, and FC, F(3,90) = 3.35, p = .022, ηp2 = .100, due to linear trends (F[1,30] = 38.48, p < .001, ηp2 = .562 for SC, and F[1,30] = 12.56, p = .001, ηp2 = .295 for FC; Fig 4). Finally, over the RH for unfamiliar faces there was a main effect of caricature level, F(3,90) = 3.43, p = .020, ηp2 = .103, due to a linear trend, F(1,30) = 10.48, p = .003, ηp2 = .259, with no effects of face type.
Note that in the overall ANOVA there were also trends for the interactions, site x face type, F(4,120) = 2.11, p = .096, ηp2 = .066, εHF = .837, and face type x caricature level, F(6,180) = 1.98, p = .071, ηp2 = .062.
For the N250 time-window, amplitudes were larger for familiar compared to unfamiliar faces overall. Moreover, effects of caricature level were strongest for shape and color caricatures over the left hemisphere, and for full (shape + color) caricatures over the right hemisphere (see Figs 3 & 4).
The ANOVA yielded a prominent main effect of familiarity, F(1,30) = 73.35, p < .001, ηp2 = .710, which was further qualified by interactions, site x familiarity, F(3,90) = 38.87, p < .001, ηp2 = .564, εHF = .553, and site x hemisphere x familiarity, F(3,90) = 5.39, p = .006, ηp2 = .152, εHF = .695. Separate ANOVAs for each electrode yielded main effects of familiarity at electrodes, P7, P8, P9, P10, PO9, and PO10, Fs(1,30) > 15.08, ps < .001, ηp2s > .334, due to larger N250 for familiar compared to unfamiliar faces. At both O1 and O2, the main effects of familiarity were just trends (F[1,30] = 3.66, p = .065, ηp2 = .109 for O1, and F[1,30] = 3.21, p = .083, ηp2 = .097 for O2).
There was also a main effect of caricature level, F(3,90) = 10.74, p < .001, ηp2 = .264, which was further qualified by the interactions site x caricature level, F(9,270) = 2.75, p = .008, ηp2 = .084, εHF = .839, and hemisphere x face type x caricature level, F(6,180) = 2.57, p = .021, ηp2 = .079. The two-way interaction between site and caricature level was followed up with separate analyses for each site: All sites yielded main effects of caricature level, Fs(3,90) > 4.84, ps < .005, ηp2s > .138, due to linear trends, Fs(1,30) > 11.05, ps < .003, ηp2s > .268 (Figs 3 & 4). Note that the quadratic trend was also significant for site O1/O2, F(1,30) = 5.01, p = .033, ηp2 = .143.
To disentangle the latter three-way interaction of hemisphere x face type x caricature level, separate analyses were performed for each face type over both hemispheres. Over the left hemisphere, there were main effects of caricature level for SC, F(3,90) = 10.81, p < .001, ηp2 = .265, and CC, F(3,90) = 3.42, p = .021, ηp2 = .102, due to linear trends (F[1,30] = 24.06, p < .001, ηp2 = .445 for SC, and F[1,30] = 11.01, p = .002, ηp2 = .268 for CC; Figs 3 & 4). Note that for SC, the quadratic trend was also significant, F(1,30) = 8.39, p = .007, ηp2 = .219, and the cubic trend approached significance, F(1,30) = 3.26, p = .081, ηp2 = .098. Moreover, over the left hemisphere, there was a trend for the main effect of caricature level for FC, F(3,90) = 2.29, p = .091, ηp2 = .071, εHF = .897. Over the right hemisphere, there was a significant main effect of caricature level for FC only, F(3,90) = 5.53, p = .002, ηp2 = .156, due to a linear trend, F(1,30) = 11.07, p = .002, ηp2 = .270 (Figs 3 & 4). Over the right hemisphere, there were also trends for main effects of caricature level for SC, F(3,90) = 2.36, p = .077, ηp2 = .073, and CC, F(3,90) = 2.53, p = .074, ηp2 = .078, εHF = .839.
LPC amplitudes were larger for familiar compared to unfamiliar faces overall. Moreover, amplitudes increased with caricature level for faces containing shape caricaturing (i.e. SC and FC; see Fig 5).
The gray-shaded area depicts the time-window of interest (500–800 ms).
The ANOVA for LPC revealed a prominent main effect of familiarity, F(1,30) = 59.11, p < .001, ηp2 = .663, due to stronger positivity for familiar compared to unfamiliar faces. Main effects of face type, F(2,60) = 3.43, p = .039, ηp2 = .103, and caricature level, F(3,90) = 10.52, p < .001, ηp2 = .260, were also found and interacted with one another, F(6,180) = 2.43, p = .028, ηp2 = .075.
Separate analyses for each face type were performed to explore the interaction between face type and caricature level. Main effects of caricature level were found for shape, F(3,90) = 7.89, p < .001, ηp2 = .208, and full (shape + color) caricatures, F(3,90) = 6.24, p = .001, ηp2 = .172, due to linear trends (F[1,30] = 21.75, p < .001, ηp2 = .420 for SC, and F[1,30] = 14.56, p = .001, ηp2 = .327 for FC; see Fig 5). Note that for SC, the quadratic trend also reached significance, F(1,30) = 4.27, p = .047, ηp2 = .125. There was no effect of caricature level for CC, F(3,90) = 0.52, p = .667, ηp2 = .017.
This is the first study to examine effects of selective caricaturing in either shape or color on recognition performance and neural correlates for pre-experimentally familiar and unfamiliar faces. Importantly, our use of pre-experimentally familiar faces allows inference about the recognition of real familiar (as opposed to experimentally familiarized) faces.
Despite earlier claims that caricaturing facilitates the recognition of known faces, we found no performance benefits of caricaturing for familiar faces. This finding is in line with more recent findings on pre-experimentally familiar shape caricatures, a result which in the current study extends to familiar faces caricatured in color. Lee and Perrett argued that caricatures are advantageous for familiar face recognition when “processing is compromised in some way” (p. 749), e.g. when presentation time is very brief [17, 19]. In the current experiment, stimulus presentation duration was comparatively long, giving participants more time to observe the stimuli. Moreover, to the extent that familiar (but not unfamiliar) face representations are robust against pictorial characteristics and manipulations (see e.g. [3, 48, 49]), small or absent advantages of caricaturing pre-experimentally familiar faces can be expected.
In contrast, we found that performance for unfamiliar faces benefited from shape caricaturing. Fastest reaction times for unfamiliar shape caricatures complement previous reports, and the present tendency for highest accuracies at higher levels of unfamiliar shape caricatures (Table 1) is also broadly in line with previous findings. Overall, the present behavioral findings support the conclusion that shape caricaturing facilitates identity-based processing of unfamiliar, but not familiar, faces.
In the following, we will first discuss ERP effects of caricaturing in some detail for each analyzed component before turning to ERP differences between familiar and unfamiliar faces. First, an unexpected finding was the slightly larger P100 for shape caricatures compared to the other face types (see Fig 2). The P100 is known to be highly sensitive to low-level pictorial characteristics, and to contrast in particular. From that perspective, one might have expected—if anything—a slightly larger P100 for color caricatures (which have slightly increased contrasts; see Fig 2). We are currently unable to provide a convincing explanation for this small amplitude effect in the P100. It should be noted, however, that shape caricaturing did not elicit a P100 modulation in an earlier study. In the absence of a replication of this effect, we therefore refrain from further speculation.
The present finding of slightly but systematically larger N170 for larger levels of caricaturing is in line with a previous finding, and was found here to be independent of the type of caricature. This finding could be interpreted in terms of enhanced structural encoding of caricatured faces, particularly when considering that the N170 has been specifically related to structural face encoding. Note, however, that effects of caricaturing on the N170 were smaller and less consistent in previous studies (e.g. [11, 24, 31]), particularly when compared to the large and consistent caricaturing effects in the subsequent P200 component.
Consistent with those previous findings, here we found prominent effects of shape caricaturing for P200, which were even stronger for higher caricature levels. The P200 has been associated with facial typicality (e.g. [51, 52]), especially in terms of deviation from the norm or prototype of a “face space” model [24, 53, 54]. Importantly, our finding of smaller P200 for shape caricatures was strongest for unfamiliar faces, further complementing findings on the importance of distinctive shape for identity-based processing of unfamiliar faces.
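In face-space terms, the caricaturing manipulation can be understood as exaggerating each face's deviation from a norm (average) face by a fixed percentage. A minimal sketch for shape, where a face is represented by landmark coordinates (the coordinates and the function name here are purely illustrative, not the stimulus-construction pipeline used in the study):

```python
import numpy as np

def caricature_shape(face, norm, level):
    """Exaggerate a face's landmark deviations from the norm face.

    face, norm : (n_landmarks, 2) arrays of x/y coordinates
    level      : caricature strength, e.g. 0.15 for a 15% caricature
    """
    return norm + (1.0 + level) * (face - norm)

# Made-up landmark coordinates for illustration.
norm = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
face = np.array([[0.1, -0.1], [1.2, 0.0], [0.5, 1.3]])

for level in (0.15, 0.30, 0.45):
    car = caricature_shape(face, norm, level)
    # Each deviation from the norm grows by exactly (1 + level).
    print(level, np.linalg.norm(car - norm) / np.linalg.norm(face - norm))
```

A color caricature applies the same exaggeration to pixel color values rather than landmark positions, leaving shape untouched; a full caricature combines both.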
The present caricaturing effects on the N250 are also well in line with a number of previous findings. First, larger N250 for larger levels of shape caricaturing complements a previous finding, and extends it to faces containing caricaturing of color. The N250 is typically associated with the transient activation of stored mental face representations in memory and priming experiments [28, 55, 56], but has also been associated with the processing of particularly attractive, distinctive, or other-race faces [54, 57]. While the present effects of familiarity on the N250 (see below) replicate the usual finding of larger N250 amplitudes for familiar as compared to similar unfamiliar faces of the same category, it is important to keep in mind that different categories of faces can also affect ERPs in the N250 time-range.
Finally, LPC was larger for larger levels of caricaturing for faces containing shape caricaturing (i.e. SC and FC). Note that this is broadly in line with previous reports on larger LPC for caricatured stimuli [11, 24, 31, 32] and could reflect more efficient semantic processing of those faces compared to veridicals.
In terms of ERP effects of familiarity, the current findings of larger N250 and LPC for familiar compared to unfamiliar faces are in line with several previous reports (e.g. [11, 20, 24, 28, 34]). Familiarity effects in terms of more negative earlier occipitotemporal components (N170 and P200) appear to be less consistent, but have also been occasionally reported for famous compared to unfamiliar faces. For instance, a larger N170 for famous compared to unfamiliar faces was found in the present study and another recent study. Given the sensitivity of the N170 to physical stimulus attributes, an unambiguous interpretation of those effects as reflecting familiarity would require a balanced design in which the same faces are familiar for one group of participants but unfamiliar for another group, and vice versa. Although this caveat may be overly conservative, given that both studies ensured equivalent luminance and contrast and used relatively large stimulus sets, it should be kept in mind when interpreting early ERP effects of familiarity.
Interestingly, we also found smaller P200 for familiar compared to unfamiliar faces. A recent study comparing effects of attractiveness on face learning found smaller P200 for attractive compared to unattractive faces. The current finding of smaller P200 for familiar faces may thus be attributed to potentially higher attractiveness of our familiar (i.e., here, famous) facial stimuli. Alternatively, the smaller P200 for familiar faces could reflect an early onset of the well-known N250 familiarity effect, also found in this study, which overlaps in time with the present P200. Further research is needed to clarify this issue.
One last point worth mentioning is that we did not find any supra-additive effects of caricaturing in shape and color. That is, effects of full (shape + color) caricaturing were never largest. This potentially contrasts with a previous study. However, it appears possible that these differences are related either to different procedures of stimulus manipulation or to the use of different EEG signals as dependent variables. Specifically, the stimuli in that study comprised morphs in which identity information had been changed by means of cross-identity morphing, rather than enhanced, as is the case for the caricatured stimuli in the current study. Moreover, Dzhelyova and Rossion's study involved an analysis of fast responses to periodic stimulation, whereas we analyzed ERPs to single presentations of faces.
In conclusion, our results complement findings on robust identification of highly familiar faces despite image manipulations. In contrast, and importantly, the current findings highlight the importance of idiosyncratic facial shape for identity-based processing of unfamiliar faces. This finding is particularly interesting for applied areas such as eye-witness testimony or occupational fields in which unfamiliar faces need to be identified (e.g. security-related professions such as passport control). In line with this, McIntyre et al. showed improved unfamiliar face matching performance for moderate levels of caricaturing (30%). Moreover, caricaturing may also be useful for training programs aimed at improving face recognition abilities, both for persons in the normal population with poor face recognition skills and for clinical patients with different varieties of face recognition impairments (see e.g. [60, 61, 62]). Recent work by Irons et al. showed promising results for similar applications. Moreover, the current finding of earlier ERP modulation by shape than by color caricaturing complements previous reports [29, 31]. Overall, the current findings indicate robust recognition of pre-experimentally familiar faces and highlight the importance of distinctive shape for identity-based processing of unfamiliar faces.
S1 Dataset. Spreadsheets including all datasets for behavioral and EEG results.
We gratefully acknowledge our technical assistant Bettina Kamchen as well as our student research assistants Philipp Alt, Carolin Blaser, Albert End, and Jodi Watt for their contributions in setting up the study and in data collection.
Conceived and designed the experiments: MLI SRS JMK. Performed the experiments: MLI. Analyzed the data: MLI. Contributed reagents/materials/analysis tools: SRS JMK. Wrote the paper: MLI SRS JMK.
- 1. Kemp R, Towell N, Pike G. When seeing should not be believing: Photographs, credit cards and fraud. Applied Cognitive Psychology. 1997;11(3):211–22.
- 2. White D, Kemp RI, Jenkins R, Matheson M, Burton AM. Passport officers' errors in face matching. PLoS ONE. 2014;9(8).
- 3. Bruce V, Young A. Understanding face recognition. British Journal of Psychology. 1986;77:305–27. pmid:3756376
- 4. Bruce V, Henderson Z, Greenwood K, Hancock PJB, Burton AM, Miller P. Verification of face identities from images captured on video. Journal of Experimental Psychology: Applied. 1999;5(4):339–60.
- 5. Johnston RA, Edmonds AJ. Familiar and unfamiliar face recognition: A review. Memory. 2009;17(5):577–96. pmid:19548173
- 6. Russell R, Biederman I, Nederhouser M, Sinha P. The utility of surface reflectance for the recognition of upright and inverted faces. Vision Research. 2007;47(2):157–65. pmid:17174375
- 7. Beale JM, Keil FC. Categorical effects in the perception of faces. Cognition. 1995;57(3):217–39. pmid:8556842
- 8. Richler JJ, Mack ML, Gauthier I, Palmeri TJ. Holistic processing of faces happens at a glance. Vision Research. 2009;49(23):2856–61. pmid:19716376
- 9. Burton AM, Schweinberger SR, Jenkins R, Kaufmann JM. Arguments against a configural processing account of familiar face recognition. Perspectives on Psychological Science. 2015;10(4):482–96. pmid:26177949
- 10. Perkins D. A definition of caricature, and caricature and recognition. Studies in the Anthropology of Visual Communication. 1975;2(1):1–24.
- 11. Schulz C, Kaufmann JM, Kurt A, Schweinberger SR. Faces forming traces: Neurophysiological correlates of learning naturally distinctive and caricatured faces. NeuroImage. 2012;63(1):491–500. pmid:22796993
- 12. Stevenage SV. Can caricatures really produce distinctiveness effects? British Journal of Psychology. 1995;86:127–46.
- 13. Benson PJ, Perrett DI. Visual processing of facial distinctiveness. Perception. 1994;23(1):75–93. pmid:7936978
- 14. Rhodes G, Brennan S, Carey S. Identification and ratings of caricatures: Implications for mental representations of faces. Cognitive Psychology. 1987;19(4):473–97. pmid:3677584
- 15. Rhodes G. Superportraits: Caricatures and recognition. Hove: The Psychology Press; 1996.
- 16. Allen H, Brady N, Tredoux C. Perception of 'best likeness' to highly familiar faces of self and friend. Perception. 2009;38(12):1821–30. pmid:20192131
- 17. Lee KJ, Perrett DI. Manipulation of colour and shape information and its consequence upon recognition and best-likeness judgments. Perception. 2000;29(11):1291–312. pmid:11219986
- 18. Kaufmann JM, Schweinberger SR. Distortions in the brain? ERP effects of caricaturing familiar and unfamiliar faces. Brain Research. 2008;1228:177–88. pmid:18634766
- 19. Lee KJ, Perrett D. Presentation-time measures of the effects of manipulations in colour space on discrimination of famous faces. Perception. 1997;26(6):733–52. pmid:9474343
- 20. Schweinberger SR, Neumann MF. Repetition effects in human ERPs to faces. Cortex. In press.
- 21. Eimer M. The face-sensitive N170 component of the event-related brain potential. In: Calder AJ, Rhodes G, Johnson M, Haxby J, editors. Oxford Handbook of Face Perception. Oxford: Oxford University Press; 2011. p. 329–44.
- 22. Schweinberger SR. Neurophysiological correlates of face perception. In: Calder AJ, Rhodes G, Johnson M, Haxby J, editors. Oxford Handbook of Face Perception. Oxford: Oxford University Press; 2011. p. 345–66.
- 23. Kaufmann JM, Schulz C, Schweinberger SR. High and low performers differ in the use of shape information for face recognition. Neuropsychologia. 2013;51(7):1310–9. pmid:23562837
- 24. Schulz C, Kaufmann JM, Walther L, Schweinberger SR. Effects of anticaricaturing vs. caricaturing and their neural correlates elucidate a role of shape for face learning. Neuropsychologia. 2012;50(10):2426–34. pmid:22750120
- 25. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience. 1996;8(6):551–65. pmid:20740065
- 26. Bentin S, Deouell LY. Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology. 2000;17(1–3):35–54.
- 27. Gosling A, Eimer M. An event-related brain potential study of explicit face recognition. Neuropsychologia. 2011;49(9):2736–45. pmid:21679721
- 28. Tanaka JW, Curran T, Porterfield AL, Collins D. Activation of preexisting and acquired face representations: The N250 event-related potential as an index of face familiarity. Journal of Cognitive Neuroscience. 2006;18(9):1488–97. pmid:16989550
- 29. Caharel S, Jiang F, Blanz V, Rossion B. Recognizing an individual face: 3D shape contributes earlier than 2D surface reflectance information. NeuroImage. 2009;47(4):1809–18. pmid:19497375
- 30. Vakli P, Nemeth K, Zimmer M, Schweinberger SR, Kovacs G. Altering second-order configurations reduces the adaptation effects on early face-sensitive event-related potential components. Frontiers in Human Neuroscience. 2014;8.
- 31. Itz ML, Schweinberger SR, Schulz C, Kaufmann JM. Neural correlates of facilitations in face learning by selective caricaturing of facial shape or reflectance. NeuroImage. 2014;102:736–47. pmid:25173417
- 32. Kaufmann JM, Schweinberger SR. The faces you remember: Caricaturing shape facilitates brain processes reflecting the acquisition of new face representations. Biological Psychology. 2012;89(1):21–33. pmid:21925235
- 33. Kaufmann JM, Schweinberger SR, Burton AM. N250 ERP correlates of the acquisition of face representations across different images. Journal of Cognitive Neuroscience. 2009;21(4):625–41. pmid:18702593
- 34. Eimer M. Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology. 2000;111(4):694–705. pmid:10727921
- 35. Schweinberger SR. How Gorbachev primed Yeltsin: Analyses of associative priming in person recognition by means of reaction times and event-related brain potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1996;22(6):1383–407.
- 36. Dzhelyova M, Rossion B. Supra-additive contribution of shape and surface information to individual face discrimination as revealed by fast periodic visual stimulation. Journal of Vision. 2014;14(14).
- 37. Oldfield RC. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. pmid:5146491
- 38. Burton AM, White D, McNeill A. The Glasgow Face Matching Test. Behavior Research Methods. 2010;42(1):286–91. pmid:20160307
- 39. Phillips PJ, Moon H, Rizvi SA, Rauss PJ. The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(10):1090–104.
- 40. Phillips PJ, Wechsler H, Huang J, Rauss PJ. The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing. 1998;16(5):295–306.
- 41. Chen J, Tiddeman B. Multi-cue facial feature detection and tracking under various illuminations. International Journal of Robotics & Automation. 2010;25(2):162–71.
- 42. Sutherland C. A basic guide to Psychomorph. University of York; 2015.
- 43. Perrett DI, Penton-Voak IS, Little AC, Tiddeman BP, Burt DM, Schmidt N, et al. Facial attractiveness judgements reflect learning of parental age characteristics. Proceedings of the Royal Society B: Biological Sciences. 2002;269(1494):873–80.
- 44. Lix LM, Keselman JC, Keselman HJ. Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance F test. Review of Educational Research. 1996;66(4):579–619.
- 45. Huynh H, Feldt LS. Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. Journal of Educational Statistics. 1976;1:69–82.
- 46. Keil A, Debener S, Gratton G, Junghofer M, Kappenman ES, Luck SJ, et al. Committee report: Publication guidelines and recommendations for studies using electroencephalography and magnetoencephalography. Psychophysiology. 2014;51(1):1–21. pmid:24147581
- 47. Abdi H. The Bonferonni and Šidák corrections for multiple comparisons. In: Salkind N, editor. Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage; 2007.
- 48. Burton AM. Why has research in face recognition progressed so slowly? The importance of variability. Quarterly Journal of Experimental Psychology. 2013;66(8):1467–85.
- 49. Hancock PJB, Bruce V, Burton AM. Recognition of unfamiliar faces. Trends in Cognitive Sciences. 2000;4(9):330–7. pmid:10962614
- 50. Bach M, Ullrich D. Contrast dependency of motion-onset and pattern-reversal VEPs: Interaction of stimulus type, recording site and response component. Vision Research. 1997;37(13):1845–9. pmid:9274769
- 51. Latinus M, Taylor MJ. Face processing stages: Impact of difficulty and the separation of effects. Brain Research. 2006;1123:179–87. pmid:17054923
- 52. Stahl J, Wiese H, Schweinberger SR. Expertise and own-race bias in face processing: An event-related potential study. NeuroReport. 2008;19(5):583–7. pmid:18388743
- 53. Valentine T. A unified account of the effects of distinctiveness, inversion, and race in face recognition. Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology. 1991;43(2):161–204.
- 54. Wiese H, Kaufmann JM, Schweinberger SR. The neural signature of the own-race bias: Evidence from event-related potentials. Cerebral Cortex. 2014;24(3):826–35. pmid:23172775
- 55. Schweinberger SR, Burton AM. Covert recognition and the neural system for face processing. Cortex. 2003;39(1):9–30. pmid:12627750
- 56. Zimmermann FGS, Eimer M. Face learning and the emergence of view-independent face recognition: An event-related brain potential study. Neuropsychologia. 2013;51(7):1320–9. pmid:23583970
- 57. Wiese H, Altmann CS, Schweinberger SR. Effects of attractiveness on face memory separated from distinctiveness: Evidence from event-related brain potentials. Neuropsychologia. 2014;56:26–36. pmid:24406982
- 58. Barragan-Jason G, Cauchoix M, Barbeau EJ. The neural speed of familiar face recognition. Neuropsychologia. 2015;75:390–401. pmid:26100560
- 59. McIntyre AH, Hancock PJB, Kittler J, Langton SRH. Improving discrimination and face matching with caricature. Applied Cognitive Psychology. 2013;27(6):725–34.
- 60. Henke K, Schweinberger SR, Grigo A, Klos T, Sommer W. Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex. 1998;34(2):289–96. pmid:9606594
- 61. Van den Stock J, de Gelder B, De Winter FL, Van Laere K, Vandenbulcke M. A strange face in the mirror. Face-selective self-misidentification in a patient with right lateralized occipito-temporal hypo-metabolism. Cortex. 2012;48(8):1088–90. pmid:22480403
- 62. Van den Stock J, de Gelder B, Van Laere K, Vandenbulcke M. Face-selective hyper-animacy and hyper-familiarity misperception in a patient with moderate Alzheimer's disease. Journal of Neuropsychiatry and Clinical Neurosciences. 2013;25(4):E52–E3. pmid:24247893
- 63. Irons J, McKone E, Dumbleton R, Barnes N, He XM, Provis J, et al. A new theoretical approach to improving face recognition in disorders of central vision: Face caricaturing. Journal of Vision. 2014;14(2).