Fig 1.
The UNSW Face Test contains two tasks. Left: In the recognition memory task, participants study studio-quality target faces for 5 seconds each (Study Phase) and then make old/new recognition judgments on ambient test faces (Test Phase). Right: In the match-to-sample sorting task, participants memorize a studio-quality target face for 5 seconds and then sort 4 ambient test images according to whether they show the target face. Scores on the two tasks are summed for a maximum score of 120 and then expressed as a percentage.
Fig 2.
Normative distribution of accuracy on the UNSW Face Test.
Fig 3.
Comparison of the normative, lab, and online samples on UNSW Face Test scores.
The central dotted line indicates the mean, and the lower and upper dotted lines indicate the 25th and 75th percentiles, respectively.
Table 1.
Mean accuracy, standard deviations, and range of accuracy in each participant group.
Fig 4.
Distribution of accuracy in online samples.
Left: Accuracy distributions of Online Sample 1 (top) and Online Sample 2 (bottom) compared to the normative accuracy distribution (black line). Right: The portion of each distribution above the super-recognition threshold (2 SDs above the mean). The long tail of the distribution shows that the UNSW Face Test is sensitive to differences in performance up to 6 SDs above the mean.
Fig 5.
Violin plots show how the distribution of performance on each of the online tests varies as a function of the screening criteria.
Each row shows the test used to select individuals (top row: UNSW Face Test; middle row: CFMT+; bottom row: GFMT). Boxes on the right show the number of participants in Online Sample 2 represented in each distribution. These data show that the ability to set stricter screening criteria on the UNSW Face Test provides greater precision than the CFMT+ or GFMT for targeting high-performing individuals for follow-up testing.
Fig 6.
Scatterplots of the correlations between the UNSW Face Test, CFMT+, and GFMT for Online Sample 2.
These results show considerable variability in individual performance across the three tests, demonstrating the importance of repeated testing when establishing super-recognition. They also show that, unlike existing tests, the UNSW Face Test does not suffer from ceiling effects, which can aid in the identification of super-recognisers. ** Significant at .01 level.
Fig 7.
Test-retest reliability on the UNSW Face Test after a one-week delay.
** Significant at .01 level.
Table 2.
Reliable correlations between the UNSW Face Test and the CFMT and GFMT for Lab Sample 1 demonstrate high convergent validity.
Table 3.
Mean accuracy and correlation matrix for all tests in Lab Sample 2, demonstrating discriminant validity.
Table 4.
Mean accuracy (%) and standard deviation by participant age from online samples.
Fig 8.
Average accuracy at each participant age on the UNSW Face Test, shown for overall performance (left), the memory task (middle), and the sorting task (right). The size and shade of each data point indicate the number of participants in that age group.