Fig 1.
a, an example of four trials with the happy female face, where the unmasking sequence was stopped as illustrated, and a correct answer was given; b, the weights of all tiles for the happy female face, based on 16 trials of one participant. The weights are visualised with a green-red colour spectrum which is min-max-scaled, so lowest weights are green and highest weights are red.
Fig 2.
a, for each of the faces (except the two neutral ones), action units were hand-drawn as defined in the Facial Action Coding System. For example, the last face (male, surprised) has been labelled with action unit (AU) 1 “inner brow raiser” in dark green, AU 2 “outer brow raiser” in purple, AU 5 “upper lid raiser” in red and a compound AU 25+26+27: “lips part”, “jaw drop”, “mouth stretch” in light green; b, the same action units, as assigned to the 48 tiles into which each face is divided; please refer to S1 Fig and S2 Fig for a comprehensive list of labels for each face. Refer to S5 Code for a visualisation of all tile assignments.
Fig 3.
a, for each condition, the average percentage of revealed tiles needed until a correct response is given is plotted on the x-axis; the average percentage of correct responses is plotted on the y-axis; error bars illustrate 95% confidence intervals; b and c, percentage of all responses for female (b) and male (c) faces, including confusions. Correct responses are plotted in strong colours at the bottom of each bar; incorrect responses are plotted in muted colours at the top of each bar; abbreviations: hap, happy; ang, angry; sup, surprised; ntr, neutral; dis, disgust; fea, fear; sad, sad.
Fig 4.
Visual illustration of tile weights for each face.
Tile weights, averaged over the whole participant sample. The weights are visualised with a green-red colour spectrum which is min-max-scaled within each face, so lowest weights are green and highest weights are red. These rescaled data are used for visualisation only.
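The min-max scaling and green-red mapping described above can be sketched as follows; this is a minimal illustration, not the authors' actual visualisation code, and the function names are my own:

```python
import numpy as np

def min_max_scale(weights):
    """Rescale tile weights to [0, 1] within one face (visualisation only)."""
    w = np.asarray(weights, dtype=float)
    return (w - w.min()) / (w.max() - w.min())

def weights_to_rgb(weights):
    """Map scaled weights onto a green-red spectrum:
    lowest weight -> green (0, 1, 0), highest weight -> red (1, 0, 0)."""
    s = min_max_scale(weights)
    return np.stack([s, 1.0 - s, np.zeros_like(s)], axis=-1)
```

Because the scaling is applied within each face, colours are only comparable among tiles of the same face, not across faces.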
Fig 5.
Role of upper and lower face half.
Difference score “upper face half” minus “lower face half” for each of the 14 faces used in the experiment, with positive values indicating greater importance of the upper face half and negative values indicating greater importance of the lower face half; a, for the female face; b, for the male face; error bars represent 95% confidence intervals.
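The difference score amounts to a simple contrast between mean tile weights of the two face halves. A minimal sketch, assuming the 48 tiles are indexed so that upper- and lower-half tile indices are known (the function name and indices here are illustrative only):

```python
import numpy as np

def half_difference(tile_weights, upper_idx, lower_idx):
    """Mean weight of upper-half tiles minus mean weight of lower-half tiles.
    Positive values indicate greater importance of the upper face half."""
    w = np.asarray(tile_weights, dtype=float)
    return w[upper_idx].mean() - w[lower_idx].mean()
```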
Fig 6.
Role of specific action units for emotion recognition.
Colours of each action unit as drawn on the face correspond to colours of bars, which are labelled with the respective action unit according to the Facial Action Coding System. Values indicate the importance of each action unit compared with the baseline of all non-action-unit tiles. Error bars represent 95% confidence intervals.
Fig 7.
Principal component analysis of face weights.
a, percentage of explained variance by the first five principal components; b, visualisation of weights of each principal component in “tile space”, with positive weights in red and negative weights in blue; c, plotting each face in the space defined by the first two principal components, error bars represent 95% confidence intervals; d, same as c, but with the actual stimuli replacing the markers.
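The quantities shown in panels a–c can be reproduced with a standard PCA; a minimal sketch using scikit-learn, with a random stand-in for the real 14-faces-by-48-tiles weight matrix:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical weight matrix: 14 faces x 48 tiles (random stand-in for the real data).
X = rng.normal(size=(14, 48))

pca = PCA(n_components=5)
scores = pca.fit_transform(X)                     # each face in PC space (panels c, d)
explained = pca.explained_variance_ratio_ * 100   # % variance per component (panel a)
components = pca.components_                      # PC weights in "tile space" (panel b)
```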
Fig 8.
Representational similarity analysis (RSA).
Dissimilarity (1 − Pearson correlation) for all faces, projected into distances in 2D space by means of multidimensional scaling; a, RSA with (greyscale) pixel values of images; b, RSA with tile weights from the emotion recognition task; note that axes are not labelled, as they do not represent distinct dimensions and only the distances in 2D space are interpretable.
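The correlation-based dissimilarity and 2D projection can be sketched as follows; this is an illustrative reconstruction using scikit-learn's metric MDS, with a random stand-in for the real tile-weight matrix:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(14, 48))  # hypothetical: 14 faces x 48 tile weights

# Dissimilarity = 1 - Pearson correlation between each pair of faces.
dissim = 1.0 - np.corrcoef(X)

# Project the dissimilarity matrix into 2D; only inter-point distances
# are interpretable, not the axes themselves.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
```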