The Early Time Course of Compensatory Face Processing in Congenital Prosopagnosia

Background

Prosopagnosia is a selective deficit in facial identification which can be either acquired (e.g., after brain damage) or present from birth (congenital). The face recognition deficit in prosopagnosia is characterized by worse accuracy, longer reaction times, more dispersed gaze behavior and a strong reliance on featural processing.

Methods/Principal Findings

We introduce a conceptual model of an apperceptive/associative type of congenital prosopagnosia in which a deficit in holistic processing is compensated by a serial inspection of isolated, informative features. Based on the proposed model, we investigated performance differences in different face and shoe identification tasks between a group of 16 participants with congenital prosopagnosia and a group of 36 age-matched controls. Given enough training and unlimited stimulus presentation, prosopagnosics achieved normal face identification accuracy, but with longer reaction times. This increase was paralleled by an equally sized increase in the stimulus presentation time needed to achieve an accuracy of 80%. When the inspection time of stimuli was limited (50 ms to 750 ms), prosopagnosics showed only worse accuracy but no difference in reaction time. Tested for the ability to generalize from frontal to rotated views, prosopagnosics performed worse than controls across all rotation angles, but the magnitude of the deficit did not change with increasing rotation. All group differences in accuracy, reaction times and presentation times were selective to face stimuli and did not extend to shoes.

Conclusions/Significance

Our study provides a characterization of congenital prosopagnosia in terms of early processing differences. More specifically, compensatory processing in congenital prosopagnosia requires an inspection of faces that is sufficiently long to allow for a sequential focusing on informative features. This characterization of dysfunctional processing in prosopagnosia further emphasizes fast and holistic information encoding as two defining characteristics of normal face processing.

• random effects allow one to incorporate information on correlated observations, which arise, e.g., due to repeated measurements taken from the same individual in different conditions.

Let $\nu := \sum_{j=1}^{J} \beta_j X_j$ be the linear predictor. It is assumed that there is a functional relationship, specified by the link function $g$, between this linear predictor and the expected observed outcome, i.e. $\nu = g(E[Y])$. For random variables $Y$ which have a distribution from the exponential family, $g$ is usually chosen such that $\nu = g(E[Y]) = \theta$, where $\theta$ is the canonical (or location) parameter of the distribution. In this case, the link function is called the canonical link function [2]. Examples of canonical link functions are the identity function, $g(x) = x$, for normal distributions and the logit function, $g(x) := \log\left(\frac{x}{1-x}\right)$, for binomial distributions. Usually, the link function is fixed a priori and estimation of the model parameters is limited to the coefficients $\beta_j$ and the selection and/or transformation of the predictor variables $X_j$. Introducing a differentiation between fixed and random effects, the general form of a GLMM can be written as

$$\nu = \beta_0 + \sum_{j=1}^{J} \beta_j X_j + \sum_{k=1}^{K} \gamma_k Z_k,$$

where $\beta_0$ is the intercept, $\beta_j$ are the fixed-effect coefficients for the observed $X_j$ and $\gamma_k$ are the random effects for the observed $Z_k$.
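To make the notation concrete, the following minimal sketch (Python; all coefficient values and variable names are illustrative assumptions, not values from this study) computes the linear predictor $\nu$ of a logistic GLMM and maps it through the inverse canonical link to obtain $E[Y]$.

```python
import numpy as np

def logit(p):
    """Canonical link for binomial outcomes: g(p) = log(p / (1 - p))."""
    return np.log(p / (1 - p))

def inv_logit(nu):
    """Inverse link: maps the linear predictor back to E[Y] in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-nu))

# Illustrative (hypothetical) values: intercept, fixed and random effects.
beta0 = -0.5                      # intercept beta_0
beta = np.array([0.8, -0.3])      # fixed-effect coefficients beta_j
x = np.array([1.2, 0.4])          # observed predictors X_j
gamma = np.array([0.15])          # random effect gamma_k (e.g., per subject)
z = np.array([1.0])               # random-effect design value Z_k

# Linear predictor: nu = beta_0 + sum_j beta_j X_j + sum_k gamma_k Z_k
nu = beta0 + beta @ x + gamma @ z

# With the canonical logit link, nu = g(E[Y]), hence E[Y] = g^{-1}(nu).
expected_y = inv_logit(nu)
print(f"nu = {nu:.3f}, E[Y] = {expected_y:.3f}")
```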

Nested Families of Generalized Linear Mixed Models
Model-based comparisons can be used to study whether the influence of fixed effects on the outcome differs between groups. First, one constructs a nested family of models: starting from a null model, more complicated models are constructed by allowing for interaction effects between the predictor variables $X_j$ and a grouping variable $C$. For example, a main-effect (or 0th-order) model can be defined by adding group-specific intercept terms $\beta_c$ for all groups $c$. Similarly, a 1st-order model can be defined by including interactions with all predictor variables $X_j$, i.e. by adding terms $\beta_{c,j} X_j$. More generally, a $k$-th-order model can be defined by including interactions with all interaction terms of length $k$, i.e. terms $\beta_{c,j_1,\dots,j_k} X_{j_1} \cdots X_{j_k}$.
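The construction of such a nested family can be sketched programmatically. The snippet below (Python; the outcome, predictor and group names are hypothetical) generates model formulas of increasing interaction order in the formula syntax common to statistical modelling packages.

```python
from itertools import combinations

def nested_formulas(outcome, predictors, group, max_order):
    """Build a nested family of model formulas.

    'null'    : predictors only, no group effects
    'order_0' : adds a group-specific intercept
    'order_k' : additionally adds interactions of the grouping variable
                with all k-tuples of predictors (k = 1 .. max_order)
    """
    terms = list(predictors)
    formulas = {"null": f"{outcome} ~ " + " + ".join(terms)}
    terms.append(group)  # group-specific intercept -> 0th-order model
    formulas["order_0"] = f"{outcome} ~ " + " + ".join(terms)
    for k in range(1, max_order + 1):
        for combo in combinations(predictors, k):
            terms.append(":".join((group,) + combo))
        formulas[f"order_{k}"] = f"{outcome} ~ " + " + ".join(terms)
    return formulas

# Hypothetical usage: accuracy as outcome, two predictors, diagnosis group.
for name, formula in nested_formulas(
        "accuracy", ["rotation", "exposure"], "diagnosis", 2).items():
    print(f"{name}: {formula}")
```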
To test whether the influence of the predictor variables differs significantly between groups, we used likelihood-ratio tests. For two nested models, a null model $M_0$ and an alternative model $M_1$, with respective log-likelihoods $l_0 < l_1$, the statistic $2(l_1 - l_0)$ was calculated. In general, this test statistic is assumed to follow a $\chi^2$ distribution with degrees of freedom equal to the difference in the number of parameters. However, in most cases, the assumption of a $\chi^2$ distribution is only an approximation and tends to give too small p-values [2]. This shortcoming can be addressed by applying resampling methods (e.g., a parametric bootstrap) to estimate p-values, or by using Bayesian statistics.
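A minimal sketch of both variants of the test (Python/SciPy; the `fit_null`, `fit_alt` and `simulate_null` callables are placeholders for whichever GLMM implementation is used):

```python
import numpy as np
from scipy.stats import chi2

def lr_statistic(ll_null, ll_alt):
    """Likelihood-ratio statistic 2 * (l_1 - l_0) for nested models."""
    return 2.0 * (ll_alt - ll_null)

def lr_test_chi2(ll_null, ll_alt, df):
    """Asymptotic chi-square p-value; tends to be anti-conservative."""
    return chi2.sf(lr_statistic(ll_null, ll_alt), df)

def lr_test_bootstrap(fit_null, fit_alt, simulate_null, observed_stat,
                      n_boot=999, rng=None):
    """Parametric-bootstrap p-value for the LR statistic.

    fit_null / fit_alt : callables returning the maximized log-likelihood
                         of the null / alternative model on a data set.
    simulate_null      : callable generating a data set from the fitted
                         null model (one parametric bootstrap sample).
    """
    rng = rng or np.random.default_rng()
    exceed = 0
    for _ in range(n_boot):
        data = simulate_null(rng)
        exceed += lr_statistic(fit_null(data), fit_alt(data)) >= observed_stat
    return (exceed + 1) / (n_boot + 1)
```

Given fitted log-likelihoods, `lr_test_chi2` yields the asymptotic p-value; the bootstrap variant re-estimates the null distribution of the statistic and thereby avoids the anti-conservative $\chi^2$ approximation.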

Appendix B - Model-Based Normalization of Test Scores
Investigating whether an individual's performance in behavioural tests deviates fundamentally from that of a control population is one of the main aims of a quantitative diagnostic assessment. To conduct a quantitative diagnosis, one has to arrange for a matching control population, derive a sufficient statistical description, and decide whether the individual's performance deviates significantly. A commonly used method is to select, for each individual, a control population that is matched in terms of possible contributing factors (age, gender, education, ...), calculate the mean and standard deviation of the matched controls, and derive an abnormality score based on an appropriate test statistic [3]. However, restricting the comparisons to matched controls decreases the number of samples in the control group and often introduces a somewhat arbitrary discretization of continuous variables (e.g., age) into intervals (e.g., age bands). By applying regression methods to model the influence of contributing factors on test performance, one can instead establish continuous norms [4].
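For reference, the abnormality score of the matched-controls method reduces, in its simplest form, to a standardized deviation from the matched sample. A minimal sketch with hypothetical values (for small control samples a t-based statistic as in [3] would be preferable to the plain z-score shown here):

```python
import numpy as np

def abnormality_z(score, matched_control_scores):
    """Standardized deviation of one individual from matched controls."""
    controls = np.asarray(matched_control_scores)
    return (score - controls.mean()) / controls.std(ddof=1)

# Hypothetical example: accuracy of one individual vs. 12 matched controls.
z = abnormality_z(0.62, [0.81, 0.78, 0.84, 0.80, 0.77, 0.83,
                         0.79, 0.82, 0.76, 0.85, 0.80, 0.78])
print(f"z = {z:.2f}")
```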
We propose to use generalized linear mixed models to extend simple linear regression methods in accounting for differences in possible contributing factors and deriving continuous norms. The main extension is the possibility to transform outcome variables that do not follow a normal distribution (e.g., Bernoulli- or exponentially-distributed random variables) into residuals that are approximately normally distributed. This transformation ensures the applicability of standard test statistics [5].
First, a null model is fitted to the observed control data. This initial fitting establishes which of the possible factors actually contribute to control performance; only those are included as predictor variables in the construction of a continuous norm. Second, for each control individual $i$ with observed outcome $y_i$ and predictors (contributing factors) $x_i$, residuals are calculated as the difference between actual performance $y_i$ and expected performance under an individualized null model, $\hat{y}_{-i}(x_i)$. The individualized null model is obtained by estimating the parameter values of the null model on all control observations except those of individual $i$. Third, for each new individual $j$, residuals are calculated in the same way, this time as the difference between observed performance $y_j$ and expected performance under the null model fitted on all control observations, $\hat{y}(x_j)$.
The additional step of calculating the controls' residuals under individualized null models reduces the risk of fitting the model parameters too closely to the data, which would model the idiosyncrasies of each individual's performance and underestimate the variability in control performance (cf. leave-one-out cross-validation, see [6]).
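The three steps, including the leave-one-out construction of the individualized null models, can be sketched as follows (Python; the ordinary-least-squares null model is only a self-contained stand-in for the GLMM that would be used in practice):

```python
import numpy as np

class _LinearNullModel:
    """Illustrative stand-in null model: ordinary least squares.

    In practice this would be a GLMM fitted to the control data; the
    linear fit here only serves to make the sketch self-contained.
    """
    def __init__(self, X, y):
        A = np.column_stack([np.ones(len(y)), X])
        self.coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(self, X):
        A = np.column_stack([np.ones(len(X)), X])
        return A @ self.coef

def control_residuals(X, y, fit=_LinearNullModel):
    """Step 2: leave-one-out residuals y_i - yhat_{-i}(x_i) for controls."""
    n = len(y)
    res = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model_minus_i = fit(X[mask], y[mask])   # individualized null model
        res[i] = y[i] - model_minus_i.predict(X[i:i + 1])[0]
    return res

def new_residual(x_new, y_new, X, y, fit=_LinearNullModel):
    """Step 3: residual y_j - yhat(x_j) under the full-control null model."""
    model = fit(X, y)
    return y_new - model.predict(np.atleast_2d(x_new))[0]
```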