Biased Recognition of Facial Affect in Patients with Major Depressive Disorder Reflects Clinical State

Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment, subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed along a continuum from sad via neutral to happy expressions and had to decide whether each face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness under interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: in comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.


Introduction
Current concepts of depression are largely based on cognitive theories [1] according to which depression is characterized by a negative bias in perception. In line with this notion, there is substantial empirical evidence showing that perception in patients with major depressive disorder (MDD) is characterized by blunted responsiveness to emotionally positive information as well as an increased tendency to perceive emotionally neutral visual information as negative [2][3][4][5][6][7].
A critical point for the understanding of the etiological and developmental aspects of MDD is the question whether such a negative bias represents a stable vulnerability factor which persists beyond a depressive episode. In this case the trait-like characteristic of a negative bias could prove useful for the identification of persons at risk [8,9]. If, in contrast, a negative bias is confined to the depressive episode, it could serve as a state marker for MDD, for instance to objectively monitor treatment responses [10]. Previous research has yielded heterogeneous results regarding temporal stability of such a negative perceptual bias. In some studies biased emotion recognition was observed even after recovery from major depressive episodes [11,12], while other studies reported a reduced negative perceptual bias and improved emotion discrimination after symptom remission [13][14][15][16].
Two important factors may primarily account for the inconsistencies between previous studies. Firstly, the ability to differentiate between state and trait markers of MDD crucially depends on the experimental design. In a number of previous studies, perceptual biases in remitted MDD patients were compared to never-depressed healthy controls [11][12][13][15][16]. A more direct assessment of the temporal stability of perceptual biases would be provided by a repeated-measures design in which patients with MDD are tested during a depressive episode and after remission. With this approach, the development of a perceptual bias can be directly related to the development of depressive symptomatology. A second important factor is the stimulus material used to probe biased perceptual processing of emotional information in MDD. In several previous studies, participants were exposed to face stimuli displaying emotional expressions at full intensities [12,16], to schematic faces [14] or to drawings of facial expressions [15]. Misclassifications of the emotional expressions of such stimuli may lack the sensitivity to capture altered emotion processing in MDD. Greater sensitivity can be achieved by varying the intensity of the facial expression, including rather subtle changes in emotional expression. Recognition of such subtle expressions is more closely related to emotion recognition in everyday life, since emotions displayed by others are usually less intense than in standard face stimulus sets. The use of morphed emotional faces can yield relevant information on the nature of biased emotion perception along a particular dimension (e.g. for the transition from happy to sad expressions). Moreover, such an approach can also help to differentiate whether impairments in emotion recognition are due to misclassification of ambiguous expressions, that is, a shift in categorical emotion recognition [5,17], or rather to a general uncertainty in emotion recognition. The latter would be reflected by a flattened response pattern. To the best of our knowledge, the stability of the MDD-related perceptual bias in the recognition of subtle changes in emotional face expressions has not yet been tested in a repeated-measures design.
In the current study we investigated the temporal stability of negative perceptual bias in patients with MDD. We used a repeated-measures design, in which patients were tested during a depressive episode (T1) and three months later (T2). In a forced-choice task, participants were asked to indicate the valence of expressions of face stimuli that varied with respect to their degree of expressed happiness or sadness. In line with previous research, we expected biased emotion recognition in patients with MDD in comparison to healthy participants at T1 [5]. For the comparison between the two time points, we hypothesized that an unchanged perceptual bias at T2 would represent a stable trait marker of MDD, whereas a reduction in negative perceptual bias from T1 to T2 would argue for a state marker.
A second focus of our study was the question whether a negative perceptual bias in emotion recognition may be related to previously reported biases in automatic emotion processing [18]. In line with this notion, we recently found that faces with sad expressions had privileged access to awareness compared to happy faces in patients with MDD, indicating an automatic bias towards negative emotional stimuli in MDD [19]. In the present study we therefore included a second task that probed the effects of emotional expressions on the potency of face stimuli to gain access to awareness. The purpose of this task was to investigate whether a negative bias in the recognition of morphed emotional expressions (reflecting the conscious evaluation of the stimuli) would correlate with preferential access of negative emotional information to awareness (reflecting automatic stages of visual information processing).

Materials and Methods
Participants

Thirty-one patients with MDD and 28 healthy control participants matched for age, gender and educational status were tested. We included patients diagnosed by a trained psychiatrist as having moderate or severe MDD according to DSM-IV criteria. Diagnoses were made based on the Hamilton Depression Rating Scale (HAMD) [20] and Beck's Depression Inventory (BDI) [21] performed by the treating physician. Eleven patients were inpatients at the Department for Psychiatry and Psychotherapy at Charité-Universitätsmedizin Berlin, Campus Charité Mitte (Berlin, Germany). Eleven patients were under treatment in the day clinic and three were treated as outpatients of the same department. The remaining six patients were outpatients recruited through internet advertisement. Control participants were recruited through internet advertisement or the department's volunteer database. All participants had normal or corrected-to-normal vision. None of the participants had a history of brain injury, neurological disorders, or current substance abuse. The Structured Clinical Interview for the DSM-IV (SCID) was used by specially trained medical students to screen for psychiatric illnesses [22]. None of the patients with MDD had any psychiatric comorbidity according to DSM-IV axis I [22] except for anxiety disorder. There was no evidence of past or present psychiatric disorders in any of the control participants. Severity of depression was assessed with the HAMD [20] and the BDI [21] by trained medical students on the day of testing. Patients were included in the study if they had a HAMD score of 18 points or higher. After applying these criteria and excluding patients with the above-mentioned neurological or psychiatric history, the final sample comprised 26 patients with MDD and 28 healthy participants.
Patients with MDD and healthy controls performed the experiments at two time points: the first measurement was performed when patients were acutely depressed (T1). A follow-up measurement (T2) was scheduled for three months later. Control participants were re-tested after the same time interval of three months. Five patients and four healthy controls who took part in the experiment at T1 could not be contacted for T2. Therefore, repeated-measures analyses including T1 and T2 were based on a final sample of 21 patients with MDD and 24 healthy participants. To estimate intelligence level, we assessed total years of training and highest educational achievement on a six-point scale reflecting the three possible school degrees in Germany and the subsequent professional training (1 = nine-year school degree; 2 = nine-year school degree plus apprenticeship; 3 = ten-year school degree; 4 = ten-year school degree plus apprenticeship; 5 = twelve- or thirteen-year school degree; 6 = twelve- or thirteen-year school degree plus university degree). Additionally, all participants performed the WST (Wortschatztest), a verbal intelligence test [23]. Handedness was assessed using the Edinburgh Handedness Inventory [24].
For one patient (not included in T2) data on medication are missing. Patients under regular treatment with benzodiazepines were only included when they reported no drowsiness. One patient underwent electroconvulsive therapy (ECT) that had started one week before testing. At T2, medication was as follows: serotonin reuptake inhibitors (7 patients), bupropion (6), venlafaxine (4), tricyclic antidepressants (2), atypical antipsychotics (3), anticonvulsants (2) and lithium (3). Four patients received no antidepressants, seven patients received combined pharmacological treatment, and ten received one single drug. Among the four patients who did not receive treatment at T2, two had stopped taking antidepressant drugs between the two time points of testing and the two remaining had not received any pharmacological treatment at all. Except for the latter two patients, all patients tested at both time points underwent pharmacological treatment.

Ethics statement
All patients and control participants gave informed and written consent prior to their participation in the study, which was approved by the local ethics committee.

Stimuli
Both experiments were performed in a dimly lit room with sound absorption. Participants were seated in front of a 19 inch Samsung CRT monitor (resolution: 1024 x 768; frame rate: 60 Hz). A stable effective viewing distance of 50 cm was secured by a chin-and-head rest front-mounted to a custom-made mirror stereoscope. For stimulus presentation we used MATLAB (The MathWorks, USA) with the Cogent 2000 toolbox (http://www.vislab.ucl.ac.uk/cogent.php), running on a Pentium 4 computer.

Morphing technique in Experiment 1
Stimuli were pictures of four female and four male actors from the Ekman and Friesen series "Pictures of Facial Affect" (http://www.paulekman.com) displaying neutral, happy, and sad expressions. Each of the faces underwent a morphing process using FantaMorph (version 4.1, January 2009; http://www.fantamorph.com). The morphed pictures were generated using three genuine photographs of faces with a sad, a happy, and a neutral emotional expression. We used each respective neutral picture as a starting point and morphed it towards either happiness or sadness in five steps, resulting in a set of 11 pictures for each face identity. Pilot tests were performed with 15 healthy participants to determine adequate morph steps. We found that 10% increments, as used in previous studies (e.g. [25]), resulted in a steep switch from sad to happy judgments. To maximize sensitivity in the near-neutral range, we therefore used morphing steps that were based on a logarithmic scale (Fig 1). In our pilot experiments, we observed that faces with a neutral expression had a slight tendency to be judged as sad rather than happy. In contrast, faces containing a proportion of 12% of the happy expression were equally often judged as being happy and sad across participants. We thus defined these faces as the neutral midpoint between the two emotional expressions.
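The construction of such a logarithmically spaced morph continuum can be sketched in a few lines. The concrete percentages below (a 5% smallest increment) and the use of a geometric progression are illustrative assumptions; the study reports only that the five steps per direction followed a logarithmic scale:

```python
import numpy as np

# Sketch of a logarithmically spaced morph continuum (values illustrative:
# the paper specifies 5 steps per direction on a log scale, not these exact
# percentages; min_pct is an assumed smallest increment).
def log_morph_steps(n_steps=5, min_pct=5.0, max_pct=100.0):
    """Return n_steps morph intensities (in %) spaced on a log scale."""
    return np.geomspace(min_pct, max_pct, n_steps)

happy_steps = log_morph_steps()                    # increments towards happy
sad_steps = -log_morph_steps()                     # mirrored towards sad
morph_continuum = np.concatenate([sad_steps[::-1], [0.0], happy_steps])
# 11 levels per face identity, denser around the neutral midpoint
print(morph_continuum.round(1))
```

Because successive steps grow geometrically, the spacing is finest near the neutral face, which is where the sensitivity of the task matters most.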

Procedure
Experiment 1: Emotion recognition. Each trial began with the presentation of a face on a grey background in the center of the screen. After 1000 ms the face was removed from the screen. Participants were instructed to indicate via key press whether they judged the face to display a happy or a sad expression. They were informed that there might be ambiguous expressions, but that they should always decide whether it was a rather happy or rather sad face.
Right and left arrow keys on the keyboard were marked with a happy and a sad schematic face, respectively. Participants were instructed to press the button as soon as they had made a decision, but without time pressure. After the participant had pressed a key and an additional inter-stimulus interval of 2000 ms, the next trial followed. If no key was pressed, the next trial followed after 20 s. Each of the eleven morph increments was presented 16 times, i.e., each face identity twice per morph increment, which amounts to a total of 176 trials. Participants were excluded if their hit rate was below 75% in one of the 100% conditions. Four control participants and none of the depressed patients were excluded according to this criterion, resulting in a final sample of n = 26 patients and n = 24 healthy control participants at T1. For T2, this exclusion criterion led to a final sample size of n = 21 patients and n = 20 control participants.

Experiment 2: Access to awareness. The experimental design of experiment 2 largely resembled that of a previous study [19]. It was originally based on a behavioral study in healthy volunteers [26]. Stimuli were displayed on a grey background. During the experiment, two white-line squares (8.5° x 8.5°) were presented side by side on the screen and were viewed through the mirror stereoscope such that only one square was visible to each eye. In the center of each square a white fixation cross (0.5° x 0.5°) was displayed. Participants were asked to maintain stable fixation during the experiment. The experiment comprised a continuous flash suppression (CFS) condition, in which high-contrast Mondrian-like pattern masks [27,28] measuring 8.3° x 8.3° were flashed to one eye at a frequency of 10 Hz, while a face stimulus (2.5° x 3.6°) was faded in to the other eye in one of the four quadrants of the display.
The contrast of the face stimulus was ramped up slowly from 0% to 100% within a period of 2 s to ensure invisibility of the face at the beginning of each trial, and then remained constant until the participant made a response on a computer keyboard (keys F, J, V and N) indicating the face's location. The high-contrast Mondrian-like pattern mask was faded out by linearly decreasing its contrast from 2 s until the end of each trial [29], unlike the study by [19], in which the mask was stopped abruptly at the end of each trial. Observers were instructed to respond as fast and as accurately as possible as soon as any part of the face became visible. Yet another difference to the study by [19] was that no fearful faces were included, as no effect had been shown for fearful faces in this earlier study. In addition to CFS, a control condition was used that did not involve binocular rivalry. Control trials started with the presentation of only a flashing Mondrian pattern to one eye. A face stimulus was then shown in one of the four possible locations as in the CFS condition, but with full contrast and to both eyes, at a random time between 2 and 8 s after trial onset. Note that the control condition was not designed to match the CFS condition perceptually, but merely to control for possible systematic between-group differences in reaction times to the appearance of faces with different emotional expressions. Both CFS and control trials ended after the participant's key press. CFS trials were discarded if no key was pressed within 10 s [29]. The inter-trial interval was 2 s. The experiment started with a short training block. The whole experiment comprised 216 trials (144 CFS and 72 control trials) split up into six blocks. CFS and control trials were intermixed randomly within each block. For a more detailed description, the reader is referred to [19].
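The trial timing of the CFS condition can be sketched as follows. The frame counts follow from the 60 Hz display and the 10 Hz mask flicker, and the linear 2 s fade-in is taken from the text; the mask fade-out is omitted because its rate is not specified here:

```python
import numpy as np

# Sketch of CFS trial timing: face contrast ramps linearly from 0% to 100%
# over the first 2 s and then stays constant; Mondrian masks are flashed
# at 10 Hz on a 60 Hz display (a new mask every 6 frames).
FRAME_RATE = 60      # Hz
RAMP_S = 2.0         # fade-in duration in seconds

def face_contrast(t):
    """Face contrast (0..1) at t seconds after trial onset."""
    return min(t / RAMP_S, 1.0)

frames = np.arange(4 * FRAME_RATE) / FRAME_RATE       # first 4 s of a trial
contrast = np.array([face_contrast(t) for t in frames])
new_mask = np.arange(len(frames)) % (FRAME_RATE // 10) == 0   # 10 Hz updates
```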

Data analysis
Sample characteristics. In order to test for between-group differences in age and verbal intelligence, two-sample t-tests were performed. A chi-square test was used to probe differences in the proportion of male and female participants, and Mann-Whitney U-tests were used for differences in years of education, training degree and score on the Edinburgh Handedness Inventory (EHI).
Experiment 1: Model selection. The following models were fitted to each individual's responses to identify the model providing the best fit to the observed behavior. For all models, participants' behavior was operationalized as the proportion of 'happy' responses for each respective morph step. Firstly, logistic functions of the following form were fitted to each participant's individual manual responses on the basis of a non-linear least squares approach:

f(x) = 1 / (1 + exp(-a(x - b)))    (1)

In this equation, x denotes the morph level, a denotes the slope of the function and b denotes the point of subjective equality (PSE), that is, the location of the function on the continuum of the morphed faces from fully expressed sadness to happiness. The two parameters allow for a specific characterization of participants' response profiles: the steepness of the slope (a) indicates participants' behavior in the transition from sad to happy faces. A higher value for the slope reflects a more abrupt switch from sad to happy. The PSE (b) indicates the location of the inflection point of the fitted function, that is, the morphing step at which faces are equally often judged as happy and as sad. It thus reflects each participant's criterion for the discrimination between happy vs. sad expressions. Hence, the greater the PSE, the greater the proportion of the happy expression that the individual needs to judge the face as being happy.
Secondly, we fitted logistic functions with four parameters to participants' behavioral responses [30]. In addition to the two-parameter model explained above, the lower (c) and upper (d) asymptotes of the logistic curves are included in the model as free parameters:

f(x) = c + (d - c) / (1 + exp(-a(x - b)))    (2)

Thirdly, we performed linear regressions between the logit of participants' responses and the corresponding morph steps, with β0 being the intercept and β1 being the slope of the regression line:

logit(p) = β0 + β1 x    (3)

To identify the most appropriate model, the goodness of fit of each model was assessed on the basis of R² values adjusted for degrees of freedom, that is, for the number of parameters of the respective model. To this end, individual R² values were transformed to z-values using Fisher's z transformation, averaged across all participants and time points, and finally transformed back to R² values. According to this procedure, the linear regression model (Eq (3), R² = 0.844) was clearly outperformed by the logistic models, for which the four-parameter model (Eq (2), R² = 0.949; Fig 2A) provided a slightly better fit than the logistic model comprising two parameters (Eq (1), R² = 0.945). All subsequent analyses were thus performed on the parameters estimated on the basis of the four-parameter logistic model. Importantly, single-subject goodness-of-fit values for this model did not differ between groups (two-sample t-test: t(48) = 0.64; p = 0.52) or time points (paired-sample t-test: t(40) = 1.32, p = 0.19).
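The first-level fitting and model comparison can be illustrated with a short sketch. The synthetic response proportions and starting values below are assumptions for illustration; as in the study, each model is fitted by non-linear least squares and compared via adjusted R²:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic2(x, a, b):
    """Eq (1): two-parameter logistic with slope a and PSE b."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

def logistic4(x, a, b, c, d):
    """Eq (2): adds lower (c) and upper (d) asymptotes as free parameters."""
    return c + (d - c) / (1.0 + np.exp(-a * (x - b)))

def adjusted_r2(y, y_hat, n_params):
    """R² adjusted for the number of model parameters."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# Synthetic 'happy'-response proportions for 11 morph levels (made up):
# a slight positive PSE, i.e. a criterion shifted towards 'sad'.
x = np.linspace(-1.0, 1.0, 11)
y = logistic4(x, a=6.0, b=0.12, c=0.03, d=0.97)

p2, _ = curve_fit(logistic2, x, y, p0=[1.0, 0.0])
p4, _ = curve_fit(logistic4, x, y, p0=[1.0, 0.0, 0.0, 1.0])
r2_two = adjusted_r2(y, logistic2(x, *p2), 2)
r2_four = adjusted_r2(y, logistic4(x, *p4), 4)
```

With non-zero asymptotes (lapses at the extremes), the four-parameter model fits better even after the adjustment penalizes its extra parameters, mirroring the model comparison reported above.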
Experiment 1: Statistical analysis of model parameters. Firstly, we analyzed group differences for slope and PSE at T1. To this end, two-sample t-tests were performed for slope and PSE separately. Secondly, we subjected slope and PSE values to 2 x 2 repeated-measures ANOVAs with the factors group and time (T1, T2).
Moreover, for MDD patients we performed a Pearson correlation to test whether a reduction in depressive symptoms from T1 to T2, indicated by changes in BDI score, is related to an improvement in emotion recognition from T1 to T2, indicated by changes in the PSE of the logistic function.
Experiment 1: Mixed-effects models. The two-level approach described above, comprising the estimation of function parameters on the first level and statistical inference on these parameters on the second level, does not take into account the correlation between function parameters. This approach can thus produce spurious findings, especially for categorical outcomes [31]. We therefore performed additional corroborative analyses to overcome these shortcomings by modelling our data in the framework of a mixed-effects approach, in which fixed effects are estimated separately from random effects [32]. In order to model psychophysical data, generalized linear mixed models (GLMMs) have been proposed, allowing for the specification of the relation between predictor and outcome variables [33]. To fit GLMMs to our data we used the lme4 package provided for the statistical software R [34]. For data analysis at T1, the model had the following structure:

logit(P(Y = 1)) = β0 + β1 X1 + β2 X2 + β3 X1X2 + b Z    (4)

In this equation, Y denotes the binomial response of each participant in each single trial, such that Y = 0 for trials in which a face was judged as being sad and Y = 1 for trials in which a face was judged as expressing happiness. X1 and X2 are the fixed-effects parameters morph level (i.e. proportion of happiness) and group, respectively. X1X2 represents the interaction between the two fixed-effects parameters, and β1 to β3 are the fixed-effects coefficients. Subjects were included in the model as a random effect, denoted by Z and the coefficient b. For the analysis of the data at T1 and T2, time point (X3) was added as a fixed effect to the model in Eq (4), as well as the interactions of time point with the other fixed effects:

logit(P(Y = 1)) = β0 + β1 X1 + β2 X2 + β3 X1X2 + β4 X3 + β5 X2X3 + β6 X1X2X3 + b Z    (5)

Experiment 2. For each trial, suppression times in the CFS condition and reaction times in the control condition were defined as the interval from face presentation onset until the participant's button press. Only trials with correct responses were included in the analyses.
For each participant, outlier responses, defined as response times beyond 1.5 times the inter-quartile range below the first quartile or above the third quartile [35], were discarded. The proportion of outliers was low: 2.5% in the patient group and 1.3% in the control group. Mean response times were calculated for each emotion in the CFS condition and control condition, respectively. Emotion-specific mean reaction times in the control condition were subtracted from the respective mean suppression times in the CFS condition to control for possible systematic reaction time differences. To reduce the influence of between-subject differences in overall suppression time and thereby increase sensitivity for within-subject differences, we analyzed the suppression times for happy and sad expressions in relation to the neutral expression [19]. For statistical analysis we performed a repeated-measures ANOVA with the between-subject factor group and the within-subject factor emotion. One healthy control participant had to be excluded from the analysis at T2 due to reaction times of more than 4 seconds in the control condition, which was designed as a task controlling for reaction time.
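The response-time cleaning and normalization steps described above can be sketched as follows (the numeric response times are made up for illustration):

```python
import numpy as np

# 1.5 x IQR outlier rule applied to response times in Experiment 2.
def trim_outliers(rts):
    rts = np.asarray(rts, dtype=float)
    q1, q3 = np.percentile(rts, [25, 75])
    iqr = q3 - q1
    keep = (rts >= q1 - 1.5 * iqr) & (rts <= q3 + 1.5 * iqr)
    return rts[keep]

# Normalization of suppression times relative to the neutral expression.
def modulation_index(st_emotional, st_neutral):
    return st_emotional / st_neutral

rts = [0.9, 1.0, 1.1, 1.2, 1.0, 1.1, 9.5]   # one obvious outlier (made up)
clean = trim_outliers(rts)
```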

Results

Sample characteristics
There were no significant group differences in the demographic variables age, gender, years of education, training degree, intelligence and handedness (see Table 1). Scores of BDI and HAMD were significantly higher in the patient group compared to the healthy control group (two-sample t-tests, p<0.001). A significant reduction of BDI and HAMD scores in the patient group after three months was observed (paired t-tests, p<0.001). None of the differences in the demographic variables reached significance after exclusion of participants as stated in the Data analysis section. For participants included at time point T2, the difference in years of education between the two groups approached significance (p = 0.053).

Experiment 1
MDD is associated with negative perceptual bias. Logistic functions were fitted to each participant's responses (Fig 2A). Mean slope and PSE were calculated for each group.
For the data assessed at T1, two-sample t-tests yielded a significant difference in PSE between the two groups, with higher PSEs for patients with MDD (Fig 2B and 2C), but no significant difference in slope (t(48) = 0.591; p = 0.558).
A further test of participants' responses at T1 was provided by modelling the data using GLMMs (Eq (4)). This analysis yielded a significant main effect of group (β2 = -1.54, z = -4.55, p < 0.001) and, most importantly, a significant interaction between group and the morph step of the face (β3 = 2.26, z = 4.74, p < 0.001). This indicates that the judgments about the emotional expressions of the faces differed between groups and that this difference was dependent on the proportion of happiness expressed by the faces.
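The structure of this analysis can be illustrated with a small simulation. The study fitted GLMMs with a random subject intercept using lme4 in R; the sketch below is a simplified, fixed-effects-only version of Eq (4) in Python (statsmodels) with made-up coefficient values, intended only to show how a group effect and a group x morph interaction surface in the fitted parameters:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate single-trial 'happy' judgments under a fixed-effects version of
# Eq (4); the random subject intercept of the actual GLMM is omitted here.
rng = np.random.default_rng(0)
n = 4000
morph = rng.uniform(-1, 1, n)            # proportion of happiness (X1)
group = rng.integers(0, 2, n)            # 0 = control, 1 = MDD (X2)
# Made-up true effects: negative group intercept (bias towards 'sad') and
# a positive group x morph interaction.
logit_p = 4.0 * morph - 1.0 * group + 1.5 * morph * group
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # 1 = judged 'happy'

df = pd.DataFrame({"y": y, "morph": morph, "group": group})
fit = smf.logit("y ~ morph * group", data=df).fit(disp=0)
print(fit.params)    # Intercept, morph (β1), group (β2), morph:group (β3)
```

In the actual analysis, a negative group coefficient corresponds to MDD patients judging identical morphs as 'sad' more often, and the interaction to a group difference that depends on the morph level.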
Perceptual bias is affected by change in depressive symptoms. The analysis including the second measurement was restricted to participants who had taken part in the experiment at both time points. The 2 x 2 repeated-measures ANOVA yielded a significant interaction of group and time for the PSE (Fig 3A). Post-hoc paired t-tests within each group showed a significant reduction of the PSE in the MDD group (t(20) = -2.362, p = 0.028) and an increase in the control group (t(19) = 2.669, p = 0.015). In order to examine the sensitivity of the statistical analysis, we additionally performed a post-hoc power analysis for the control group [36]. This analysis yielded a statistical power of P = 1 - β = 0.757 and an effect size of d = 0.626, which approximately corresponds to the commonly recommended statistical power of P = 0.8 [37]. Furthermore, we analyzed whether the observed time-related difference in the control group was caused by statistical outliers. Outliers were detected neither at time point T1 nor at T2. However, there was one outlier regarding the difference in PSE between T1 and T2. The interaction of time and group remained significant after exclusion of this outlier (Fig 3B).
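The reported post-hoc power can be checked with a standard noncentral-t computation; a sketch for the control group (paired t-test, n = 20 participants retained at T2, reported d = 0.626; alpha = 0.05 two-sided is an assumption, as it is not stated here):

```python
from statsmodels.stats.power import TTestPower

# Post-hoc power for a paired (one-sample) t-test with the reported effect
# size; alpha = 0.05 two-sided is assumed.
power = TTestPower().power(effect_size=0.626, nobs=20, alpha=0.05,
                           alternative='two-sided')
print(round(power, 3))
```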
The effect of time was confirmed by the mixed-effects analysis (Eq (5)). This analysis revealed a significant interaction between group and time (β5 = 2.10, z = 5.04, p < 0.001) as well as a significant three-way interaction between group, time, and morph level of the faces (β6 = -2.74, z = -3.74, p < 0.001). To investigate whether reduced depression severity from T1 to T2 was related to the observed shift in perceptual bias in depressed patients, a Pearson correlation between the change in BDI and the change in PSE was performed. This analysis yielded a positive correlation (r = 0.501; p = 0.024; see Fig 4), indicating that a greater change in perceptual bias was associated with a greater reduction of depressive symptoms from T1 to T2.

Experiment 2
After ruling out differences between groups in overall suppression times (t(52) = 1.318; p = 0.193), overall reaction times (t(52) = 0.796; p = 0.430) and suppression times for the neutral expression (t(52) = 1.152; p = 0.169) on the basis of two-sample t-tests, we compared suppression time modulation [19]. Modulation of suppression time (suppression time for emotional divided by suppression time for neutral faces) for both groups at T1 is depicted in

Perceptual bias and access to awareness
We investigated whether the perceptual bias observed in Experiment 1 is related to automatic processing of emotional expressions as assessed with CFS in Experiment 2 in patients with MDD. Since differences between groups in Experiment 1 were only found for the PSEs of the logistic functions, we only included the PSEs in this analysis as a measure for perceptual bias. We performed a Pearson correlation between perceptual biases assessed in Experiment 1 and access to awareness assessed in Experiment 2, which was computed as the difference between the normalized suppression times for sad and happy faces.

Discussion
We showed a negative bias in the recognition of facial affect in patients with a current depressive episode. In a forced-choice task, the criterion for the distinction of sad and happy expressions was shifted towards sad in patients with MDD relative to the control group. Patients with MDD classified morphed facial expressions with neutral or near-neutral emotional expression more frequently as sad. For the categorical shift from the perception of sad to happy to occur, patients needed higher intensities of happiness expressed by the face, compared to healthy control participants. After three months, we observed the opposite pattern: a reduction of the previously observed perceptual bias in MDD patients and an even slightly greater bias towards sad faces in the control group. The reduction in perceptual bias correlated with the reduction of depressive symptoms. In the present study we found no evidence for preferential access to awareness of negative information.
Our results at T1 are in accordance with a number of previous studies that also reported a negative perceptual bias in patients with MDD [2,5,7]. A time- and severity-dependent reduction of this negative perceptual bias in response to subtle emotional face expressions has, in contrast, not been observed previously. In the present experiments we used photographs of faces, as opposed to schematic faces [4,14,38-41]. While schematic faces have the advantage of allowing for a reliable comparison between studies, photographs of faces enable more subtle gradations of expression. We exploited this advantage by using finer modulations of emotional expression, especially in the near-neutral range, than several previous studies [2,5,7,13]. Another possibility to present varying intensity levels of emotional expressions would have been the use of dynamic facial expressions [8], which are easier to detect due to motion signals [42]. The naturalness of a dynamic expression, however, depends on its rate of change over time, which differs between emotions [43]. Thus, different emotions would have to be presented at a different rate of change to keep the level of naturalness constant across emotions. We aimed to avoid this problem, and also potentially confounding effects of recognition or response time differences between groups. Therefore, and based on evidence which suggests that the dynamic component of human faces does not play a decisive role in the recognition of emotions [44], we decided that the presentation of morphed static faces in the context of a forced-choice task would be best suited for the purpose of our study.

Negative perceptual bias is related to depressive state
The question whether a negative perceptual bias reflects a state or a trait marker of depression has been addressed in several previous studies, which yielded heterogeneous results [12][13][14][15][45]. However, in these studies emotion recognition during the depressive episode was not compared to emotion recognition in a later phase within the same individuals. It is therefore possible that reductions of perceptual bias reflect differences between individuals rather than a reduction of perceptual biases over time. In the present study, we employed a repeated-measures design, which can be regarded as an optimal design to study the temporal stability of such a negative perceptual bias in depressive patients. We found a reduction of perceptual bias in depressive patients from the initial test session to the following test session three months later. Furthermore, this reduction of an emotion-related negative bias was related to a reduction of depressive symptoms. Thus, to the best of our knowledge, we here show for the first time that a negative perceptual bias in emotion recognition changes with clinical improvement. It can thus be concluded that the change in perceptual bias reflects a change in clinical state rather than a trait marker of depression.
Of note, the significant time-by-group interaction was driven not only by a reduction of perceptual bias in MDD patients, but also by a slight increase in the control group. The interpretation of the latter finding must currently remain speculative. Possibly, the repetition of the task had differential effects on patients and control participants. It has indeed been shown that sub-clinical mood changes, as elicited for instance by mood inductions, affect observers' accuracy in emotion recognition [46,47]. However, since we did not assess participants' current mood state at the time of testing, we cannot draw any strong conclusions regarding the possible influence of short-term mood fluctuations on participants' responses. Interestingly, the repeated presentation of faces with varying intensities of emotional expressions influences healthy observers' sensitivity in the evaluation of facial expressions [48]. It remains a topic for future investigations to what extent such repetition-related changes in the recognition of emotional expressions are related to changes in observers' mood. Importantly, our data clearly show that the repeated presentation of emotional faces had a different impact on patients than on healthy participants. We can thus rule out that the effect observed in the patient group is a mere artifact of stimulus repetition. Moreover, by assessing current depressive symptoms at both T1 and T2 in patients as well as healthy controls, we could rule out the possibility that the change in emotion recognition in the control group was related to the onset of a depressive episode in formerly healthy control participants.

Access to awareness
In contrast to our previous results [19], there was no difference in suppression times for happy versus sad facial expressions in the present study. The results presented in [19] concur with several other studies that reported evidence for automatic biases in emotion processing in MDD [41,49,50]. Patients with MDD have, for instance, been suggested to preferentially attend to negative stimuli [41,51,52]. Given this converging evidence, a genuine absence of automatic biases in our patient sample appears unlikely. It seems more likely that the failure to detect a group difference in access of emotional stimuli to awareness in the present study is related to other factors. In addition to differences in sample characteristics that are beyond the influence of the experimenter, the discrepancy between the present results and the earlier findings could be due to differences in study design. In the previous study, fearful faces were included, which might have had an indirect effect on the processing of the other emotional expressions. Another, possibly more important, difference was that the dynamic Mondrian patterns were gradually faded out in the current study, similar to previous work [29], but in contrast to our earlier study [19], in which mask contrast was kept constant. With a constant mask contrast, suppression time is determined only by the properties of the target stimulus (in addition to endogenous factors, such as a depressive episode), whereas a gradual fade-out of the mask will inevitably lead to a breakthrough of the target once the mask contrast falls below a critical threshold. This could result in a substantially reduced sensitivity to detect suppression time effects caused either by target stimulus differences or by inter-individual differences that potentially affect access to awareness. While this explanation is speculative at the current stage, future studies should clarify how changes in mask contrast and other variables related to CFS task design influence the sensitivity for the detection of intra- and inter-individual differences in the access of visual stimuli to awareness.
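The methodological point about mask contrast can be made concrete with a small sketch contrasting the two CFS designs: under a linearly fading mask, the target is guaranteed to break through once mask contrast drops below some breakthrough threshold, which caps all measurable suppression times; under a constant mask, no such cap exists and target-driven differences can express fully. The ramp duration, contrast values, and threshold below are hypothetical values chosen purely for illustration, not parameters from either study.

```python
import numpy as np

def mask_contrast(t, ramp_duration=7.0, start=1.0):
    """Linearly fading Mondrian mask contrast (hypothetical schedule):
    full contrast at t = 0 s, zero at t = ramp_duration."""
    return np.clip(start * (1.0 - t / ramp_duration), 0.0, 1.0)

def max_suppression_time(threshold=0.3, ramp_duration=7.0):
    """Latest possible breakthrough time under the fading schedule:
    once mask contrast falls below `threshold`, the target inevitably
    becomes visible, regardless of its emotional content."""
    return ramp_duration * (1.0 - threshold)

# With a fading mask, every suppression time is capped at this value
# (7.0 s * (1 - 0.3) = 4.9 s under the assumed parameters):
cap = max_suppression_time()
# With a constant mask (contrast held at `start` throughout), there is
# no cap, so stimulus- or observer-driven differences in suppression
# time have the full range in which to emerge.
print(f"Maximum measurable suppression time with fading mask: {cap:.1f} s")
```

This capping is one way the fade-out could compress between-condition and between-group differences, consistent with the reduced sensitivity discussed above.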

Conclusions
This study shows an association between clinical symptoms of depression and a negative cognitive bias in emotion recognition. This finding contributes to our understanding of depressive symptomatology, as it demonstrates a clear relationship between current clinical state and emotion perception, suggesting that perceptual biases may play an important role in the pathophysiology of depression. Although our findings are, in the context of the current study, only meaningful at the group level, they may inform the future development of tools for the objective assessment of treatment response and may even aid the prognostic evaluation of patients with MDD.