Abstract
Cognitive biases have been studied in relation to schizophrenia and psychosis for over 50 years. Yet, the quality of the evidence linking cognitive biases and psychosis is not entirely clear. This umbrella-review examines the quality of the evidence and summarizes the effect sizes of the reasoning and interpretation cognitive biases studied in relation to psychotic characteristics (psychotic disorders, psychotic symptoms, psychotic-like experiences or psychosis risk). It also examines the evidence for the effects of psychological interventions for psychosis on cognitive biases. A systematic review of the literature was performed using the PRISMA guidelines and the GRADE system for 128 analyses extracted from 16 meta-analyses. Moderate to high-quality evidence with medium to large effect sizes was found for the following interpretation biases: externalization of cognitive events and self-serving bias, when people with psychotic symptoms were compared to control conditions. Regarding reasoning biases, moderate to high-quality evidence with medium to large effect sizes was found for belief inflexibility when linked to delusion conviction and global severity in people with active delusions, although measures from the MADS, which overlap with symptoms, may have inflated effect sizes. Moderate quality evidence with medium to large effect sizes was found for the jumping to conclusions bias when clinical samples with psychosis were compared to controls using data-gathering tasks. Other cognitive biases are not supported by quality evidence (e.g., personalizing bias, bias against disconfirmatory evidence), and certain measures (i.e., the IPSAQ and ASQ) systematically found no effect or small effects. Psychological interventions (e.g., MCT) showed small effect sizes on cognitive biases, with moderate-high-quality evidence. This umbrella review offers a critical perspective on the literature linking reasoning and interpretation biases to psychotic symptoms: although most biases linked to psychotic symptoms are supported by meta-analyses in some way, some have only demonstrated support in a specific population group (e.g., aberrant salience and hostility attribution in healthy individuals with psychotic-like experiences), whereas other biases are currently insufficiently supported by quality evidence. Future quality studies, particularly with clinical populations with psychotic symptoms, are still warranted to ascertain the psychosis-cognitive bias link for specific biases.
Citation: Samson C, Livet A, Gilker A, Potvin S, Sicard V, Lecomte T (2024) Reasoning and interpretation cognitive biases related to psychotic characteristics: An umbrella-review. PLoS ONE 19(12): e0314965. https://doi.org/10.1371/journal.pone.0314965
Editor: Veysel Temel, Karamanoglu Mehmetbey University: Karamanoglu Mehmetbey Universitesi, TÜRKIYE
Received: July 18, 2024; Accepted: November 20, 2024; Published: December 27, 2024
Copyright: © 2024 Samson et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
According to several theoretical models, cognitive biases, particularly reasoning and interpretation biases, are essential in psychotic symptom development and maintenance. A recent systematic review by Gaweda, encompassing 40 years of research on cognitive biases, describes several theories linking specific cognitive biases to psychotic symptoms or phenomena [1]. For instance, Freeman links reasoning biases to delusions in general, and persecutory delusions in particular [2], whereas Bentall has included attributional styles in his model of persecutory delusion formation [3]. These models, as well as the wealth of studies, including systematic reviews, on cognitive biases as linked to delusions, lead us to take for granted that the quality of the evidence is solid and that the effect size is substantial. Yet, a recent meta-analysis explored the link between childhood trauma and cognitive biases in those with psychosis [4] and concluded that the quality of the evidence of the studies was currently insufficient to determine which biases emerge following trauma, and whether they act as mediators in the relationship between childhood trauma and psychosis. To date, and to our knowledge, there is no umbrella review (i.e., a systematic review of meta-analyses) reporting the quality of the evidence and summarizing the effect sizes of studies of interpretation and reasoning cognitive biases in psychosis, either linking biases to psychotic characteristics or looking at the effects of psychological interventions on cognitive biases. While systematic reviews or meta-analyses summarize the information found in individual studies, an umbrella review looks at the quality of the evidence across multiple meta-analyses and can help answer the question "How sure are we about this link or result?". As such, the objective of this umbrella review is to examine the quality of the evidence, as well as the size of the association between interpretation and/or reasoning cognitive biases and psychotic characteristics (psychotic disorders, psychotic symptoms, psychotic-like experiences, or psychosis risk), and the effect of psychological interventions on changes in cognitive biases.
Why only focus on interpretation and reasoning biases? Beck [5] defined cognitive biases as systematic errors in thinking underlying psychopathology. Beck and Clark [6] proposed a three-stage model of information processing: 1) perception and attention, 2) interpretation and 3) recall. Rector and Beck [7] later conceptualized a cognitive explanation for delusions, auditory hallucinations, negative symptoms and thought disorders using the framework of cognitive biases. According to them, cognitive biases act in concert with the content of the individual’s belief system, increasing the individual’s psychological vulnerability to delusions. This model greatly influenced the development of cognitive behaviour therapy for psychosis [8], as well as metacognitive training (MCT [9]).
Many authors, such as Mathews and MacLeod [10], mention three categories of cognitive biases: attentional biases, interpretation biases and memory biases (or recall biases), reflecting Beck and Clark's [6] three-stage model of information processing. It is important to consider that today, most authors distinguish reasoning biases from interpretation biases, reflecting Blanchette and Richards' [11] differentiation of these complex cognitive processes (interpretation, reasoning, judgment and decision-making). Over the years, several published studies on cognitive biases in psychosis have partly supported this conceptualization, enabling several systematic reviews and meta-analyses. Nevertheless, these reviews do not all cover the same cognitive biases or report the same studies, making it difficult to ascertain whether the cognitive models linking biases to psychosis hold true across multiple biases. Although a recent systematic review has attempted to include many cognitive biases measured via performance tasks [1], and therefore offers a more comprehensive review of the topic with possible links between several cognitive biases and symptoms, it was not a meta-analysis and therefore did not offer aggregated effect sizes, nor did it assess the quality of the studies. Still, these reviews have in common that they have mostly documented reasoning and interpretation biases.
Therefore, in order to cover the recent literature broadly while keeping a focus on the biases that have been studied most in relation to psychotic symptom development, maintenance and treatment, we chose to focus only on interpretation and reasoning biases in this umbrella review.
Materials and methods
Literature search strategy
This systematic review adhered to the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [12] and was initiated on October 26, 2021, with no time span specified regarding the initial date of publication. An updated search was conducted on June 25th 2024 and revealed three additional meta-analyses: two did not fit the inclusion criteria and one was added. The following four databases were systematically searched: PsyNet (PsycINFO and PsycARTICLES), Medline and Web of Science. The following keywords were used: (schizophren* or psychotic or psychosis or schizotyp* or “psychotic like” or “psychosis like” or “psychosis risk” or “psychotic risk” or paranoia or hallucination* or delusion*) and (bias* or distortion or “dysfunctional attitude” or “dysfunctional thought*” or “dysfunctional thinking” or distortion* or “distorted attitude*” or “distorted thinking” or “distorted thought*”) and (“meta-analysis”).
Inclusion and exclusion criteria
Decisions to include or exclude reviews were made by consensus among the authors.
Inclusion criteria
- We included meta-analyses that investigate the association between reasoning or interpretative cognitive biases and psychotic features. Specifically, studies were selected if they:
- Compare cognitive bias scores between individuals with psychotic disorders, psychotic symptoms, psychotic-like experiences, or psychosis risk and a control group.
- Examine correlations between reasoning or interpretative cognitive bias and symptom levels.
- Assess the impact of psychological interventions on cognitive biases.
- To further define the scope, included meta-analyses met these additional criteria:
- Meta-analyses containing systematic reviews.
- Full-text peer-reviewed publications in either English or French, with books and conference abstracts excluded.
- Studies involving adult populations, defined as those with a mean participant age over 18 years.
- Analogue studies (i.e., experimental designs in which the procedures or participants used are similar but not identical to the situation of interest) in which participants did not have psychotic symptoms but could have other psychotic-like experiences, were at risk for psychosis or were rated on a psychotic-like experiences scale were included.
Exclusion criteria
- Meta-analyses about emotion recognition were excluded as these are largely considered a social cognitive deficit rather than a cognitive bias [13–16].
- Aggression bias was excluded because it refers more to a score of behavioral intention of aggressiveness than to a cognitive bias.
- Need for closure was also excluded, since it refers to the motivation to achieve finality and absoluteness in decisions, judgments, and choices, often prematurely, and is typically considered either an underlying motivation that can explain the jumping to conclusions bias [17] or a personality trait [17, 18].
Data extraction
Information on nine factors was retrieved when available, namely: 1) study design, 2) outcomes (e.g., type of cognitive bias, psychotic characteristic studied), 3) effect size, 4) confidence interval, 5) consistency (homogeneity, I squared, Cochran’s Q statistic), 6) number of studies, 7) number of participants, 8) publication bias and 9) consideration of confounding factors (e.g., age, sex, IQ).
Conclusions about effect sizes
The magnitude of the relation between a cognitive bias and a psychotic feature or symptom, as well as the effects of psychological interventions on cognitive biases, was determined based on the effect size estimated by the investigators [19]. In the current umbrella-review, effect size estimates were calculated using Cohen’s d, Hedges’ g, Pearson’s correlation coefficients or odds ratios. By convention [20], d/g estimates of 0.8, 0.55 and 0.2 were considered large, moderate and small, respectively. Specifications about conclusions on effect sizes, including r-value estimates, are presented in the supplementary material (S1 Table).
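For illustration, the short sketch below is a reader-side aid rather than code from the original study; it shows how the conventions above, together with the thresholds reported in the S1 and S3 Table notes, translate a numeric estimate into the verbal labels used throughout this review. The function name and structure are hypothetical.

```python
# A minimal sketch (not the authors' code) mapping effect size estimates onto the
# verbal labels used in this umbrella review, following the S1/S3 Table notes.

def label_effect(value, metric="d"):
    """Return the verbal label for an effect size estimate.

    metric: "d"/"g" for Cohen's d or Hedges' g, "r" for Pearson correlations,
            "or" for odds ratios. Thresholds follow the S3 Table note.
    """
    if metric in ("d", "g"):
        v = abs(value)
        cuts = [(0.2, "no effect or very small"), (0.3, "small"),
                (0.45, "small-medium"), (0.55, "medium"),
                (0.75, "medium-large"), (1.0, "large")]
        top = "very large"
    elif metric == "r":
        v = abs(value)
        cuts = [(0.1, "no effect or very small"), (0.2, "small"),
                (0.3, "medium"), (0.4, "medium-large"), (0.5, "large")]
        top = "very large"
    elif metric == "or":
        v = value  # odds ratios are not folded around zero
        cuts = [(1.0, "no effect or very small"), (1.25, "small"),
                (1.5, "medium"), (2.5, "medium-large")]
        top = "large"  # the S3 Table note defines OR categories only up to < 10
    else:
        raise ValueError("unknown metric")
    for cut, label in cuts:
        if v < cut:
            return label
    return top

print(label_effect(0.60, "d"))   # -> "medium-large"
print(label_effect(0.35, "r"))   # -> "medium-large"
```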
Quality assessment using the GRADE system
The GRADE system was used to assess the quality of the evidence following the data extraction [21]. Exact criteria used for the Quality assessment using the GRADE system are presented in supplementary material (S2 Table). According to this assessment system, the quality of the evidence of the meta-analyses’ results can be judged based on various factors, namely: 1) the size of the sample (the larger the better, ideally over 1000); 2) the precision of effects (i.e., the confidence interval is not too wide—as in Matheson and colleagues [22], 25% higher or lower than the effect size was considered ideal); 3) homogeneity of effects across studies (i.e., consistency of results from one study to the next), and 4) publication bias (if analyzed and if present). We checked for follow-up data (if any and the length of time) for meta-analyses regarding the effects of psychological treatments on cognitive biases, but not for the studies on the relation between biases and psychotic characteristics.
We also added a specific section for confounding factors considered, which could include controlling for study biases, trial quality, as well as other variables that could influence the results. Following the recommended guidelines, no points were given for the effect size. As such, a point is given for each element of the GRADE system measured (e.g., sample size, precision, consistency, publication bias and confounding factors; and follow-up data in the case of trials), with meta-analyses being rated from poor, poor to moderate, moderate, moderate to high, high-quality evidence, to very high-quality evidence, for a maximum of 6 points. If information was missing in the article, the authors of the meta-analysis were contacted; if they did not respond within two months, we considered the information missing and no point was given for this factor. For each meta-analysis, four of the authors rated the different components with the GRADE system. Differences were discussed and a consensus was reached for a final decision. Given the stringent criteria involved, consensus was easily reached (over 95% initial agreement). The authors of this article have already used the GRADE system in previous umbrella and meta-reviews [23, 24].
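As a reader-side illustration (hypothetical code, not the authors' scoring tool), the sketch below tallies GRADE-style points as described above and maps the total onto the verbal quality labels given in the S3 Table note; the point values for sample size, precision, consistency, publication bias, confounding factors and follow-up data follow that note.

```python
# Minimal sketch of the point-based GRADE-style rating described in this section
# (an assumption for illustration; thresholds follow the S3 Table note).

def grade_points(n_total, ci_within_25pct, homogeneous, pub_bias_absent,
                 confounders_verified, followup_months=None):
    pts = 0.0
    # 1) sample size: < 500 = 0, 500 to < 1000 = 0.5, >= 1000 = 1
    pts += 1.0 if n_total >= 1000 else 0.5 if n_total >= 500 else 0.0
    # 2) precision: CI no wider than 25% above/below the effect estimate
    pts += 1.0 if ci_within_25pct else 0.0
    # 3) consistency: I2 < 30% or a non-significant Cochran's Q
    pts += 1.0 if homogeneous else 0.0
    # 4) publication bias: verified and absent
    pts += 1.0 if pub_bias_absent else 0.0
    # 5) confounding factors considered
    pts += 1.0 if confounders_verified else 0.0
    # 6) follow-up data (intervention trials only)
    if followup_months is not None:
        pts += 1.0 if followup_months >= 6 else 0.5 if followup_months > 0 else 0.0
    return pts


def grade_label(points):
    # Mapping from the S3 Table note, where 4-6 points correspond to "high"
    if points < 1:
        return "poor"
    if points < 2:
        return "poor to moderate"
    if points < 3:
        return "moderate"
    if points < 4:
        return "moderate to high"
    return "high"


# Example: large pooled sample, precise and consistent results, publication bias
# verified and absent, confounders considered, no follow-up data reported.
print(grade_label(grade_points(1200, True, True, True, True)))  # -> "high"
```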
Results
Included and excluded studies
Overall, our search generated 1753 potential articles, from which 748 duplicates were removed, 848 articles were excluded based on titles and 69 were excluded based on abstracts. Based on full-text reading, 62 articles were excluded because they were not linked to interpretation or reasoning cognitive biases or did not report statistics specific to cognitive biases (e.g., some studies reported a general score of social cognition), six articles were excluded because they were not meta-analyses, one was removed because participants did not have any psychotic features, and three were removed because of substantial overlap in studies with another selected meta-analysis. Overall, 128 results were extracted and graded from the 16 remaining meta-analyses (Fig 1).
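As a quick arithmetic check, the screening flow reported above can be reproduced as follows (a reader-side sketch of the stated counts, not part of the original analysis).

```python
# Reader-side check of the screening counts reported above (PRISMA flow, Fig 1).
identified = 1753
screened = identified - 748            # 1005 records after duplicate removal
after_titles = screened - 848          # 157 records after title screening
full_texts = after_titles - 69         # 88 full texts assessed
excluded_full_text = 62 + 6 + 1 + 3    # off-topic, not meta-analyses, no psychotic features, overlap
included = full_texts - excluded_full_text
print(included)                        # -> 16 meta-analyses, from which 128 results were graded
```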
Nomenclature.
In order to extract the results, a careful analysis of the definitions, terms and measures used to assess the cognitive biases was conducted. This led to Table 1, which synthesizes the concepts measured in the meta-analyses under a single nomenclature, enabling us to aggregate studies measuring the same concepts that might have used different terminologies.
Evidence for cognitive biases studied in relation to psychotic characteristics
The full results of the GRADE analysis are presented in supplementary material (S3 Table). Reasoning and interpretation biases can be broken down into smaller concepts, with specific biases studied. In Fig 2, we have attempted to synthesize the cognitive biases studied in meta-analyses under these two categories (reasoning and interpretation biases). As can be seen, attributional biases are a subtype of interpretation biases and can be broken down into externalisation of cognitive events, or source monitoring (with positive, neutral or negative stimulus valence), a mix of specific attributional biases, self-serving bias, personalizing bias, hostility attribution bias, and aberrant salience. Some authors also studied specific attributional biases regrouped together. Reasoning biases comprise two larger categories: jumping to conclusions (JTC) and/or data gathering bias, and belief inflexibility, which is broken down into bias against disconfirmatory evidence (BADE), bias against confirmatory evidence (BACE), and liberal acceptance (LA) (Fig 2).
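For readers who prefer a compact view, the grouping described above and shown in Fig 2 can be summarized as a simple nested structure (illustrative only; the labels follow the text).

```python
# Illustrative nested mapping of the bias taxonomy summarized in Fig 2.
BIAS_TAXONOMY = {
    "interpretation biases": {
        "attributional biases": [
            "externalisation of cognitive events (source monitoring; positive, neutral or negative valence)",
            "mix of specific attributional biases",
            "self-serving bias",
            "personalizing bias",
            "hostility attribution bias",
            "aberrant salience",
        ],
    },
    "reasoning biases": {
        "jumping to conclusions (JTC) / data-gathering bias": [],
        "belief inflexibility": [
            "bias against disconfirmatory evidence (BADE)",
            "bias against confirmatory evidence (BACE)",
            "liberal acceptance (LA)",
        ],
    },
}

# Example: list the reasoning biases grouped under belief inflexibility.
print(BIAS_TAXONOMY["reasoning biases"]["belief inflexibility"])
```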
Interpretation biases.
One meta-analysis [29] addressed different interpretation biases together. Large to very large effect sizes were found for individuals with clinical and subclinical paranoia versus controls, with moderate quality evidence. Small to small-medium effect sizes were found for the correlation between interpretation biases and the severity of paranoid symptoms in clinical and non-clinical populations, with moderate to moderate-high quality evidence (overall). Sample sizes were acceptable and confounding variables were addressed. Half of the analyses had consistent results, and the other half received the point for the verification and absence of publication bias.
Subgroup comparisons of individuals with paranoia versus without paranoia (in clinical or non-clinical populations only) and associations of interpretation biases with paranoia severity showed small-medium and very large effect sizes, respectively, with poor to poor-to-moderate quality evidence, mostly because results were imprecise, sample sizes were too small, publication biases were not reported and results were inconsistent across studies.
Attributional biases. Two meta-analyses [30, 31] addressed attributional biases in general, without differentiating them. The effect sizes ranged from no effect or very small to medium-large for the association of attributional biases with disorganization, thought disorder, alogia [30] and reality distortion [31]. These studies had overall poor-quality evidence since all of them were imprecise, had small sample sizes, and publication bias was not reported. Most of the results were also inconsistent, and confounding variables were not addressed.
One meta-analysis [33] addressed a mix of four specific attributional biases (external-personal attribution, personalizing bias, internal attribution for negative events, externalizing bias) with the aim of evaluating the tendency to hold others responsible for negative events. A medium effect size was found for the comparison between individuals with psychosis and persecutory delusions versus healthy controls, with moderate quality evidence. A small effect size was found for the association between attributional biases and paranoia severity in psychosis, with moderate quality evidence. These analyses had acceptable sample sizes, addressed confounding variables, and publication biases were reported and absent.
Other comparisons (individuals with psychosis and persecutory delusions versus individuals with depression; individuals with psychosis and persecutory delusions versus individuals with psychosis without persecutory delusions) showed small-medium to very large effect sizes but the quality of evidence was poor to moderate. The results were imprecise, inconsistent and sample sizes were too small.
Externalisation of cognitive events. Only one meta-analysis [32] addressed attributional biases related to the externalization of cognitive events. A medium-large effect size was found for clinical populations with hallucinations and/or non-clinical hallucinations-prone populations compared to clinical populations without hallucinations and/or non-clinical populations not prone to hallucinations, with moderate-high-quality evidence. Results were consistent, sample size was acceptable, confounding variables were considered and publication bias was reported and absent. Sub-analyses by population groups (clinical only or subclinical only) showed medium-large to large effect sizes, but the quality evidence was poor to moderate. Results were not precise, sample sizes were too small, publication bias was present or not reported. Sub-analyses by stimulus valence (positive, negative or neutral) showed small to medium-large to large effect sizes, but the quality evidence was also poor. Results were not precise, inconsistent, sample sizes were too small and publication biases were not reported.
Self-serving bias/externalizing bias. Two meta-analyses specifically addressed self-serving biases [34, 36]. When comparing individuals with persecutory delusions to individuals with remitted persecutory delusions [34], there was a small effect size with moderate-high-quality evidence. A small-medium effect size was found for people with persecutory delusions when compared to non-clinical controls, with moderate quality evidence [34], and a very large effect size for schizophrenia spectrum disorders versus major depression, with moderate quality evidence [34]. Confounding variables were considered, publication biases were verified and absent, most of the sample sizes were acceptable or good. Moreover, comparisons between persecutory delusions versus remitted persecutory delusions showed precise and consistent results.
Comparison between individuals with schizophrenia and controls showed no effect or very-small effect size, with moderate quality evidence [34]. Sample size was good, confounding variables were considered, publication bias was verified and absent. However, results were not precise and were inconsistent. Other comparisons between groups (persecutory delusions versus schizophrenia spectrum disorders without persecutory delusions; persecutory delusions versus remitted persecutory delusion) [34] showed medium-large to large effect sizes, but the quality evidence was poor-to-moderate. The association of self-serving bias with measures of positive psychotic-like experiences in healthy samples and in ultra-high-risk samples, in healthy samples only or in ultra-high-risk samples only, or with measures of negative psychotic-like experiences in healthy samples only [36] showed medium-large effect sizes, and the quality of evidence was poor to poor-to-moderate. Results were inconsistent, confounding variables were not considered, and most of the results were imprecise.
Regarding the type of measures used for the self-serving bias, a large effect size was found for persecutory delusions compared to non-clinical controls, but only when measures other than the Attributional Style Questionnaire (ASQ) [50], the Internal Personal and Situational Attribution Questionnaire (IPSAQ) [66] and investigator ratings were used [34]. Quality of evidence was moderate because the sample size was acceptable, confounding variables were considered and publication bias was verified and absent. However, results were not precise and were inconsistent.
Analyses using the ASQ, the IPSAQ or investigator ratings for the same group comparisons showed no effect or very small effect sizes, and/or did not have good enough quality evidence to conclude about the results [34]. A medium-large effect size was found for persecutory delusions versus non-clinical controls when measured with the ASQ, with poor to moderate quality evidence. Results were not precise and were inconsistent, and publication bias was present. When measured with the IPSAQ, results showed no effect or a very small effect size, with moderate quality evidence. The sample size was good, confounding variables were considered and publication bias was verified and absent. However, results were not precise and were inconsistent. When measured with investigator ratings based on utterances, results showed no effect or very small effect sizes, with poor quality evidence. Results were not precise and inconsistent, the sample size was too small and publication bias was present.
Personalizing bias. Two meta-analyses [35, 36] specifically addressed the personalizing bias. Quality evidence for these results was insufficient to conclude on the effect. For instance, the comparison of individuals with schizophrenia versus controls [35] showed no effect or very-small effect sizes with poor quality evidence based on the results being imprecise, inconsistent, sample size too small and publication bias not reported.
Effect sizes ranged from no effect or very small to large for the association between the personalizing bias and measures of positive psychotic-like experiences in healthy samples and ultra-high-risk samples; in healthy samples only; and in ultra-high-risk samples only [36]. The reported results showed poor quality evidence, with confounding variables not reported and, for most analyses, imprecise results and unreported publication bias.
Hostility attribution bias. One meta-analysis [36] specifically addressed the hostility attribution bias. A medium effect size was found for the association with measures of negative psychotic-like experiences in healthy samples only, with moderate quality evidence. Results were precise and consistent, and the sample size was good. Other analyses (association with measures of positive psychotic-like experiences in all studies, in healthy samples only and in ultra-high-risk samples only) showed small-medium to very large effect sizes, but quality evidence was poor-to-moderate since confounding variables were not verified. Most of the results were inconsistent, and publication biases were not reported for most of the analyses.
Aberrant salience. One meta-analysis [36] addressed the Aberrant Salience bias. Association with positive psychotic-like experiences in healthy samples showed a very-large effect size, with moderate quality evidence. Results were precise, sample size was good and publication bias was verified and absent. Association with negative psychotic-like experiences in healthy samples showed a small effect size, with poor quality evidence. Results were not precise, inconsistent, confounding variables and publication bias were not reported.
Reasoning biases.
Belief inflexibility. Two meta-analyses [36, 37] addressed belief inflexibility. A medium effect size was found for the relationship between belief inflexibility and global delusion severity in patients with psychotic disorders (patients with or without delusions), and a medium-large effect size was found for the relationship between belief inflexibility and delusional conviction in patients with psychotic disorders (patients with delusions only) [37], with moderate-high-quality evidence. Results were consistent, sample sizes were acceptable, confounding variables were considered and publication biases were verified and absent.
With moderate quality evidence, there was a medium effect size for the relationship between belief inflexibility and global delusion severity in patients with psychotic disorders (only participants with active delusions), a small effect size for the relationship between belief inflexibility and delusional preoccupation in patients with psychotic disorders (patients with delusions only) and a small effect size for the relationship between belief inflexibility and delusional distress in patients with psychotic disorders (patients with delusions only) [37]. Results were consistent, sample sizes were acceptable and confounding variables were considered.
When looking at analogue studies [36], the association of belief inflexibility with measures of positive psychotic-like experiences showed small effect sizes with poor quality evidence. The same effect size and quality evidence was shown for the healthy sample subgroup.
The association with measures of negative psychotic-like experiences in healthy samples only [36] also showed small effect sizes with poor quality evidence. Results were imprecise, inconsistent, and publication bias and confounding variables were not reported.
Regarding measures used for belief inflexibility, subgroup analyses for specific tasks showed no effect or very small to large effect sizes for interview-based measures and small-medium effect sizes for Bias against disconfirmatory evidence tasks (BADE-task) [83], with quality evidence from poor to poor-to-moderate [37]. Results were not precise, inconsistent, sample sizes were too small for half of the analyses and publication biases were not reported.
Bias against disconfirmatory evidence. One meta-analysis [38] specifically addressed the bias against disconfirmatory evidence. Comparisons (schizophrenia with current delusions versus healthy controls; schizophrenia with current delusions versus schizophrenia without current delusions; schizophrenia without current delusions versus healthy controls; schizophrenia with current delusions versus other psychiatric illnesses without current delusions) showed small-medium to medium-large effect sizes, with poor to poor-to-moderate quality evidence. Results were imprecise, sample sizes were too small and confounding variables were not considered. Half of the analyses did not report publication bias.
Bias against confirmatory evidence. One meta-analysis [38] specifically addressed the bias against confirmatory evidence. Comparisons (schizophrenia with current delusions versus healthy controls; schizophrenia with current delusions versus schizophrenia without current delusions; schizophrenia without current delusions versus healthy controls; schizophrenia with current delusions compared to other psychiatric illnesses without current delusions) showed small to medium effect sizes, but the quality evidence was poor. Results were imprecise, with small sample sizes, and publication bias and confounding variables were not reported.
Liberal acceptance. One meta-analysis [38] addressed the liberal acceptance bias. All of the analyses (comparisons between schizophrenia with current delusions vs healthy controls; schizophrenia with current delusions vs schizophrenia without current delusions; schizophrenia without current delusions vs healthy controls; and schizophrenia with current delusions vs other psychiatric illnesses without current delusions) had small-medium to large effect sizes, with poor quality evidence. Results were not precise, sample sizes were too small, confounding variables and publication biases were not reported.
Jumping to conclusions (JTC) and/or data gathering bias. Five meta-analyses [36, 38–40, 42] addressed the jumping to conclusions and/or data gathering bias. A medium-large effect size was found for the comparison between individuals with psychosis versus individuals with other mental health problems [40], with moderate-high-quality evidence. The results were precise, the sample size was moderate, confounding variables were considered, and publication bias was verified. However, when only extreme responding was considered, the same comparison between individuals with psychosis versus individuals with other mental health problems showed a large effect size [40], with poor quality evidence. The results were imprecise, the sample size was small, confounding variables were considered and publication bias was not verified.
A small effect size was found for the association of JTC with Peters et al.’s Delusions Inventory [84] scores overall (i.e., in the general population, those with current delusions, with previous delusions, with anxiety or depression, who are at risk of developing psychosis, with obsessive-compulsive disorder, and those in a new religious movement, grouped together), with moderate-high-quality evidence [42]. Sample sizes were good or acceptable, confounding variables were considered, and publication bias was verified and absent. Results from the comparisons between individuals with psychosis versus individuals with other mental health problems were precise but inconsistent, whereas results for the association with Peters et al.’s Delusions Inventory scores, overall, were consistent but not precise. However, subgroup analyses (general population, current delusions, previous delusions, anxiety or depression, at-risk, obsessive-compulsive disorder, new religious movement) for the association between JTC and Peters et al.’s Delusions Inventory showed no effect or very small to medium effect sizes, but the quality evidence was poor to poor-to-moderate. Most results were imprecise and inconsistent, sample sizes were too small for most analyses and publication bias was not reported [42].
A medium effect size was found for the comparison between individuals with psychosis versus healthy individuals, and a large effect size was found for the same comparison when only extreme responding was considered, with moderate quality evidence [40]. Both sample sizes were large, confounding variables were considered and publication bias was verified. A medium-large effect size was found for the comparison between individuals with a psychotic disorder versus controls, with moderate quality evidence [39]. The sample size was large, confounding variables were considered and publication bias was verified and absent.
A medium-large effect size was present when looking at the association between extreme responding in JTC and the presence or severity of delusions in psychosis, with moderate quality evidence [40]. The results were consistent, the sample size was moderate and publication bias was verified. However, when not addressing extreme responding, results for the association of JTC with delusion severity in people with psychosis and delusions showed no effect or a very small effect size [40], with moderate quality evidence. The sample size was moderate, confounding variables were considered and publication bias was verified. A small-medium effect size was found for the comparison between schizophrenia with current delusions versus schizophrenia without current delusions [38], with moderate quality evidence. The results were consistent, the sample size was moderate and publication bias was verified. However, another meta-analysis [40], with a similar group comparison (individuals with psychosis and delusions versus individuals with psychosis without delusions), showed a small effect size, with poor to moderate quality evidence. The results were imprecise, the sample size was small and publication bias was not mentioned.
When looking at analogue studies, associations between JTC (measured with a self-report questionnaire) and measures of psychotic-like experiences (positive psychotic-like experiences in healthy and ultra-high-risk studies, and in healthy samples only; negative psychotic-like experiences in healthy samples) [36] showed no effect to small effect sizes, but quality evidence was poor, as the results were imprecise and inconsistent, confounding variables were not considered and publication biases were not reported.
Other group or subgroup comparisons studied by McLean et al. [38] (schizophrenia with current delusions versus healthy controls; other psychiatric illnesses with current delusions versus healthy controls; schizophrenia with current delusions versus other psychiatric illnesses with current delusions; schizophrenia with current delusions versus other psychiatric illnesses without current delusions; other psychiatric illnesses with current delusions versus other psychiatric illnesses without current delusions) showed small to large effect sizes, with poor to poor-to-moderate quality evidence. None of these analyses considered confounding variables, and publication bias was not reported. Other group comparisons studied by So et al. [39] (patients with schizophrenia spectrum disorder versus healthy controls; patients with delusions versus controls) showed medium-large effect sizes, with poor-to-moderate quality evidence. The results were imprecise and inconsistent, and publication bias was not reported.
Evidence for the effects of interventions on cognitive biases
In terms of the quality of the evidence for the effects of interventions on cognitive biases, psychological interventions in general [25] and Metacognitive Training (MCT) in relation to the data-gathering bias more specifically [41] have been documented in meta-analyses. When analyzed together, the effects of psychological interventions (MCT [9] and MCT adaptations, the Maudsley Review Training Program (MRTP) [85], MRTP in combination with Interpretative Bias Modification (CBM-I) [86] or its adaptation, the Thinking Well (TW) program [87], Reasoning Training (RT) [88], Cognitive Bias Correction (CBC) [89] and Cognitive Bias Modification (CBM) [90]) had small effect sizes on cognitive biases, with moderate-high overall quality evidence [25].
MCT had a small-medium effect size on jumping to conclusions (data-gathering bias), with poor overall quality evidence, since the results were imprecise and inconsistent, the sample size was small, and publication bias was not considered [41]. MCT’s effect on cognitive biases as a broad category, based on 19 studies (931 participants), showed a small effect size (g = 0.16) with moderate quality of evidence, based on good consistency, a fairly large sample size and control for confounding variables, although when only RCTs were included (14 studies, 658 participants), the effect size became non-significant and the quality of the evidence also decreased to poor [26]. Furthermore, Penney et al. compared follow-up scores (less than 1 year; 10 studies, 658 participants) to both post-intervention and pre-intervention scores, revealing either small or very small effect sizes. Similarly, a very small effect size was observed between follow-up scores (more than 1 year; 3 studies, 328 participants) and post-intervention scores, but the quality of the evidence in these instances was poor or poor to moderate [26].
Discussion
Cognitive biases and psychotic characteristics
By grouping cognitive biases studied in meta-analyses under similar concepts, we can conclude that there is good evidence for the presence of the following biases alongside psychotic characteristics.
Interpretation biases
Individuals with paranoia (clinical and subclinical grouped together) tend to show large interpretation biases when compared to controls. We also see small to small-medium correlations between the severity of paranoia and interpretation biases [29]. This concurs with existing theoretical models (e.g., Freeman [2]).
Attributional biases. Two meta-analyses that studied multiple attributional biases grouped together [30, 31] found no effect to medium-large effect sizes, with insufficient quality evidence. However, Murphy et al. [33] showed that individuals with persecutory delusions had a greater tendency for a mix of specific attributional biases (external-personal attribution, personalizing bias, internal attribution for negative events, externalizing bias), with a medium effect size and moderate quality of evidence. These contradictory results could be an artifact of studies looking at several biases together, given the amalgam of different concepts and measures, and often small sample sizes.
Externalisation of cognitive events. Individuals with hallucinations and hallucination-prone individuals (clinical and non-clinical grouped together) have a greater tendency (effect size was medium-large) to misattribute internally generated cognitive events to external sources [32]. Although sub-analyses on stimuli valence were inconclusive and more quality studies are warranted, the current results seem congruent with the literature, whereby negative situations are often attributed to external causes to protect self-esteem [34].
Self-serving bias/Externalizing bias. Individuals with schizophrenia-spectrum disorders with persecutory delusions demonstrate a stronger self-serving bias than non-clinical controls, and than individuals with schizophrenia-spectrum disorders with remitted persecutory delusions (with small to small-medium effect sizes). There is no effect or a very small effect size when comparing individuals with schizophrenia (with and without persecutory delusions) and controls, suggesting that the self-serving bias may be specific to persecutory delusions.
However, these results should be interpreted with caution, since sub-analyses reveal an absence of effect when using the ASQ [50] or IPSAQ [66], both widely used questionnaires for this bias. It is unclear at this moment if the other measures used are flawed, or measure the concept differently, or if the IPSAQ and ASQ have methodological issues, but studies using other measures report large effect sizes. Results also show that individuals with schizophrenia spectrum disorders have a greater tendency for this type of attribution compared to individuals with major depression, with a very large effect size. This is consistent with the literature on major depression, as depressed individuals lack the self-serving bias often found with controls [91].
Personalizing bias. The evidence was too poor to conclude on the relation between the personalizing bias and schizophrenia (with no effect or very small effect size) [35] and positive psychotic-like experiences in healthy samples and ultra-high risk samples (with no effect or very small effect sizes to large effect sizes) [36]. This bias was only measured with the IPSAQ. None of the meta-analyses in our review addressed the specific relation between the personalizing bias and paranoia or the severity of paranoia within clinical populations.
Hostility attribution bias. Hostility attribution bias was associated with negative psychotic-like experiences in healthy samples (with a medium effect size), but quality evidence was poor-to-moderate [36]. Although these results are consistent with Rector and Beck’s theory [92], which suggests that negative symptoms of psychosis may be linked to social aversion and defeatist attitudes, the limited quality of evidence, the restricted sample (comprising only individuals with positive psychotic-like experiences in healthy or ultra-high-risk groups), and the lack of meta-analyses addressing this specific attributional bias in individuals with psychotic disorders collectively constrain our ability to confirm the role of hostility attribution bias in psychotic symptoms. The hostility attribution literature in schizophrenia and psychosis appears limited at this point, with a paucity of studies, mostly from the same team, suggesting links between hostility attribution bias and suspiciousness [93, 94].
Aberrant salience bias. Aberrant Salience bias was associated with positive psychotic-like experiences in healthy samples (with a very-large effect size), but with poor quality evidence. No meta-analysis in our review addressed this cognitive bias in individuals with psychotic disorders. This was particularly surprising as the aberrant salience literature has exploded, with over 200 studies in the past decade alone. For instance, a recent study [95] found a strong link between the aberrant salience bias and delusions, specifically ideas of reference. A meta-analysis on aberrant salience bias in psychosis is definitely warranted.
Reasoning bias. Belief inflexibility bias, bias against disconfirmatory evidence, bias against confirmatory evidence, and liberal acceptance bias. Belief inflexibility is associated with global delusion severity in patients with psychotic disorders and does not seem specific to delusions. It is also associated with delusion conviction, delusional preoccupation and delusional distress. Belief inflexibility biases show a medium effect size on global severity of delusions and a medium-large effect size on delusional conviction, but only a small effect on delusional preoccupation and delusional distress, suggesting that belief inflexibility brings more conviction in beliefs, but not much more distress about them. These studies, however, did not look at the content of the delusions, which might further explain this result, as paranoid delusions and grandiose delusions can trigger different levels of distress. Regarding studies on healthy and ultra-high-risk samples, the quality of the evidence is too poor at the moment to conclude.
As mentioned by Zhu and colleagues [37], the choice of measure can greatly influence the results here, as some studies used items from the Maudsley Assessment of Delusions Schedule (MADS [81]) that overlap with measures of psychotic symptoms and may thus increase the effect size, particularly when compared to studies where belief inflexibility is measured with a task unrelated to the individual’s delusions (e.g., the BADE task [83]). As such, we found a large effect size between delusional conviction and MADS subscales but only a small-medium effect size between delusional conviction and BADE tasks, although the quality of evidence was insufficient to clearly conclude about the effects of specific tasks on the results.
Regarding the bias against disconfirmatory evidence [37, 38], insufficient overall quality evidence limits our ability to confirm or refute the link with psychotic symptoms.
Overall quality evidence was also insufficient regarding the Bias against confirmatory evidence and the Liberal acceptance bias, both measured with scores derived from the BADE task. Methodologically sound studies using the BADE task, perhaps using emotion-triggering as well as neutral information, are warranted before we can conclude if this measure captures these reasoning biases well in individuals with psychotic characteristics.
Jumping to conclusions (JTC) and/or data gathering bias. Individuals with psychosis show a larger jumping to conclusions bias (Data-Gathering bias) than healthy individuals, both when comparing numbers of draws to decision and when considering only extreme answers (i.e. deciding after 1 or 2 beads), with medium to large effect sizes [40] and moderate to high quality evidence (Table 2). Individuals with psychosis also show a larger JTC (Data-Gathering) bias than individuals with other mental health problems, when comparing number of draws before making a decision, with a medium-large effect size [39, 40].
Only extreme responding in JTC (i.e., deciding after a single or two beads/fish) is associated with the presence or severity of delusions in psychosis, with a medium-large effect size [40]. It is not clear however if extreme responding reveals more a lack of understanding of the task, a ‘need for closure’ (motivation to achieve absoluteness in judgements) [17], greater impulsivity or a stronger reasoning bias [40].
Comparisons between individuals with schizophrenia or a psychotic disorder with delusions versus those without delusions show a small effect size, with high quality evidence, which appears to support the idea that JTC might measure proneness to delusions but not necessarily delusions themselves [96].
When using Peters et al.’s Delusions Inventory [84] to assess delusional thinking, JTC biases were found in people from the general population, individuals with current delusions, individuals with previous delusions, individuals with anxiety or depressive disorders, individuals at ultra-high risk for psychosis, individuals with obsessive-compulsive disorder and individuals in a new religious movement, with small effect sizes. There is no evidence that JTC, as measured with a self-report questionnaire, is related to positive or negative psychotic-like experiences in healthy and ultra-high-risk individuals [36]. Validation studies of the Cognitive Bias Questionnaire [56] do suggest greater biases, including the JTC bias, in people with psychosis when compared to non-clinical controls, but do not show convergent validity with JTC or other cognitive bias tasks. A meta-analysis that addresses JTC using self-report questionnaires in clinical populations is required.
Effects of psychological interventions on cognitive biases
Regarding the effects of psychological interventions on cognitive biases, we found moderate-high-quality evidence and a small effect size when grouping together interventions targeting cognitive biases (Table 3). MCT was the intervention used in the majority of studies in Sauve et al.’s meta-analysis [25] and in all of the studies in Penney et al.’s meta-analysis [26]. When looking at the effects of MCT on the data-gathering bias only, the quality of evidence is weaker, given that the results were imprecise and inconsistent, the sample size was small and publication bias was not considered. However, these results were part of a larger meta-analysis on the effects of MCT on cognition, which explains the small sample size in the sub-analyses and why publication bias was not verified in this particular analysis. Overall, we find a small effect size on cognitive biases, with moderate quality of evidence overall, but no effect and poor or poor-to-moderate quality of evidence when including follow-up data or only high-quality studies. This might appear surprising, as meta-analyses show that MCT improves psychotic symptoms (small-medium effect size), yet the cognitive biases that are theorized, and targeted by the training, to improve these symptoms show only a small effect size in terms of improvement. Are we faced with a measurement-sensitivity issue, whereby changes in biases are not detected by the measures used in the trials, or are other mediating variables at play? Future studies need to explore this question, as the theoretical underpinnings of MCT could otherwise be questioned.
It is also important to mention that no meta-analysis specifically addressing the effects of cognitive behaviour therapy for psychosis (CBTp) on cognitive biases was found in our review. CBTp is a cognitively oriented intervention that targets cognitive biases in order to modify beliefs underlying hallucinations and delusions [97]. CBTp is a well-recognized evidence-based treatment [98], considered effective in reducing psychotic (positive and negative) symptoms as well as depressive and social anxiety symptoms, and in improving functioning and mood [99].
Relations between cognitive biases
While this umbrella-review investigated the associations between cognitive biases and psychotic characteristics, it does not allow us to extrapolate on the relationships between cognitive biases, as was theoretically hypothesized in a recent review [1]. Several cognitive models of psychopathology suggest roles for more than one cognitive bias in the maintenance of different disorders, with bidirectional effects. This phenomenon has been named the combined effect of cognitive biases hypothesis [101].
For instance, Broyd et al.’s [102] model on the formation and maintenance of delusional beliefs suggests that some cognitive biases (and other cognitive factors) would be related to the formation of delusions, whereas others would be more related to the maintenance of delusions. Cognitive factors such as JTC (which, according to the authors, would be mediated by the liberal acceptance bias and by salience) would interfere with the individual’s ability to process the information encountered using a “top-down” process, i.e., by considering the information already known to interpret new stimuli. A study using a Bayesian mathematical model suggested that not using top-down processes prevents new information from being appropriately integrated [103]. Other cognitive vulnerabilities, such as difficulties related to metamemory [104] (i.e., metacognitive knowledge and processes that necessitate memory [105]), confirmation bias and BADE, would prevent the person from invalidating their delusional beliefs. It would be relevant to further clarify which cognitive biases are linked to the formation and/or maintenance of delusions, since this could allow us to adjust cognitive interventions according to the specific phases of psychosis (e.g., prodromal/at-risk phase or residual phase).
Limitations
Our umbrella-review is limited by the meta-analyses it included. Several reviews reported inconsistent results. This could be partly explained by the lack of consensus in the literature regarding the nomenclature, the classification and the measures used regarding cognitive biases. As such, cognitive bias names are used by different authors to designate different concepts. The term “externalizing bias” for instance is used by Brookwell et al. [32] to designate “misattribution of internally generated cognitive events to an external source”, whatever their valence (neutral, positive and negative), by Savla et al. [35] and Livet et al. [36] to designate “attributing external causes to negative situations and/or attributing internal causes to positive situations”, and by Murphy et al. [33] to designate a group of attributional biases (including the external-personal attribution score, the personalizing bias, the internality attribution score for negative events and the externalizing bias score). Several other small distinctions between concepts and nomenclature can be found across the meta-analyses used in our review. The plethora of different measures for the same concepts can also explain the important discrepancies in effect sizes across studies.
There are likely multiple studies on specific cognitive biases in psychosis or schizophrenia that were not described here because they were not included in a meta-analysis. For instance, we did not find meta-analyses on other specific interpretation or reasoning biases such as the catastrophizing bias, the dichotomous thinking bias or the emotion-based reasoning bias [56]. This could be explained by the paucity of individual studies on some of these biases in psychosis, as well as the few well-validated instruments available to measure them. However, there are some individual studies [106, 107] reporting results on the catastrophizing bias, using the catastrophizing interview procedure [108], as well as studies using the catastrophizing scale of Peters et al.’s Cognitive Bias Questionnaire for psychosis [56]; these have yet to be included in a meta-analysis. We did not find meta-analyses regarding cognitive biases in people with substance-induced psychosis, nor meta-analyses looking at CBTp’s effects on cognitive biases. Furthermore, meta-analyses on cognitive biases and psychotic features measured longitudinally were absent from our search. Moreover, other cognitive biases (attention to threat, aberrant salience) were only reported in Livet et al. [36] within analogue studies involving individuals with psychotic experiences, and have not been reviewed in meta-analyses that include clinical populations. Finally, results specifically on the personalizing bias were also only reported in Livet et al. [36] in analogue studies, and in Savla et al. [35] as a sub-analysis of a larger study on social cognition, limiting the conclusions we can draw from the results. It would be important to address the personalizing bias in a future meta-analysis, and to compare it with results linked to the self-serving bias (often called externalizing bias) and the hostility attribution bias, to determine whether the personalizing bias or the hostility attribution bias is more strongly associated with persecutory delusions than the self-serving bias.
Strengths
An important strength of this umbrella-review is the use of the GRADE system to assess the quality of meta-analyses investigating relationships between cognitive biases and psychotic characteristics, including both healthy individuals and individuals with psychotic disorders. This enabled us to clearly identify the links with psychotic symptoms and comparisons between populations on biases supported by strong evidence, as well as document gaps in the literature and the need for future studies. This umbrella-review covered several specific biases, under two larger categories of biases, namely reasoning and interpretation biases. These biases are currently the most studied cognitive biases in schizophrenia, although a growing literature is developing on other cognitive biases such as memory and attention biases [109, 110]. Our umbrella-review was also quite thorough, carefully reporting sub-analyses as well as main analyses and considering several definitions and measures.
Conclusion
Our umbrella-review highlights the need to conduct more and better studies in order to improve the quality of evidence for certain research questions regarding cognitive biases and psychosis. As can be seen in the supplementary materials, many meta-analyses lost quality evidence points (using the GRADE system) because their results were imprecise or heterogeneous. Indeed, effect sizes are not constant across individual studies, partly because of the varied, and often small, sample sizes used to assess the same biases.
This umbrella-review aimed to examine the quality of the evidence and the size of the association between psychotic characteristics and interpretation and reasoning cognitive biases. Associations between biases and psychotic characteristics had good quality evidence with medium to large effect sizes for interpretation biases when grouped together, as well as for the externalization of cognitive events and the self-serving bias, when people with clinical or subclinical symptoms were compared to control conditions. Similar effect sizes and quality evidence were found for the belief inflexibility bias (when measured with the MADS) and for the jumping to conclusions bias when studied with data-gathering experimental methods (extreme score and number of draws to decision). The personalizing bias, the bias against disconfirmatory evidence, the bias against confirmatory evidence, and the liberal acceptance bias still require further quality research before conclusions can be drawn about their relation to psychotic characteristics. The hostility attribution bias and the aberrant salience bias need to be reviewed with clinical samples (and not just at-risk or non-clinical samples). High heterogeneity of effect sizes across studies negatively affected the overall quality ratings of most analyses, and may reflect the variety of measures used for the same constructs. Imprecise results likewise reflect the small or varying sample sizes used to study specific biases across studies. Psychological interventions targeting cognitive biases appear to have a small effect on cognitive biases, an effect that dissipates over time. New meta-analyses, on CBTp for instance, are necessary to conclude with sufficient evidence on the effects of these interventions on the targeted cognitive biases.
In conclusion, this umbrella review cautions against taking for granted the quality and strength of the evidence supporting our cognitive models of psychosis, and the mediating role of cognitive biases in psychological treatments for psychosis. There is solid evidence for some biases and links, but more rigorous studies and meta-analytic reviews are needed. A consensual nomenclature and a common selection of measures for cognitive biases would improve the quality of future systematic reviews.
Supporting information
S1 Table. Conclusions on effect sizes.
Note. d = Cohen's d, g = Hedges' g, R/RS = Pearson correlation coefficients, OR = Odds ratio.
https://doi.org/10.1371/journal.pone.0314965.s001
(DOCX)
S2 Table. Criteria for the quality assessment using the GRADE system.
Note. If I² and Q were not conclusive, Q was used for larger sample sizes.
https://doi.org/10.1371/journal.pone.0314965.s002
(DOCX)
S3 Table. Description of included studies and their statistical and GRADE characteristics.
Note. Conclusions on effect size. d/g: < 0.2 = No effect or very small; 0.2 to < 0.3 = Small; 0.3 to < 0.45 = Small-Medium; 0.45 to < 0.55 = Medium; 0.55 to < 0.75 = Medium-Large; 0.75 to < 1 = Large; > 1 = Very Large. R or RS: < 0.1 = No effect or very small; 0.1 to < 0.2 = Small; 0.2 to < 0.3 = Medium; 0.3 to < 0.4 = Medium-Large; 0.4 to < 0.5 = Large; > 0.5 = Very Large. OR: < 1 = No effect or very small; 1 to < 1.25 = Small; 1.25 to < 1.50 = Medium; 1.50 to < 2.50 = Medium-Large; 2.50 to < 10 = Large. Size of the sample: < 500 = 0 point; 500 to < 1000 = 0.5 point; > 1000 = 1 point. Precision of effects: large CIs (> 0.25 in either direction) = 0 point; tight CIs (< 0.25 in either direction) = 1 point. Homogeneity of effects across studies: I² > 30% or Q significant = 0 point; I² < 30% or Q not significant = 1 point. Follow-up data: absence of follow-up data = 0 point; presence of follow-up data (less than six months) = 0.5 point; presence of follow-up data (six months or more) = 1 point. Publication bias: not verified, not reported, or verified with presence of publication bias = 0 point; verified with absence of bias = 1 point. Confounding factors: not verified = 0 point; verified = 1 point. Overall quality (total points across all elements of the GRADE system measured): < 1 = Poor; 1 to < 2 = Poor to Moderate; 2 to < 3 = Moderate; 3 to < 4 = Moderate-High; 4 to 6 = High. A worked example applying this scoring rubric is sketched after the supporting information list below.
https://doi.org/10.1371/journal.pone.0314965.s003
(DOCX)
S5 Table. List of all screened records and reasons for exclusion.
https://doi.org/10.1371/journal.pone.0314965.s005
(XLSX)
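To make the arithmetic of the S3 Table rubric concrete, the following is a minimal sketch in Python, assuming only the thresholds stated in the table note above. It is not the authors' code: the function names (classify_effect_d, grade_points, overall_quality) are hypothetical, and the homogeneity criterion is simplified to I² alone (the note also allows Q to be used instead).

```python
from typing import Optional

# Illustrative sketch of the S3 Table scoring rubric (hypothetical names;
# not the authors' code). Thresholds follow the table note above.

def classify_effect_d(d: float) -> str:
    """Label a Cohen's d / Hedges' g effect size according to the rubric."""
    d = abs(d)
    if d < 0.2:
        return "No effect or very small"
    if d < 0.3:
        return "Small"
    if d < 0.45:
        return "Small-Medium"
    if d < 0.55:
        return "Medium"
    if d < 0.75:
        return "Medium-Large"
    if d < 1.0:
        return "Large"
    return "Very Large"

def grade_points(n: int, ci_halfwidth: float, i2: float,
                 follow_up_months: Optional[float],
                 pub_bias_checked: bool, pub_bias_absent: bool,
                 confounders_checked: bool) -> float:
    """Sum the GRADE-style points described in the S3 Table note."""
    pts = 0.0
    # Sample size: < 500 = 0; 500 to < 1000 = 0.5; 1000 or more = 1
    pts += 0.0 if n < 500 else (0.5 if n < 1000 else 1.0)
    # Precision: tight CIs (< 0.25 in either direction) earn 1 point
    pts += 1.0 if ci_halfwidth < 0.25 else 0.0
    # Homogeneity: I2 below 30% earns 1 point (simplification: Q not modelled)
    pts += 1.0 if i2 < 30 else 0.0
    # Follow-up data: none = 0; under six months = 0.5; six months or more = 1
    if follow_up_months is not None:
        pts += 1.0 if follow_up_months >= 6 else 0.5
    # Publication bias: verified and absent earns 1 point
    pts += 1.0 if (pub_bias_checked and pub_bias_absent) else 0.0
    # Confounding factors: verified earns 1 point
    pts += 1.0 if confounders_checked else 0.0
    return pts

def overall_quality(points: float) -> str:
    """Map the total number of points onto the overall quality label."""
    if points < 1:
        return "Poor"
    if points < 2:
        return "Poor to Moderate"
    if points < 3:
        return "Moderate"
    if points < 4:
        return "Moderate-High"
    return "High"

# Hypothetical example: pooled n = 750, tight CIs, I2 = 45%, no follow-up data,
# publication bias verified and absent, confounding factors not examined.
pts = grade_points(750, 0.20, 45, None, True, True, False)
print(classify_effect_d(0.6), pts, overall_quality(pts))
# -> Medium-Large 2.5 Moderate
```

Under the rubric, this hypothetical meta-analysis would classify a Hedges' g of 0.6 as Medium-Large and accumulate 2.5 points, i.e., Moderate overall quality.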
References
- 1. Gawęda Ł, Kowalski J, Aleksandrowicz A, Bagrowska P, Dąbkowska M, Pionke-Ubych R. A systematic review of performance-based assessment studies on cognitive biases in schizophrenia spectrum psychoses and clinical high-risk states: A summary of 40 years of research. Clin Psychol Rev. 2024;108:102391. https://doi.org/10.1016/j.cpr.2024.102391.
- 2. Freeman D, Garety P. Advances in understanding and treating persecutory delusions: a review. Social psychiatry and psychiatric epidemiology. 2014;49(8):1179–89. pmid:25005465
- 3. Bentall RP, Corcoran R, Howard R, Blackwood N, Kinderman P. Persecutory delusions: a review and theoretical integration. Clin Psychol Rev. 2001;21(8):1143–92. pmid:11702511
- 4. Croft J, Martin D, Madley-Dowd P, Strelchuk D, Davies J, Heron J, et al. Childhood trauma and cognitive biases associated with psychosis: A systematic review and meta-analysis. PLoS One. 2021;16(2):e0246948. pmid:33630859
- 5. Beck AT. Thinking and Depression. I. Idiosyncratic Content and Cognitive Distortions. Archives of General Psychiatry. 1963;9:324–33. 10.1001/archpsyc.1963.01720160014002. pmid:14045261
- 6. Beck AT, Clark DA. An information processing model of anxiety: automatic and strategic processes. Behav Res Ther. 1997;35(1):49–58. pmid:9009043
- 7. Rector NA, Beck AT. Cognitive therapy for schizophrenia: from conceptualization to intervention. Can J Psychiatry. 2002;47(1):39–48. pmid:11873707
- 8. Garety PA, Fowler D, Kuipers E. Cognitive-behavioral therapy for medication-resistant symptoms. Schizophr Bull. 2000;26(1):73–86. pmid:10755670
- 9. Moritz S, Woodward TS. Metacognitive training in schizophrenia: from basic research to knowledge translation and intervention. Current Opinion in Psychiatry. 2007;20(6):619–25. pmid:17921766
- 10. Mathews A, MacLeod C. Cognitive vulnerability to emotional disorders. Annu Rev Clin Psychol. 2005;1:167–95. pmid:17716086
- 11. Blanchette I, Richards A. The influence of affect on higher level cognition: A review of research on interpretation, judgement, decision making and reasoning. Psychology Press; 2010.
- 12. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. International journal of surgery. 2021;88:105906. pmid:33789826
- 13. Addington J, Addington D. Facial affect recognition and information processing in schizophrenia and bipolar disorder. Schizophr Res. 1998;32(3):171–81. pmid:9720122
- 14. Edwards J, Jackson HJ, Pattison PE. Emotion recognition via facial expression and affective prosody in schizophrenia: a methodological review. Clin Psychol Rev. 2002;22(6):789–832. pmid:12214327
- 15. Mandal MK, Pandey R, Prasad AB. Facial expressions of emotions and schizophrenia: a review. Schizophr Bull. 1998;24(3):399–412. pmid:9718632
- 16. Pinkham AE, Penn DL, Perkins DO, Lieberman J. Implications for the neural basis of social cognition for the study of schizophrenia. Am J Psychiatry. 2003;160(5):815–24. pmid:12727681
- 17. Evans NJ, Rae B, Bushmakin M, Rubin M, Brown SD. Need for closure is associated with urgency in perceptual decision-making. Memory & Cognition. 2017;45(7):1193–205. pmid:28585159
- 18. McKay R, Langdon R, Coltheart M. Need for closure, jumping to conclusions, and decisiveness in delusion-prone individuals. J Nerv Ment Dis. 2006;194(6):422–6. pmid:16772859
- 19. Balshem H, Helfand M, Schunemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011;64(4):401–6. pmid:21208779
- 20. Cohen J. Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
- 21. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. Bmj. 2004;328(7454):1490. pmid:15205295
- 22. Matheson SL, Shepherd AM, Laurens KR, Carr VJ. A systematic meta-review grading the evidence for non-genetic risk factors and putative antecedents of schizophrenia. Schizophr Res. 2011;133(1–3):133–42. pmid:21999904
- 23. Lecomte T, Potvin S, Samson C, Francoeur A, Hache-Labelle C, Gagné S, et al. Predicting and preventing symptom onset and relapse in schizophrenia-A metareview of current empirical evidence. J Abnorm Psychol. 2019;128:8. pmid:31343181
- 24. Lecomte T, Potvin S, Corbière M, Guay S, Samson C, Cloutier B, et al. Mobile Apps for Mental Health Issues: Meta-Review of Meta-Analyses. JMIR Mhealth Uhealth. 2020. pmid:32348289
- 25. Sauve G, Lavigne KM, Pochiet G, Brodeur MB, Lepage M. Efficacy of psychological interventions targeting cognitive biases in schizophrenia: A systematic review and meta-analysis. Clin Psychol Rev. 2020;78:101854. pmid:32361339
- 26. Penney D, Sauvé G, Mendelson D, Thibaudeau E, Moritz S, Lepage M. Immediate and Sustained Outcomes and Moderators Associated With Metacognitive Training for Psychosis: A Systematic Review and Meta-analysis. JAMA Psychiatry. 2022;79(5):417–29. pmid:35320347
- 27. Moritz S, Goritz AS, Balzan RP, Gaweda L, Kulagin SC, Andreou C. A new paradigm to measure probabilistic reasoning and a possible answer to the question why psychosis-prone individuals jump to conclusions. J Abnorm Psychol. 2017;126(4):406–15. pmid:28277733
- 28. Garety PA, Hemsley DR, Wessely S. Reasoning in deluded schizophrenic and paranoid patients. Biases in performance on a probabilistic inference task. J Nerv Ment Dis. 1991;179(4):194–201. pmid:2007889
- 29. Trotta A, Kang J, Stahl D, Yiend J. Interpretation bias in paranoia: A systematic review and meta-analysis. Clinical Psychological Science. 2021;9(1):3–23.
- 30. de Sousa P, Sellwood W, Griffiths M, Bentall RP. Disorganisation, thought disorder and socio-cognitive functioning in schizophrenia spectrum disorders. Br J Psychiatry. 2019;214(2):103–12. pmid:30139394
- 31. Ventura J, Wood RC, Hellemann GS. Symptom domains and neurocognitive functioning can help differentiate social cognitive processes in schizophrenia: a meta-analysis. Schizophr Bull. 2013;39(1):102–11. pmid:21765165
- 32. Brookwell ML, Bentall RP, Varese F. Externalizing biases and hallucinations in source-monitoring, self-monitoring and signal detection studies: a meta-analytic review. Psychol Med. 2013;43(12):2465–75. pmid:23282942
- 33. Murphy P, Bentall RP, Freeman D, O’Rourke S, Hutton P. The paranoia as defence model of persecutory delusions: a systematic review and meta-analysis. Lancet Psychiatry. 2018;5(11):913–29. pmid:30314852
- 34. Muller H, Betz LT, Bechdolf A. A comprehensive meta-analysis of the self-serving bias in schizophrenia spectrum disorders compared to non-clinical subjects. Neurosci Biobehav Rev. 2021;120:542–9. pmid:33148471
- 35. Savla GN, Vella L, Armstrong CC, Penn DL, Twamley EW. Deficits in domains of social cognition in schizophrenia: a meta-analysis of the empirical evidence. Schizophr Bull. 2013;39(5):979–92. pmid:22949733
- 36. Livet A, Navarri X, Potvin S, Conrod P. Cognitive biases in individuals with psychotic-like experiences: A systematic review and a meta-analysis. Schizophrenia Research. 2020;222:10–22. pmid:32595098
- 37. Zhu C, Sun X, So SH. Associations between belief inflexibility and dimensions of delusions: A meta-analytic review of two approaches to assessing belief flexibility. Br J Clin Psychol. 2018;57(1):59–81. pmid:28805246
- 38. McLean BF, Mattiske JK, Balzan RP. Association of the Jumping to Conclusions and Evidence Integration Biases With Delusions in Psychosis: A Detailed Meta-analysis. Schizophrenia Bulletin. 2017;43(2):344–54. pmid:27169465
- 39. So SH, Siu NY, Wong HL, Chan W, Garety PA. ’Jumping to conclusions’ data-gathering bias in psychosis and other psychiatric disorders—Two meta-analyses of comparisons between patients and healthy individuals. Clinical psychology review. 2016;46:151–67. pmid:27216559
- 40. Dudley R, Taylor P, Wickham S, Hutton P. Psychosis, Delusions and the "Jumping to Conclusions" Reasoning Bias: A Systematic Review and Meta-analysis. Schizophrenia Bulletin. 2016;42(3):652–65. pmid:26519952
- 41. van Oosterhout B, Smit F, Krabbendam L, Castelein S, Staring AB, van der Gaag M. Metacognitive training for schizophrenia spectrum patients: a meta-analysis on outcome studies. Psychol Med. 2016;46(1):47–57. pmid:26190517
- 42. Ross RM, McKay R, Coltheart M, Langdon R. Jumping to Conclusions About the Beads Task? A Meta-analysis of Delusional Ideation and Data-Gathering. Schizophrenia Bulletin. 2015;41(5):1183–91. pmid:25616503
- 43. Adolphs R, Tranel D, Damasio AR. The human amygdala in social judgment. Nature. 1998;393(6684):470–4. pmid:9624002
- 44. Couture SM, Penn DL, Losh M, Adolphs R, Hurley R, Piven J. Comparison of social cognitive functioning in schizophrenia and high functioning autism: more convergence than divergence. Psychol Med. 2010;40(4):569–79. pmid:19671209
- 45. Cicero DC, Kerns JG, McCarthy DM. The Aberrant Salience Inventory: a new measure of psychosis proneness. Psychol Assess. 2010;22(3):688–701. pmid:20822281
- 46. Combs DR, Penn DL, Wicher M, Waldheter E. The Ambiguous Intentions Hostility Questionnaire (AIHQ): a new measure for evaluating hostile social-cognitive biases in paranoia. Cognitive Neuropsychiatry. 2007;12(2):128–43. pmid:17453895
- 47. Holmes EA, Mathews A, Dalgleish T, Mackintosh B. Positive interpretation training: effects of mental imagery versus verbal training on positive mood. Behav Ther. 2006;37(3):237–47. pmid:16942975
- 48. Nowicki S, Duke MP. A locus of control scale for noncollege as well as college adults. Journal of Personality Assessment. 1974.
- 49. Fornells-Ambrojo M, Garety PA. Attributional biases in paranoia: the development and validation of the Achievement and Relationships Attributions Task (ARAT). Cogn Neuropsychiatry. 2009;14(2):87–109. pmid:19370434
- 50. Peterson C, Semmel A, Von Baeyer C, Abramson LY, Metalsky GI, Seligman ME. The attributional style questionnaire. Cognitive therapy and research. 1982;6(3):287–99.
- 51. Brunstein J. Attributionsstil und Depression: erste Befunde zur Reliabilität und Validität eines deutschsprachigen Attributionsstilfragebogens [Attributional style and depression: first findings on the reliability and validity of a German-language attributional style questionnaire]. Zeitschrift für Differentielle und Diagnostische Psychologie. 1986;7:45–53.
- 52. Moritz S, Woodward TS. A generalized bias against disconfirmatory evidence in schizophrenia. Psychiatry Res. 2006;142(2–3):157–65. pmid:16631258
- 53. Gudjonsson GH, Singh KK. The revised Gudjonsson blame attribution inventory. Personality and Individual Differences. 1989;10(1):67–70.
- 54. Huq SF, Garety PA, Hemsley DR. Probabilistic judgements in deluded and non-deluded subjects. Q J Exp Psychol [A]. 1988;40(4):801–12. pmid:3212213
- 55. Peterson C, Luborsky L, Seligman ME. Attributions and depressive mood shifts: a case study using the symptom-context method. J Abnorm Psychol. 1983;92(1):96–103. pmid:6833639
- 56. Peters ER, Moritz S, Schwannauer M, Wiseman Z, Greenwood KE, Scott J, et al. Cognitive Biases Questionnaire for psychosis. Schizophrenia Bulletin. 2014;40(2):300–13. pmid:23413104
- 57. van der Gaag M, Schutz C, Ten Napel A, Landa Y, Delespaul P, Bak M, et al. Development of the Davos assessment of cognitive biases scale (DACOBS). Schizophrenia Research. 2013;144(1–3):63–71. pmid:23332365
- 58. Freeman D, Garety PA, Fowler D, Kuipers E, Bebbington PE, Dunn G. Why do people with delusions fail to choose more realistic explanations for their experiences? An empirical investigation. J Consult Clin Psychol. 2004;72(4):671–80. pmid:15301652
- 59. Peterson C, Villanova P. An Expanded Attributional Style Questionnaire. J Abnorm Psychol. 1988;97(1):87–9. pmid:3351118
- 60. Woodward TS, Munz M, LeClerc C, Lecomte T. Change in delusions is associated with change in "jumping to conclusions". Psychiatry Res. 2009;170(2–3):124–7. pmid:19906443
- 61. White TP, Borgan F, Ralley O, Shergill SS. You looking at me?: Interpreting social cues in schizophrenia. Psychol Med. 2016;46(1):149–60. pmid:26338032
- 62. Peyroux E, Strickland B, Tapiero I, Franck N. The intentionality bias in schizophrenia. Psychiatry Res. 2014;219(3):426–30. pmid:25042425
- 63. Turkat ID, Keane SP, Thompson-Pope SK. Social processing errors among paranoid personalities. Journal of Psychopathology and Behavioral Assessment. 1990;12(3):263–9.
- 64. Alloy LB, Abramson LY. Judgment of contingency in depressed and nondepressed students: sadder but wiser? J Exp Psychol Gen. 1979;108(4):441–85. pmid:528910
- 65. Green CE, Freeman D, Kuipers E, Bebbington P, Fowler D, Dunn G, et al. Paranoid explanations of experience: a novel experimental study. Behavioural & Cognitive Psychotherapy. 2011;39(1):21–34. pmid:20846468
- 66. Kinderman P, Bentall RP. A new measure of causal locus: The Internal, Personal and Situational Attributions Questionnaire. Personality and Individual Differences. 1996;20(2):261–4.
- 67. Kinderman P, Bentall RP. Causal attributions in paranoia and depression: internal, personal, and situational attributions for negative events. J Abnorm Psychol. 1997;106(2):341–5. pmid:9131855
- 68. Stratton P, Munton AG, Hanks HGI, Heard DH, Davidson C. Leeds Attributional Coding System (LACS) Manual. Leeds: LFTRC; 1997.
- 69. Holt DJ, Titone D, Long LS, Goff DC, Cather C, Rauch SL, et al. The misattribution of salience in delusional patients with schizophrenia. Schizophr Res. 2006;83(2–3):247–56. pmid:16540291
- 70. Winters KC, Neale JM. Delusions and delusional thinking in psychotics: A review of the literature. Clin Psychol Rev. 1983;3(2):227–53.
- 71. Jack A, Egan V. Paranoid thinking, cognitive bias and dangerous neighbourhoods: Implications for perception of threat and expectations of victimisation. Int J Soc Psychiatry. 2016;62(2):123–32. pmid:26290397
- 72. Mathews A, Mackintosh B. Induced emotional interpretation bias and anxiety. J Abnorm Psychol. 2000;109(4):602–15. pmid:11195984
- 73. Salemink E, van den Hout M. Validation of the "recognition task" used in the training of interpretation biases. J Behav Ther Exp Psychiatry. 2010;41(2):140–4. pmid:19962127
- 74. Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: Heuristics and biases. Cambridge University Press; 1982.
- 75. Melo SS, Bentall RP. ’Poor me’ versus ’Bad me’ paranoia: the association between self-beliefs and the instability of persecutory ideation. Psychol Psychother. 2013;86(2):146–63. pmid:23674466
- 76. Nurmi J-E, Salmela-Aro K, Haavisto T. The strategy and attribution questionnaire: Psychometric properties. European Journal of Psychological Assessment. 1995;11(2):108.
- 77. Roberts DL, Fiszdon J, Tek C. Ecological validity of the Social Cognition Screening Questionnaire (SCSQ). Schizophr Bull. 2011.
- 78. Eysenck MW, Mogg K, May J, Richards A, Mathews A. Bias in interpretation of ambiguous sentences related to threat in anxiety. J Abnorm Psychol. 1991;100(2):144–50. pmid:2040764
- 79. Wenzlaff RM. The mental control of depression: Psychological obstacles to emotional well-being. 1993.
- 80. Wenzlaff RM, Bates DE. Unmasking a cognitive vulnerability to depression: how lapses in mental control reveal depressive thinking. J Pers Soc Psychol. 1998;75(6):1559–71. pmid:9914666
- 81. Wessely S, Buchanan A, Reed A, Cutting J, Everitt B, Garety P, et al. Acting on delusions. I: Prevalence. Br J Psychiatry. 1993;163:69–76. pmid:8353703
- 82. Freeman D, Slater M, Bebbington PE, Garety PA, Kuipers E, Fowler D, et al. Can virtual reality be used to investigate persecutory ideation? J Nerv Ment Dis. 2003;191(8):509–14. pmid:12972853
- 83. Woodward TS, Moritz S, Cuttler C, Whitman JC. The contribution of a cognitive bias against disconfirmatory evidence (BADE) to delusions in schizophrenia. Journal of Clinical & Experimental Neuropsychology. 2006;28(4):605–17.
- 84. Peters ER, Joseph SA, Garety PA. Measurement of delusional ideation in the normal population: introducing the PDI (Peters et al. Delusions Inventory). Schizophrenia Bulletin. 1999;25(3):553–76. pmid:10478789
- 85. Waller H, Freeman D, Jolley S, Dunn G, Garety P. Targeting reasoning biases in delusions: a pilot study of the Maudsley Review Training Programme for individuals with persistent, high conviction delusions. J Behav Ther Exp Psychiatry. 2011;42(3):414–21. pmid:21481815
- 86. Hurley J, Hodgekins J, Coker S, Fowler D. Persecutory delusions: effects of Cognitive Bias Modification for Interpretation and the Maudsley Review Training Programme on social anxiety, jumping to conclusions, belief inflexibility and paranoia. J Behav Ther Exp Psychiatry. 2018;61:14–23. pmid:29883776
- 87. Waller H, Emsley R, Freeman D, Bebbington P, Dunn G, Fowler D, et al. Thinking Well: A randomised controlled feasibility study of a new CBT therapy targeting reasoning biases in people with distressing persecutory delusional beliefs. J Behav Ther Exp Psychiatry. 2015;48:82–9. pmid:25770671
- 88. Ross K, Freeman D, Dunn G, Garety P. A randomized experimental investigation of reasoning training for people with delusions. Schizophr Bull. 2011;37(2):324–33. pmid:19520745
- 89. Moritz S, Mayer-Stassfurth H, Endlich L, Andreou C, Ramdani N, Petermann F, et al. The Benefits of Doubt: Cognitive Bias Correction Reduces Hasty Decision-Making in Schizophrenia. Cognitive Therapy and Research. 2015;39(5):627–35.
- 90. Steel C, Wykes T, Ruddle A, Smith G, Shah DM, Holmes EA. Can we harness computerised cognitive bias modification to treat anxiety in schizophrenia? A first step highlighting the role of mental imagery. Psychiatry Res. 2010;178(3):451–5. pmid:20553826
- 91. Cui G, Wang Y, Wang X, Zheng L, Li L, Li P, et al. Static and dynamic functional connectivity of the prefrontal cortex during resting-state predicts self-serving bias in depression. Behav Brain Res. 2020;379:112335. pmid:31697986
- 92. Rector NA, Beck AT, Stolar N. The negative symptoms of schizophrenia: a cognitive perspective. The Canadian Journal of Psychiatry. 2005;50(5):247–57. pmid:15968839
- 93. Buck BE, Pinkham AE, Harvey PD, Penn DL. Revisiting the validity of measures of social cognitive bias in schizophrenia: Additional results from the Social Cognition Psychometric Evaluation (SCOPE) study. Br J Clin Psychol. 2016;55(4):441–54. pmid:27168196
- 94. Buck B, Hester NR, Pinkham A, Harvey PD, Jarskog LF, Penn DL. The bias toward intentionality in schizophrenia: Automaticity, context, and relationships to symptoms and functioning. J Abnorm Psychol. 2018;127(5):503–12. pmid:30010368
- 95. Ceballos-Munuera C, Senin-Calderon C, Fernandez-Leon S, Fuentes-Marquez S, Rodriguez-Testal JF. Aberrant Salience and Disorganized Symptoms as Mediators of Psychosis. Front Psychol. 2022;13:878331. pmid:35496226
- 96. Garety PA, Freeman D, Jolley S, Dunn G, Bebbington PE, Fowler DG, et al. Reasoning, emotions, and delusional conviction in psychosis. Journal of Abnormal Psychology. 2005;114(3):373–84. pmid:16117574
- 97. Samson C, Achim AM, Sicard V, Gilker A, Francoeur A, Franck N, et al. Further validation of the Cognitive Biases Questionnaire for psychosis. BMC Psychiatry. 2022;22(1). pmid:35986316
- 98. Norman R, Lecomte T, Addington D, Anderson E. CPA treatment guidelines on psychosocial treatment of adults. Canadian Journal of Psychiatry. 2017;62(9):617–23. pmid:28703017
- 99. Wykes T, Steel C, Everitt B, Tarrier N. Cognitive Behavior Therapy for Schizophrenia: Effect Sizes, Clinical Models, and Methodological Rigor. Schizophrenia Bulletin. 2008;34(3):523–37. pmid:17962231
- 100. Turner R, Hoppitt L, Hodgekins J, Wilkinson J, Mackintosh B, Fowler D. Cognitive bias modification in the treatment of social anxiety in early psychosis: a single case series. Behavioural & Cognitive Psychotherapy. 2011;39(3):341–7. pmid:21320359
- 101. Hirsch CR, Clark DM, Mathews A. Imagery and interpretations in social phobia: support for the combined cognitive biases hypothesis. Behav Ther. 2006;37(3):223–36. pmid:16942974
- 102. Broyd A, Balzan RP, Woodward TS, Allen P. Dopamine, cognitive biases and assessment of certainty: A neurocognitive model of delusions. Clin Psychol Rev. 2017;54:96–106. pmid:28448827
- 103. Corlett PR, Murray GK, Honey GD, Aitken MRF, Shanks DR, Robbins TW, et al. Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions. Brain. 2007 Sep 1;130(9):2387–400. pmid:17690132
- 104. Moritz S, Woodward TS. Metacognitive control over false memories: a key determinant of delusional thinking. Current psychiatry reports. 2006;8(3):184–90. pmid:19817068
- 105. Le Berre A-P, Eustache F, Beaunieux H. La métamémoire: théorie et clinique [Metamemory: theory and clinical practice]. Revue de neuropsychologie. 2009;(4):312–20.
- 106. Freeman D, Pugh K, Garety P. Jumping to conclusions and paranoid ideation in the general population. Schizophr Res. 2008;102(1–3):254–60. pmid:18442898
- 107. Startup H, Freeman D, Garety PA. Persecutory delusions and catastrophic worry in psychosis: developing the understanding of delusion distress and persistence. Behav Res Ther. 2007;45(3):523–37. pmid:16782048
- 108. Davey GCL. The Catastrophising Interview Procedure. In: Worry and its Psychological Disorders. 2006. p. 157–76.
- 109. Savulich G, Shergill S, Yiend J. Biased cognition in psychosis. Journal of Experimental Psychopathology. 2012;3(4):514–36.
- 110. Taylor JL, John CH. Attentional and memory bias in persecutory delusions and depression. Psychopathology. 2004;37(5):233–41. pmid:15383713